[jira] [Commented] (HADOOP-18188) Support touch command for directory
[ https://issues.apache.org/jira/browse/HADOOP-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517828#comment-17517828 ] Akira Ajisaka commented on HADOOP-18188: {quote}I mean if some application expects PathIsDirectoryException to be thrown for touch command used with directory, from that viewpoint this behaviour change might be treated as incompatible? {quote} Thank you [~vjasani] for your comment. It makes sense to me, so I will add a release note to warn the users. Anyway I'm not a fan of adding a new option because the option is not consistent with the touch command in POSIX. > Support touch command for directory > --- > > Key: HADOOP-18188 > URL: https://issues.apache.org/jira/browse/HADOOP-18188 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Currently hadoop fs -touch command cannot update the mtime and the atime of > directory. The feature would be useful when we check whether the filesystem > is ready to write or not without creating any file. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18188) Support touch command for directory
[ https://issues.apache.org/jira/browse/HADOOP-18188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514393#comment-17514393 ] Akira Ajisaka commented on HADOOP-18188: I don't think it is incompatible.
[jira] [Created] (HADOOP-18188) Support touch command for directory
Akira Ajisaka created HADOOP-18188: -- Summary: Support touch command for directory Key: HADOOP-18188 URL: https://issues.apache.org/jira/browse/HADOOP-18188 Project: Hadoop Common Issue Type: Improvement Reporter: Akira Ajisaka Currently the hadoop fs -touch command cannot update the mtime and the atime of a directory. The feature would be useful when we check whether the filesystem is ready to write or not without creating any file.
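For comparison, POSIX touch already accepts a directory and updates its timestamps, which is the behaviour this issue brings to hadoop fs -touch (and why Ajisaka resists adding a new option). A minimal local sketch, using only standard tools (the temp directory is illustrative):

```shell
# POSIX touch updates a directory's mtime just like a file's; this checks
# that behaviour locally. GNU stat uses -c %Y, BSD stat uses -f %m.
dir=$(mktemp -d)
before=$(stat -c %Y "$dir" 2>/dev/null || stat -f %m "$dir")
sleep 1
touch "$dir"   # accepted for directories; no PathIsDirectoryException analogue
after=$(stat -c %Y "$dir" 2>/dev/null || stat -f %m "$dir")
[ "$after" -gt "$before" ] && echo "directory mtime updated"
rmdir "$dir"
```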
[jira] [Resolved] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches
[ https://issues.apache.org/jira/browse/HADOOP-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-17798. Resolution: Done Updated the wiki and disabled the precommit jobs. > Always use GitHub PR rather than JIRA to review patches > --- > > Key: HADOOP-17798 > URL: https://issues.apache.org/jira/browse/HADOOP-17798 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > > Now there are 2 types of precommit jobs in https://ci-hadoop.apache.org/ > (1) Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs that try to download > patches from JIRA and test them. > (2) hadoop-multibranch job for GitHub PR > The problems are: > - The build configs are separated. The (2) config is in Jenkinsfile, and the > (1) configs are in Jenkins. When we update the Jenkinsfile, we have to manually > update the configs of the 4 precommit jobs via the Jenkins Web UI. > - The (1) build configs are static. We cannot use a separate config for each > branch. This may cause some build failures. > - GitHub Actions cannot be used in the (1) jobs. > Therefore I want to disable the (1) jobs and always use GitHub PR to review > patches. > How to do this: > 1. Update the wiki: > https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Provideapatch > 2. Disable the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.
[jira] [Work started] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches
[ https://issues.apache.org/jira/browse/HADOOP-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17798 started by Akira Ajisaka.
[jira] [Assigned] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches
[ https://issues.apache.org/jira/browse/HADOOP-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-17798: -- Assignee: Akira Ajisaka
[jira] [Updated] (HADOOP-13386) Upgrade Avro to 1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13386: --- Summary: Upgrade Avro to 1.9.2 (was: Upgrade Avro to 1.8.x or later) > Upgrade Avro to 1.9.2 > - > > Key: HADOOP-13386 > URL: https://issues.apache.org/jira/browse/HADOOP-13386 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Ben McCann >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 5h 10m > Remaining Estimate: 0h > > Avro 1.8.x makes generated classes serializable which makes them much easier > to use with Spark. It would be great to upgrade Avro to 1.8.x
[jira] [Updated] (HADOOP-13386) Upgrade Avro to 1.8.x or later
[ https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13386: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk.
[jira] [Resolved] (HADOOP-18171) NameNode Access Time Precision
[ https://issues.apache.org/jira/browse/HADOOP-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18171. Resolution: Invalid Setting the value to 0 disables access time updates in HDFS. CDP-related questions should be asked to Cloudera support instead of filing an issue here. > NameNode Access Time Precision > -- > > Key: HADOOP-18171 > URL: https://issues.apache.org/jira/browse/HADOOP-18171 > Project: Hadoop Common > Issue Type: Improvement > Environment: We are currently on CDH version 6.3.4 and are planning to > upgrade to CDP version 7.1.4. For that, Cloudera wants us to disable the > namenode property dfs.access.time.precision by changing its value to 0. > The current value for this property is 1 hour, so my question is: how does > this value matter in the current scenario? What is its effect, and what will > happen if I set it to zero? >Reporter: Doug >Priority: Major > Attachments: namenodeaccesstime.png > >
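The property in question is set in hdfs-site.xml. A sketch of the change Cloudera is asking for (the property name and its semantics are standard HDFS; the surrounding file layout is illustrative):

```xml
<!-- hdfs-site.xml: dfs.access.time.precision defaults to 3600000 ms
     (1 hour), meaning a file's atime is refreshed at most once per hour.
     Setting it to 0 disables access-time updates entirely, which reduces
     NameNode edit-log traffic at the cost of losing atime information. -->
<property>
  <name>dfs.access.time.precision</name>
  <value>0</value>
</property>
```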
[jira] [Updated] (HADOOP-13386) Upgrade Avro to 1.8.x or later
[ https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13386: --- Assignee: PJ Fanning Status: Patch Available (was: Reopened)
[jira] [Updated] (HADOOP-16717) Remove GenericsUtil isLog4jLogger dependency on Log4jLoggerAdapter
[ https://issues.apache.org/jira/browse/HADOOP-16717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16717: --- Fix Version/s: 3.2.3 This issue blocks Spark 3.2 on YARN from upgrading to log4j2. Backported to branch-3.2 and branch-3.2.3. > Remove GenericsUtil isLog4jLogger dependency on Log4jLoggerAdapter > -- > > Key: HADOOP-16717 > URL: https://issues.apache.org/jira/browse/HADOOP-16717 > Project: Hadoop Common > Issue Type: Improvement >Reporter: David Mollitor >Assignee: Xieming Li >Priority: Major > Fix For: 3.3.0, 3.2.3 > > Attachments: HADOOP-16717.001.patch, HADOOP-16717.002.patch, > HADOOP-16717.003.patch > > > Remove this method: > {code:java} > /** >* Determine whether the log of clazz is Log4j implementation. >* @param clazz a class to be determined >* @return true if the log of clazz is Log4j implementation. >*/ > public static boolean isLog4jLogger(Class clazz) { > if (clazz == null) { > return false; > } > Logger log = LoggerFactory.getLogger(clazz); > return log instanceof Log4jLoggerAdapter; > } > {code} > This creates a dependency on Log4jLoggerAdapter (slf4j-log4j12) which means > that any project which depends on hadoop-commons needs to carry this > dependency as well. Such a simple use case and such a heavy dependency. The > commons library should not depend on any specific implementation of SLF4J > binding.
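The compile-time dependency can be dropped without losing the check. One hedged sketch, comparing the logger's runtime class name instead of using instanceof against Log4jLoggerAdapter (the class-name string is the real slf4j-log4j12 adapter; the wrapper class and method name here are illustrative, not necessarily the committed patch):

```java
// Sketch: detect the slf4j-log4j12 binding by class name rather than by
// type, so the caller needs no compile-time dependency on the adapter.
public class Log4jBindingCheck {
    static boolean isLog4jLoggerByName(Object logger) {
        return logger != null
            && "org.slf4j.impl.Log4jLoggerAdapter".equals(logger.getClass().getName());
    }

    public static void main(String[] args) {
        // A plain Object is clearly not the log4j adapter, and null is handled.
        if (isLog4jLoggerByName(new Object()) || isLog4jLoggerByName(null)) {
            throw new AssertionError("misdetected a non-log4j logger");
        }
        System.out.println("ok");
    }
}
```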
[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17112: --- Fix Version/s: 3.2.3 Cherry-picked to branch-3.2 and branch-3.2.3. > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Krzysztof Adamski >Assignee: Krzysztof Adamski >Priority: Blocker > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.2.3 > > Attachments: image-2020-07-03-16-08-52-340.png > > Time Spent: 1h 40m > Remaining Estimate: 0h > > When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? > !image-2020-07-03-16-08-52-340.png!
[jira] [Resolved] (HADOOP-17386) fs.s3a.buffer.dir to be under Yarn container path on yarn applications
[ https://issues.apache.org/jira/browse/HADOOP-17386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-17386. Fix Version/s: 3.4.0 Resolution: Fixed Committed to trunk. Thank you [~monthonk] for your contribution. > fs.s3a.buffer.dir to be under Yarn container path on yarn applications > -- > > Key: HADOOP-17386 > URL: https://issues.apache.org/jira/browse/HADOOP-17386 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Monthon Klongklaew >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > # fs.s3a.buffer.dir defaults to hadoop.tmp.dir which is /tmp or similar > # we use this for storing file blocks during upload > # staging committers use it for all files in a task, which can be a lot more > # a lot of systems don't clean up /tmp until reboot -and if they stay up for > a long time then they accrue files written through s3a staging committer from > spark containers which fail > Fix: use ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a as the option so that if > env.LOCAL_DIRS is set it is used over hadoop.tmp.dir. YARN-deployed apps will > use that for the buffer dir. When the app container is destroyed, so is the > directory.
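The proposed default can be expressed as an ordinary configuration override. A sketch, with the property value taken directly from the issue text (the file placement is illustrative):

```xml
<!-- core-site.xml: buffer uploads under YARN's per-container local dirs
     when env.LOCAL_DIRS is set, falling back to hadoop.tmp.dir otherwise.
     YARN removes the container directory on teardown, so buffered blocks
     from failed containers cannot accumulate in /tmp. -->
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a</value>
</property>
```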
[jira] [Assigned] (HADOOP-17386) fs.s3a.buffer.dir to be under Yarn container path on yarn applications
[ https://issues.apache.org/jira/browse/HADOOP-17386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-17386: -- Assignee: Monthon Klongklaew
[jira] [Updated] (HADOOP-17633) Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs
[ https://issues.apache.org/jira/browse/HADOOP-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17633: --- Summary: Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs (was: Please upgrade json-smart dependency to the latest version) > Bump json-smart to 2.4.2 and nimbus-jose-jwt to 9.8 due to CVEs > --- > > Key: HADOOP-17633 > URL: https://issues.apache.org/jira/browse/HADOOP-17633 > Project: Hadoop Common > Issue Type: Improvement > Components: auth, build >Affects Versions: 3.3.0, 3.2.1, 3.2.2, 3.4.0 >Reporter: helen huang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.2.3 > > Time Spent: 4h 20m > Remaining Estimate: 0h > > Please upgrade the json-smart dependency to the latest version available. > Currently hadoop-auth is using version 2.3. Fortify scan picked up a security > issue with this version. Please upgrade to the latest version. > Thanks!
[jira] [Updated] (HADOOP-17096) Fix ZStandardCompressor input buffer offset
[ https://issues.apache.org/jira/browse/HADOOP-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17096: --- Summary: Fix ZStandardCompressor input buffer offset (was: ZStandardCompressor throws java.lang.InternalError: Error (generic)) > Fix ZStandardCompressor input buffer offset > --- > > Key: HADOOP-17096 > URL: https://issues.apache.org/jira/browse/HADOOP-17096 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 3.2.1 > Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 > linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end > environment (eg running an actual hadoop job with zstd compression) because > it's very sensitive to the exact input and output characteristics. I > reproduced the bug by turning one of the existing unit tests into a crude > fuzzer, but I'm not sure upstream will accept that patch, so I've attached it > separately on this ticket. > Note that the existing unit test for testCompressingWithOneByteOutputBuffer > fails to reproduce this bug. This is because it's using the license file as > input, and this file is too small. libzstd has internal buffering (in our > environment it seems to be 128 kilobytes), and the license file is only 10 > kilobytes. Thus libzstd is able to consume all the input and compress it in a > single call, then return pieces of its internal buffer one byte at a time. > Since all the input is consumed in a single call, uncompressedDirectBufOff > and uncompressedDirectBufLen are both set to zero and thus the bug does not > reproduce. >Reporter: Stephen Jung (Stripe) >Assignee: Stephen Jung (Stripe) >Priority: Major > Labels: pull-request-available > Fix For: 3.2.2, 3.3.1, 3.4.0 > > Attachments: fuzztest.patch > > Time Spent: 10m > Remaining Estimate: 0h > > A bug in index handling causes ZStandardCompressor.c to pass a malformed > ZSTD_inBuffer to libzstd.
libzstd then returns an "Error (generic)" that gets > thrown. The crux of the issue is two variables, uncompressedDirectBufLen and > uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff > from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is > counted from uncompressedDirectBufOff. However, libzstd considers pos and > size to both be counted from the start of the buffer. As a result, this line > https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228 > causes a malformed buffer to be passed to libzstd, where pos>size. Here's a > longer description of the bug in case this abstract explanation is unclear: > > Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with > five bytes of input. This results in uncompressedDirectBufOff=0 and > uncompressedDirectBufLen=5 > (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146). > Then we call compress(), which initializes a ZSTD_inBuffer > (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196). > The definition of those libzstd structs is here > https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that > we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The > ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd > returns from the compression function, it updates the ZSTD_inBuffer struct to > indicate how many bytes were consumed > (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920). > Note that pos is advanced, but size is unchanged.
> Now, libzstd does not guarantee that the entire input will be compressed in a > single call of the compression function. (Some of the compression libraries > used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is > not one of them.) So the hadoop native code updates uncompressedDirectBufOff > and uncompressedDirectBufLen using the updated ZSTD_inBuffer: > https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L227-L228 > Now, returning to our example, we started with 5 bytes of uncompressed input. > Suppose libzstd compressed 4 of those bytes, leaving one unread. This would > result in a ZSTD_inBuffer struct with size=5 (unchanged) and pos=4 (four > bytes were consumed). The hadoop native
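The reference-frame mismatch described in this thread can be stated in a few lines. A hedged sketch (the class and field names mirror the description above, not the actual ZStandardCompressor sources):

```java
// Hadoop tracks input as (off, len) where len counts the bytes remaining
// AFTER off; libzstd's ZSTD_inBuffer uses (pos, size) where both are
// measured from the start of the buffer. Mapping len directly onto size
// therefore yields pos > size once some input has been consumed.
public class ZstdInBufferSketch {
    static final class InBuffer {
        int pos;   // bytes already consumed, measured from buffer start
        int size;  // total valid bytes, measured from buffer start
        boolean wellFormed() { return pos <= size; }
    }

    public static void main(String[] args) {
        int off = 4; // libzstd consumed 4 of 5 bytes on the previous call
        int len = 1; // one byte of input remains

        InBuffer buggy = new InBuffer();
        buggy.pos = off;
        buggy.size = len;          // malformed: pos (4) > size (1)

        InBuffer fixed = new InBuffer();
        fixed.pos = off;
        fixed.size = off + len;    // size measured from the buffer start

        // prints: buggy wellFormed=false fixed wellFormed=true
        System.out.println("buggy wellFormed=" + buggy.wellFormed()
            + " fixed wellFormed=" + fixed.wellFormed());
    }
}
```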
[jira] [Resolved] (HADOOP-18130) hadoop-client-runtime latest version 3.3.1 has security issues
[ https://issues.apache.org/jira/browse/HADOOP-18130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18130. Resolution: Not A Problem > hadoop-client-runtime latest version 3.3.1 has security issues > -- > > Key: HADOOP-18130 > URL: https://issues.apache.org/jira/browse/HADOOP-18130 > Project: Hadoop Common > Issue Type: Improvement >Reporter: phoebe chen >Priority: Major > > hadoop-client-runtime latest version 3.3.1 ([Maven Repository: > org.apache.hadoop » hadoop-client-runtime » 3.3.1 > (mvnrepository.com)|https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client-runtime/3.3.1]) > has many security issues. > Besides the ones listed in the Maven repo, its dependency > "org.eclipse.jetty_jetty-io" (9.4.40.v20210413) has > [CVE-2021-34429|https://nvd.nist.gov/vuln/detail/CVE-2021-34429] and > [CVE-2021-28169|https://nvd.nist.gov/vuln/detail/CVE-2021-28169], and > "com.fasterxml.jackson.core_jackson-databind" (2.10.5.1) has > [PRISMA-2021-0213.|https://github.com/FasterXML/jackson-databind/issues/3328] > Need to upgrade to higher versions. >
[jira] [Commented] (HADOOP-18130) hadoop-client-runtime latest version 3.3.1 has security issues
[ https://issues.apache.org/jira/browse/HADOOP-18130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494305#comment-17494305 ] Akira Ajisaka commented on HADOOP-18130: It is not an issue in branch-3.3. * Jetty has been upgraded to 9.4.43 by HADOOP-17796 * Jackson has been upgraded to 2.13.0 by HADOOP-18033
[jira] [Updated] (HADOOP-18126) Update junit 5 version due to build issues
[ https://issues.apache.org/jira/browse/HADOOP-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18126: --- Fix Version/s: 3.3.3 Backported to branch-3.3. > Update junit 5 version due to build issues > -- > > Key: HADOOP-18126 > URL: https://issues.apache.org/jira/browse/HADOOP-18126 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: PJ Fanning >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.3 > > Time Spent: 1h > Remaining Estimate: 0h > > {code:java} > Feb 11, 2022 11:31:43 AM org.junit.platform.launcher.core.DefaultLauncher > handleThrowable WARNING: TestEngine with ID 'junit-vintage' failed to > discover tests org.junit.platform.commons.JUnitException: Failed to parse > version of junit:junit: 4.13.2 at > org.junit.vintage.engine.JUnit4VersionCheck.parseVersion(JUnit4VersionCheck.java:54) > {code} > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3980/1/artifact/out/patch-unit-root.txt] > seems like junit.vintage.version=5.5.1 is incompatible with > junit.version=4.13.2 > see 2nd answer on > [https://stackoverflow.com/questions/59900637/error-testengine-with-id-junit-vintage-failed-to-discover-tests-with-spring] > my plan is to upgrade junit.vintage.version and junit.jupiter.version to 5.8.2 >
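The version alignment described in HADOOP-18126 amounts to a two-property change in the build's version properties. A sketch (property names and versions are taken from the issue text; the file location is illustrative, and the fragment may not match the committed patch exactly):

```xml
<!-- hadoop-project/pom.xml (illustrative location): keep the vintage engine
     and jupiter on the same JUnit 5 release line so junit-vintage 5.8.x can
     parse the junit:junit 4.13.2 version string during test discovery. -->
<properties>
  <junit.version>4.13.2</junit.version>
  <junit.jupiter.version>5.8.2</junit.jupiter.version>
  <junit.vintage.version>5.8.2</junit.vintage.version>
</properties>
```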
[jira] [Resolved] (HADOOP-18126) Update junit 5 version due to build issues
[ https://issues.apache.org/jira/browse/HADOOP-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18126. Fix Version/s: 3.4.0 Resolution: Fixed Committed to trunk. Thank you [~pj.fanning] for your contribution!
[jira] [Updated] (HADOOP-18126) Update junit 5 version due to build issues
[ https://issues.apache.org/jira/browse/HADOOP-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18126: --- Summary: Update junit 5 version due to build issues (was: junit-vintage tests seem to be failing) > Update junit 5 version due to build issues > -- > > Key: HADOOP-18126 > URL: https://issues.apache.org/jira/browse/HADOOP-18126 > Project: Hadoop Common > Issue Type: Bug > Components: bulid >Reporter: PJ Fanning >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > {code:java} > Feb 11, 2022 11:31:43 AM org.junit.platform.launcher.core.DefaultLauncher > handleThrowable WARNING: TestEngine with ID 'junit-vintage' failed to > discover tests org.junit.platform.commons.JUnitException: Failed to parse > version of junit:junit: 4.13.2 at > org.junit.vintage.engine.JUnit4VersionCheck.parseVersion(JUnit4VersionCheck.java:54) > {code} > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3980/1/artifact/out/patch-unit-root.txt] > seems like junit.vintage.version=5.5.1 is incompatible with > junit.version=4.13.2 > see 2nd answer on > [https://stackoverflow.com/questions/59900637/error-testengine-with-id-junit-vintage-failed-to-discover-tests-with-spring] > my plan is to upgrade junit.vintage.version and junit.jupiter.version to 5.8.2 > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18126) junit-vintage tests seem to be failing
[ https://issues.apache.org/jira/browse/HADOOP-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-18126: -- Assignee: PJ Fanning > junit-vintage tests seem to be failing > -- > > Key: HADOOP-18126 > URL: https://issues.apache.org/jira/browse/HADOOP-18126 > Project: Hadoop Common > Issue Type: Bug > Components: bulid >Reporter: PJ Fanning >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > {code:java} > Feb 11, 2022 11:31:43 AM org.junit.platform.launcher.core.DefaultLauncher > handleThrowable WARNING: TestEngine with ID 'junit-vintage' failed to > discover tests org.junit.platform.commons.JUnitException: Failed to parse > version of junit:junit: 4.13.2 at > org.junit.vintage.engine.JUnit4VersionCheck.parseVersion(JUnit4VersionCheck.java:54) > {code} > [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3980/1/artifact/out/patch-unit-root.txt] > seems like junit.vintage.version=5.5.1 is incompatible with > junit.version=4.13.2 > see 2nd answer on > [https://stackoverflow.com/questions/59900637/error-testengine-with-id-junit-vintage-failed-to-discover-tests-with-spring] > my plan is to upgrade junit.vintage.version and junit.jupiter.version to 5.8.2 > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17631) Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true
[ https://issues.apache.org/jira/browse/HADOOP-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17631: --- Fix Version/s: 3.4.0 > Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when > restrictSystemProps=true > -- > > Key: HADOOP-17631 > URL: https://issues.apache.org/jira/browse/HADOOP-17631 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1h > Remaining Estimate: 0h > > When configuration reads in resources with a restricted parser, it skips > evaluating system ${env. } vars. But it also skips evaluating fallbacks. > As a result, a property like {{fs.s3a.buffer.dir}} > {code} > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} ends up evaluating as > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} > {code} > It should instead fall back to the "env var unset" option of > ${hadoop.tmp.dir}. This allows for configs (like for s3a buffer dirs) which > are usable in restricted mode as well as unrestricted deployments. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
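For illustration, the fallback syntax discussed in the issue looks like this in a site configuration file. The property name and value are quoted from the issue; the XML wrapper is a sketch.

```xml
<!-- With the fix, a restricted parser that skips ${env.LOCAL_DIRS}
     expansion (or an unset LOCAL_DIRS) falls back to ${hadoop.tmp.dir}
     instead of leaving the whole expression as a literal string. -->
<property>
  <name>fs.s3a.buffer.dir</name>
  <value>${env.LOCAL_DIRS:-${hadoop.tmp.dir}}</value>
</property>
```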
[jira] [Commented] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492312#comment-17492312 ] Akira Ajisaka commented on HADOOP-15983: {quote}I count 13 classes in src/main that had to be changed to use jackson 2. {quote} Thank you for your explanation. Sorry I missed the code changes in [https://github.com/pjfanning/jersey-1.x/commit/436ad8813a3ed44fce06b1aacd8c2892dc8be55b] because many files were deleted in that commit. {quote}If the removing jackson 1 from classpath does not work, can you consider using my version of the lib? {quote} Yes. > Remove the usage of jersey-json to remove jackson 1.x dependency. > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492078#comment-17492078 ] Akira Ajisaka commented on HADOOP-15983: Thank you [~pj.fanning] for your work. Reading your repo, the jersey-json fork dropped one class that uses jackson 1. If that works, I assume that excluding the jackson 1 dependency instead of using the jersey-json fork should also work. We can try excluding the dependency first because it is simpler. > Remove the usage of jersey-json to remove jackson 1.x dependency. > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
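The exclusion approach mentioned in the comment could look roughly like the following Maven fragment. This is a hedged sketch rather than the committed change, though com.sun.jersey:jersey-json and org.codehaus.jackson are the real coordinates involved.

```xml
<!-- Hypothetical exclusion of the jackson 1.x artifacts that
     jersey-json pulls in transitively. -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-json</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Wildcard exclusions require Maven 3.2.1 or later; on older Maven, each jackson 1.x artifact would be listed individually.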
[jira] [Assigned] (HADOOP-13386) Upgrade Avro to 1.8.x or later
[ https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-13386: -- Assignee: (was: Kalman) > Upgrade Avro to 1.8.x or later > -- > > Key: HADOOP-13386 > URL: https://issues.apache.org/jira/browse/HADOOP-13386 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Ben McCann >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Avro 1.8.x makes generated classes serializable which makes them much easier > to use with Spark. It would be great to upgrade Avro to 1.8.x -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13386) Upgrade Avro to 1.8.x or later
[ https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17491705#comment-17491705 ] Akira Ajisaka commented on HADOOP-13386: {quote}Can this issue be reconsidered? {quote} +1. You can take it over because there has been no progress on https://github.com/apache/hadoop/pull/761 for more than 2 years. > Upgrade Avro to 1.8.x or later > -- > > Key: HADOOP-13386 > URL: https://issues.apache.org/jira/browse/HADOOP-13386 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Ben McCann >Assignee: Kalman >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Avro 1.8.x makes generated classes serializable which makes them much easier > to use with Spark. It would be great to upgrade Avro to 1.8.x -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17491704#comment-17491704 ] Akira Ajisaka commented on HADOOP-15983: {quote}If I was to this work and published a jar under my own maven groupId - would Hadoop team be interested in using it? It could be a stopgap until Hadoop moves over to Jersey 2. {quote} I'm interested in using the jar because it will take a long time to move to Jersey 2. Where is the repository of your fork? > Remove the usage of jersey-json to remove jackson 1.x dependency. > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18046) TestIPC#testIOEOnListenerAccept fails
[ https://issues.apache.org/jira/browse/HADOOP-18046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18046. Resolution: Not A Problem HADOOP-18024 has been reverted. Closing. > TestIPC#testIOEOnListenerAccept fails > - > > Key: HADOOP-18046 > URL: https://issues.apache.org/jira/browse/HADOOP-18046 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > {code} > [ERROR] testIOEOnListenerAccept(org.apache.hadoop.ipc.TestIPC) Time elapsed: > 0.007 s <<< FAILURE! > java.lang.AssertionError: Expected an EOFException to have been thrown > at org.junit.Assert.fail(Assert.java:89) > at > org.apache.hadoop.ipc.TestIPC.testIOEOnListenerAccept(TestIPC.java:652) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18037) Backport HADOOP-17796 for branch-3.2
[ https://issues.apache.org/jira/browse/HADOOP-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18037. Resolution: Duplicate > Backport HADOOP-17796 for branch-3.2 > > > Key: HADOOP-18037 > URL: https://issues.apache.org/jira/browse/HADOOP-18037 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.2.2 >Reporter: Ananya Singh >Assignee: Ananya Singh >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17796) Upgrade jetty version to 9.4.43
[ https://issues.apache.org/jira/browse/HADOOP-17796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17796: --- Fix Version/s: 3.2.4 Backported to branch-3.2 via https://github.com/apache/hadoop/pull/3757 > Upgrade jetty version to 9.4.43 > --- > > Key: HADOOP-17796 > URL: https://issues.apache.org/jira/browse/HADOOP-17796 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.2.2, 3.3.1, 3.4.0 >Reporter: Wei-Chiu Chuang >Assignee: Renukaprasad C >Priority: Major > Labels: dependency, pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Time Spent: 20m > Remaining Estimate: 0h > > https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6 > https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17796) Upgrade jetty version to 9.4.43
[ https://issues.apache.org/jira/browse/HADOOP-17796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17485731#comment-17485731 ] Akira Ajisaka edited comment on HADOOP-17796 at 2/2/22, 11:43 AM: -- Backported to branch-3.2 via [https://github.com/apache/hadoop/pull/3757] Thank you [~ananysin] for the PR. was (Author: ajisakaa): Backported to branch-3.2 via https://github.com/apache/hadoop/pull/3757 > Upgrade jetty version to 9.4.43 > --- > > Key: HADOOP-17796 > URL: https://issues.apache.org/jira/browse/HADOOP-17796 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.2.2, 3.3.1, 3.4.0 >Reporter: Wei-Chiu Chuang >Assignee: Renukaprasad C >Priority: Major > Labels: dependency, pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Time Spent: 0.5h > Remaining Estimate: 0h > > https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6 > https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18037) Backport HADOOP-17796 for branch-3.2
[ https://issues.apache.org/jira/browse/HADOOP-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17485730#comment-17485730 ] Akira Ajisaka commented on HADOOP-18037: Modified the commit message to HADOOP-17796 and merged the PR. Let me close this issue as a duplicate and add the fix version to HADOOP-17796 for easier tracking. > Backport HADOOP-17796 for branch-3.2 > > > Key: HADOOP-18037 > URL: https://issues.apache.org/jira/browse/HADOOP-18037 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.2.2 >Reporter: Ananya Singh >Assignee: Ananya Singh >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17796) Upgrade jetty version to 9.4.43
[ https://issues.apache.org/jira/browse/HADOOP-17796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17796: --- Summary: Upgrade jetty version to 9.4.43 (was: Update Jetty to 9.4.41 or above) > Upgrade jetty version to 9.4.43 > --- > > Key: HADOOP-17796 > URL: https://issues.apache.org/jira/browse/HADOOP-17796 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.2.2, 3.3.1, 3.4.0 >Reporter: Wei-Chiu Chuang >Assignee: Renukaprasad C >Priority: Major > Labels: dependency, pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 20m > Remaining Estimate: 0h > > https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6 > https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18099) Upgrade bundled Tomcat to 8.5.75
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18099. Fix Version/s: 2.10.2 Resolution: Fixed Merged the PR into branch-2.10. Thank you [~groot] for your contribution! > Upgrade bundled Tomcat to 8.5.75 > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Improvement > Components: httpfs, kms >Affects Versions: 2.10.1 >Reporter: Akira Ajisaka >Assignee: Ashutosh Gupta >Priority: Major > Labels: newbie, pull-request-available > Fix For: 2.10.2 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat to 8.5.75
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Summary: Upgrade bundled Tomcat to 8.5.75 (was: Upgrade bundled Tomcat in branch-2 to the latest) > Upgrade bundled Tomcat to 8.5.75 > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Improvement > Components: httpfs, kms >Affects Versions: 2.10.1 >Reporter: Akira Ajisaka >Assignee: Ashutosh Gupta >Priority: Major > Labels: newbie, pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Issue Type: Improvement (was: Bug) > Upgrade bundled Tomcat in branch-2 to the latest > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Improvement > Components: httpfs, kms >Affects Versions: 2.10.1 >Reporter: Akira Ajisaka >Priority: Major > Labels: newbie > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Component/s: (was: build) > Upgrade bundled Tomcat in branch-2 to the latest > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Bug > Components: httpfs, kms >Reporter: Akira Ajisaka >Priority: Major > Labels: newbie > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Labels: newbie (was: ) > Upgrade bundled Tomcat in branch-2 to the latest > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Bug > Components: build, httpfs, kms >Reporter: Akira Ajisaka >Priority: Major > Labels: newbie > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Affects Version/s: 2.10.1 > Upgrade bundled Tomcat in branch-2 to the latest > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Bug > Components: httpfs, kms >Affects Versions: 2.10.1 >Reporter: Akira Ajisaka >Priority: Major > Labels: newbie > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest
[ https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18099: --- Summary: Upgrade bundled Tomcat in branch-2 to the latest (was: Ungrade bundle Tomcat in branch-2 to the latest) > Upgrade bundled Tomcat in branch-2 to the latest > > > Key: HADOOP-18099 > URL: https://issues.apache.org/jira/browse/HADOOP-18099 > Project: Hadoop Common > Issue Type: Bug > Components: build, httpfs, kms >Reporter: Akira Ajisaka >Priority: Major > > Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18099) Ungrade bundle Tomcat in branch-2 to the latest
Akira Ajisaka created HADOOP-18099: -- Summary: Ungrade bundle Tomcat in branch-2 to the latest Key: HADOOP-18099 URL: https://issues.apache.org/jira/browse/HADOOP-18099 Project: Hadoop Common Issue Type: Bug Components: build, httpfs, kms Reporter: Akira Ajisaka Let's upgrade to the latest 8.5.x version. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17221) update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571)
[ https://issues.apache.org/jira/browse/HADOOP-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17221: --- Resolution: Duplicate Status: Resolved (was: Patch Available) > update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571) > - > > Key: HADOOP-17221 > URL: https://issues.apache.org/jira/browse/HADOOP-17221 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-17221-001.patch, image-2020-08-25-07-39-09-201.png > > > Currently there are no active releases under 1.x of log4j, and log4j2 is > incompatible to upgrade to (see HADOOP-16206 for more details). > But the following CVE is reported on log4j 1.2.17. I think we should consider > updating to the > Atlassian([https://mvnrepository.com/artifact/log4j/log4j/1.2.17-atlassian-0.4]) > or Red Hat versions. > [https://nvd.nist.gov/vuln/detail/CVE-2019-17571] -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17221) update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571)
[ https://issues.apache.org/jira/browse/HADOOP-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17483658#comment-17483658 ] Akira Ajisaka commented on HADOOP-17221: Thank you [~keegan]. I'm going to close this issue as a duplicate. > update log4j-1.2.17 to atlassian version( To Address: CVE-2019-17571) > - > > Key: HADOOP-17221 > URL: https://issues.apache.org/jira/browse/HADOOP-17221 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Major > Attachments: HADOOP-17221-001.patch, image-2020-08-25-07-39-09-201.png > > > Currently there are no active releases under 1.x of log4j, and log4j2 is > incompatible to upgrade to (see HADOOP-16206 for more details). > But the following CVE is reported on log4j 1.2.17. I think we should consider > updating to the > Atlassian([https://mvnrepository.com/artifact/log4j/log4j/1.2.17-atlassian-0.4]) > or Red Hat versions. > [https://nvd.nist.gov/vuln/detail/CVE-2019-17571] -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16555) Update commons-compress to 1.19
[ https://issues.apache.org/jira/browse/HADOOP-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16555: --- Fix Version/s: 3.2.2 (was: 3.2.1) > Update commons-compress to 1.19 > --- > > Key: HADOOP-16555 > URL: https://issues.apache.org/jira/browse/HADOOP-16555 > Project: Hadoop Common > Issue Type: Task >Reporter: Wei-Chiu Chuang >Assignee: Yi-Sheng Lien >Priority: Major > Labels: release-blocker > Fix For: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2 > > Attachments: HADOOP-16555.branch-3.2.patch > > > We depend on commons-compress 1.18. The 1.19 release just went out. I think > we should update it. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17593: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17593.001.patch > > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in log4j. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.x on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
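One way to drop the transitive log4j 2.x artifacts shown in the dependency tree above is a Maven exclusion on esdk-obs-java. The coordinates are taken from the tree quoted in the issue; the exclusion itself is a sketch of the approach, not necessarily the committed patch.

```xml
<dependency>
  <groupId>com.huaweicloud</groupId>
  <artifactId>esdk-obs-java</artifactId>
  <version>3.20.4.2</version>
  <exclusions>
    <!-- Keep log4j 2.x off the hadoop-cloud-storage classpath -->
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

This works only if esdk-obs-java can genuinely run without log4j on the classpath, as the issue notes.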
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17481041#comment-17481041 ] Akira Ajisaka commented on HADOOP-17593: Thank you [~Rigenyi] for your contribution. > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17593.001.patch > > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in logj4. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.,2 on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17481030#comment-17481030 ] Akira Ajisaka commented on HADOOP-17593: I'll commit this after fixing tabs. > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > Attachments: HADOOP-17593.001.patch > > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in logj4. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.,2 on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480828#comment-17480828 ] Akira Ajisaka commented on HADOOP-17593: +1 pending Jenkins.
[jira] [Updated] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17593: --- Target Version/s: 3.4.0 Affects Version/s: (was: 3.3.1) Status: Patch Available (was: Open)
[jira] [Resolved] (HADOOP-18092) Exclude log4j2 dependency from hadoop-huaweicloud module
[ https://issues.apache.org/jira/browse/HADOOP-18092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18092. Resolution: Duplicate Duplicate of HADOOP-17593. Closing. > Exclude log4j2 dependency from hadoop-huaweicloud module > > > Key: HADOOP-18092 > URL: https://issues.apache.org/jira/browse/HADOOP-18092 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Priority: Critical > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > [https://github.com/apache/hadoop/pull/3906#issuecomment-1018401121] > The following log4j2 dependencies must be excluded. > {code:java} > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile {code} -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18092) Exclude log4j2 dependency from hadoop-huaweicloud module
Akira Ajisaka created HADOOP-18092: -- Summary: Exclude log4j2 dependency from hadoop-huaweicloud module Key: HADOOP-18092 URL: https://issues.apache.org/jira/browse/HADOOP-18092 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Akira Ajisaka [https://github.com/apache/hadoop/pull/3906#issuecomment-1018401121] The log4j2 dependencies shown in the dependency tree above must be excluded.
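For context, the kind of change this issue calls for is a Maven-level exclusion on the dependency that pulls log4j2 in. A minimal sketch of such an exclusion, using the coordinates from the dependency tree above; the surrounding POM layout is illustrative, not the actual hadoop-huaweicloud pom.xml:

```xml
<!-- Sketch: exclude the log4j2 jars that esdk-obs-java pulls in transitively.
     Coordinates are taken from the dependency tree in this issue; the
     enclosing <dependency> placement is an assumption for illustration. -->
<dependency>
  <groupId>com.huaweicloud</groupId>
  <artifactId>esdk-obs-java</artifactId>
  <version>3.20.4.2</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Once an exclusion like this is in place, rerunning `mvn dependency:tree -Dincludes=org.apache.logging.log4j` on the affected modules should report no matching artifacts.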
[jira] [Resolved] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18086. Resolution: Not A Problem > Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU > GPLv2 license) > -- > > Key: HADOOP-18086 > URL: https://issues.apache.org/jira/browse/HADOOP-18086 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: László Bodor >Priority: Major > > Please refer to TEZ-4378 for further details: > {code} > jar tf > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/app/WEB-INF/lib/hadoop-shaded-guava-1.1.1.jar > | grep "dataflow" > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/ > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/ > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Deterministic.class > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Pure$Kind.class > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Pure.class > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/SideEffectFree.class > org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/TerminatesExecution.class > {code} > I can see that checker-qual LICENSE.txt was removed in the scope of > HADOOP-17648, but it has nothing to do with the license itself, only for > [resolving a shading > error|https://github.com/apache/hadoop-thirdparty/pull/9#issuecomment-822398949] > my understanding is that in the current way an Apache licensed package (guava > shaded jar) will contain a GPLv2 licensed software, which makes it a subject > of GPLv2, also triggers license violations in security tools (like BlackDuck) -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479260#comment-17479260 ] Akira Ajisaka commented on HADOOP-18086: checker-qual became MIT-licensed as of the 3.0.0 release, according to this commit: [https://github.com/typetools/checker-framework/commit/e0538bfe10d2105fcd881a18694edf638f038cab] hadoop-thirdparty 1.1.1 contains checker-qual 3.8.0, so it is not a problem. {code:java} [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ hadoop-shaded-guava --- [INFO] org.apache.hadoop.thirdparty:hadoop-shaded-guava:jar:1.1.1 [INFO] \- com.google.guava:guava:jar:30.1.1-jre:compile [INFO] +- com.google.guava:failureaccess:jar:1.0.1:compile [INFO] +- com.google.guava:listenablefuture:jar:.0-empty-to-avoid-conflict-with-guava:compile [INFO] +- com.google.code.findbugs:jsr305:jar:3.0.2:compile [INFO] +- org.checkerframework:checker-qual:jar:3.8.0:compile [INFO] +- com.google.errorprone:error_prone_annotations:jar:2.5.1:compile [INFO] \- com.google.j2objc:j2objc-annotations:jar:1.3:compile {code}
[jira] [Commented] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479250#comment-17479250 ] Akira Ajisaka commented on HADOOP-18086: Now I don't think it's a blocker. The above classes are under checker-qual module and it is MIT-licensed. https://github.com/typetools/checker-framework/tree/master/checker-qual/src/main/java/org/checkerframework/dataflow
[jira] [Comment Edited] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479250#comment-17479250 ] Akira Ajisaka edited comment on HADOOP-18086 at 1/20/22, 10:42 AM: --- Now I don't think it's a blocker. The above classes are under checker-qual module and the module is MIT-licensed. [https://github.com/typetools/checker-framework/tree/master/checker-qual/src/main/java/org/checkerframework/dataflow] [https://github.com/typetools/checker-framework/blob/master/checker-qual/LICENSE.txt] was (Author: ajisakaa): Now I don't think it's a blocker. The above classes are under checker-qual module and it is MIT-licensed. https://github.com/typetools/checker-framework/tree/master/checker-qual/src/main/java/org/checkerframework/dataflow
[jira] [Updated] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18086: --- Labels: (was: release-blocker)
[jira] [Updated] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18086: --- Priority: Major (was: Blocker)
[jira] [Updated] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18086: --- Labels: release-blocker (was: )
[jira] [Updated] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18086: --- Issue Type: Bug (was: Wish)
[jira] [Updated] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18086: --- Priority: Blocker (was: Major)
[jira] [Commented] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)
[ https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479245#comment-17479245 ] Akira Ajisaka commented on HADOOP-18086: I think it is a blocker.
[jira] [Updated] (HADOOP-16775) DistCp reuses the same temp file within the task attempt for different files.
[ https://issues.apache.org/jira/browse/HADOOP-16775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16775: --- Fix Version/s: 3.3.0 > DistCp reuses the same temp file within the task attempt for different files. > - > > Key: HADOOP-16775 > URL: https://issues.apache.org/jira/browse/HADOOP-16775 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 3.0.0 >Reporter: Amir Shenavandeh >Assignee: Amir Shenavandeh >Priority: Major > Labels: DistCp, S3, hadoop-tools > Fix For: 3.3.0, 3.2.2 > > Attachments: HADOOP-16775-v1.patch, HADOOP-16775.patch > > > Hadoop DistCp reuses the same temp file name for all the files copied within > each task attempt and then moves them to the target name, which is also a > server side copy. For copies to S3, this will cause inconsistency as S3 is > only consistent for reads after writes, for brand new objects. There is also > inconsistency for contents of overwritten objects on S3. > To avoid this, we should randomize the temp file name and for each temp file > use a different name. > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer
[ https://issues.apache.org/jira/browse/HADOOP-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17112: --- Fix Version/s: 3.4.0 > whitespace not allowed in paths when saving files to s3a via committer > -- > > Key: HADOOP-17112 > URL: https://issues.apache.org/jira/browse/HADOOP-17112 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Krzysztof Adamski >Assignee: Krzysztof Adamski >Priority: Blocker > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: image-2020-07-03-16-08-52-340.png > > Time Spent: 1h 40m > Remaining Estimate: 0h > > When saving results through spark dataframe on latest 3.0.1-snapshot compiled > against hadoop-3.2 with the following specs > --conf > spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory > > --conf > spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter > > --conf > spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol > > --conf spark.hadoop.fs.s3a.committer.name=partitioned > --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace > we are unable to save the file with whitespace character in the path. It > works fine without. > I was looking into the recent commits with regards to qualifying the path, > but couldn't find anything obvious. Is this a known bug? > !image-2020-07-03-16-08-52-340.png! -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9
[ https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reopened HADOOP-16410: Reopening this to close it as a duplicate. > Hadoop 3.2 azure jars incompatible with alpine 3.9 > -- > > Key: HADOOP-16410 > URL: https://issues.apache.org/jira/browse/HADOOP-16410 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Jose Luis Pedrosa >Priority: Minor > Fix For: 3.2.2 > > > Openjdk8 is based on alpine 3.9, which means the shipped version of > libssl is 1.1.1b-r1: > > {noformat} > sh-4.4# apk list | grep ssl > libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] > {noformat} > The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by > [https://issues.jboss.org/browse/JBEAP-16425]. > This results in runtime errors (using Spark as an example) > {noformat} > 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version > OpenSSL 1.1.1b 26 Feb 2019 > 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking > for metadata directory. > Exception in thread "main" java.lang.NullPointerException > at > org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284) > {noformat} > In my tests, creating a Docker image with an updated version of > wildfly-openssl (1.0.7.Final) solves the issue. > > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9
[ https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-16410. Resolution: Duplicate > Hadoop 3.2 azure jars incompatible with alpine 3.9 > -- > > Key: HADOOP-16410 > URL: https://issues.apache.org/jira/browse/HADOOP-16410 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Jose Luis Pedrosa >Priority: Minor > Fix For: 3.2.2 > > > Openjdk8 is based on alpine 3.9, which means the shipped version of > libssl is 1.1.1b-r1: > > {noformat} > sh-4.4# apk list | grep ssl > libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] > {noformat} > The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by > [https://issues.jboss.org/browse/JBEAP-16425]. > This results in runtime errors (using Spark as an example) > {noformat} > 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version > OpenSSL 1.1.1b 26 Feb 2019 > 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking > for metadata directory. > Exception in thread "main" java.lang.NullPointerException > at > org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284) > {noformat} > In my tests, creating a Docker image with an updated version of > wildfly-openssl (1.0.7.Final) solves the issue. > > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18061) Update the year to 2022
[ https://issues.apache.org/jira/browse/HADOOP-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468347#comment-17468347 ] Akira Ajisaka commented on HADOOP-18061: Cherry-picked to branch-3.3.2 and branch-3.2.3. > Update the year to 2022 > --- > > Key: HADOOP-18061 > URL: https://issues.apache.org/jira/browse/HADOOP-18061 > Project: Hadoop Common > Issue Type: Task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2 > > Time Spent: 40m > Remaining Estimate: 0h > > Update the year to 2022 -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18063) Remove unused import AbstractJavaKeyStoreProvider in Shell class
[ https://issues.apache.org/jira/browse/HADOOP-18063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18063. Fix Version/s: 3.4.0 3.2.4 3.3.3 Resolution: Fixed Committed to trunk, branch-3.3, and branch-3.2. > Remove unused import AbstractJavaKeyStoreProvider in Shell class > > > Key: HADOOP-18063 > URL: https://issues.apache.org/jira/browse/HADOOP-18063 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.4.0 >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.2.4, 3.3.3 > > Attachments: image-2022-01-01-22-40-50-604.png > > Time Spent: 1h > Remaining Estimate: 0h > > In Shell, there are some invalid imports. > For example: > !image-2022-01-01-22-40-50-604.png! > Among them, AbstractJavaKeyStoreProvider does not seem to be referenced > anywhere. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-18063) Remove unused import AbstractJavaKeyStoreProvider in Shell class
[ https://issues.apache.org/jira/browse/HADOOP-18063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka moved HDFS-16405 to HADOOP-18063: --- Component/s: (was: shell) Key: HADOOP-18063 (was: HDFS-16405) Affects Version/s: 3.4.0 (was: 3.4.0) Issue Type: Bug (was: Improvement) Project: Hadoop Common (was: Hadoop HDFS) > Remove unused import AbstractJavaKeyStoreProvider in Shell class > > > Key: HADOOP-18063 > URL: https://issues.apache.org/jira/browse/HADOOP-18063 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.4.0 >Reporter: JiangHua Zhu >Assignee: JiangHua Zhu >Priority: Minor > Labels: pull-request-available > Attachments: image-2022-01-01-22-40-50-604.png > > Time Spent: 40m > Remaining Estimate: 0h > > In Shell, there are some invalid imports. > For example: > !image-2022-01-01-22-40-50-604.png! > Among them, AbstractJavaKeyStoreProvider does not seem to be referenced > anywhere. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18062) Update the year to 2022
[ https://issues.apache.org/jira/browse/HADOOP-18062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18062. Assignee: (was: Akira Ajisaka) Resolution: Duplicate > Update the year to 2022 > --- > > Key: HADOOP-18062 > URL: https://issues.apache.org/jira/browse/HADOOP-18062 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Akira Ajisaka >Priority: Blocker > Labels: newbie, pull-request-available, release-blocker > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18062) Update the year to 2022
[ https://issues.apache.org/jira/browse/HADOOP-18062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-18062: -- Assignee: Akira Ajisaka > Update the year to 2022 > --- > > Key: HADOOP-18062 > URL: https://issues.apache.org/jira/browse/HADOOP-18062 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Blocker > Labels: newbie, release-blocker > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18062) Update the year to 2022
[ https://issues.apache.org/jira/browse/HADOOP-18062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18062: --- Labels: newbie release-blocker (was: newbie) > Update the year to 2022 > --- > > Key: HADOOP-18062 > URL: https://issues.apache.org/jira/browse/HADOOP-18062 > Project: Hadoop Common > Issue Type: Task > Components: build >Reporter: Akira Ajisaka >Priority: Blocker > Labels: newbie, release-blocker > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18062) Update the year to 2022
Akira Ajisaka created HADOOP-18062: -- Summary: Update the year to 2022 Key: HADOOP-18062 URL: https://issues.apache.org/jira/browse/HADOOP-18062 Project: Hadoop Common Issue Type: Task Components: build Reporter: Akira Ajisaka -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17445) Update the year to 2021
[ https://issues.apache.org/jira/browse/HADOOP-17445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17445: --- Fix Version/s: (was: 3.2.3) > Update the year to 2021 > --- > > Key: HADOOP-17445 > URL: https://issues.apache.org/jira/browse/HADOOP-17445 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.2.2, 3.3.1, 3.4.0, 3.2.3 >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.4.0, 2.10.2 > > Attachments: HADOOP-17445.patch > > > Update the year to 2021. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17570) Apply YETUS-1102 to re-enable GitHub comments
[ https://issues.apache.org/jira/browse/HADOOP-17570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17570: --- Fix Version/s: 2.10.2 Backported to branch-2.10. > Apply YETUS-1102 to re-enable GitHub comments > - > > Key: HADOOP-17570 > URL: https://issues.apache.org/jira/browse/HADOOP-17570 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > > Time Spent: 2h 50m > Remaining Estimate: 0h > > Yetus 0.13.0 enabled updating GitHub status instead of commenting the report, > however, the report comments are still useful for some cases. Let's apply > YETUS-1102 to re-enable the comments. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17582) Replace GitHub App Token with GitHub OAuth token
[ https://issues.apache.org/jira/browse/HADOOP-17582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17582: --- Fix Version/s: 2.10.2 Backported to branch-2.10. > Replace GitHub App Token with GitHub OAuth token > > > Key: HADOOP-17582 > URL: https://issues.apache.org/jira/browse/HADOOP-17582 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > GitHub App Token expires within 1 hour, so Yetus fails to write GitHub > comments in most cases. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16748) Migrate to Python 3 and upgrade Yetus to 0.13.0
[ https://issues.apache.org/jira/browse/HADOOP-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16748: --- Description: (was: Backported to branch-2.10.) Backported to branch-2.10 via https://github.com/apache/hadoop/pull/3832 > Migrate to Python 3 and upgrade Yetus to 0.13.0 > --- > > Key: HADOOP-16748 > URL: https://issues.apache.org/jira/browse/HADOOP-16748 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > > Time Spent: 9.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16748) Migrate to Python 3 and upgrade Yetus to 0.13.0
[ https://issues.apache.org/jira/browse/HADOOP-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16748: --- Fix Version/s: 2.10.2 Description: Backported to branch-2.10. > Migrate to Python 3 and upgrade Yetus to 0.13.0 > --- > > Key: HADOOP-16748 > URL: https://issues.apache.org/jira/browse/HADOOP-16748 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > > Time Spent: 9.5h > Remaining Estimate: 0h > > Backported to branch-2.10. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16054) Update Dockerfile to use Bionic
[ https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16054: --- Fix Version/s: 2.10.2 Backported to branch-2.10. > Update Dockerfile to use Bionic > --- > > Key: HADOOP-16054 > URL: https://issues.apache.org/jira/browse/HADOOP-16054 > Project: Hadoop Common > Issue Type: Improvement > Components: build, test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > > Time Spent: 2.5h > Remaining Estimate: 0h > > Ubuntu xenial goes EoL in April 2021. Let's upgrade until the date. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18045) Disable TestDynamometerInfra
[ https://issues.apache.org/jira/browse/HADOOP-18045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18045: --- Fix Version/s: 3.4.0 3.3.3 Target Version/s: 3.4.0, 3.3.3 (was: 3.4.0, 3.2.4, 3.3.3) Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and branch-3.3. > Disable TestDynamometerInfra > > > Key: HADOOP-18045 > URL: https://issues.apache.org/jira/browse/HADOOP-18045 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > This test is broken and there is no fix provided for a long time. Let's > disable the test to reduce the noise in the daily qbt job. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18045) Disable TestDynamometerInfra
[ https://issues.apache.org/jira/browse/HADOOP-18045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18045: --- Status: Patch Available (was: Open) > Disable TestDynamometerInfra > > > Key: HADOOP-18045 > URL: https://issues.apache.org/jira/browse/HADOOP-18045 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > This test is broken and there is no fix provided for a long time. Let's > disable the test to reduce the noise in the daily qbt job. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18045) Disable TestDynamometerInfra
[ https://issues.apache.org/jira/browse/HADOOP-18045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka reassigned HADOOP-18045: -- Assignee: Akira Ajisaka > Disable TestDynamometerInfra > > > Key: HADOOP-18045 > URL: https://issues.apache.org/jira/browse/HADOOP-18045 > Project: Hadoop Common > Issue Type: Bug > Components: test >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > > This test is broken and there is no fix provided for a long time. Let's > disable the test to reduce the noise in the daily qbt job. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18054) Unable to load AWS credentials from any provider in the chain
[ https://issues.apache.org/jira/browse/HADOOP-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18054. Resolution: Invalid > Unable to load AWS credentials from any provider in the chain > - > > Key: HADOOP-18054 > URL: https://issues.apache.org/jira/browse/HADOOP-18054 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs, fs/s3, security >Affects Versions: 3.3.1 > Environment: From top to down. > Kubernetes version 1.18.20 > Spark Version: 2.4.4 > Kubernetes Setup: Pod with serviceAccountName that binds with IAM Role using > IRSA (EKS Feature). > {code:java} > apiVersion: v1 > automountServiceAccountToken: true > kind: ServiceAccount > metadata: > annotations: > eks.amazonaws.com/role-arn: > arn:aws:iam:::role/EKSDefaultPolicyFor-Spark > name: spark > namespace: spark {code} > AWS Setup: > IAM Role with permissions over the S3 Bucket > Bucket with permissions granted over the IAM Role. > Code: > {code:java} > def run_etl(): > sc = > SparkSession.builder.appName("TXD-PYSPARK-ORACLE-SIEBEL-CASOS").getOrCreate() > sqlContext = SQLContext(sc) > args = sys.argv > load_date = args[1] # Ej: "2019-05-21" > output_path = args[2] # Ej: s3://mybucket/myfolder > print(args, "load_date", load_date, "output_path", output_path) > sc._jsc.hadoopConfiguration().set( > "fs.s3a.aws.credentials.provider", > "com.amazonaws.auth.DefaultAWSCredentialsProviderChain" > ) > sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", > "true") > sc._jsc.hadoopConfiguration().set("fs.s3a.impl", > "org.apache.hadoop.fs.s3a.S3AFileSystem") > # sc._jsc.hadoopConfiguration().set("fs.s3.impl", > "org.apache.hadoop.fs.s3native.NativeS3FileSystem") > sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", > "org.apache.hadoop.fs.s3a.S3A") > session = boto3.session.Session() > client = session.client(service_name='secretsmanager', > region_name="us-east-1") > get_secret_value_response = 
client.get_secret_value( > SecretId="Siebel_Connection_Info" > ) > secret = get_secret_value_response["SecretString"] > secret = json.loads(secret) > db_username = secret.get("db_username") > db_password = secret.get("db_password") > db_host = secret.get("db_host") > db_port = secret.get("db_port") > db_name = secret.get("db_name") > db_url = "jdbc:oracle:thin:@{}:{}/{}".format(db_host, db_port, db_name) > jdbc_driver_name = "oracle.jdbc.OracleDriver" > dbtable = """(SELECT * FROM SIEBEL.REPORTE_DE_CASOS WHERE JOB_ID IN > (SELECT JOB_ID FROM SIEBEL.SERVICE_CONSUMED_STATUS WHERE > PUBLISH_INFORMATION_DT BETWEEN TO_DATE('{} 00:00:00', '-MM-DD > HH24:MI:SS') AND TO_DATE('{} 23:59:59', '-MM-DD > HH24:MI:SS')))""".format(load_date, load_date) > df = sqlContext.read\ > .format("jdbc")\ > .option("charset", "utf8")\ > .option("driver", jdbc_driver_name)\ > .option("url",db_url)\ > .option("dbtable", dbtable)\ > .option("user", db_username)\ > .option("password", db_password)\ > .option("oracle.jdbc.timezoneAsRegion", "false")\ > .load() > # Particionado > a_load_date = load_date.split('-') > df = df.withColumn("year", lit(a_load_date[0])) > df = df.withColumn("month", lit(a_load_date[1])) > df = df.withColumn("day", lit(a_load_date[2])) > df.write.mode("append").partitionBy(["year", "month", > "day"]).csv(output_path, header=True) > # Es importante cerrar la conexion para evitar problemas como el > reportado en > # > https://stackoverflow.com/questions/40830638/cannot-load-main-class-from-jar-file > sc.stop() > if __name__ == '__main__': > run_etl() {code} > Log's > {code:java} > + '[' -z s3://mybucket.spark.jobs/siebel-casos-actividades ']' > + aws s3 cp s3://mybucket.spark.jobs/siebel-casos-actividades /opt/ > --recursive --include '*' > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-casos.py > to ../../txd-pyspark-siebel-casos.py > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-actividades.py > to 
../../txd-pyspark-siebel-actividades.py > download: s3://mybucket.jobs/siebel-casos-actividades/hadoop-aws-3.3.1.jar to > ../../hadoop-aws-3.3.1.jar > download: s3://mybucket.spark.jobs/siebel-casos-actividades/ojdbc8.jar to > ../../ojdbc8.jar > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/aws-java-sdk-bundle-1.11.901.jar > to ../../aws-java-sdk-bundle-1.11.901.jar > ++ id -u > + myuid=0 > ++ id -g > + mygid=0 > + set +e > ++ getent passwd 0 > + uidentry=root:x:0:0:root:/root:/bin/ash > + set -e > + '[' -z root:x:0:0:root:/root:/bin/ash
[jira] [Commented] (HADOOP-18054) Unable to load AWS credentials from any provider in the chain
[ https://issues.apache.org/jira/browse/HADOOP-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17464894#comment-17464894 ] Akira Ajisaka commented on HADOOP-18054: JIRA is not for end-user questions. Please use u...@hadoop.apache.org or consult AWS customer support. https://hadoop.apache.org/mailing_lists.html > Unable to load AWS credentials from any provider in the chain > - > > Key: HADOOP-18054 > URL: https://issues.apache.org/jira/browse/HADOOP-18054 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs, fs/s3, security >Affects Versions: 3.3.1 > Environment: From top to bottom. > Kubernetes version 1.18.20 > Spark Version: 2.4.4 > Kubernetes Setup: Pod with serviceAccountName that binds with IAM Role using > IRSA (EKS Feature). > {code:java} > apiVersion: v1 > automountServiceAccountToken: true > kind: ServiceAccount > metadata: > annotations: > eks.amazonaws.com/role-arn: > arn:aws:iam:::role/EKSDefaultPolicyFor-Spark > name: spark > namespace: spark {code} > AWS Setup: > IAM Role with permissions over the S3 Bucket > Bucket with permissions granted over the IAM Role. 
> Code: > {code:java} > def run_etl(): > sc = > SparkSession.builder.appName("TXD-PYSPARK-ORACLE-SIEBEL-CASOS").getOrCreate() > sqlContext = SQLContext(sc) > args = sys.argv > load_date = args[1] # Ej: "2019-05-21" > output_path = args[2] # Ej: s3://mybucket/myfolder > print(args, "load_date", load_date, "output_path", output_path) > sc._jsc.hadoopConfiguration().set( > "fs.s3a.aws.credentials.provider", > "com.amazonaws.auth.DefaultAWSCredentialsProviderChain" > ) > sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", > "true") > sc._jsc.hadoopConfiguration().set("fs.s3a.impl", > "org.apache.hadoop.fs.s3a.S3AFileSystem") > # sc._jsc.hadoopConfiguration().set("fs.s3.impl", > "org.apache.hadoop.fs.s3native.NativeS3FileSystem") > sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", > "org.apache.hadoop.fs.s3a.S3A") > session = boto3.session.Session() > client = session.client(service_name='secretsmanager', > region_name="us-east-1") > get_secret_value_response = client.get_secret_value( > SecretId="Siebel_Connection_Info" > ) > secret = get_secret_value_response["SecretString"] > secret = json.loads(secret) > db_username = secret.get("db_username") > db_password = secret.get("db_password") > db_host = secret.get("db_host") > db_port = secret.get("db_port") > db_name = secret.get("db_name") > db_url = "jdbc:oracle:thin:@{}:{}/{}".format(db_host, db_port, db_name) > jdbc_driver_name = "oracle.jdbc.OracleDriver" > dbtable = """(SELECT * FROM SIEBEL.REPORTE_DE_CASOS WHERE JOB_ID IN > (SELECT JOB_ID FROM SIEBEL.SERVICE_CONSUMED_STATUS WHERE > PUBLISH_INFORMATION_DT BETWEEN TO_DATE('{} 00:00:00', '-MM-DD > HH24:MI:SS') AND TO_DATE('{} 23:59:59', '-MM-DD > HH24:MI:SS')))""".format(load_date, load_date) > df = sqlContext.read\ > .format("jdbc")\ > .option("charset", "utf8")\ > .option("driver", jdbc_driver_name)\ > .option("url",db_url)\ > .option("dbtable", dbtable)\ > .option("user", db_username)\ > .option("password", db_password)\ > 
.option("oracle.jdbc.timezoneAsRegion", "false")\ > .load() > # Particionado > a_load_date = load_date.split('-') > df = df.withColumn("year", lit(a_load_date[0])) > df = df.withColumn("month", lit(a_load_date[1])) > df = df.withColumn("day", lit(a_load_date[2])) > df.write.mode("append").partitionBy(["year", "month", > "day"]).csv(output_path, header=True) > # Es importante cerrar la conexion para evitar problemas como el > reportado en > # > https://stackoverflow.com/questions/40830638/cannot-load-main-class-from-jar-file > sc.stop() > if __name__ == '__main__': > run_etl() {code} > Log's > {code:java} > + '[' -z s3://mybucket.spark.jobs/siebel-casos-actividades ']' > + aws s3 cp s3://mybucket.spark.jobs/siebel-casos-actividades /opt/ > --recursive --include '*' > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-casos.py > to ../../txd-pyspark-siebel-casos.py > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-actividades.py > to ../../txd-pyspark-siebel-actividades.py > download: s3://mybucket.jobs/siebel-casos-actividades/hadoop-aws-3.3.1.jar to > ../../hadoop-aws-3.3.1.jar > download: s3://mybucket.spark.jobs/siebel-casos-actividades/ojdbc8.jar to > ../../ojdbc8.jar > download: > s3://mybucket.spark.jobs/siebel-casos-actividades/aws-java-sdk-bundle-1.11.901.jar > to ../../aws-java-sdk-bund
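For reference, a common cause of this error on EKS with IRSA is that the configured credentials provider does not pick up the web-identity token. A minimal sketch, assuming the bundled AWS SDK v1 is recent enough to ship `com.amazonaws.auth.WebIdentityTokenCredentialsProvider`, of pointing S3A at that provider explicitly (this is a suggested workaround, not the resolution recorded on the issue):

```python
# Hedged sketch: Hadoop configuration keys one would set so S3A reads the
# IRSA web-identity token directly. Verify the provider class exists in
# the aws-java-sdk-bundle version actually on the classpath.
irsa_s3a_conf = {
    "fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    "fs.s3a.aws.credentials.provider":
        "com.amazonaws.auth.WebIdentityTokenCredentialsProvider",
}

# In a live SparkSession these would be applied via
# spark.sparkContext._jsc.hadoopConfiguration().set(k, v); here we just
# print the equivalent spark-submit flags.
for key, value in sorted(irsa_s3a_conf.items()):
    print(f"--conf spark.hadoop.{key}={value}")
```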
[jira] [Resolved] (HADOOP-18052) Support Apple Silicon in start-build-env.sh
[ https://issues.apache.org/jira/browse/HADOOP-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18052. Fix Version/s: 3.4.0 3.3.3 Resolution: Fixed Committed to trunk and branch-3.3. > Support Apple Silicon in start-build-env.sh > --- > > Key: HADOOP-18052 > URL: https://issues.apache.org/jira/browse/HADOOP-18052 > Project: Hadoop Common > Issue Type: Improvement > Components: build > Environment: M1 Pro. MacOS 12.0.1. Docker for Mac. >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.3 > > Time Spent: 40m > Remaining Estimate: 0h > > start-build-env.sh uses Dockerfile for x86 in M1 Mac, and the Dockerfile sets > wrong JAVA_HOME. Dockerfile_aarch64 should be used instead. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
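The fix described above amounts to selecting the Dockerfile from the host architecture; a minimal sketch of that selection logic (the function name is illustrative, not taken from the actual script):

```python
# Sketch of the architecture check start-build-env.sh needs: on Apple
# Silicon `uname -m` reports arm64 (aarch64 inside Linux containers),
# so those values map to Dockerfile_aarch64.
import platform

def dockerfile_for(arch: str) -> str:
    return "Dockerfile_aarch64" if arch in ("arm64", "aarch64") else "Dockerfile"

print(dockerfile_for(platform.machine()))
```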
[jira] [Updated] (HADOOP-17096) ZStandardCompressor throws java.lang.InternalError: Error (generic)
[ https://issues.apache.org/jira/browse/HADOOP-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17096: --- Fix Version/s: (was: 3.2.3) > ZStandardCompressor throws java.lang.InternalError: Error (generic) > --- > > Key: HADOOP-17096 > URL: https://issues.apache.org/jira/browse/HADOOP-17096 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 3.2.1 > Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 > linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end > environment (eg running an actual hadoop job with zstd compression) because > it's very sensitive to the exact input and output characteristics. I > reproduced the bug by turning one of the existing unit tests into a crude > fuzzer, but I'm not sure upstream will accept that patch, so I've attached it > separately on this ticket. > Note that the existing unit test for testCompressingWithOneByteOutputBuffer > fails to reproduce this bug. This is because it's using the license file as > input, and this file is too small. libzstd has internal buffering (in our > environment it seems to be 128 kilobytes), and the license file is only 10 > kilobytes. Thus libzstd is able to consume all the input and compress it in a > single call, then return pieces of its internal buffer one byte at a time. > Since all the input is consumed in a single call, uncompressedDirectBufOff > and uncompressedDirectBufLen are both set to zero and thus the bug does not > reproduce. >Reporter: Stephen Jung (Stripe) >Assignee: Stephen Jung (Stripe) >Priority: Major > Labels: pull-request-available > Fix For: 3.2.2, 3.3.1, 3.4.0 > > Attachments: fuzztest.patch > > Time Spent: 10m > Remaining Estimate: 0h > > A bug in index handling causes ZStandardCompressor.c to pass a malformed > ZSTD_inBuffer to libzstd. libzstd then returns an "Error (generic)" that gets > thrown. 
The crux of the issue is two variables, uncompressedDirectBufLen and > uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff > from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is > counted from uncompressedDirectBufOff. However, libzstd considers pos and > size to both be counted from the start of the buffer. As a result, this line > https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228 > causes a malformed buffer to be passed to libzstd, where pos>size. Here's a > longer description of the bug in case this abstract explanation is unclear: > > Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with > five bytes of input. This results in uncompressedDirectBufOff=0 and > uncompressedDirectBufLen=5 > (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146). > Then we call compress(), which initializes a ZSTD_inBuffer > (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196). > The definition of those libzstd structs is here > https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that > we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The > ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd > returns from the compression function, it updates the ZSTD_inBuffer struct to > indicate how many bytes were consumed > (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920). > Note that pos is advanced, but size is unchanged. > Now, libzstd does not guarantee that the entire input will be compressed in a > single call of the compression function. 
(Some of the compression libraries > used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is > not one of them.) So the hadoop native code updates uncompressedDirectBufOff > and uncompressedDirectBufLen using the updated ZSTD_inBuffer: > https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L227-L228 > Now, returning to our example, we started with 5 bytes of uncompressed input. > Suppose libzstd compressed 4 of those bytes, leaving one unread. This would > result in a ZSTD_inBuffer struct with size=5 (unchanged) and pos=4 (four > bytes were consumed). The hadoop native code would then set > uncompressedDirectBufOff=4
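The bookkeeping described above can be condensed into a few lines. This is a Python model of the index arithmetic for illustration only — the real code is JNI C in ZStandardCompressor.c:

```python
def second_call_inbuffer(total_input, consumed):
    """Model the buggy off/len bookkeeping in ZStandardCompressor.c.

    libzstd's ZSTD_inBuffer counts both `pos` and `size` from the start
    of the buffer, while Hadoop counts uncompressedDirectBufLen from
    uncompressedDirectBufOff. Returns the (pos, size) pair the buggy
    code would pass to libzstd on the *second* compress call.
    """
    off, length = 0, total_input   # setInputFromSavedData
    pos, size = off, length        # first call: ZSTD_inBuffer{size=len, pos=off}
    pos += consumed                # libzstd advances pos; size is unchanged
    off = pos                      # buggy update: off now from buffer start,
    length = size - pos            # but len still interpreted as "from off"
    return off, length             # reused as (pos, size) on the next call

pos, size = second_call_inbuffer(5, 4)
assert pos > size  # pos=4, size=1: malformed ZSTD_inBuffer, "Error (generic)"
```

With 5 bytes of input and 4 consumed, the next call builds a struct with pos=4 and size=1, i.e. pos > size, which libzstd rejects.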
[jira] [Updated] (HADOOP-16908) Prune Jackson 1 from the codebase and restrict its usage for the future
[ https://issues.apache.org/jira/browse/HADOOP-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-16908: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) Merged the PR into trunk. Thank you [~vjasani] for your contribution! > Prune Jackson 1 from the codebase and restrict its usage for the future > > > Key: HADOOP-16908 > URL: https://issues.apache.org/jira/browse/HADOOP-16908 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Wei-Chiu Chuang >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 4h 10m > Remaining Estimate: 0h > > The Jackson 1 code has silently crept into the Hadoop codebase again. We > should prune it out. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18052) Support Apple Silicon in start-build-env.sh
[ https://issues.apache.org/jira/browse/HADOOP-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18052: --- Summary: Support Apple Silicon in start-build-env.sh (was: Support start-build-env.sh in M1 Mac) > Support Apple Silicon in start-build-env.sh > --- > > Key: HADOOP-18052 > URL: https://issues.apache.org/jira/browse/HADOOP-18052 > Project: Hadoop Common > Issue Type: Improvement > Components: build > Environment: M1 Pro. macOS 12.0.1. Docker for Mac. >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > > start-build-env.sh uses the x86 Dockerfile on M1 Macs, and that Dockerfile sets > the wrong JAVA_HOME. Dockerfile_aarch64 should be used instead. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18052) Support start-build-env.sh in M1 Mac
Akira Ajisaka created HADOOP-18052: -- Summary: Support start-build-env.sh in M1 Mac Key: HADOOP-18052 URL: https://issues.apache.org/jira/browse/HADOOP-18052 Project: Hadoop Common Issue Type: Improvement Components: build Environment: M1 Pro. macOS 12.0.1. Docker for Mac. Reporter: Akira Ajisaka Assignee: Akira Ajisaka start-build-env.sh uses the x86 Dockerfile on M1 Macs, and that Dockerfile sets the wrong JAVA_HOME. Dockerfile_aarch64 should be used instead. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
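The description above (the x86 Dockerfile being picked on ARM hosts) boils down to an architecture switch. A minimal shell sketch of that selection — the function name and mapping are illustrative, not the actual start-build-env.sh patch:

```shell
# Choose the Docker build file based on the host CPU architecture, so
# Apple Silicon (arm64/aarch64) hosts get Dockerfile_aarch64 instead of
# the x86 Dockerfile with its x86-only JAVA_HOME.
pick_dockerfile() {
  case "$1" in
    arm64|aarch64) echo "Dockerfile_aarch64" ;;
    *)             echo "Dockerfile" ;;
  esac
}

pick_dockerfile "$(uname -m)"
```

On macOS, `uname -m` reports `arm64` on Apple Silicon, so the switch covers both the Linux (`aarch64`) and macOS spellings.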
[jira] [Updated] (HADOOP-17534) Upgrade Jackson databind to 2.10.5.1
[ https://issues.apache.org/jira/browse/HADOOP-17534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17534: --- Fix Version/s: 3.2.3 (was: 3.2.4) Cherry-picked to branch-3.2.3. > Upgrade Jackson databind to 2.10.5.1 > > > Key: HADOOP-17534 > URL: https://issues.apache.org/jira/browse/HADOOP-17534 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.2.2 >Reporter: Adam Roberts >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.2.3 > > Time Spent: 50m > Remaining Estimate: 0h > > Hey everyone, we've done a container scan of Hadoop 3.2.2 we are using to > build a shaded version of a Flink uber jar with, and noticed several apparent > problems that are primarily related to > com.fasterxml.jackson.core_jackson-databind. > > Specifically the report claims version 2.4.0 of the library is used (I am not > sure about this part personally so I may be mistaken) and the fix suggestion > I see is to move up to either 2.10.5.1, 2.9.10.8, or 2.6.7.4 as appropriate. > > I believe 2.10.3 is actually what's currently in use based on > [https://github.com/apache/hadoop/blob/4cf35315838a6e65f87ed64aaa8f1d31594c7fcd/hadoop-project/pom.xml#L75] > > Hopefully not a far-reaching change as I know changing dependencies can > sometimes have a big knock-on effect, anyway - figured I'd report it in case > someone plans to work on it. > > Again do note that this is using a scan of an image built for Flink 1.11.3, > but using Hadoop so it has a bunch of the same classes in, and I do believe > that in Flink itself, the version of Jackson pulled in does not have the same > problems, thus my thinking it is related to the Hadoop dependencies. > Thanks! 
-- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
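For reference, a dependency pin of the kind this upgrade implies looks like the following in Maven. This is a sketch only; the property name is an assumption, not the actual hadoop-project/pom.xml content:

```xml
<!-- Sketch: pin jackson-databind in dependencyManagement so every
     module resolves the patched version. Property name illustrative. -->
<properties>
  <jackson.databind.version>2.10.5.1</jackson.databind.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>${jackson.databind.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```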
[jira] [Updated] (HADOOP-13500) Synchronizing iteration of Configuration properties object
[ https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13500: --- Fix Version/s: 3.2.3 (was: 3.2.4) Cherry-picked to branch-3.2.3. > Synchronizing iteration of Configuration properties object > -- > > Key: HADOOP-13500 > URL: https://issues.apache.org/jira/browse/HADOOP-13500 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Jason Darrell Lowe >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.3 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > It is possible to encounter a ConcurrentModificationException while trying to > iterate a Configuration object. The iterator method tries to walk the > underlying Property object without proper synchronization, so another thread > simultaneously calling the set method can trigger it. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
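The race, and the shape of the fix, can be sketched with a toy Python analogue — Python dicts fail fast on mutation during iteration much like java.util's ConcurrentModificationException. This is an illustration, not Hadoop's actual patch:

```python
import threading

# Unsynchronized iteration: a writer mutating the mapping mid-iteration
# raises, the analogue of Java's ConcurrentModificationException.
d = {1: "a", 2: "b"}
try:
    for k in d:
        d[99] = "z"          # a "set" call sneaking in during iteration
except RuntimeError:
    pass                     # "dictionary changed size during iteration"

class SyncedConfig:
    """Toy analogue of the fix: one lock guards both set() and
    iteration, and iteration walks a snapshot taken under the lock."""
    def __init__(self):
        self._props = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._props[key] = value

    def __iter__(self):
        with self._lock:
            # Copy under the lock: iterators see a consistent snapshot
            # even if another thread calls set() concurrently.
            return iter(list(self._props.items()))

cfg = SyncedConfig()
cfg.set("fs.defaultFS", "hdfs://nn:8020")
assert list(cfg) == [("fs.defaultFS", "hdfs://nn:8020")]
```

The snapshot trades a copy per iteration for safety; holding the lock for the whole walk would also work but blocks writers longer.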
[jira] [Commented] (HADOOP-13500) Synchronizing iteration of Configuration properties object
[ https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461579#comment-17461579 ] Akira Ajisaka commented on HADOOP-13500: I'll cherry-pick this. Thanks > Synchronizing iteration of Configuration properties object > -- > > Key: HADOOP-13500 > URL: https://issues.apache.org/jira/browse/HADOOP-13500 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Jason Darrell Lowe >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > It is possible to encounter a ConcurrentModificationException while trying to > iterate a Configuration object. The iterator method tries to walk the > underlying Property object without proper synchronization, so another thread > simultaneously calling the set method can trigger it. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15293) TestLogLevel fails on Java 9
[ https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461543#comment-17461543 ] Akira Ajisaka edited comment on HADOOP-15293 at 12/17/21, 4:28 PM: --- The test is failing in branch-2.10 even in Java 8. Probably the error message is updated in the latest Java 8 as well. I've backported this to branch-2.10 to fix the failure. was (Author: ajisakaa): The test is failing in branch-2.10 even in Java 8. Probably the error message is updated in Java 8 as well. I've backported this to branch-2.10 to fix the failure. > TestLogLevel fails on Java 9 > > > Key: HADOOP-15293 > URL: https://issues.apache.org/jira/browse/HADOOP-15293 > Project: Hadoop Common > Issue Type: Sub-task > Components: test > Environment: Applied HADOOP-12760 and HDFS-11610 >Reporter: Akira Ajisaka >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.1.0, 2.10.2 > > Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch > > > {noformat} > [INFO] Running org.apache.hadoop.log.TestLogLevel > [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 > s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel > [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel) > Time elapsed: 1.179 s <<< FAILURE! > java.lang.AssertionError: > Expected to find 'Unrecognized SSL message' but got unexpected exception: > javax.net.ssl.SSLException: Unsupported or unrecognized SSL message > at > java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416) > {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15293) TestLogLevel fails on Java 9
[ https://issues.apache.org/jira/browse/HADOOP-15293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15293: --- Fix Version/s: 2.10.2 The test is failing in branch-2.10 even in Java 8. Probably the error message is updated in Java 8 as well. I've backported this to branch-2.10 to fix the failure. > TestLogLevel fails on Java 9 > > > Key: HADOOP-15293 > URL: https://issues.apache.org/jira/browse/HADOOP-15293 > Project: Hadoop Common > Issue Type: Sub-task > Components: test > Environment: Applied HADOOP-12760 and HDFS-11610 >Reporter: Akira Ajisaka >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.1.0, 2.10.2 > > Attachments: HADOOP-15293.1.patch, HADOOP-15293.2.patch > > > {noformat} > [INFO] Running org.apache.hadoop.log.TestLogLevel > [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 9.805 > s <<< FAILURE! - in org.apache.hadoop.log.TestLogLevel > [ERROR] testLogLevelByHttpWithSpnego(org.apache.hadoop.log.TestLogLevel) > Time elapsed: 1.179 s <<< FAILURE! > java.lang.AssertionError: > Expected to find 'Unrecognized SSL message' but got unexpected exception: > javax.net.ssl.SSLException: Unsupported or unrecognized SSL message > at > java.base/sun.security.ssl.SSLSocketInputRecord.handleUnknownRecord(SSLSocketInputRecord.java:416) > {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
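The underlying pitfall is a test asserting on an exact JDK-owned error string, which breaks whenever the JDK rewords it ("Unrecognized SSL message" became "Unsupported or unrecognized SSL message"). A Python sketch of the brittle check versus a tolerant one; the message strings are illustrative, not the actual TestLogLevel patch:

```python
import re

old_jdk = "Unrecognized SSL message"                  # older JDK wording
new_jdk = "Unsupported or unrecognized SSL message"   # newer JDK wording

# Brittle: exact-substring check against a string the JDK owns.
assert "Unrecognized SSL message" in old_jdk
assert "Unrecognized SSL message" not in new_jdk      # why the test broke

# Tolerant: accept either capitalization/wording of the same failure.
pattern = re.compile(r"[Uu]nrecognized SSL message")
assert pattern.search(old_jdk) and pattern.search(new_jdk)
```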
[jira] [Commented] (HADOOP-13500) Synchronizing iteration of Configuration properties object
[ https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461535#comment-17461535 ] Akira Ajisaka commented on HADOOP-13500: Thank you [~dbadaya] for your contribution. > Synchronizing iteration of Configuration properties object > -- > > Key: HADOOP-13500 > URL: https://issues.apache.org/jira/browse/HADOOP-13500 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Jason Darrell Lowe >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > It is possible to encounter a ConcurrentModificationException while trying to > iterate a Configuration object. The iterator method tries to walk the > underlying Property object without proper synchronization, so another thread > simultaneously calling the set method can trigger it. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-13500) Synchronizing iteration of Configuration properties object
[ https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-13500. Fix Version/s: 2.10.2 Resolution: Fixed Merged PR 3776 into branch-2.10. > Synchronizing iteration of Configuration properties object > -- > > Key: HADOOP-13500 > URL: https://issues.apache.org/jira/browse/HADOOP-13500 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Jason Darrell Lowe >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > It is possible to encounter a ConcurrentModificationException while trying to > iterate a Configuration object. The iterator method tries to walk the > underlying Property object without proper synchronization, so another thread > simultaneously calling the set method can trigger it. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18049) Pin python lazy-object-proxy to 1.6.0 in Docker file as newer versions are incompatible with python2.7
[ https://issues.apache.org/jira/browse/HADOOP-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-18049: --- Summary: Pin python lazy-object-proxy to 1.6.0 in Docker file as newer versions are incompatible with python2.7 (was: Hadoop CI fails in precommit due to python2.7 incompatible version of lazy-object-proxy ) > Pin python lazy-object-proxy to 1.6.0 in Docker file as newer versions are > incompatible with python2.7 > -- > > Key: HADOOP-18049 > URL: https://issues.apache.org/jira/browse/HADOOP-18049 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.10.2 >Reporter: Dhananjay Badaya >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 2.10.2 > > Time Spent: 50m > Remaining Estimate: 0h > > Latest version of lazy-object-proxy (dependency of pylint) seems incompatible > with python2.7 as per [release > notes|https://pypi.org/project/lazy-object-proxy/1.7.1/] > [https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3776/2/pipeline] > > {code:java} > [2021-12-16T12:37:15.710Z] Collecting lazy-object-proxy (from > astroid<2.0,>=1.6->pylint==1.9.2) > [2021-12-16T12:37:15.710Z] Downloading > https://files.pythonhosted.org/packages/75/93/3fc1cc28f71dd10b87a53b9d809602d7730e84cc4705a062def286232a9c/lazy-object-proxy-1.7.1.tar.gz > (41kB) > [2021-12-16T12:37:16.225Z] Complete output from command python setup.py > egg_info: > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'project_urls' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'python_requires' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'use_scm_version' > 
[2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] running egg_info > [2021-12-16T12:37:16.225Z] creating > pip-egg-info/lazy_object_proxy.egg-info > [2021-12-16T12:37:16.225Z] writing > pip-egg-info/lazy_object_proxy.egg-info/PKG-INFO > [2021-12-16T12:37:16.225Z] writing top-level names to > pip-egg-info/lazy_object_proxy.egg-info/top_level.txt > [2021-12-16T12:37:16.225Z] writing dependency_links to > pip-egg-info/lazy_object_proxy.egg-info/dependency_links.txt > [2021-12-16T12:37:16.225Z] writing manifest file > 'pip-egg-info/lazy_object_proxy.egg-info/SOURCES.txt' > [2021-12-16T12:37:16.225Z] warning: manifest_maker: standard file '-c' > not found > [2021-12-16T12:37:16.225Z] > [2021-12-16T12:37:16.225Z] Traceback (most recent call last): > [2021-12-16T12:37:16.225Z] File "", line 1, in > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 146, in > [2021-12-16T12:37:16.225Z] distclass=BinaryDistribution, > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/core.py", > line 151, in setup > [2021-12-16T12:37:16.225Z] dist.run_commands() > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 953, in run_commands > [2021-12-16T12:37:16.225Z] self.run_command(cmd) > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 972, in run_command > [2021-12-16T12:37:16.225Z] cmd_obj.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 186, > in run > [2021-12-16T12:37:16.225Z] self.find_sources() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 209, > in find_sources > [2021-12-16T12:37:16.225Z] mm.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 293, > in run > [2021-12-16T12:37:16.225Z] self.add_defaults() > [2021-12-16T12:37:16.225Z] File > 
"/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 322, > in add_defaults > [2021-12-16T12:37:16.225Z] sdist.add_defaults(self) > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/sdist.py", line 131, in > add_defaults > [2021-12-16T12:37:16.225Z] if self.distribution.has_ext_modules(): > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 70, in > has_ext_modules > [2021-12-16T12:37:16.225Z] return
[jira] [Commented] (HADOOP-18049) Hadoop CI fails in precommit due to python2.7 incompatible version of lazy-object-proxy
[ https://issues.apache.org/jira/browse/HADOOP-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461295#comment-17461295 ] Akira Ajisaka commented on HADOOP-18049: Okay I'll try to backport the patches. Thank you [~weichiu] > Hadoop CI fails in precommit due to python2.7 incompatible version of > lazy-object-proxy > > > Key: HADOOP-18049 > URL: https://issues.apache.org/jira/browse/HADOOP-18049 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.10.2 >Reporter: Dhananjay Badaya >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 2.10.2 > > Time Spent: 50m > Remaining Estimate: 0h > > Latest version of lazy-object-proxy (dependency of pylint) seems incompatible > with python2.7 as per [release > notes|https://pypi.org/project/lazy-object-proxy/1.7.1/] > [https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3776/2/pipeline] > > {code:java} > [2021-12-16T12:37:15.710Z] Collecting lazy-object-proxy (from > astroid<2.0,>=1.6->pylint==1.9.2) > [2021-12-16T12:37:15.710Z] Downloading > https://files.pythonhosted.org/packages/75/93/3fc1cc28f71dd10b87a53b9d809602d7730e84cc4705a062def286232a9c/lazy-object-proxy-1.7.1.tar.gz > (41kB) > [2021-12-16T12:37:16.225Z] Complete output from command python setup.py > egg_info: > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'project_urls' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'python_requires' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'use_scm_version' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] running egg_info > [2021-12-16T12:37:16.225Z] creating > 
pip-egg-info/lazy_object_proxy.egg-info > [2021-12-16T12:37:16.225Z] writing > pip-egg-info/lazy_object_proxy.egg-info/PKG-INFO > [2021-12-16T12:37:16.225Z] writing top-level names to > pip-egg-info/lazy_object_proxy.egg-info/top_level.txt > [2021-12-16T12:37:16.225Z] writing dependency_links to > pip-egg-info/lazy_object_proxy.egg-info/dependency_links.txt > [2021-12-16T12:37:16.225Z] writing manifest file > 'pip-egg-info/lazy_object_proxy.egg-info/SOURCES.txt' > [2021-12-16T12:37:16.225Z] warning: manifest_maker: standard file '-c' > not found > [2021-12-16T12:37:16.225Z] > [2021-12-16T12:37:16.225Z] Traceback (most recent call last): > [2021-12-16T12:37:16.225Z] File "", line 1, in > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 146, in > [2021-12-16T12:37:16.225Z] distclass=BinaryDistribution, > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/core.py", > line 151, in setup > [2021-12-16T12:37:16.225Z] dist.run_commands() > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 953, in run_commands > [2021-12-16T12:37:16.225Z] self.run_command(cmd) > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 972, in run_command > [2021-12-16T12:37:16.225Z] cmd_obj.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 186, > in run > [2021-12-16T12:37:16.225Z] self.find_sources() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 209, > in find_sources > [2021-12-16T12:37:16.225Z] mm.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 293, > in run > [2021-12-16T12:37:16.225Z] self.add_defaults() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 322, > in add_defaults > [2021-12-16T12:37:16.225Z] sdist.add_defaults(self) > 
[2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/sdist.py", line 131, in > add_defaults > [2021-12-16T12:37:16.225Z] if self.distribution.has_ext_modules(): > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 70, in > has_ext_modules > [2021-12-16T12:37:16.225Z] return super().has_ext_modules() or not > os.environ.get('SETUPPY_ALLOW_PURE') > [2021-12-16T12:37:16.225Z] TypeError: super()
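The fix the retitled summary describes amounts to a version pin in the CI build image: install a python2.7-compatible lazy-object-proxy before pip resolves pylint's dependencies. A Dockerfile sketch of that idea — the actual file path and pip invocation are assumptions, since the patched lines are not quoted in this thread:

```dockerfile
# Sketch: pin lazy-object-proxy to 1.6.0 (the last python2.7-compatible
# release) before installing pylint, so pip does not transitively
# resolve 1.7.x, which dropped python2.7 support.
RUN pip2 install lazy-object-proxy==1.6.0 \
    && pip2 install pylint==1.9.2
```

Installing the pinned version first works because pip treats an already-installed distribution that satisfies the requirement as resolved and does not upgrade it.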
[jira] [Comment Edited] (HADOOP-18049) Hadoop CI fails in precommit due to python2.7 incompatible version of lazy-object-proxy
[ https://issues.apache.org/jira/browse/HADOOP-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17461294#comment-17461294 ] Akira Ajisaka edited comment on HADOOP-18049 at 12/17/21, 9:03 AM: --- Merged the PR into branch-2.10. Thank you [~dbadaya] for your quick fix! was (Author: ajisakaa): Merged the PR into branch-2.10. > Hadoop CI fails in precommit due to python2.7 incompatible version of > lazy-object-proxy > > > Key: HADOOP-18049 > URL: https://issues.apache.org/jira/browse/HADOOP-18049 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.10.2 >Reporter: Dhananjay Badaya >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 2.10.2 > > Time Spent: 50m > Remaining Estimate: 0h > > Latest version of lazy-object-proxy (dependency of pylint) seems incompatible > with python2.7 as per [release > notes|https://pypi.org/project/lazy-object-proxy/1.7.1/] > [https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3776/2/pipeline] > > {code:java} > [2021-12-16T12:37:15.710Z] Collecting lazy-object-proxy (from > astroid<2.0,>=1.6->pylint==1.9.2) > [2021-12-16T12:37:15.710Z] Downloading > https://files.pythonhosted.org/packages/75/93/3fc1cc28f71dd10b87a53b9d809602d7730e84cc4705a062def286232a9c/lazy-object-proxy-1.7.1.tar.gz > (41kB) > [2021-12-16T12:37:16.225Z] Complete output from command python setup.py > egg_info: > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'project_urls' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'python_requires' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'use_scm_version' > [2021-12-16T12:37:16.225Z] 
warnings.warn(msg) > [2021-12-16T12:37:16.225Z] running egg_info > [2021-12-16T12:37:16.225Z] creating > pip-egg-info/lazy_object_proxy.egg-info > [2021-12-16T12:37:16.225Z] writing > pip-egg-info/lazy_object_proxy.egg-info/PKG-INFO > [2021-12-16T12:37:16.225Z] writing top-level names to > pip-egg-info/lazy_object_proxy.egg-info/top_level.txt > [2021-12-16T12:37:16.225Z] writing dependency_links to > pip-egg-info/lazy_object_proxy.egg-info/dependency_links.txt > [2021-12-16T12:37:16.225Z] writing manifest file > 'pip-egg-info/lazy_object_proxy.egg-info/SOURCES.txt' > [2021-12-16T12:37:16.225Z] warning: manifest_maker: standard file '-c' > not found > [2021-12-16T12:37:16.225Z] > [2021-12-16T12:37:16.225Z] Traceback (most recent call last): > [2021-12-16T12:37:16.225Z] File "", line 1, in > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 146, in > [2021-12-16T12:37:16.225Z] distclass=BinaryDistribution, > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/core.py", > line 151, in setup > [2021-12-16T12:37:16.225Z] dist.run_commands() > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 953, in run_commands > [2021-12-16T12:37:16.225Z] self.run_command(cmd) > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 972, in run_command > [2021-12-16T12:37:16.225Z] cmd_obj.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 186, > in run > [2021-12-16T12:37:16.225Z] self.find_sources() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 209, > in find_sources > [2021-12-16T12:37:16.225Z] mm.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 293, > in run > [2021-12-16T12:37:16.225Z] self.add_defaults() > [2021-12-16T12:37:16.225Z] File > 
"/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 322, > in add_defaults > [2021-12-16T12:37:16.225Z] sdist.add_defaults(self) > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/sdist.py", line 131, in > add_defaults > [2021-12-16T12:37:16.225Z] if self.distribution.has_ext_modules(): > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 70, in > has_ext_modules > [2021-12-16T12:37:16.225Z] return
[jira] [Resolved] (HADOOP-18049) Hadoop CI fails in precommit due to python2.7 incompatible version of lazy-object-proxy
[ https://issues.apache.org/jira/browse/HADOOP-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HADOOP-18049. Fix Version/s: 2.10.2 Resolution: Fixed Merged the PR into branch-2.10. > Hadoop CI fails in precommit due to python2.7 incompatible version of > lazy-object-proxy > > > Key: HADOOP-18049 > URL: https://issues.apache.org/jira/browse/HADOOP-18049 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.10.2 >Reporter: Dhananjay Badaya >Assignee: Dhananjay Badaya >Priority: Major > Labels: pull-request-available > Fix For: 2.10.2 > > Time Spent: 50m > Remaining Estimate: 0h > > Latest version of lazy-object-proxy (dependency of pylint) seems incompatible > with python2.7 as per [release > notes|https://pypi.org/project/lazy-object-proxy/1.7.1/] > [https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3776/2/pipeline] > > {code:java} > [2021-12-16T12:37:15.710Z] Collecting lazy-object-proxy (from > astroid<2.0,>=1.6->pylint==1.9.2) > [2021-12-16T12:37:15.710Z] Downloading > https://files.pythonhosted.org/packages/75/93/3fc1cc28f71dd10b87a53b9d809602d7730e84cc4705a062def286232a9c/lazy-object-proxy-1.7.1.tar.gz > (41kB) > [2021-12-16T12:37:16.225Z] Complete output from command python setup.py > egg_info: > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'project_urls' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'python_requires' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: > UserWarning: Unknown distribution option: 'use_scm_version' > [2021-12-16T12:37:16.225Z] warnings.warn(msg) > [2021-12-16T12:37:16.225Z] running egg_info > [2021-12-16T12:37:16.225Z] creating > pip-egg-info/lazy_object_proxy.egg-info 
> [2021-12-16T12:37:16.225Z] writing > pip-egg-info/lazy_object_proxy.egg-info/PKG-INFO > [2021-12-16T12:37:16.225Z] writing top-level names to > pip-egg-info/lazy_object_proxy.egg-info/top_level.txt > [2021-12-16T12:37:16.225Z] writing dependency_links to > pip-egg-info/lazy_object_proxy.egg-info/dependency_links.txt > [2021-12-16T12:37:16.225Z] writing manifest file > 'pip-egg-info/lazy_object_proxy.egg-info/SOURCES.txt' > [2021-12-16T12:37:16.225Z] warning: manifest_maker: standard file '-c' > not found > [2021-12-16T12:37:16.225Z] > [2021-12-16T12:37:16.225Z] Traceback (most recent call last): > [2021-12-16T12:37:16.225Z] File "", line 1, in > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 146, in > [2021-12-16T12:37:16.225Z] distclass=BinaryDistribution, > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/core.py", > line 151, in setup > [2021-12-16T12:37:16.225Z] dist.run_commands() > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 953, in run_commands > [2021-12-16T12:37:16.225Z] self.run_command(cmd) > [2021-12-16T12:37:16.225Z] File "/usr/lib/python2.7/distutils/dist.py", > line 972, in run_command > [2021-12-16T12:37:16.225Z] cmd_obj.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 186, > in run > [2021-12-16T12:37:16.225Z] self.find_sources() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 209, > in find_sources > [2021-12-16T12:37:16.225Z] mm.run() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 293, > in run > [2021-12-16T12:37:16.225Z] self.add_defaults() > [2021-12-16T12:37:16.225Z] File > "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 322, > in add_defaults > [2021-12-16T12:37:16.225Z] sdist.add_defaults(self) > [2021-12-16T12:37:16.225Z] File > 
"/usr/lib/python2.7/dist-packages/setuptools/command/sdist.py", line 131, in > add_defaults > [2021-12-16T12:37:16.225Z] if self.distribution.has_ext_modules(): > [2021-12-16T12:37:16.225Z] File > "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 70, in > has_ext_modules > [2021-12-16T12:37:16.225Z] return super().has_ext_modules() or not > os.environ.get('SETUPPY_ALLOW_PURE') > [2021-12-16T12:37:16.225Z] TypeError: super() takes at least 1 argument > (0