[jira] [Created] (HADOOP-18850) Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)

2023-08-16 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18850:
--

 Summary: Enable dual-layer server-side encryption with AWS KMS 
keys (DSSE-KMS)
 Key: HADOOP-18850
 URL: https://issues.apache.org/jira/browse/HADOOP-18850
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, security
Reporter: Akira Ajisaka


Add support for DSSE-KMS

https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-dsse-encryption.html
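
An illustrative sketch (not part of the original report) of how the option would presumably be
wired up, assuming DSSE-KMS is exposed through the existing fs.s3a.encryption.* settings used
for SSE-KMS:
{code:java}
// Hypothetical configuration sketch: enabling dual-layer SSE with an AWS KMS key on S3A,
// assuming a "DSSE-KMS" value for the existing fs.s3a.encryption.algorithm option.
import org.apache.hadoop.conf.Configuration;

public class DsseKmsSketch {
  public static Configuration dsseConf() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.encryption.algorithm", "DSSE-KMS");  // assumed value, pending the feature
    // Hypothetical key ARN; DSSE-KMS always encrypts against a KMS key.
    conf.set("fs.s3a.encryption.key", "arn:aws:kms:us-east-1:111122223333:key/example-key-id");
    return conf;
  }
}
{code}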






[jira] [Resolved] (HADOOP-17326) mvn verify fails due to duplicate entry in the shaded jar

2022-12-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17326.

Resolution: Cannot Reproduce

Closing. I successfully ran {{mvn verify}} in both trunk and branch-3.3 on my 
local machine.

Please feel free to reopen this if it's still failing in some environment.

> mvn verify fails due to duplicate entry in the shaded jar
> -
>
> Key: HADOOP-17326
> URL: https://issues.apache.org/jira/browse/HADOOP-17326
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.2, 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> Found this when I was chasing a separate shading error with [~smeng].
> In trunk:
> run mvn verify under hadoop-client-module/
> {noformat}
> [INFO] 
> 
> [INFO] Reactor Summary for Apache Hadoop Client Modules 3.4.0-SNAPSHOT:
> [INFO]
> [INFO] Apache Hadoop Client Aggregator  SUCCESS [  2.607 
> s]
> [INFO] Apache Hadoop Client API ... SUCCESS [03:16 
> min]
> [INFO] Apache Hadoop Client Runtime ... SUCCESS [01:30 
> min]
> [INFO] Apache Hadoop Client Test Minicluster .. FAILURE [04:44 
> min]
> [INFO] Apache Hadoop Client Packaging Invariants .. SKIPPED
> [INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
> [INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
> [INFO] Apache Hadoop Client Modules ... SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  09:34 min
> [INFO] Finished at: 2020-10-23T16:38:53-07:00
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.com.fasterxml.jackson.core.JsonFactory
>  -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
>  {noformat}
> This is reproducible in trunk and branch-3.3. However, not reproducible in 
> branch-3.1.
> (branch-3.3 has a different error:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project 
> hadoop-client-minicluster: Error creating shaded jar: duplicate entry: 
> META-INF/services/org.apache.hadoop.shaded.javax.ws.rs.ext.MessageBodyReader 
> -> [Help 1])






[jira] [Resolved] (HADOOP-18532) fix typos in FileSystemShell

2022-11-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18532.

Fix Version/s: 3.4.0
   3.3.9
   Resolution: Fixed

Committed to trunk and branch-3.3.

> fix typos in FileSystemShell
> 
>
> Key: HADOOP-18532
> URL: https://issues.apache.org/jira/browse/HADOOP-18532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.4
>Reporter: guophilipse
>Assignee: guophilipse
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Fix typos in FileSystemShell.md






[jira] [Resolved] (HADOOP-18472) Upgrade to snakeyaml 1.33

2022-10-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18472.

Fix Version/s: 3.4.0
   3.2.5
   3.3.9
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2.

> Upgrade to snakeyaml 1.33
> -
>
> Key: HADOOP-18472
> URL: https://issues.apache.org/jira/browse/HADOOP-18472
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.5, 3.3.9
>
>
> Recent snakeyaml fixes missed a use case. Relates to HADOOP-18443
> [https://bitbucket.org/snakeyaml/snakeyaml/wiki/Changes]
>  * Fix 
> [#553|https://bitbucket.org/snakeyaml/snakeyaml/issues/553/loaderoptionssetcodepointlimit-not-honored]:
>  LoaderOptions.setCodePointLimit() not honored by loadAll() (thanks to Robert 
> Patrick)
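
For context, an illustrative sketch (not from the report) of the API the fix concerns, assuming
the SnakeYAML 1.33 LoaderOptions and Yaml(LoaderOptions) constructors:
{code:java}
// Sketch: the code-point limit should now also be honored by loadAll(), not just load().
import org.yaml.snakeyaml.LoaderOptions;
import org.yaml.snakeyaml.Yaml;

public class CodePointLimitSketch {
  public static void main(String[] args) {
    LoaderOptions options = new LoaderOptions();
    options.setCodePointLimit(1024 * 1024);        // cap documents at ~1 MiB of code points
    Yaml yaml = new Yaml(options);
    for (Object doc : yaml.loadAll("a: 1\n---\nb: 2\n")) {
      System.out.println(doc);                     // each parsed document respects the limit
    }
  }
}
{code}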






[jira] [Resolved] (HADOOP-15877) Upgrade ZooKeeper version to 3.5.4-beta and Curator version to 4.0.1

2022-10-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-15877.

  Assignee: (was: Akira Ajisaka)
Resolution: Invalid

The ZooKeeper version is now 3.6.3 and the Curator version is 5.2.0.

> Upgrade ZooKeeper version to 3.5.4-beta and Curator version to 4.0.1
> 
>
> Key: HADOOP-15877
> URL: https://issues.apache.org/jira/browse/HADOOP-15877
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Reporter: Akira Ajisaka
>Priority: Major
>
> A long-term option to fix YARN-8937.






[jira] [Resolved] (HADOOP-17405) Upgrade Yetus to 0.13.0

2022-10-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17405.

  Assignee: (was: Akira Ajisaka)
Resolution: Duplicate

The Yetus version is already 0.14.0.

> Upgrade Yetus to 0.13.0
> ---
>
> Key: HADOOP-17405
> URL: https://issues.apache.org/jira/browse/HADOOP-17405
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> After HADOOP-17262 and HADOOP-17297, Hadoop is using a non-release version of 
> Apache Yetus. It should be upgraded to 0.13.0 when released.






[jira] [Resolved] (HADOOP-18302) Remove WhiteBox in hadoop-common module.

2022-09-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18302.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk.

> Remove WhiteBox in hadoop-common module.
> 
>
> Key: HADOOP-18302
> URL: https://issues.apache.org/jira/browse/HADOOP-18302
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.3.9
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> WhiteBox is deprecated; remove its usage in hadoop-common.






[jira] [Created] (HADOOP-18440) Replace Google Analytics with ASF Matomo in website

2022-09-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18440:
--

 Summary: Replace Google Analytics with ASF Matomo in website
 Key: HADOOP-18440
 URL: https://issues.apache.org/jira/browse/HADOOP-18440
 Project: Hadoop Common
  Issue Type: Improvement
  Components: site
Reporter: Akira Ajisaka


We currently use Google Analytics 
(https://github.com/apache/hadoop-site/blob/asf-site/layouts/partials/footer.html#L37) 
on the website, but it's not recommended.

We should also link the Privacy Policy to 
https://privacy.apache.org/policies/privacy-policy-public.html instead of 
Hadoop's own policy at https://hadoop.apache.org/privacy_policy.html.

For more details, please check: https://privacy.apache.org/faq/committers.html 
and https://www.apache.org/foundation/marks/pmcs.html#navigation






[jira] [Resolved] (HADOOP-18390) Fix out of sync import for HADOOP-18321

2022-08-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18390.

Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk. Thank you [~groot]!

> Fix out of sync import for HADOOP-18321
> ---
>
> Key: HADOOP-18390
> URL: https://issues.apache.org/jira/browse/HADOOP-18390
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: groot
>Assignee: groot
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fix the out-of-sync import added as part of HADOOP-18321






[jira] [Resolved] (HADOOP-18301) Upgrade commons-io to 2.11.0

2022-08-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18301.

Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk. Thank you [~groot] for your contribution.

> Upgrade commons-io to 2.11.0
> 
>
> Key: HADOOP-18301
> URL: https://issues.apache.org/jira/browse/HADOOP-18301
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.3, 3.3.3
>Reporter: groot
>Assignee: groot
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The current version, 2.8.0, is almost two years old.
> Upgrade to the latest release to pick up new features and bug fixes.






[jira] [Resolved] (HADOOP-18294) Ensure build folder exists before writing checksum file.ProtocRunner#writeChecksums

2022-07-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18294.

Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk. Thank you [~groot] for your contribution.

> Ensure build folder exists before writing checksum 
> file.ProtocRunner#writeChecksums
> ---
>
> Key: HADOOP-18294
> URL: https://issues.apache.org/jira/browse/HADOOP-18294
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Ensure build folder exists before writing checksum 
> file.ProtocRunner#writeChecksums
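
A minimal sketch of the guard being requested (illustrative only; the actual ProtocRunner file
names are assumptions):
{code:java}
// Create the build directory (and any missing parents) before writing the checksum file,
// so the write cannot fail just because the folder does not exist yet.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChecksumWriteSketch {
  static void writeChecksums(Path buildDir, String checksums) throws IOException {
    Files.createDirectories(buildDir);                                // no-op if it already exists
    Files.writeString(buildDir.resolve("checksums.json"), checksums); // hypothetical file name
  }
}
{code}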






[jira] [Resolved] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1

2022-07-05 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18297.

Fix Version/s: 3.4.0
   Resolution: Fixed

Re-opened and closed with the correct resolution.

> Upgrade dependency-check-maven to 7.1.1
> ---
>
> Key: HADOOP-18297
> URL: https://issues.apache.org/jira/browse/HADOOP-18297
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> OWASP dependency-check-maven 7.1.1 has corrected various false positives. We 
> can upgrade to it.
> https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1






[jira] [Reopened] (HADOOP-18297) Upgrade dependency-check-maven to 7.1.1

2022-07-05 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-18297:


> Upgrade dependency-check-maven to 7.1.1
> ---
>
> Key: HADOOP-18297
> URL: https://issues.apache.org/jira/browse/HADOOP-18297
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.3.3
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> OWASP dependency-check-maven 7.1.1 has corrected various false positives. We 
> can upgrade to it.
> https://github.com/jeremylong/DependencyCheck/milestone/45?closed=1






[jira] [Resolved] (HADOOP-18240) Upgrade Yetus to 0.14.0

2022-05-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18240.

Fix Version/s: 3.4.0
   3.2.4
   3.3.4
   2.10.3
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-2.10.

> Upgrade Yetus to 0.14.0
> ---
>
> Key: HADOOP-18240
> URL: https://issues.apache.org/jira/browse/HADOOP-18240
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4, 2.10.3
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Yetus 0.14.0 is released. Let's upgrade.






[jira] [Resolved] (HADOOP-17591) Fix the wrong CIDR range example in Proxy User documentation

2022-05-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17591.

Resolution: Duplicate

Fixed by HADOOP-17952. Closing.

> Fix the wrong CIDR range example in Proxy User documentation
> 
>
> Key: HADOOP-17591
> URL: https://issues.apache.org/jira/browse/HADOOP-17591
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.2.2
>Reporter: Kwangsun Noh
>Priority: Trivial
>  Labels: newbie
>
> The CIDR range example on the Proxy user documentation page is wrong.
>  
> In the Configurations section of the Proxy user page, the example states that 
> CIDR 10.222.0.0/16 means the range 10.222.0.0-15.
>  
> That is not true: CIDR 10.222.0.0/16 means 10.222.0.0-10.222.255.255.
>  
> as-is : 10.222.0.0-15
> to-be : 10.222.0.0-10.222.255.255
>  
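
A quick check of the arithmetic (illustrative, not part of the report): a /16 prefix leaves
32 - 16 = 16 host bits, i.e. 2^16 = 65,536 addresses.
{code:java}
// Worked check that 10.222.0.0/16 spans 10.222.0.0 through 10.222.255.255.
public class CidrRangeCheck {
  public static void main(String[] args) {
    int prefix = 16;
    int base = (10 << 24) | (222 << 16);          // 10.222.0.0 as a 32-bit integer
    int hostBits = 32 - prefix;                   // 16 host bits -> 65,536 addresses
    int last = base | ((1 << hostBits) - 1);      // 10.222.255.255
    System.out.printf("%d addresses, last address ends in .%d.%d%n",
        1 << hostBits, (last >> 8) & 0xFF, last & 0xFF);
  }
}
{code}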






[jira] [Resolved] (HADOOP-18224) Upgrade maven compiler plugin to 3.10.1

2022-05-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18224.

Fix Version/s: 3.4.0
   Resolution: Fixed

Merged the PR into trunk. Thanks!

> Upgrade maven compiler plugin to 3.10.1
> ---
>
> Key: HADOOP-18224
> URL: https://issues.apache.org/jira/browse/HADOOP-18224
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Currently we are using maven-compiler-plugin 3.1, which is quite old (2013) 
> and also pulls in a vulnerable log4j dependency:
> {code:java}
> [INFO]
> org.apache.maven.plugins:maven-compiler-plugin:maven-plugin:3.1:runtime
> [INFO]   org.apache.maven.plugins:maven-compiler-plugin:jar:3.1
> [INFO]   org.apache.maven:maven-plugin-api:jar:2.0.9
> [INFO]   org.apache.maven:maven-artifact:jar:2.0.9
> [INFO]   org.codehaus.plexus:plexus-utils:jar:1.5.1
> [INFO]   org.apache.maven:maven-core:jar:2.0.9
> [INFO]   org.apache.maven:maven-settings:jar:2.0.9
> [INFO]   org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.9
> ...
> ...
> ...
> [INFO]   log4j:log4j:jar:1.2.12
> [INFO]   commons-logging:commons-logging-api:jar:1.1
> [INFO]   com.google.collections:google-collections:jar:1.0
> [INFO]   junit:junit:jar:3.8.2
>  {code}
>  
> We should upgrade to maven-compiler-plugin 3.10.1 (the latest as of March 
> 2022).






[jira] [Created] (HADOOP-18240) Upgrade Yetus to 0.14.0

2022-05-17 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18240:
--

 Summary: Upgrade Yetus to 0.14.0
 Key: HADOOP-18240
 URL: https://issues.apache.org/jira/browse/HADOOP-18240
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Yetus 0.14.0 is released. Let's upgrade.






[jira] [Resolved] (HADOOP-17479) Fix the examples of hadoop config prefix

2022-05-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17479.

Fix Version/s: 3.4.0
   3.2.4
   3.3.4
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~groot] for your 
contribution!

> Fix the examples of hadoop config prefix
> 
>
> Key: HADOOP-17479
> URL: https://issues.apache.org/jira/browse/HADOOP-17479
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.4
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In 
> https://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/DownstreamDev.html#XML_Configuration_Files
> {quote}e.g. hadoop, io, ipc, fs, net, file, ftp, kfs, ha, file, dfs, mapred, 
> mapreduce, and yarn.
> {quote}
> * "file" appears twice
> * kfs has been removed since Hadoop 2.x






[jira] [Resolved] (HADOOP-16515) Update the link to compatibility guide

2022-05-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16515.

Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Committed to trunk and branch-3.3.

> Update the link to compatibility guide
> --
>
> Key: HADOOP-16515
> URL: https://issues.apache.org/jira/browse/HADOOP-16515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: HADOOP-16515-01.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> There are many old URLs (http://wiki.apache.org/hadoop/Compatibility) in the 
> source code and they should be replaced with 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html.






[jira] [Resolved] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2022-04-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-13332.

  Assignee: (was: Akira Ajisaka)
Resolution: Done

All the sub-tasks completed. Closing this umbrella JIRA.

> Remove jackson 1.9.13 and switch all jackson code to 2.x code line
> --
>
> Key: HADOOP-13332
> URL: https://issues.apache.org/jira/browse/HADOOP-13332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: PJ Fanning
>Priority: Major
> Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, 
> HADOOP-13332.02.patch, HADOOP-13332.03.patch
>
>
> The jackson 1.9 code line is no longer maintained, so upgrade it.
> Most changes from jackson 1.9 to 2.x just involve changing the package name.
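
As a concrete illustration of the package rename (a generic example, not taken from the Hadoop
patches themselves):
{code:java}
// Jackson 1.x:
//   import org.codehaus.jackson.map.ObjectMapper;
// Jackson 2.x equivalent -- same usage, different package:
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonMigrationExample {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // prints {"key":"value"}
    System.out.println(mapper.writeValueAsString(java.util.Map.of("key", "value")));
  }
}
{code}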






[jira] [Resolved] (HADOOP-18195) make jackson v1 a runtime scope dependency

2022-04-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18195.

Resolution: Duplicate

The jackson v1 dependency has been removed by HADOOP-15983. Closing.

> make jackson v1 a runtime scope dependency
> --
>
> Key: HADOOP-18195
> URL: https://issues.apache.org/jira/browse/HADOOP-18195
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In trunk, jackson v1 is only needed as a transitive dependency of jersey-json 
> (1.19).
> The jackson v1 jars still appear in the Hadoop POM, so it would be useful to 
> make them 'runtime' scope so that no Hadoop code can compile against this old 
> code.
>  
> [https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-project/3.4.0-SNAPSHOT/hadoop-project-3.4.0-20220407.091529-1274.pom]
>  
> the linked PR is not needed if [https://github.com/apache/hadoop/pull/3988] 
> is merged






[jira] [Resolved] (HADOOP-15983) Use jersey-json that is built to use jackson2

2022-04-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-15983.

Fix Version/s: 3.4.0
 Hadoop Flags: Incompatible change
 Release Note: Use modified jersey-json 1.20 in 
https://github.com/pjfanning/jersey-1.x/tree/v1.20 that uses Jackson 2.x. By 
this change, Jackson 1.x dependency has been removed from Hadoop.
   Resolution: Fixed

Merged the PR into trunk. Thank you [~pj.fanning] for your contribution! I 
really appreciate this work.

> Use jersey-json that is built to use jackson2
> -
>
> Key: HADOOP-15983
> URL: https://issues.apache.org/jira/browse/HADOOP-15983
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-17551) Upgrade maven-site-plugin to 3.11.0

2022-04-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17551.

Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~groot] for your contribution.

> Upgrade maven-site-plugin to 3.11.0
> ---
>
> Key: HADOOP-17551
> URL: https://issues.apache.org/jira/browse/HADOOP-17551
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> After upgrading maven-site-plugin, error messages will be more detailed, 
> which will help debugging.
>  
> Ref: 
> https://issues.apache.org/jira/browse/YARN-10656?focusedCommentId=17291846&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17291846






[jira] [Created] (HADOOP-18200) Update ZooKeeper to 3.6.x or upper

2022-04-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18200:
--

 Summary: Update ZooKeeper to 3.6.x or upper
 Key: HADOOP-18200
 URL: https://issues.apache.org/jira/browse/HADOOP-18200
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, ha
Reporter: Akira Ajisaka


The official document (https://zookeeper.apache.org/releases.html) says 
ZooKeeper 3.5.x will reach EoL on 2022-06-01. Let's upgrade to 3.6.x or later.






[jira] [Resolved] (HADOOP-18178) Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2

2022-04-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18178.

Fix Version/s: 3.3.3
   Resolution: Fixed

Backported to branch-3.3.

> Upgrade jackson to 2.13.2 and jackson-databind to 2.13.2.2
> --
>
> Key: HADOOP-18178
> URL: https://issues.apache.org/jira/browse/HADOOP-18178
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> https://github.com/FasterXML/jackson-databind/issues/2816






[jira] [Created] (HADOOP-18188) Support touch command for directory

2022-03-29 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18188:
--

 Summary: Support touch command for directory
 Key: HADOOP-18188
 URL: https://issues.apache.org/jira/browse/HADOOP-18188
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka


Currently the hadoop fs -touch command cannot update the mtime and atime of a 
directory. The feature would be useful for checking whether the filesystem is 
ready for writes without creating any file.
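
For reference, a sketch of what the command would need to do under the hood (an assumption
about the approach, not a committed design), using the existing FileSystem.setTimes() API:
{code:java}
// Update a directory's mtime/atime without creating any file, as a write-readiness probe.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TouchDirectorySketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/healthcheck");      // hypothetical directory
    long now = System.currentTimeMillis();
    fs.setTimes(dir, now, now);                   // mtime, atime
  }
}
{code}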






[jira] [Resolved] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches

2022-03-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17798.

Resolution: Done

Updated the wiki and disabled the precommit jobs.

> Always use GitHub PR rather than JIRA to review patches
> ---
>
> Key: HADOOP-17798
> URL: https://issues.apache.org/jira/browse/HADOOP-17798
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now there are 2 types of precommit jobs in https://ci-hadoop.apache.org/
> (1) Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs that try to download 
> patches from JIRA and test them.
> (2) hadoop-multibranch job for GitHub PR
> The problems are:
> - The build configs are separated. The (2) config is in the Jenkinsfile, and 
> the (1) configs are in Jenkins. Whenever the Jenkinsfile is updated, the 
> configs of the 4 precommit jobs have to be updated manually via the Jenkins 
> Web UI.
> - The (1) build configs are static. We cannot use separate config for each 
> branch. This may cause some build failures.
> - GitHub Actions cannot be used in the (1) jobs.
> Therefore I want to disable the (1) jobs and always use GitHub PR to review 
> patches.
> How to do this:
> 1. Update the wiki: 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Provideapatch
> 2. Disable the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.






[jira] [Resolved] (HADOOP-18171) NameNode Access Time Precision

2022-03-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18171.

Resolution: Invalid

Setting the value to 0 disables access times for HDFS. Please ask CDP-related 
questions to Cloudera support instead of filing an issue here.

> NameNode Access Time Precision
> --
>
> Key: HADOOP-18171
> URL: https://issues.apache.org/jira/browse/HADOOP-18171
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: We are currently on CDH version 6.3.4 and are planning 
> to upgrade to CDP version 7.1.4. For that, Cloudera wants us to disable the 
> NameNode property dfs.access.time.precision by changing its value to 0. The 
> current value for this property is 1 hour, so my question is: how does this 
> value impact the current setup, what is its effect, and what will happen if I 
> set it to zero?
>Reporter: Doug
>Priority: Major
> Attachments: namenodeaccesstime.png
>
>







[jira] [Resolved] (HADOOP-17386) fs.s3a.buffer.dir to be under Yarn container path on yarn applications

2022-02-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17386.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~monthonk] for your contribution.

> fs.s3a.buffer.dir to be under Yarn container path on yarn applications
> --
>
> Key: HADOOP-17386
> URL: https://issues.apache.org/jira/browse/HADOOP-17386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Monthon Klongklaew
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> # fs.s3a.buffer.dir defaults to hadoop.tmp.dir which is /tmp or similar
> # we use this for storing file blocks during upload
> # staging committers use it for all files in a task, which can be a lot more
> # a lot of systems don't clean up /tmp until reboot, and if they stay up for 
> a long time they accrue files written through the s3a staging committer from 
> Spark containers that fail
> Fix: use ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a as the option so that 
> env.LOCAL_DIRS, if set, is used over hadoop.tmp.dir. YARN-deployed apps will 
> use that for the buffer dir. When the app container is destroyed, so is the 
> directory.
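
A sketch of what the proposed default amounts to (the property value is quoted from the
description above; the surrounding code is illustrative):
{code:java}
// Point the S3A buffer dir at the YARN container's local dirs when available, falling back
// to hadoop.tmp.dir otherwise; Configuration expands ${env.VAR:-fallback} when read.
import org.apache.hadoop.conf.Configuration;

public class S3ABufferDirSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.buffer.dir", "${env.LOCAL_DIRS:-${hadoop.tmp.dir}}/s3a");
    System.out.println(conf.get("fs.s3a.buffer.dir"));  // prints the expanded path
  }
}
{code}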






[jira] [Resolved] (HADOOP-18130) hadoop-client-runtime latest version 3.3.1 has security issues

2022-02-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18130.

Resolution: Not A Problem

> hadoop-client-runtime latest version 3.3.1 has security issues
> --
>
> Key: HADOOP-18130
> URL: https://issues.apache.org/jira/browse/HADOOP-18130
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: phoebe chen
>Priority: Major
>
> hadoop-client-runtime latest version 3.3.1 ([Maven Repository: 
> org.apache.hadoop » hadoop-client-runtime » 3.3.1 
> (mvnrepository.com)|https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client-runtime/3.3.1])
>  has many security issues.
> Besides the ones listed in the Maven repo, its dependencies:
> "org.eclipse.jetty_jetty-io" (9.4.40.v20210413) has 
> [CVE-2021-34429|https://nvd.nist.gov/vuln/detail/CVE-2021-34429] and 
> [CVE-2021-28169|https://nvd.nist.gov/vuln/detail/CVE-2021-28169]
> "com.fasterxml.jackson.core_jackson-databind" (2.10.5.1) has 
> [PRISMA-2021-0213.|https://github.com/FasterXML/jackson-databind/issues/3328]
> These need to be upgraded to higher versions.
>  






[jira] [Resolved] (HADOOP-18126) Update junit 5 version due to build issues

2022-02-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18126.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~pj.fanning] for your contribution!

> Update junit 5 version due to build issues
> --
>
> Key: HADOOP-18126
> URL: https://issues.apache.org/jira/browse/HADOOP-18126
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> Feb 11, 2022 11:31:43 AM org.junit.platform.launcher.core.DefaultLauncher 
> handleThrowable WARNING: TestEngine with ID 'junit-vintage' failed to 
> discover tests org.junit.platform.commons.JUnitException: Failed to parse 
> version of junit:junit: 4.13.2 at 
> org.junit.vintage.engine.JUnit4VersionCheck.parseVersion(JUnit4VersionCheck.java:54)
>  {code}
> [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3980/1/artifact/out/patch-unit-root.txt]
> seems like junit.vintage.version=5.5.1 is incompatible with 
> junit.version=4.13.2
> see 2nd answer on 
> [https://stackoverflow.com/questions/59900637/error-testengine-with-id-junit-vintage-failed-to-discover-tests-with-spring]
> my plan is to upgrade junit.vintage.version and junit.jupiter.version to 5.8.2
>  






[jira] [Resolved] (HADOOP-18046) TestIPC#testIOEOnListenerAccept fails

2022-02-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18046.

Resolution: Not A Problem

HADOOP-18024 has been reverted. Closing.

> TestIPC#testIOEOnListenerAccept fails
> -
>
> Key: HADOOP-18046
> URL: https://issues.apache.org/jira/browse/HADOOP-18046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> {code}
> [ERROR] testIOEOnListenerAccept(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 
> 0.007 s  <<< FAILURE!
> java.lang.AssertionError: Expected an EOFException to have been thrown
>   at org.junit.Assert.fail(Assert.java:89)
>   at 
> org.apache.hadoop.ipc.TestIPC.testIOEOnListenerAccept(TestIPC.java:652)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Resolved] (HADOOP-18037) Backport HADOOP-17796 for branch-3.2

2022-02-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18037.

Resolution: Duplicate

> Backport HADOOP-17796 for branch-3.2
> 
>
> Key: HADOOP-18037
> URL: https://issues.apache.org/jira/browse/HADOOP-18037
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: Ananya Singh
>Assignee: Ananya Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-18099) Upgrade bundled Tomcat to 8.5.75

2022-01-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18099.

Fix Version/s: 2.10.2
   Resolution: Fixed

Merged the PR into branch-2.10. Thank you [~groot] for your contribution!

> Upgrade bundled Tomcat to 8.5.75
> 
>
> Key: HADOOP-18099
> URL: https://issues.apache.org/jira/browse/HADOOP-18099
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 2.10.1
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Let's upgrade to the latest 8.5.x version.






[jira] [Created] (HADOOP-18099) Upgrade bundled Tomcat in branch-2 to the latest

2022-01-30 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18099:
--

 Summary: Upgrade bundled Tomcat in branch-2 to the latest
 Key: HADOOP-18099
 URL: https://issues.apache.org/jira/browse/HADOOP-18099
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, httpfs, kms
Reporter: Akira Ajisaka


Let's upgrade to the latest 8.5.x version.






[jira] [Resolved] (HADOOP-18092) Exclude log4j2 dependency from hadoop-huaweicloud module

2022-01-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18092.

Resolution: Duplicate

Duplicate of HADOOP-17593. Closing.

> Exclude log4j2 dependency from hadoop-huaweicloud module
> 
>
> Key: HADOOP-18092
> URL: https://issues.apache.org/jira/browse/HADOOP-18092
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/pull/3906#issuecomment-1018401121]
> The following log4j2 dependencies must be excluded.
> {code:java}
> [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile
> [INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile
> [INFO]   +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile
> [INFO]   +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile
> [INFO]   +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile
> [INFO]   \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile {code}






[jira] [Created] (HADOOP-18092) Exclude log4j2 dependency from hadoop-huaweicloud module

2022-01-23 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18092:
--

 Summary: Exclude log4j2 dependency from hadoop-huaweicloud module
 Key: HADOOP-18092
 URL: https://issues.apache.org/jira/browse/HADOOP-18092
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


[https://github.com/apache/hadoop/pull/3906#issuecomment-1018401121]

The following log4j2 dependencies must be excluded.
{code:java}
[INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile
[INFO]\- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile
[INFO]   +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile
[INFO]   +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile
[INFO]   +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile
[INFO]   \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile {code}






[jira] [Resolved] (HADOOP-18086) Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU GPLv2 license)

2022-01-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18086.

Resolution: Not A Problem

> Remove org.checkerframework.dataflow from hadoop-shaded-guava artifact (GNU 
> GPLv2 license)
> --
>
> Key: HADOOP-18086
> URL: https://issues.apache.org/jira/browse/HADOOP-18086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: László Bodor
>Priority: Major
>
> Please refer to TEZ-4378 for further details:
> {code}
>  jar tf 
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/app/WEB-INF/lib/hadoop-shaded-guava-1.1.1.jar
>  | grep "dataflow"
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Deterministic.class
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Pure$Kind.class
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/Pure.class
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/SideEffectFree.class
> org/apache/hadoop/thirdparty/org/checkerframework/dataflow/qual/TerminatesExecution.class
> {code}
> I can see that the checker-qual LICENSE.txt was removed in the scope of 
> HADOOP-17648, but that change had nothing to do with the license itself; it 
> was only for [resolving a shading 
> error|https://github.com/apache/hadoop-thirdparty/pull/9#issuecomment-822398949].
> My understanding is that, as things stand, an Apache-licensed package (the 
> shaded guava jar) will contain GPLv2-licensed software, which makes it subject 
> to the GPLv2 and also triggers license violations in security tools (like 
> BlackDuck).






[jira] [Resolved] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2022-01-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16410.

Resolution: Duplicate

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --
>
> Key: HADOOP-16410
> URL: https://issues.apache.org/jira/browse/HADOOP-16410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Jose Luis Pedrosa
>Priority: Minor
> Fix For: 3.2.2
>
>
> OpenJDK 8 is based on Alpine 3.9, which means the shipped version of libssl 
> is 1.1.1b-r1:
>   
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
> {noformat}
> The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by 
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version 
> OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking 
> for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, creating a Docker image with an updated version of 
> wildfly-openssl (1.0.7.Final) solves the issue.
>  
>  






[jira] [Reopened] (HADOOP-16410) Hadoop 3.2 azure jars incompatible with alpine 3.9

2022-01-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16410:


Reopening this to close it as a duplicate.

> Hadoop 3.2 azure jars incompatible with alpine 3.9
> --
>
> Key: HADOOP-16410
> URL: https://issues.apache.org/jira/browse/HADOOP-16410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Jose Luis Pedrosa
>Priority: Minor
> Fix For: 3.2.2
>
>
> OpenJDK 8 is based on Alpine 3.9, which means the shipped version of libssl 
> is 1.1.1b-r1:
>   
> {noformat}
> sh-4.4# apk list | grep ssl
> libssl1.1-1.1.1b-r1 x86_64 {openssl} (OpenSSL) [installed] 
> {noformat}
> The hadoop distro ships wildfly-openssl-1.0.4.Final.jar, which is affected by 
> [https://issues.jboss.org/browse/JBEAP-16425].
> This results in runtime errors (using Spark as an example):
> {noformat}
> 2019-07-04 22:32:40,339 INFO openssl.SSL: WFOPENSSL0002 OpenSSL Version 
> OpenSSL 1.1.1b 26 Feb 2019
> 2019-07-04 22:32:40,363 WARN streaming.FileStreamSink: Error while looking 
> for metadata directory.
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.wildfly.openssl.CipherSuiteConverter.toJava(CipherSuiteConverter.java:284)
> {noformat}
> In my tests, creating a Docker image with an updated version of 
> wildfly-openssl (1.0.7.Final) solves the issue.
>  
>  






[jira] [Resolved] (HADOOP-18063) Remove unused import AbstractJavaKeyStoreProvider in Shell class

2022-01-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18063.

Fix Version/s: 3.4.0
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, and branch-3.2.

> Remove unused import AbstractJavaKeyStoreProvider in Shell class
> 
>
> Key: HADOOP-18063
> URL: https://issues.apache.org/jira/browse/HADOOP-18063
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
> Attachments: image-2022-01-01-22-40-50-604.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In Shell, there are some unused imports.
> For example:
>  !image-2022-01-01-22-40-50-604.png! 
> Among them, AbstractJavaKeyStoreProvider does not seem to be referenced 
> anywhere.






[jira] [Resolved] (HADOOP-18062) Update the year to 2022

2022-01-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18062.

  Assignee: (was: Akira Ajisaka)
Resolution: Duplicate

> Update the year to 2022
> ---
>
> Key: HADOOP-18062
> URL: https://issues.apache.org/jira/browse/HADOOP-18062
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Blocker
>  Labels: newbie, pull-request-available, release-blocker
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>







[jira] [Created] (HADOOP-18062) Update the year to 2022

2022-01-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18062:
--

 Summary: Update the year to 2022
 Key: HADOOP-18062
 URL: https://issues.apache.org/jira/browse/HADOOP-18062
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka









[jira] [Resolved] (HADOOP-18054) Unable to load AWS credentials from any provider in the chain

2021-12-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18054.

Resolution: Invalid

> Unable to load AWS credentials from any provider in the chain
> -
>
> Key: HADOOP-18054
> URL: https://issues.apache.org/jira/browse/HADOOP-18054
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, fs, fs/s3, security
>Affects Versions: 3.3.1
> Environment: From top to down.
> Kubernetes version 1.18.20
> Spark Version: 2.4.4
> Kubernetes Setup: Pod with serviceAccountName that binds with IAM Role using 
> IRSA (EKS Feature).
> {code:java}
> apiVersion: v1
> automountServiceAccountToken: true
> kind: ServiceAccount
> metadata:
>   annotations:
>     eks.amazonaws.com/role-arn: 
> arn:aws:iam:::role/EKSDefaultPolicyFor-Spark
>   name: spark
>   namespace: spark {code}
> AWS Setup:
> IAM Role with permissions over the S3 Bucket
> Bucket with permissions granted over the IAM Role.
> Code:
> {code:java}
> def run_etl():
> sc = 
> SparkSession.builder.appName("TXD-PYSPARK-ORACLE-SIEBEL-CASOS").getOrCreate()
> sqlContext = SQLContext(sc)
> args = sys.argv
> load_date = args[1]  # e.g.: "2019-05-21"
> output_path = args[2]  # e.g.: s3://mybucket/myfolder
> print(args, "load_date", load_date, "output_path", output_path)
> sc._jsc.hadoopConfiguration().set(
> "fs.s3a.aws.credentials.provider",
> "com.amazonaws.auth.DefaultAWSCredentialsProviderChain"
> )
> sc._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", 
> "true")
> sc._jsc.hadoopConfiguration().set("fs.s3a.impl", 
> "org.apache.hadoop.fs.s3a.S3AFileSystem")
> # sc._jsc.hadoopConfiguration().set("fs.s3.impl", 
> "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
> sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.s3a.impl", 
> "org.apache.hadoop.fs.s3a.S3A")
> session = boto3.session.Session()
> client = session.client(service_name='secretsmanager', 
> region_name="us-east-1")
> get_secret_value_response = client.get_secret_value(
> SecretId="Siebel_Connection_Info"
> )
> secret = get_secret_value_response["SecretString"]
> secret = json.loads(secret)
> db_username = secret.get("db_username")
> db_password = secret.get("db_password")
> db_host = secret.get("db_host")
> db_port = secret.get("db_port")
> db_name = secret.get("db_name")
> db_url = "jdbc:oracle:thin:@{}:{}/{}".format(db_host, db_port, db_name)
> jdbc_driver_name = "oracle.jdbc.OracleDriver"
> dbtable = """(SELECT * FROM SIEBEL.REPORTE_DE_CASOS WHERE JOB_ID IN 
> (SELECT JOB_ID FROM SIEBEL.SERVICE_CONSUMED_STATUS WHERE 
> PUBLISH_INFORMATION_DT BETWEEN TO_DATE('{} 00:00:00', '-MM-DD 
> HH24:MI:SS') AND TO_DATE('{} 23:59:59', '-MM-DD 
> HH24:MI:SS')))""".format(load_date, load_date)
> df = sqlContext.read\
>   .format("jdbc")\
>   .option("charset", "utf8")\
>   .option("driver", jdbc_driver_name)\
>   .option("url",db_url)\
>   .option("dbtable", dbtable)\
>   .option("user", db_username)\
>   .option("password", db_password)\
>   .option("oracle.jdbc.timezoneAsRegion", "false")\
>   .load()
> # Partitioning
> a_load_date = load_date.split('-')
> df = df.withColumn("year", lit(a_load_date[0]))
> df = df.withColumn("month", lit(a_load_date[1]))
> df = df.withColumn("day", lit(a_load_date[2]))
> df.write.mode("append").partitionBy(["year", "month", 
> "day"]).csv(output_path, header=True)
> # It is important to close the connection to avoid problems like the one 
> reported at
> # 
> https://stackoverflow.com/questions/40830638/cannot-load-main-class-from-jar-file
> sc.stop()
> if __name__ == '__main__':
> run_etl() {code}
> Log's
> {code:java}
> + '[' -z s3://mybucket.spark.jobs/siebel-casos-actividades ']'
> + aws s3 cp s3://mybucket.spark.jobs/siebel-casos-actividades /opt/ 
> --recursive --include '*'
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-casos.py 
> to ../../txd-pyspark-siebel-casos.py
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/txd-pyspark-siebel-actividades.py
>  to ../../txd-pyspark-siebel-actividades.py
> download: s3://mybucket.jobs/siebel-casos-actividades/hadoop-aws-3.3.1.jar to 
> ../../hadoop-aws-3.3.1.jar
> download: s3://mybucket.spark.jobs/siebel-casos-actividades/ojdbc8.jar to 
> ../../ojdbc8.jar
> download: 
> s3://mybucket.spark.jobs/siebel-casos-actividades/aws-java-sdk-bundle-1.11.901.jar
>  to ../../aws-java-sdk-bundle-1.11.901.jar
> ++ id -u
> + myuid=0
> ++ id -g
> + mygid=0
> + set +e
> ++ getent passwd 0
> + uidentry=root:x:0:0:root:/root:/bin/ash
> + set -e
> + '[' -z root:x:0:0:root:/root:/bin/ash

[jira] [Resolved] (HADOOP-18052) Support Apple Silicon in start-build-env.sh

2021-12-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18052.

Fix Version/s: 3.4.0
   3.3.3
   Resolution: Fixed

Committed to trunk and branch-3.3.

> Support Apple Silicon in start-build-env.sh
> ---
>
> Key: HADOOP-18052
> URL: https://issues.apache.org/jira/browse/HADOOP-18052
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
> Environment: M1 Pro. MacOS 12.0.1. Docker for Mac.
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> start-build-env.sh uses the x86 Dockerfile on M1 Macs, and that Dockerfile 
> sets the wrong JAVA_HOME. Dockerfile_aarch64 should be used instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18052) Support start-build-env.sh in M1 Mac

2021-12-19 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18052:
--

 Summary: Support start-build-env.sh in M1 Mac
 Key: HADOOP-18052
 URL: https://issues.apache.org/jira/browse/HADOOP-18052
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
 Environment: M1 Pro. MacOS 12.0.1. Docker for Mac.
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


start-build-env.sh uses the x86 Dockerfile on M1 Macs, and that Dockerfile sets 
the wrong JAVA_HOME. Dockerfile_aarch64 should be used instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13500) Synchronizing iteration of Configuration properties object

2021-12-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-13500.

Fix Version/s: 2.10.2
   Resolution: Fixed

Merged PR 3776 into branch-2.10.

> Synchronizing iteration of Configuration properties object
> --
>
> Key: HADOOP-13500
> URL: https://issues.apache.org/jira/browse/HADOOP-13500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Jason Darrell Lowe
>Assignee: Dhananjay Badaya
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> It is possible to encounter a ConcurrentModificationException while trying to 
> iterate a Configuration object.  The iterator method tries to walk the 
> underlying Properties object without proper synchronization, so another thread 
> simultaneously calling the set method can trigger it.
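>
> A minimal sketch of the race (a hypothetical standalone class, not the 
> reported code; without synchronization on the underlying properties, the 
> iterating thread can hit ConcurrentModificationException while the writer 
> thread calls set()):
> {code:java}
> import java.util.Map;
>
> import org.apache.hadoop.conf.Configuration;
>
> public class ConfigIterationRace {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     Thread writer = new Thread(() -> {
>       for (int i = 0; i < 100000; i++) {
>         conf.set("test.race.key." + i, "value-" + i);
>       }
>     });
>     writer.start();
>     // Configuration implements Iterable<Map.Entry<String, String>>; the
>     // iterator walks the underlying Properties and may fail if the writer
>     // thread modifies it concurrently.
>     while (writer.isAlive()) {
>       for (Map.Entry<String, String> entry : conf) {
>         entry.getKey();
>       }
>     }
>     writer.join();
>   }
> }
> {code}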



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18049) Hadoop CI fails in precommit due to python2.7 incompatible version of lazy-object-proxy

2021-12-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18049.

Fix Version/s: 2.10.2
   Resolution: Fixed

Merged the PR into branch-2.10.

> Hadoop CI fails in precommit due to python2.7 incompatible version of 
> lazy-object-proxy 
> 
>
> Key: HADOOP-18049
> URL: https://issues.apache.org/jira/browse/HADOOP-18049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.10.2
>Reporter: Dhananjay Badaya
>Assignee: Dhananjay Badaya
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The latest version of lazy-object-proxy (a dependency of pylint) seems 
> incompatible with Python 2.7 as per the [release 
> notes|https://pypi.org/project/lazy-object-proxy/1.7.1/]
> [https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3776/2/pipeline]
>  
> {code:java}
> [2021-12-16T12:37:15.710Z] Collecting lazy-object-proxy (from 
> astroid<2.0,>=1.6->pylint==1.9.2)
> [2021-12-16T12:37:15.710Z]   Downloading 
> https://files.pythonhosted.org/packages/75/93/3fc1cc28f71dd10b87a53b9d809602d7730e84cc4705a062def286232a9c/lazy-object-proxy-1.7.1.tar.gz
>  (41kB)
> [2021-12-16T12:37:16.225Z] Complete output from command python setup.py 
> egg_info:
> [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: 
> UserWarning: Unknown distribution option: 'project_urls'
> [2021-12-16T12:37:16.225Z]   warnings.warn(msg)
> [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: 
> UserWarning: Unknown distribution option: 'python_requires'
> [2021-12-16T12:37:16.225Z]   warnings.warn(msg)
> [2021-12-16T12:37:16.225Z] /usr/lib/python2.7/distutils/dist.py:267: 
> UserWarning: Unknown distribution option: 'use_scm_version'
> [2021-12-16T12:37:16.225Z]   warnings.warn(msg)
> [2021-12-16T12:37:16.225Z] running egg_info
> [2021-12-16T12:37:16.225Z] creating 
> pip-egg-info/lazy_object_proxy.egg-info
> [2021-12-16T12:37:16.225Z] writing 
> pip-egg-info/lazy_object_proxy.egg-info/PKG-INFO
> [2021-12-16T12:37:16.225Z] writing top-level names to 
> pip-egg-info/lazy_object_proxy.egg-info/top_level.txt
> [2021-12-16T12:37:16.225Z] writing dependency_links to 
> pip-egg-info/lazy_object_proxy.egg-info/dependency_links.txt
> [2021-12-16T12:37:16.225Z] writing manifest file 
> 'pip-egg-info/lazy_object_proxy.egg-info/SOURCES.txt'
> [2021-12-16T12:37:16.225Z] warning: manifest_maker: standard file '-c' 
> not found
> [2021-12-16T12:37:16.225Z] 
> [2021-12-16T12:37:16.225Z] Traceback (most recent call last):
> [2021-12-16T12:37:16.225Z]   File "<string>", line 1, in <module>
> [2021-12-16T12:37:16.225Z]   File 
> "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 146, in 
> [2021-12-16T12:37:16.225Z] distclass=BinaryDistribution,
> [2021-12-16T12:37:16.225Z]   File "/usr/lib/python2.7/distutils/core.py", 
> line 151, in setup
> [2021-12-16T12:37:16.225Z] dist.run_commands()
> [2021-12-16T12:37:16.225Z]   File "/usr/lib/python2.7/distutils/dist.py", 
> line 953, in run_commands
> [2021-12-16T12:37:16.225Z] self.run_command(cmd)
> [2021-12-16T12:37:16.225Z]   File "/usr/lib/python2.7/distutils/dist.py", 
> line 972, in run_command
> [2021-12-16T12:37:16.225Z] cmd_obj.run()
> [2021-12-16T12:37:16.225Z]   File 
> "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 186, 
> in run
> [2021-12-16T12:37:16.225Z] self.find_sources()
> [2021-12-16T12:37:16.225Z]   File 
> "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 209, 
> in find_sources
> [2021-12-16T12:37:16.225Z] mm.run()
> [2021-12-16T12:37:16.225Z]   File 
> "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 293, 
> in run
> [2021-12-16T12:37:16.225Z] self.add_defaults()
> [2021-12-16T12:37:16.225Z]   File 
> "/usr/lib/python2.7/dist-packages/setuptools/command/egg_info.py", line 322, 
> in add_defaults
> [2021-12-16T12:37:16.225Z] sdist.add_defaults(self)
> [2021-12-16T12:37:16.225Z]   File 
> "/usr/lib/python2.7/dist-packages/setuptools/command/sdist.py", line 131, in 
> add_defaults
> [2021-12-16T12:37:16.225Z] if self.distribution.has_ext_modules():
> [2021-12-16T12:37:16.225Z]   File 
> "/tmp/pip-build-j47m88/lazy-object-proxy/setup.py", line 70, in 
> has_ext_modules
> [2021-12-16T12:37:16.225Z] return super().has_ext_modules() or not 
> os.environ.get('SETUPPY_ALLOW_PURE')
> [2021-12-16T12:37:16.225Z] TypeError: super() takes at least 1 argument 
> (0

[jira] [Resolved] (HADOOP-17783) Old JQuery version causing security concerns

2021-12-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17783.

Target Version/s:   (was: 3.3.0, 3.3.1)
  Resolution: Duplicate

Closing as duplicate.

> Old JQuery version causing security concerns
> 
>
> Key: HADOOP-17783
> URL: https://issues.apache.org/jira/browse/HADOOP-17783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.0
> Environment: Redhat Spark-Hadoop cluster.
>Reporter: Ahmed Abdelrahman
>Priority: Blocker
>
> These fixes are required for Hadoop 3.3.0. 
> Can you please update the following jQuery versions for the UI? They are 
> causing security and vulnerability concerns: 
> URL : http://web-address:8088/static/jquery/jquery-3.4.1.min.js Installed 
> version : 3.4.1 Fixed version : 3.5.0 or latest
> URL : http://web-address:8080/static/jquery-1.12.4.min.js  Installed version 
> : 1.12.4 Fixed version : 3.5.0 or latest
> This also extends to the Spark-on-YARN cluster. I hope I'm not messing up my 
> file paths!
>  
> Thank you



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18046) TestIPC fails

2021-12-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18046:
--

 Summary: TestIPC fails
 Key: HADOOP-18046
 URL: https://issues.apache.org/jira/browse/HADOOP-18046
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


TestIPC fails 
{code}
[ERROR] testHttpGetResponse(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 0.013 
s  <<< ERROR!
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:127)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:100)
at org.apache.hadoop.ipc.TestIPC.doIpcVersionTest(TestIPC.java:1783)
at org.apache.hadoop.ipc.TestIPC.testHttpGetResponse(TestIPC.java:1234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

[ERROR] testIOEOnListenerAccept(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 
0.007 s  <<< FAILURE!
java.lang.AssertionError: Expected an EOFException to have been thrown
at org.junit.Assert.fail(Assert.java:89)
at 
org.apache.hadoop.ipc.TestIPC.testIOEOnListenerAccept(TestIPC.java:652)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15929) org.apache.hadoop.ipc.TestIPC fail

2021-12-13 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-15929.

Resolution: Won't Fix

branch-2.8 is EoL. Please reopen this if this issue applies to active release 
branches.

> org.apache.hadoop.ipc.TestIPC fail
> --
>
> Key: HADOOP-15929
> URL: https://issues.apache.org/jira/browse/HADOOP-15929
> Project: Hadoop Common
>  Issue Type: Test
>  Components: common
>Affects Versions: 2.8.5
>Reporter: Elaine Ang
>Priority: Major
> Attachments: org.apache.hadoop.ipc.TestIPC-output.txt
>
>
> The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5, 
> checked out from GitHub) failed.
> Reproduce:
>  # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check 
> out tag release-2.8.5-RC0
>  # Compile & test
> {noformat}
> mvn clean compile 
> cd hadoop-common-project/hadoop-common/
> mvn test{noformat}
>  
> Below is the failed test log when running as a non-root user.
>  
> {noformat}
> Failed tests:
>  
> TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1395
>  expected:<3000> but was:<1542140218000>
>  TestIPC.testUserBinding:1495->checkUserBinding:1516
> Wanted but not invoked:
> socket.bind(OptiPlex/127.0.1.1:0);
> -> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)
> However, there were other interactions with this mock:
> -> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
> -> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
> -> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
> -> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> -> at org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872)
>  TestIPC.testProxyUserBinding:1500->checkUserBinding:1516
> Wanted but not invoked:
> socket.bind(OptiPlex/127.0.1.1:0);
> -> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)
> However, there were other interactions with this mock:
> -> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
> -> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
> -> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
> -> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> -> at 
> org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872){noformat}
>  
>  Attached is a more verbosed test output.  
> [^org.apache.hadoop.ipc.TestIPC-output.txt]
> Suggestions regarding how to resolve this would be helpful.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18039) Upgrade hbase2 version and fix TestTimelineWriterHBaseDown

2021-12-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18039.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> Upgrade hbase2 version and fix TestTimelineWriterHBaseDown
> --
>
> Key: HADOOP-18039
> URL: https://issues.apache.org/jira/browse/HADOOP-18039
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> As mentioned on the parent Jira, we can't upgrade the hbase2 profile version 
> beyond 2.2.4 until we either have hbase 2 artifacts available that are built 
> with the hadoop 3 profile by default or hbase 3 is rolled out (hbase 3 is 
> compatible with hadoop 3 versions only).
> Let's upgrade the hbase2 profile version to 2.2.4 as part of this Jira and also 
> fix TestTimelineWriterHBaseDown to create the connection only after the mini 
> cluster is up.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18045) Disable TestDynamometerInfra

2021-12-12 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18045:
--

 Summary: Disable TestDynamometerInfra
 Key: HADOOP-18045
 URL: https://issues.apache.org/jira/browse/HADOOP-18045
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


This test has been broken for a long time and no fix has been provided. Let's 
disable the test to reduce the noise in the daily qbt job.
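
A minimal sketch of one way to disable it, assuming the test is JUnit 4 based 
(hypothetical class body and test method name, not the actual patch):

{code:java}
import org.junit.Ignore;
import org.junit.Test;

// @Ignore on the class keeps it compiling but skips all of its tests, so the
// daily qbt report stops flagging the known failure.
@Ignore("HADOOP-18045: broken for a long time; disabled to reduce qbt noise")
public class TestDynamometerInfra {
  @Test
  public void testInfra() {
    // existing test body would remain unchanged
  }
}
{code}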



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18043) TestMultiSchemeAuthenticationHandler fails

2021-12-12 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18043:
--

 Summary: TestMultiSchemeAuthenticationHandler fails
 Key: HADOOP-18043
 URL: https://issues.apache.org/jira/browse/HADOOP-18043
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


TestMultiSchemeAuthenticationHandler fails on trunk
{code:java}
[INFO] Running 
org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.957 s 
<<< FAILURE! - in 
org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
[ERROR] 
testRequestWithLdapAuthorization(org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler)
  Time elapsed: 1.663 s  <<< ERROR!
org.apache.hadoop.security.authentication.client.AuthenticationException: Error 
validating LDAP user
at 
org.apache.hadoop.security.authentication.server.LdapAuthenticationHandler.authenticateWithoutTlsExtension(LdapAuthenticationHandler.java:310)
at 
org.apache.hadoop.security.authentication.server.LdapAuthenticationHandler.authenticateUser(LdapAuthenticationHandler.java:240)
at 
org.apache.hadoop.security.authentication.server.LdapAuthenticationHandler.authenticate(LdapAuthenticationHandler.java:202)
at 
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(MultiSchemeAuthenticationHandler.java:197)
at 
org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler.testRequestWithLdapAuthorization(TestMultiSchemeAuthenticationHandler.java:161)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.naming.NamingException: LDAP connection has been closed
at com.sun.jndi.ldap.LdapRequest.getReplyBer(LdapRequest.java:133)
at com.sun.jndi.ldap.Connection.readReply(Connection.java:469)
at com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:365)
at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2897)
at com.sun.jndi.ldap.LdapCtx.(LdapCtx.java:347)
at 
com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:225)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:189)
at 
com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:243)
at 
com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:154)
at 
com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:84)
at 
javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:695)
at 
javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.InitialContext.(InitialContext.java:216)
at 
> javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:101)
at 
org.apache.hadoop.security.authentication.server.LdapAuthenticationHandler.authenticateWithoutTlsExtension(LdapAuthenticationHandler.java:305)
... 16 more {code}
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16917) Update dependency in branch-3.1

2021-12-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16917.

Resolution: Won't Fix

branch-3.1 is EoL.

> Update dependency in branch-3.1
> ---
>
> Key: HADOOP-16917
> URL: https://issues.apache.org/jira/browse/HADOOP-16917
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.1.4
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: dependency-check-report.html
>
>
> Jackson-databind 2.9.10.3 --> 2.10.3
> Zookeeper 3.4.13 --> 3.4.14
> hbase-client 1.2.6 --> 1.2.6.1
> aws-java-sdk-bundle 1.11.271 --> 1.11.563? (this is the version used by trunk)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18042) Fix jetty version in LICENSE-binary

2021-12-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18042.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> Fix jetty version in LICENSE-binary
> ---
>
> Key: HADOOP-18042
> URL: https://issues.apache.org/jira/browse/HADOOP-18042
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuan Luo
>Assignee: Yuan Luo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We upgraded the jetty version in 
> https://issues.apache.org/jira/browse/HADOOP-18001, so we also need to update 
> the jetty version in LICENSE-binary.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18040) Use maven.test.failure.ignore instead of ignoreTestFailure

2021-12-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18040.

Fix Version/s: 3.4.0
   2.10.2
   3.2.4
   3.3.3
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-2.10.

> Use maven.test.failure.ignore instead of ignoreTestFailure
> --
>
> Key: HADOOP-18040
> URL: https://issues.apache.org/jira/browse/HADOOP-18040
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In HADOOP-16596, the "ignoreTestFailure" variable was introduced to ignore unit 
> test failures; however, the Maven property "maven.test.failure.ignore" can be 
> used instead, which simplifies the pom.xml.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18035) Skip unit test failures to run all the unit tests

2021-12-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18035.

Fix Version/s: 2.10.2
   3.2.4
   Resolution: Fixed

Merged the PR into branch-3.2 and branch-2.10.

> Skip unit test failures to run all the unit tests
> -
>
> Key: HADOOP-18035
> URL: https://issues.apache.org/jira/browse/HADOOP-18035
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2, 3.2.4
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In branch-3.3 and later, unit test failures are ignored as of HADOOP-16596. 
> That way we can run all the unit tests in all the modules by simply modifying 
> a file under the project root directory. Without this feature, if there is a 
> test failure in a module (which is likely to happen due to flaky tests), the 
> tests in the subsequent modules are not executed. I want to introduce the 
> feature in the other branches.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18040) Use maven.test.failure.ignore instead of ignoreTestFailure

2021-12-08 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18040:
--

 Summary: Use maven.test.failure.ignore instead of ignoreTestFailure
 Key: HADOOP-18040
 URL: https://issues.apache.org/jira/browse/HADOOP-18040
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


In HADOOP-16596, the "ignoreTestFailure" variable was introduced to ignore unit 
test failures; however, the Maven property "maven.test.failure.ignore" can be 
used instead, which simplifies the pom.xml.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18035) Skip unit test failures to run all the unit tests

2021-12-06 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18035:
--

 Summary: Skip unit test failures to run all the unit tests
 Key: HADOOP-18035
 URL: https://issues.apache.org/jira/browse/HADOOP-18035
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


In branch-3.3 and later, unit test failures are ignored as of HADOOP-16596. That 
way we can run all the unit tests in all the modules by simply modifying a file 
under the project root directory. Without this feature, if there is a test 
failure in a module (which is likely to happen due to flaky tests), the tests in 
the subsequent modules are not executed. I want to introduce the feature in the 
other branches.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18033) Upgrade Jackson to 2.12 or upper

2021-12-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-18033:
--

 Summary: Upgrade Jackson to 2.12 or upper
 Key: HADOOP-18033
 URL: https://issues.apache.org/jira/browse/HADOOP-18033
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Spark 3.2.0 depends on Jackson 2.12.3. Let's upgrade to 2.12.5 (the latest 
2.12.x as of now) or later.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17994) when exporting table from mysql to hive using Sqoop it gets stuck

2021-11-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17994.

Resolution: Invalid

This Jira is not for asking end-user questions. Given the environment is CDH, 
please ask Cloudera support.

> when exporting table from mysql to hive using Sqoop it gets stuck
> -
>
> Key: HADOOP-17994
> URL: https://issues.apache.org/jira/browse/HADOOP-17994
> Project: Hadoop Common
>  Issue Type: Task
> Environment: !MySQL to Hive.JPG!
>Reporter: Atiq
>Priority: Blocker
> Attachments: MySQL to Hive.JPG
>
>
> When exporting a table from MySQL to Hive it gets stuck without giving any 
> error, and when checked in Hive the table is not present there either.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17985) Disable JIRA plugin for YETUS on Hadoop

2021-10-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17985.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk. Thank you [~gautham] for your contribution.

> Disable JIRA plugin for YETUS on Hadoop
> ---
>
> Key: HADOOP-17985
> URL: https://issues.apache.org/jira/browse/HADOOP-17985
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I’ve been noticing an issue with Jenkins CI where a file jira-json goes 
> missing all of a sudden – jenkins / hadoop-multibranch / PR-3588 / #2 
> (apache.org)
> {code}
> [2021-10-27T17:52:58.787Z] Processing: 
> https://github.com/apache/hadoop/pull/3588
> [2021-10-27T17:52:58.787Z] GITHUB PR #3588 is being downloaded from
> [2021-10-27T17:52:58.787Z] 
> https://api.github.com/repos/apache/hadoop/pulls/3588
> [2021-10-27T17:52:58.787Z] JSON data at Wed Oct 27 17:52:55 UTC 2021
> [2021-10-27T17:52:58.787Z] Patch data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:58.787Z] Diff data at Wed Oct 27 17:52:56 UTC 2021
> [2021-10-27T17:52:59.814Z] awk: cannot open 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-3588/centos-7/out/jira-json
>  (No such file or directory)
> [2021-10-27T17:52:59.814Z] ERROR: https://github.com/apache/hadoop/pull/3588 
> issue status is not matched with "Patch Available".
> [2021-10-27T17:52:59.814Z]
> {code}
> This causes the pipeline run to fail. I’ve seen this in my multiple attempts 
> to re-run the CI on my PR –
>  # After 45 minutes – [jenkins / hadoop-multibranch / PR-3588 / #1 
> (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/1/pipeline/]
>  # After 1 minute – [jenkins / hadoop-multibranch / PR-3588 / #2 
> (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/2/pipeline/]
>  # After 17 minutes – [jenkins / hadoop-multibranch / PR-3588 / #3 
> (apache.org)|https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3588/3/pipeline/]
> The hadoop-multibranch pipeline doesn't use ASF JIRA, thus, we're disabling 
> the *jira* plugin to fix this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17971) Exclude IBM Java security classes from being shaded/relocated

2021-10-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17971.

Fix Version/s: 3.3.2
   3.2.3
   3.4.0
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-3.2.3. Thanks [~nmarion] 
for your contribution.

> Exclude IBM Java security classes from being shaded/relocated
> -
>
> Key: HADOOP-17971
> URL: https://issues.apache.org/jira/browse/HADOOP-17971
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Nicholas Marion
>Assignee: Nicholas Marion
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> IBM Java classes are shaded in Hadoop libraries, e.g. hadoop-client-api. When 
> loaded by Spark, UserGroupInformation throws an exception:
> {noformat}
> org.apache.hadoop.security.KerberosAuthException: failure to login: 
> javax.security.auth.login.LoginException: unable to find LoginModule class: 
> org.apache.hadoop.shaded.com.ibm.security.auth.module.JAASLoginModule
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1986)
>   at 
> org.apache.hadoop.security.UserGroupInformation.createLoginUser(UserGroupInformation.java:719)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:669)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:579)
>   at 
> org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2609)
>   at 
> org.apache.spark.util.Utils$$$Lambda$1388/0x338e9c30.apply(Unknown 
> Source)
>   at scala.Option.getOrElse(Option.scala:189)
> {noformat}
> When I manually compile UserGroupInformation.java without Maven (i.e. without 
> relocation) and inject the class files into the hadoop-client-api jar, it works.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17932) distcp file length comparison have no effect

2021-10-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17932.

Fix Version/s: 3.3.2
   3.4.0
   Resolution: Fixed

Committed to trunk and branch-3.3. Thanks [~adol] for your contribution!

> distcp file length comparison have no effect
> 
>
> Key: HADOOP-17932
> URL: https://issues.apache.org/jira/browse/HADOOP-17932
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, tools, tools/distcp
>Affects Versions: 3.3.1
>Reporter: yinan zhan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The parameters passed to compareFileLengthsAndChecksums in 
> RetriableFileCopyCommand make the length comparison a no-op.
> The current call is:
> {code:java}
> DistCpUtils.compareFileLengthsAndChecksums(source.getLen(), sourceFS,
> sourcePath, sourceChecksum, targetFS,
> targetPath, skipCrc, source.getLen());{code}
> {code:java}
> public static void compareFileLengthsAndChecksums(long srcLen,
>FileSystem sourceFS, Path source, FileChecksum sourceChecksum,
>FileSystem targetFS, Path target, boolean skipCrc,
>long targetLen) throws IOException {
>   if (srcLen != targetLen) {
> throw new IOException(
> DistCpConstants.LENGTH_MISMATCH_ERROR_MSG + source + " (" + srcLen
> + ") and target:" + target + " (" + targetLen + ")");
>   }
> {code}
> So it compares source.getLen() with source.getLen()...
> It should be as below, as seen in the file history:
> {code:java}
> DistCpUtils.compareFileLengthsAndChecksums(source.getLen(), sourceFS,
> sourcePath, sourceChecksum, targetFS,
> targetPath, skipCrc, bytesRead);
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17941) Update xerces to 2.12

2021-09-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17941.

Fix Version/s: 3.3.2
   3.4.0
   Resolution: Fixed

Committed to trunk and branch-3.3.

> Update xerces to 2.12
> -
>
> Key: HADOOP-17941
> URL: https://issues.apache.org/jira/browse/HADOOP-17941
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1
>Reporter: Zhongwei Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Update xerces due to CVE-2012-0881



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17944) Some Hadoop 3.3.1 jdiff files are missing

2021-09-28 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17944:
--

 Summary: Some Hadoop 3.3.1 jdiff files are missing
 Key: HADOOP-17944
 URL: https://issues.apache.org/jira/browse/HADOOP-17944
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


When releasing 3.3.1, the HDFS jdiff file was added by 
https://github.com/apache/hadoop/commit/a77bf7cf07189911da99e305e3b80c589edbbfb5,
 but the other jdiff files are missing.

In addition, the base version is still 2.7.2 in hadoop-yarn and 
hadoop-mapreduce when running jdiff. The version should be upgraded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17910) [JDK 17] TestNetUtils fails

2021-09-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17910.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> [JDK 17] TestNetUtils fails
> ---
>
> Key: HADOOP-17910
> URL: https://issues.apache.org/jira/browse/HADOOP-17910
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestNetUtils#testInvalidAddress fails.
> {noformat}
> [INFO] Running org.apache.hadoop.net.TestNetUtils
> [ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 4.469 s <<< FAILURE! - in org.apache.hadoop.net.TestNetUtils
> [ERROR] testInvalidAddress(org.apache.hadoop.net.TestNetUtils)  Time elapsed: 
> 0.386 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'invalid-test-host:0' but got unexpected exception: 
> java.net.UnknownHostException: invalid-test-host/<unresolved>:0
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:592)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:551)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:109)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:396)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:373)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:116)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>

[jira] [Created] (HADOOP-17910) [JDK 17] TestNetUtils fails

2021-09-15 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17910:
--

 Summary: [JDK 17] TestNetUtils fails
 Key: HADOOP-17910
 URL: https://issues.apache.org/jira/browse/HADOOP-17910
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


TestNetUtils#testInvalidAddress fails.
{noformat}
[INFO] Running org.apache.hadoop.net.TestNetUtils
[ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.469 
s <<< FAILURE! - in org.apache.hadoop.net.TestNetUtils
[ERROR] testInvalidAddress(org.apache.hadoop.net.TestNetUtils)  Time elapsed: 
0.386 s  <<< FAILURE!
java.lang.AssertionError: 
 Expected to find 'invalid-test-host:0' but got unexpected exception: 
java.net.UnknownHostException: invalid-test-host/<unresolved>:0
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:592)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:551)
at 
org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:109)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:396)
at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:373)
at 
org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:116)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUn

[jira] [Resolved] (HADOOP-17804) Prometheus metrics only include the last set of labels

2021-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17804.

Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3. Thanks [~Kimahriman] for the contribution!

> Prometheus metrics only include the last set of labels
> --
>
> Key: HADOOP-17804
> URL: https://issues.apache.org/jira/browse/HADOOP-17804
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Adam Binford
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> A prometheus endpoint was added in 
> https://issues.apache.org/jira/browse/HADOOP-16398, but the logic that puts 
> metrics into a map keyed only by the metric name incorrectly hides any metrics 
> with the same key but different labels. The relevant code is here: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java#L55
> The labels/tags need to be taken into account, as different tags mean 
> different metrics. For example, I came across this while trying to scrape 
> metrics for all the queues in our scheduler. Only the last queue is included 
> because all the metrics have the same "key" but a different "queue" label/tag.
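>
> A minimal sketch of the idea behind such a fix (hypothetical key builder, not 
> the committed code): make the map key include the sorted label names and 
> values so that two records sharing a metric name but differing in labels no 
> longer collide.
> {code:java}
> import java.util.Collections;
> import java.util.Map;
> import java.util.TreeMap;
>
> public class PrometheusKeySketch {
>   /** Build a cache key from the metric name plus its (sorted) labels. */
>   static String metricKey(String metricName, Map<String, String> labels) {
>     StringBuilder key = new StringBuilder(metricName);
>     // A TreeMap gives a stable ordering, so equal label sets map to equal keys.
>     for (Map.Entry<String, String> label : new TreeMap<>(labels).entrySet()) {
>       key.append('{').append(label.getKey()).append('=')
>          .append(label.getValue()).append('}');
>     }
>     return key.toString();
>   }
>
>   public static void main(String[] args) {
>     // Two records with the same metric name but different "queue" labels now
>     // get distinct keys instead of overwriting each other.
>     System.out.println(metricKey("queue_metrics_apps_running",
>         Collections.singletonMap("queue", "root.default")));
>     System.out.println(metricKey("queue_metrics_apps_running",
>         Collections.singletonMap("queue", "root.analytics")));
>   }
> }
> {code}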



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17821) Move Ozone to related projects section

2021-08-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17821.

Fix Version/s: asf-site
   Resolution: Fixed

> Move Ozone to related projects section
> --
>
> Key: HADOOP-17821
> URL: https://issues.apache.org/jira/browse/HADOOP-17821
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Hi all, as Ozone was spun off to a TLP, it now has its own website.
> On the Modules part of the Hadoop [website|https://hadoop.apache.org/], the link 
> to the Ozone website points to an old page.
> IMHO there are two ways to fix it:
> 1. Update it to the new page.
> 2. Move Ozone to the Related projects part of the Hadoop website.
> Please feel free to give me some feedback, thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17850) Upgrade ZooKeeper to 3.4.14 in branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17850:
--

 Summary: Upgrade ZooKeeper to 3.4.14 in branch-3.2
 Key: HADOOP-17850
 URL: https://issues.apache.org/jira/browse/HADOOP-17850
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


Upgrade ZooKeeper to 3.4.14 to fix CVE-2019-0201 
(https://zookeeper.apache.org/security.html). That way the ZooKeeper version 
will be consistent with BigTop 3.0.0 (BIGTOP-3471).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17799) Improve the GitHub pull request template

2021-08-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17799.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> Improve the GitHub pull request template
> 
>
> Key: HADOOP-17799
> URL: https://issues.apache.org/jira/browse/HADOOP-17799
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The current Hadoop pull request template can be improved.
> - Require some information (e.g. 
> https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
> - Checklists (e.g. 
> https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
> - Move current notice to comment (i.e. surround with <!-- -->)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17844) Upgrade JSON smart to 2.4.7

2021-08-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17844.

Fix Version/s: 3.3.2
   3.2.3
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-3.2.3.

> Upgrade JSON smart to 2.4.7
> ---
>
> Key: HADOOP-17844
> URL: https://issues.apache.org/jira/browse/HADOOP-17844
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently we are using JSON Smart 2.4.2, which is vulnerable to 
> CVE-2021-31684.
> We can upgrade the version to 2.4.7 (2.4.5 or later).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17370.

Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
   Resolution: Fixed

Committed to all the active branches.

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17838) Update link of PoweredBy wiki page

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17838.

Fix Version/s: asf-site
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged the PR.

> Update link of PoweredBy wiki page
> --
>
> Key: HADOOP-17838
> URL: https://issues.apache.org/jira/browse/HADOOP-17838
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The [PoweredBy wiki 
> page|https://cwiki.apache.org/confluence/display/hadoop/PoweredBy] on [main 
> page|https://hadoop.apache.org/] is not found.
> IMHO update it to 
> [here|https://cwiki.apache.org/confluence/display/HADOOP2/PoweredBy]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17813) Allow line length more than 80 characters

2021-07-21 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17813:
--

 Summary: Allow line length more than 80 characters
 Key: HADOOP-17813
 URL: https://issues.apache.org/jira/browse/HADOOP-17813
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka


Update the checkstyle definition to allow 100 or 120 characters per line.

Discussion thread: 
https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E
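
A hedged sketch of the kind of checkstyle edit being proposed, assuming the stock Checkstyle LineLength module (module placement varies by Checkstyle version; this is not the actual patch):

{code:xml}
<!-- Sketch only: raise the allowed line length from 80 to 100 characters.
     Whether LineLength sits under Checker or TreeWalker depends on the
     Checkstyle version in use. -->
<module name="LineLength">
  <property name="max" value="100"/>
</module>
{code}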



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16272) Update HikariCP to 4.0.3

2021-07-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16272.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thank you [~vjasani] for the contribution.

> Update HikariCP to 4.0.3
> 
>
> Key: HADOOP-16272
> URL: https://issues.apache.org/jira/browse/HADOOP-16272
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Yuming Wang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17799) Improve the GitHub pull request template

2021-07-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17799:
--

 Summary: Improve the GitHub pull request template
 Key: HADOOP-17799
 URL: https://issues.apache.org/jira/browse/HADOOP-17799
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Reporter: Akira Ajisaka


The current Hadoop pull request template can be improved.

- Require some information (e.g. 
https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
- Checklists (e.g. 
https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Move the current notice into a comment (i.e. surround it with <!-- -->)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches

2021-07-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17798:
--

 Summary: Always use GitHub PR rather than JIRA to review patches
 Key: HADOOP-17798
 URL: https://issues.apache.org/jira/browse/HADOOP-17798
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka


Now there are 2 types of precommit jobs in https://ci-hadoop.apache.org/
(1) Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs that try to download 
patches from JIRA and test them.
(2) hadoop-multibranch job for GitHub PR

The problems are:
- The build configs are separated. The (2) config is in the Jenkinsfile, and the 
(1) configs live in Jenkins. Whenever the Jenkinsfile is updated, the configs of 
the 4 precommit jobs have to be updated manually via the Jenkins Web UI.
- The (1) build configs are static. We cannot use a separate config for each 
branch. This may cause some build failures.
- GitHub Actions cannot be used in the (1) jobs.

Therefore I want to disable the (1) jobs and always use GitHub PRs to review 
patches.

How to do this:

1. Update the wiki: 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Provideapatch
2. Disable the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17568) Mapred/YARN job fails due to kms-dt can't be found in cache with LoadBalancingKMSClientProvider + Kerberos

2021-07-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17568.

Resolution: Not A Bug

Closing this because the parameter has been documented in HADOOP-17794.

> Mapred/YARN job fails due to kms-dt can't be found in cache with 
> LoadBalancingKMSClientProvider + Kerberos
> --
>
> Key: HADOOP-17568
> URL: https://issues.apache.org/jira/browse/HADOOP-17568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms, security
>Affects Versions: 3.2.2
>Reporter: Zbigniew Kostrzewa
>Priority: Major
>
> I deployed a Hadoop 3.2.2 cluster with KMS in HA using 
> LoadBalancingKMSClientProvider with Kerberos authentication. KMS instances 
> are configured with ZooKeeper for storing the shared secret.
> I have created an encryption key and an encryption zone in the `/test` directory 
> and executed `randomtextwriter` from the MapReduce examples, passing it a 
> sub-directory in the encryption zone:
> {code:java}
> hadoop jar hadoop-mapreduce-examples-3.2.2.jar randomtextwriter 
> /test/randomtextwriter
> {code}
> Unfortunately the job keeps failing with errors like:
> {code:java}
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:363)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:212)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:972)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:536)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1168)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:659)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:154)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:592)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:540)
>   

[jira] [Resolved] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2021-07-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-12665.

Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
   Resolution: Fixed

Committed to all the active branches. Thanks!

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.
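
A minimal sketch of how the property could be documented in core-default.xml style; the default value of true and the wording of the description are assumptions, not taken from this thread:

{code:xml}
<!-- Sketch of a core-default.xml entry; default value and wording are assumptions. -->
<property>
  <name>hadoop.security.token.service.use_ip</name>
  <value>true</value>
  <description>
    Controls whether security token service names identify servers by
    IP address (true) or by host name (false).
  </description>
</property>
{code}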



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17794) Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS

2021-07-08 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17794:
--

 Summary: Add a sample configuration to use 
ZKDelegationTokenSecretManager in Hadoop KMS
 Key: HADOOP-17794
 URL: https://issues.apache.org/jira/browse/HADOOP-17794
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, security
Reporter: Akira Ajisaka


The following parameters should be documented in 
https://hadoop.apache.org/docs/stable/hadoop-kms/index.html#Delegation_Tokens

* hadoop.kms.authentication.zk-dt-secret-manager.enable
* hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab
* hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal
* hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString
* hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath
* hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType
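
A hedged sketch of what such a sample kms-site.xml block could look like; every value below (keytab path, principal, ZooKeeper connection string, znode path, auth type) is a placeholder assumption, and only the property names come from the list above:

{code:xml}
<!-- Illustrative kms-site.xml snippet; all values are placeholders. -->
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.enable</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath</name>
  <value>hadoop-kms/zkdtsm</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType</name>
  <value>sasl</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab</name>
  <value>/etc/security/keytabs/kms.keytab</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal</name>
  <value>kms/_HOST@EXAMPLE.COM</value>
</property>
{code}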



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented

2021-07-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17792.

Resolution: Duplicate

> "hadoop.security.token.service.use_ip" should be documented
> ---
>
> Key: HADOOP-17792
> URL: https://issues.apache.org/jira/browse/HADOOP-17792
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> hadoop.security.token.service.use_ip is not documented in core-default.xml. 
> It should be documented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented

2021-07-07 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17792:
--

 Summary: "hadoop.security.token.service.use_ip" should be 
documented
 Key: HADOOP-17792
 URL: https://issues.apache.org/jira/browse/HADOOP-17792
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Akira Ajisaka


hadoop.security.token.service.use_ip is not documented in core-default.xml. It 
should be documented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-22 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17762.

Resolution: Duplicate

This issue has probably been fixed by INFRA-22020.

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console
> {code:bash}
> Started by timer
> Running as SYSTEM
> Building remotely on H20 (Hadoop) in workspace 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
> The recommended git tool is: NONE
> No credentials specified
> Cloning the remote Git repository
> Using shallow clone with depth 10
> Avoid fetching tags
> Cloning repository https://github.com/apache/hadoop
> ERROR: Failed to clean the workspace
> jenkins.util.io.CompositeIOException: Unable to delete 
> '/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'.
>  Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. 
> (Discarded 1 additional exceptions)
>   at 
> jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
>   at hudson.Util.deleteContentsRecursive(Util.java:262)
>   at hudson.Util.deleteContentsRecursive(Util.java:251)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:211)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)
>   at hudson.remoting.Request$2.run(Request.java:375)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   Suppressed: java.nio.file.AccessDeniedException: 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:457)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRe

[jira] [Resolved] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17759.

Resolution: Done

> Remove Hadoop 3.1.4 from the download page
> --
>
> Key: HADOOP-17759
> URL: https://issues.apache.org/jira/browse/HADOOP-17759
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
> https://hadoop.apache.org/releases.html.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-10 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17759:
--

 Summary: Remove Hadoop 3.1.4 from the download page
 Key: HADOOP-17759
 URL: https://issues.apache.org/jira/browse/HADOOP-17759
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka


Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
https://hadoop.apache.org/releases.html.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-16988) Remove source code from branch-2

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16988:


> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now, branch-2 is dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid committing or cherry-picking to the unused branch.
> Chen Liang asked ASF INFRA for help, but it did not resolve the issue: INFRA-19581



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16988) Remove source code from branch-2

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16988.

Resolution: Done

> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now, branch-2 is dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid committing or cherry-picking to the unused branch.
> Chen Liang asked ASF INFRA for help, but it did not resolve the issue: INFRA-19581



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17078) hadoop-shaded-protobuf_3_7 depends on the wrong version.

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17078.

Resolution: Duplicate

> hadoop-shaded-protobuf_3_7 depends on the wrong version.
> 
>
> Key: HADOOP-17078
> URL: https://issues.apache.org/jira/browse/HADOOP-17078
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: JiangHua Zhu
>Priority: Major
>
> When using Maven to compile the Hadoop source code, the following exception 
> message appears:
> [*INFO*] 
> **
> [*INFO*] *BUILD FAILURE*
> [*INFO*] 
> **
> [*INFO*] Total time:  29.546 s
> [*INFO*] Finished at: 2020-06-20T23:57:59+08:00
> [*INFO*] 
> **
> [*ERROR*] Failed to execute goal on project hadoop-common: *Could not resolve 
> dependencies for project org.apache.hadoop:hadoop-common:jar:3.3.0-SNAPSHOT: 
> Could not find artifact 
> org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_7:jar:1.0.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots)* -> *[Help 1]*
> [*ERROR*] 
> [*ERROR*] To see the full stack trace of the errors, re-run Maven with the 
> *-e* switch.
> [*ERROR*] Re-run Maven using the *-X* switch to enable full debug logging.
> [*ERROR*] 
> [*ERROR*] For more information about the errors and possible solutions, 
> please read the following articles:
> [*ERROR*] *[Help 1]* 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [*ERROR*] 
> [*ERROR*] After correcting the problems, you can resume the build with the 
> command
> [*ERROR*]   *mvn  -rf :hadoop-common*
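
As a hedged illustration of the kind of change that usually resolves this symptom, the build can depend on a released hadoop-shaded-protobuf artifact instead of an unpublished SNAPSHOT; the 1.0.0 version number below is an assumption for illustration, not taken from this thread:

{code:xml}
<!-- Sketch only: reference a released shaded-protobuf artifact rather than a SNAPSHOT.
     The version shown is an assumption. -->
<dependency>
  <groupId>org.apache.hadoop.thirdparty</groupId>
  <artifactId>hadoop-shaded-protobuf_3_7</artifactId>
  <version>1.0.0</version>
</dependency>
{code}

Alternatively, building and installing the hadoop-thirdparty project locally so the SNAPSHOT artifact exists in the local Maven repository would address the same resolution failure.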



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17078) hadoop-shaded-protobuf_3_7 depends on the wrong version.

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17078:


> hadoop-shaded-protobuf_3_7 depends on the wrong version.
> 
>
> Key: HADOOP-17078
> URL: https://issues.apache.org/jira/browse/HADOOP-17078
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: JiangHua Zhu
>Priority: Major
>
> When using Maven to compile the Hadoop source code, the following exception 
> message appears:
> [*INFO*] 
> **
> [*INFO*] *BUILD FAILURE*
> [*INFO*] 
> **
> [*INFO*] Total time:  29.546 s
> [*INFO*] Finished at: 2020-06-20T23:57:59+08:00
> [*INFO*] 
> **
> [*ERROR*] Failed to execute goal on project hadoop-common: *Could not resolve 
> dependencies for project org.apache.hadoop:hadoop-common:jar:3.3.0-SNAPSHOT: 
> Could not find artifact 
> org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_7:jar:1.0.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots)* -> *[Help 1]*
> [*ERROR*] 
> [*ERROR*] To see the full stack trace of the errors, re-run Maven with the 
> *-e* switch.
> [*ERROR*] Re-run Maven using the *-X* switch to enable full debug logging.
> [*ERROR*] 
> [*ERROR*] For more information about the errors and possible solutions, 
> please read the following articles:
> [*ERROR*] *[Help 1]* 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [*ERROR*] 
> [*ERROR*] After correcting the problems, you can resume the build with the 
> command
> [*ERROR*]   *mvn  -rf :hadoop-common*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17228) Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17228.

Resolution: Won't Fix

branch-3.1 is EOL, won't fix.

> Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x
> --
>
> Key: HADOOP-17228
> URL: https://issues.apache.org/jira/browse/HADOOP-17228
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport a small subset of HADOOP-17199 to branch-3.1
> No path capabilities, declarative test syntax, etc. Just:
> - getFileStatus/list
> - markers changes to bucket-info
> - startup info message if option is set
> - relevant test changes



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17228) Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17228:


> Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x
> --
>
> Key: HADOOP-17228
> URL: https://issues.apache.org/jira/browse/HADOOP-17228
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport a small subset of HADOOP-17199 to branch-3.1
> No path capabilities, declarative test syntax, etc. Just:
> - getFileStatus/list
> - markers changes to bucket-info
> - startup info message if option is set
> - relevant test changes



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17097) start-build-env.sh fails in branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17097.

Resolution: Won't Fix

branch-3.1 is EoL, won't fix.

> start-build-env.sh fails in branch-3.1
> --
>
> Key: HADOOP-17097
> URL: https://issues.apache.org/jira/browse/HADOOP-17097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Ubuntu 20.04
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Critical
>
> ./start-build-env.sh fails to install ember-cli
> {noformat}
> npm ERR! Linux 5.4.0-37-generic
> npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! node v4.2.6
> npm ERR! npm  v3.5.2
> npm ERR! code EMISSINGARG
> npm ERR! typeerror Error: Missing required argument #1
> npm ERR! typeerror at andLogAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:31:3)
> npm ERR! typeerror at fetchPackageMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:51:22)
> npm ERR! typeerror at resolveWithNewModule 
> (/usr/share/npm/lib/install/deps.js:456:12)
> npm ERR! typeerror at /usr/share/npm/lib/install/deps.js:457:7
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror at /usr/share/npm/lib/fetch-package-metadata.js:37:12
> npm ERR! typeerror at addRequestedAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:82:5)
> npm ERR! typeerror at returnAndAddMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:117:7)
> npm ERR! typeerror at pickVersionFromRegistryDocument 
> (/usr/share/npm/lib/fetch-package-metadata.js:134:20)
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror This is an error with npm itself. Please report this error 
> at:
> npm ERR! typeerror 
> npm ERR! Please include the following file with any support request:
> npm ERR! /root/npm-debug.log
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17550) property 'ssl.server.keystore.location' has not been set in the ssl configuration file

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17550.

Resolution: Not A Problem

> property 'ssl.server.keystore.location' has not been set in the ssl 
> configuration file
> --
>
> Key: HADOOP-17550
> URL: https://issues.apache.org/jira/browse/HADOOP-17550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.5
>Reporter: hamado dene
>Priority: Major
>
> I am trying to install a Hadoop HA cluster, but the datanode does not start 
> properly; I get this error:
> 2021-02-23 17:13:26,934 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:199)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1303)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2609)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:152)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:148)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:197)
> ... 8 more
> But in my ssl-server.xml I correctly set this property:
> <property>
>   <name>ssl.server.keystore.location</name>
>   <value>/data/hadoop/server.jks</value>
>   <description>Keystore to be used by clients like distcp. Must be
>   specified.</description>
> </property>
> <property>
>   <name>ssl.server.keystore.password</name>
>   <value></value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.keypassword</name>
>   <value>x</value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".</description>
> </property>
> Do you have any suggestions to solve this problem?
> My Hadoop version is: 2.8.5
> Java version: 8
> OS: CentOS 7



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17550) property 'ssl.server.keystore.location' has not been set in the ssl configuration file

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17550:


> property 'ssl.server.keystore.location' has not been set in the ssl 
> configuration file
> --
>
> Key: HADOOP-17550
> URL: https://issues.apache.org/jira/browse/HADOOP-17550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.5
>Reporter: hamado dene
>Priority: Major
>
> I am trying to install a Hadoop HA cluster, but the datanode does not start 
> properly; I get this error:
> 2021-02-23 17:13:26,934 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:199)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1303)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2609)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:152)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:148)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:197)
> ... 8 more
> But in my ssl-server.xml I correctly set this property:
> <property>
>   <name>ssl.server.keystore.location</name>
>   <value>/data/hadoop/server.jks</value>
>   <description>Keystore to be used by clients like distcp. Must be
>   specified.</description>
> </property>
> <property>
>   <name>ssl.server.keystore.password</name>
>   <value></value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.keypassword</name>
>   <value>x</value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".</description>
> </property>
> Do you have any suggestions to solve this problem?
> My Hadoop version is: 2.8.5
> Java version: 8
> OS: CentOS 7



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17097) start-build-env.sh fails in branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17097:


> start-build-env.sh fails in branch-3.1
> --
>
> Key: HADOOP-17097
> URL: https://issues.apache.org/jira/browse/HADOOP-17097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Ubuntu 20.04
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Critical
>
> ./start-build-env.sh fails to install ember-cli
> {noformat}
> npm ERR! Linux 5.4.0-37-generic
> npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! node v4.2.6
> npm ERR! npm  v3.5.2
> npm ERR! code EMISSINGARG
> npm ERR! typeerror Error: Missing required argument #1
> npm ERR! typeerror at andLogAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:31:3)
> npm ERR! typeerror at fetchPackageMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:51:22)
> npm ERR! typeerror at resolveWithNewModule 
> (/usr/share/npm/lib/install/deps.js:456:12)
> npm ERR! typeerror at /usr/share/npm/lib/install/deps.js:457:7
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror at /usr/share/npm/lib/fetch-package-metadata.js:37:12
> npm ERR! typeerror at addRequestedAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:82:5)
> npm ERR! typeerror at returnAndAddMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:117:7)
> npm ERR! typeerror at pickVersionFromRegistryDocument 
> (/usr/share/npm/lib/fetch-package-metadata.js:134:20)
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror This is an error with npm itself. Please report this error 
> at:
> npm ERR! typeerror 
> npm ERR! Please include the following file with any support request:
> npm ERR! /root/npm-debug.log
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17651) Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update Jetty to 9.4.39

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17651.

Resolution: Won't Fix

branch-3.1 is EoL. Closing as won't fix.

> Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update 
> Jetty to 9.4.39
> -
>
> Key: HADOOP-17651
> URL: https://issues.apache.org/jira/browse/HADOOP-17651
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-17651) Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update Jetty to 9.4.39

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17651:


> Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update 
> Jetty to 9.4.39
> -
>
> Key: HADOOP-17651
> URL: https://issues.apache.org/jira/browse/HADOOP-17651
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


