[jira] [Commented] (HADOOP-17628) ABFS: Distcp contract test testDistCpWithIterator is timing out consistently

2021-05-28 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17353539#comment-17353539
 ] 

Steve Loughran commented on HADOOP-17628:
-----------------------------------------

The fix here is that the s3a and abfs tests need smaller values of getWidth() 
and getDepth(), so there are far fewer files to create and copy. A depth of 3 
and a width of 10 means 10^3 = 1000 files to PUT, copy, etc.

Proposed: leave depth = 3 but set width = 2, so there are only 2^3 = 8 files; 
we can rely on the local test suites to stress memory, not these ones.
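As a back-of-the-envelope check (a hypothetical sketch, not code from AbstractContractDistCpTest), the number of leaf files grows as width^depth, which is why shrinking the width helps so much more than shrinking the depth:

```java
// Hypothetical helper illustrating the width/depth arithmetic above;
// the names are illustrative, not from the Hadoop test suite.
public class TreeSize {

    /** Number of leaf files created by a tree of the given width (fan-out) and depth. */
    static long leafFiles(int width, int depth) {
        long files = 1;
        for (int i = 0; i < depth; i++) {
            files *= width;  // each level multiplies the file count by the fan-out
        }
        return files;
    }

    public static void main(String[] args) {
        System.out.println(leafFiles(10, 3)); // current: 1000 files to PUT and copy
        System.out.println(leafFiles(2, 3));  // proposed: only 8 files
    }
}
```

Against a remote object store each of those files costs at least one PUT plus one copy round trip, which is where the timeout comes from.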

> ABFS: Distcp contract test testDistCpWithIterator is timing out consistently 
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17531) DistCp: Reduce memory usage on copying huge directories

2021-05-28 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17353538#comment-17353538
 ] 

Steve Loughran commented on HADOOP-17531:
-----------------------------------------

This is triggering timeouts because it's really slow against object stores: 
6-10 minutes.

We'll have to ask for shallower directories there.

> DistCp: Reduce memory usage on copying huge directories
> ---
>
> Key: HADOOP-17531
> URL: https://issues.apache.org/jira/browse/HADOOP-17531
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: MoveToStackIterator.patch, gc-NewD-512M-3.8ML.log
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Presently DistCp uses a producer-consumer setup while building the listing; 
> the input queue and output queue are both unbounded, so the listStatus 
> results grow quite huge.
> Relevant code:
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L635
> This does a breadth-first traversal (it uses a queue instead of the earlier 
> stack), so if the files sit at the lower depths of the tree, it effectively 
> opens up the entire tree before it starts processing.
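The memory concern above can be illustrated with a toy traversal (a sketch under assumed names, not SimpleCopyListing itself): with a queue (breadth-first) the pending set can grow to the size of a whole level of the tree, while with a stack (depth-first) it stays roughly proportional to depth times width:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the listing traversal discussed above; illustrative only,
// not SimpleCopyListing code.
public class ListingOrder {

    /** Peak size of the pending-directory collection during a traversal. */
    static int maxPending(Map<String, List<String>> tree, String root, boolean dfs) {
        Deque<String> pending = new ArrayDeque<>();
        pending.add(root);
        int max = 1;
        while (!pending.isEmpty()) {
            // stack (depth-first) pops the newest entry; queue (breadth-first) the oldest
            String dir = dfs ? pending.removeLast() : pending.removeFirst();
            for (String child : tree.getOrDefault(dir, List.of())) {
                pending.addLast(child);
            }
            max = Math.max(max, pending.size());
        }
        return max;
    }

    /** Build a full directory tree: every node above the leaves has 'width' children. */
    static Map<String, List<String>> fullTree(int width, int depth) {
        Map<String, List<String>> tree = new HashMap<>();
        build(tree, "d", width, depth);
        return tree;
    }

    private static void build(Map<String, List<String>> tree, String name,
                              int width, int depth) {
        if (depth == 0) {
            return; // leaf: no children recorded
        }
        List<String> children = new ArrayList<>();
        for (int i = 0; i < width; i++) {
            String child = name + "/" + i;
            children.add(child);
            build(tree, child, width, depth - 1);
        }
        tree.put(name, children);
    }
}
```

For a width-3, depth-3 tree the queue peaks at 27 pending entries (the whole bottom level) while the stack peaks at 7, and the gap widens as the tree grows.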






[GitHub] [hadoop] steveloughran merged pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "-delete …

2021-05-28 Thread GitBox


steveloughran merged pull request #2852:
URL: https://github.com/apache/hadoop/pull/2852


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] steveloughran commented on pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "-delete …

2021-05-28 Thread GitBox


steveloughran commented on pull request #2852:
URL: https://github.com/apache/hadoop/pull/2852#issuecomment-850622019


   ```
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
   [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
634.789 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
   [INFO]
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  10:48 min
   [INFO] Finished at: 2021-05-28T20:14:42+01:00
   [INFO] 

   
   and 
   
   [INFO] Running 
org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractDistCp
   [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
577.782 s - in 
org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractDistCp
   [INFO]
   [INFO] Results:
   [INFO]
   [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
   [INFO]
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] 

   [INFO] Total time:  09:47 min
   [INFO] Finished at: 2021-05-28T20:15:03+01:00
   [INFO] 

   ```
   
   so: tests are happy, 
   
   +1 from me, given @ayushtkn's approval of the production code.
   
   merging to trunk








[GitHub] [hadoop] steveloughran commented on pull request #2852: MAPREDUCE-7287. Distcp will delete exists file , If we use "-delete …

2021-05-28 Thread GitBox


steveloughran commented on pull request #2852:
URL: https://github.com/apache/hadoop/pull/2852#issuecomment-850594871


   let me actually check out and do the s3a and abfs tests here, given the 
author has gone to the effort of writing them








[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603653
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 18:24
Start Date: 28/May/21 18:24
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3041:
URL: https://github.com/apache/hadoop/pull/3041#discussion_r641733928



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
##
@@ -361,6 +373,39 @@ public void testAccessTokenProviderPrecedence()
 testGlobalAndAccountOAuthPrecedence(abfsConf, null, AuthType.OAuth);
   }
 
+  @Test
+  public void testConfigPropNotFound() throws Exception {
+final String accountName = "account";
+
+final Configuration conf = new Configuration();
+final AbfsConfiguration abfsConf = new AbfsConfiguration(conf, 
accountName);
+
+for (String key : CONFIG_KEYS) {
+  setAuthConfig(abfsConf, true, AuthType.OAuth);
+  abfsConf.unset(key + "." + accountName);
+  testMissingConfigKey(abfsConf, key);
+}
+
+unsetAuthConfig(abfsConf, false);
+unsetAuthConfig(abfsConf, true);
+  }
+
+  private void testMissingConfigKey(final AbfsConfiguration abfsConf,

Review comment:
   Whenever an exception doesn't match the expected one, it's critical for the 
test case to rethrow that exception so we can debug what's gone wrong from the 
test report alone. This one loses the stack trace on L399 and L403.
   
   Add `verifyCause` to verify the cause type and 
GenericTestUtils.assertExceptionContains() to check the message text, something 
like
   
   ```java
   assertExceptionContains("Configuration property " + confKey + " not found.",
       verifyCause(ConfigurationPropertyNotFoundException.class,
           intercept(TokenAccessProviderException.class, () ->
               abfsConf.getTokenProvider().getClass().getTypeName())));
   ```
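The `verifyCause` helper suggested above does not exist yet in the snippet's context; one possible shape (an assumption, not a committed Hadoop utility) checks the cause chain and returns the caught exception so the calls can nest:

```java
// Hypothetical sketch of a verifyCause helper as suggested in the review;
// not an existing Hadoop test utility.
public class CauseCheck {

    /** Assert that the caught exception's cause is of the expected type. */
    static <T extends Throwable> T verifyCause(
            Class<? extends Throwable> expectedCause, T caught) {
        Throwable cause = caught.getCause();
        if (cause == null || !expectedCause.isInstance(cause)) {
            // attach the original exception so its stack trace survives in the report
            throw new AssertionError("Expected cause "
                + expectedCause.getName() + " but got " + cause, caught);
        }
        return caught;
    }
}
```

Returning the caught exception keeps it available for further assertions, matching the nesting style of the snippet above.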
   
   






Issue Time Tracking
---

Worklog Id: (was: 603653)
Time Spent: 3h 50m  (was: 3h 40m)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[GitHub] [hadoop] steveloughran commented on a change in pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


steveloughran commented on a change in pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#discussion_r641733928



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
##
@@ -361,6 +373,39 @@ public void testAccessTokenProviderPrecedence()
 testGlobalAndAccountOAuthPrecedence(abfsConf, null, AuthType.OAuth);
   }
 
+  @Test
+  public void testConfigPropNotFound() throws Exception {
+final String accountName = "account";
+
+final Configuration conf = new Configuration();
+final AbfsConfiguration abfsConf = new AbfsConfiguration(conf, 
accountName);
+
+for (String key : CONFIG_KEYS) {
+  setAuthConfig(abfsConf, true, AuthType.OAuth);
+  abfsConf.unset(key + "." + accountName);
+  testMissingConfigKey(abfsConf, key);
+}
+
+unsetAuthConfig(abfsConf, false);
+unsetAuthConfig(abfsConf, true);
+  }
+
+  private void testMissingConfigKey(final AbfsConfiguration abfsConf,

Review comment:
   Whenever an exception doesn't match the expected one, it's critical for the 
test case to rethrow that exception so we can debug what's gone wrong from the 
test report alone. This one loses the stack trace on L399 and L403.
   
   Add `verifyCause` to verify the cause type and 
GenericTestUtils.assertExceptionContains() to check the message text, something 
like
   
   ```java
   assertExceptionContains("Configuration property " + confKey + " not found.",
       verifyCause(ConfigurationPropertyNotFoundException.class,
           intercept(TokenAccessProviderException.class, () ->
               abfsConf.getTokenProvider().getClass().getTypeName())));
   ```
   
   










[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603646&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603646
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 18:10
Start Date: 28/May/21 18:10
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850585560


   @virajjasani 
   
   > source changes are not complex enough to test on real time test env. 
Please let me know if this works.
   
   Given that any source change may cause a regression, there is no source 
change for the hadoop-azure module which doesn't require the submitter to run 
the integration test suites. It's not about whether the new feature merits a 
test; it's "which of the existing features have stopped working?". As for new 
features: the tests are there to stop the next person who touches the code from 
regressing this bit.
   
   no test: no review
   
   See the section in [Testing 
hadoop-azure](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md)
   
   @sadikovi if you are set up and willing to test this,  are you able to run 
the `mvn verify` suite and state which endpoint, auth mechanism and build 
options were used? 
   
   




Issue Time Tracking
---

Worklog Id: (was: 603646)
Time Spent: 3h 40m  (was: 3.5h)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[GitHub] [hadoop] steveloughran commented on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


steveloughran commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850585560


   @virajjasani 
   
   > source changes are not complex enough to test on real time test env. 
Please let me know if this works.
   
   Given that any source change may cause a regression, there is no source 
change for the hadoop-azure module which doesn't require the submitter to run 
the integration test suites. It's not about whether the new feature merits a 
test; it's "which of the existing features have stopped working?". As for new 
features: the tests are there to stop the next person who touches the code from 
regressing this bit.
   
   no test: no review
   
   See the section in [Testing 
hadoop-azure](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md)
   
   @sadikovi if you are set up and willing to test this,  are you able to run 
the `mvn verify` suite and state which endpoint, auth mechanism and build 
options were used? 
   
   








[jira] [Work logged] (HADOOP-17631) Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17631?focusedWorklogId=603642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603642
 ]

ASF GitHub Bot logged work on HADOOP-17631:
---

Author: ASF GitHub Bot
Created on: 28/May/21 18:01
Start Date: 28/May/21 18:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2977:
URL: https://github.com/apache/hadoop/pull/2977#issuecomment-850581128


   > +1, nice tests as well. Maybe we could have an assert with more than one 
colon before the hyphen, and it should resolve the fallback option as well (or 
is it too far-fetched?).
   
   Don't think that's valid




Issue Time Tracking
---

Worklog Id: (was: 603642)
Time Spent: 40m  (was: 0.5h)

> Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when 
> restrictSystemProps=true 
> --
>
> Key: HADOOP-17631
> URL: https://issues.apache.org/jira/browse/HADOOP-17631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When Configuration reads in resources with a restricted parser, it skips 
> evaluating system ${env. } vars. But it also skips evaluating the fallbacks.
> As a result, a property like
> ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} ends up evaluating as the literal string
> ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}
> It should instead fall back to the "env var unset" option of 
> ${hadoop.tmp.dir}. This allows for configs (like the s3a buffer dirs) which 
> are usable in restricted mode as well as in unrestricted deployments.
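The intended semantics can be sketched as follows (illustrative code with assumed names, not org.apache.hadoop.conf.Configuration itself): in restricted mode the env var is never read, but the fallback must still be substituted:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative model of ${env.VAR:-FALLBACK} expansion; not Hadoop's
// Configuration code, and it does not handle nested ${...} fallbacks.
public class EnvFallback {

    private static final Pattern VAR =
        Pattern.compile("\\$\\{env\\.([A-Z_]+):-([^}]*)\\}");

    static String expand(String value, Map<String, String> env, boolean restricted) {
        Matcher m = VAR.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            // a restricted parser skips the env lookup, but must still fall back
            String fromEnv = restricted ? null : env.get(m.group(1));
            m.appendReplacement(out,
                Matcher.quoteReplacement(fromEnv != null ? fromEnv : m.group(2)));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

The bug described above is equivalent to `expand` returning its input unchanged in restricted mode instead of substituting the fallback.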






[GitHub] [hadoop] steveloughran commented on pull request #2977: HADOOP-17631. Configuration ${env.VAR:-FALLBACK} to eval FALLBACK when restrictSystemProps=true

2021-05-28 Thread GitBox


steveloughran commented on pull request #2977:
URL: https://github.com/apache/hadoop/pull/2977#issuecomment-850581128


   > +1, nice tests as well. Maybe we could have an assert with more than one 
colon before the hyphen, and it should resolve the fallback option as well (or 
is it too far-fetched?).
   
   Don't think that's valid








[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603619&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603619
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 16:41
Start Date: 28/May/21 16:41
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641681627



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S 
https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz 
\
+-o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&

Review comment:
   Sounds good.






Issue Time Tracking
---

Worklog Id: (was: 603619)
Time Spent: 3h 50m  (was: 3h 40m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.






[GitHub] [hadoop] goiri commented on a change in pull request #3043: HADOOP-17727. Modularize docker images

2021-05-28 Thread GitBox


goiri commented on a change in pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641681627



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S 
https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz 
\
+-o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&

Review comment:
   Sounds good.










[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603618&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603618
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 16:40
Start Date: 28/May/21 16:40
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641681054



##
File path: dev-support/docker/Dockerfile_aarch64
##
@@ -33,61 +33,19 @@ ENV DEBIAN_FRONTEND noninteractive
 ENV DEBCONF_TERSE true
 
 ##
-# Install common dependencies from packages. Versions here are either
-# sufficient or irrelevant.
+# Platform package dependency resolver
 ##
-# hadolint ignore=DL3008
+COPY pkg-resolver pkg-resolver
+RUN chmod a+x pkg-resolver/install-common-pkgs.sh pkg-resolver/resolve.py \

Review comment:
   Let's remove the copy and CHMOD if we don't use it in this file.






Issue Time Tracking
---

Worklog Id: (was: 603618)
Time Spent: 3h 40m  (was: 3.5h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.






[GitHub] [hadoop] goiri commented on a change in pull request #3043: HADOOP-17727. Modularize docker images

2021-05-28 Thread GitBox


goiri commented on a change in pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641681054



##
File path: dev-support/docker/Dockerfile_aarch64
##
@@ -33,61 +33,19 @@ ENV DEBIAN_FRONTEND noninteractive
 ENV DEBCONF_TERSE true
 
 ##
-# Install common dependencies from packages. Versions here are either
-# sufficient or irrelevant.
+# Platform package dependency resolver
 ##
-# hadolint ignore=DL3008
+COPY pkg-resolver pkg-resolver
+RUN chmod a+x pkg-resolver/install-common-pkgs.sh pkg-resolver/resolve.py \

Review comment:
   Let's remove the copy and CHMOD if we don't use it in this file.










[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603617
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 16:39
Start Date: 28/May/21 16:39
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641680399



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz \
+    -o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&
+  curl -L https://sourceforge.net/projects/boost/files/boost/1.72.0/boost_1_72_0.tar.bz2/download >boost_1_72_0.tar.bz2 &&
+  mv boost_1_72_0.tar.bz2 /opt/boost-library &&
+  cd /opt/boost-library &&
+  tar --bzip2 -xf boost_1_72_0.tar.bz2 &&
+  cd /opt/boost-library/boost_1_72_0 &&
+  ./bootstrap.sh --prefix=/usr/ &&
+  ./b2 --without-python install &&
+  cd /root &&
+  rm -rf /opt/boost-library
+
+##
+# Install Google Protobuf 3.7.1 (3.6.1 ships with Focal)
+##
+# hadolint ignore=DL3003
+mkdir -p /opt/protobuf-src &&
+  curl -L -s -S \
+    https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
+    -o /opt/protobuf.tar.gz &&
+  tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-src &&
+  cd /opt/protobuf-src &&
+  ./configure --prefix=/opt/protobuf &&
+  make "-j$(nproc)" &&
+  make install &&
+  cd /root &&
+  rm -rf /opt/protobuf-src
+
+##
+# Install pylint and python-dateutil
+##
+pip3 install pylint==2.6.0 python-dateutil==2.8.1

Review comment:
   Is pip3 installed by now?
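The ordering concern raised above (whether `pip3` is on PATH by the time this script runs) can be checked explicitly. A minimal sketch, not from the PR; the `has_cmd` helper is an illustrative assumption:

```shell
#!/usr/bin/env bash
# Sketch: report whether pip3 is available before attempting the pinned
# install, instead of letting the install fail later in the Docker build.

# has_cmd: succeeds iff the named command is on PATH
has_cmd() { command -v "$1" >/dev/null 2>&1; }

if has_cmd pip3; then
  echo "pip3 found; safe to run: pip3 install pylint==2.6.0 python-dateutil==2.8.1"
else
  echo "pip3 missing; install python3-pip earlier in the Dockerfile" >&2
fi
```

A real script might `exit 1` in the else branch so the image build stops at the first unmet precondition.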




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 603617)
Time Spent: 3.5h  (was: 3h 20m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603555&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603555
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 15:08
Start Date: 28/May/21 15:08
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on a change in pull request 
#3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641623791



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz \
+    -o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&

Review comment:
   So, I'm thinking something like this -
   I'll create a `.sh` file for each dependency
   ```
   pkg-resolver /
   | install-boost.sh
   | install-protobuf.sh
   .
   .
   ```
   
   In the `Dockerfile`, we'll do something like -
   ```
   RUN pkg-resolver/install-boost.sh ubuntu:focal <version>
   ```
   The `<version>` argument, if not specified, falls back to the default 
version for that package (for Boost, it's 1.72).
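The per-dependency script idea sketched above could look roughly like this. This is a hedged sketch, not the actual PR code; the `boost_url` helper and the default-version handling are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch of a pkg-resolver/install-boost.sh taking a platform id and an
# optional version, defaulting when the version is omitted (illustrative).

DEFAULT_BOOST_VERSION="1.72.0"

# boost_url: print the SourceForge download URL for a given (or default) version
boost_url() {
  version="${1:-$DEFAULT_BOOST_VERSION}"
  underscored="$(printf '%s' "$version" | tr '.' '_')"
  echo "https://sourceforge.net/projects/boost/files/boost/${version}/boost_${underscored}.tar.bz2/download"
}

platform="${1:-ubuntu:focal}"   # e.g. invoked as: RUN pkg-resolver/install-boost.sh ubuntu:focal
echo "Installing Boost for ${platform} from $(boost_url "${2:-}")"
# ...the curl / tar / bootstrap.sh / b2 steps from install-common-pkgs.sh would follow
```

Keeping the URL construction in one function makes the version override a single substitution rather than an edit in every Dockerfile.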




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 603555)
Time Spent: 3h 20m  (was: 3h 10m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603544
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 14:35
Start Date: 28/May/21 14:35
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on a change in pull request 
#3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641599482



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz \
+    -o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&
+  curl -L https://sourceforge.net/projects/boost/files/boost/1.72.0/boost_1_72_0.tar.bz2/download >boost_1_72_0.tar.bz2 &&
+  mv boost_1_72_0.tar.bz2 /opt/boost-library &&
+  cd /opt/boost-library &&
+  tar --bzip2 -xf boost_1_72_0.tar.bz2 &&
+  cd /opt/boost-library/boost_1_72_0 &&
+  ./bootstrap.sh --prefix=/usr/ &&
+  ./b2 --without-python install &&
+  cd /root &&
+  rm -rf /opt/boost-library
+
+##
+# Install Google Protobuf 3.7.1 (3.6.1 ships with Focal)
+##
+# hadolint ignore=DL3003
+mkdir -p /opt/protobuf-src &&
+  curl -L -s -S \
+    https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-java-3.7.1.tar.gz \
+    -o /opt/protobuf.tar.gz &&
+  tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-src &&
+  cd /opt/protobuf-src &&
+  ./configure --prefix=/opt/protobuf &&
+  make "-j$(nproc)" &&
+  make install &&
+  cd /root &&
+  rm -rf /opt/protobuf-src
+
+##
+# Install pylint and python-dateutil
+##
+pip3 install pylint==2.6.0 python-dateutil==2.8.1

Review comment:
   I didn't quite understand your question. The `already installed` part is a 
little confusing. Are you asking whether we're sure that `pylint` and 
`python-dateutil` get installed when this command runs?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 603544)
Time Spent: 3h  (was: 2h 50m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org








[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603542&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603542
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 14:33
Start Date: 28/May/21 14:33
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on a change in pull request 
#3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641597995



##
File path: dev-support/docker/Dockerfile_aarch64
##
@@ -33,61 +33,19 @@ ENV DEBIAN_FRONTEND noninteractive
 ENV DEBCONF_TERSE true
 
 ##
-# Install common dependencies from packages. Versions here are either
-# sufficient or irrelevant.
+# Platform package dependency resolver
 ##
-# hadolint ignore=DL3008
+COPY pkg-resolver pkg-resolver
+RUN chmod a+x pkg-resolver/install-common-pkgs.sh pkg-resolver/resolve.py \

Review comment:
   The hadolint in `Dockerfile_aarch64` is installed in a different manner 
compared to the rest of the platforms. Hence, I didn't invoke 
`install-common-pkgs.sh` here. But as per 
https://github.com/apache/hadoop/pull/3043#discussion_r641596725, I'll handle 
this as part of writing different `.sh` files for each library.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 603542)
Time Spent: 2h 50m  (was: 2h 40m)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Work logged] (HADOOP-17727) Modularize docker images

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17727?focusedWorklogId=603541&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603541
 ]

ASF GitHub Bot logged work on HADOOP-17727:
---

Author: ASF GitHub Bot
Created on: 28/May/21 14:31
Start Date: 28/May/21 14:31
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on a change in pull request 
#3043:
URL: https://github.com/apache/hadoop/pull/3043#discussion_r641596725



##
File path: dev-support/docker/pkg-resolver/install-common-pkgs.sh
##
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+###
+# Install SpotBugs 4.2.2
+###
+mkdir -p /opt/spotbugs &&
+  curl -L -s -S https://github.com/spotbugs/spotbugs/releases/download/4.2.2/spotbugs-4.2.2.tgz \
+    -o /opt/spotbugs.tgz &&
+  tar xzf /opt/spotbugs.tgz --strip-components 1 -C /opt/spotbugs &&
+  chmod +x /opt/spotbugs/bin/*
+
+###
+# Install Boost 1.72 (1.71 ships with Focal)
+###
+# hadolint ignore=DL3003
+mkdir -p /opt/boost-library &&

Review comment:
   I think it makes sense @goiri. Different platforms have different sets of 
libraries to build and install. Having a different `.sh` file for each 
platform should simplify it. I'll make the change.
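
A per-platform split like the one proposed could look roughly like the dispatcher below; the script names and platform identifiers are hypothetical, not the actual pkg-resolver layout:

```shell
#!/usr/bin/env bash
# Hypothetical dispatcher: pick the install script for the build platform.
# Script names and platform ids are illustrative only.

resolve_install_script() {
  case "$1" in
    ubuntu:focal) echo "pkg-resolver/install-ubuntu-focal-pkgs.sh" ;;
    centos:7)     echo "pkg-resolver/install-centos-7-pkgs.sh" ;;
    debian:10)    echo "pkg-resolver/install-debian-10-pkgs.sh" ;;
    *)            echo "unsupported platform: $1" >&2; return 1 ;;
  esac
}

resolve_install_script "ubuntu:focal"   # prints pkg-resolver/install-ubuntu-focal-pkgs.sh
```

Each platform-specific script would then hold only the packages that platform needs, instead of one monolithic install file.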






Issue Time Tracking
---

Worklog Id: (was: 603541)
Time Spent: 2h 40m  (was: 2.5h)

> Modularize docker images
> 
>
> Key: HADOOP-17727
> URL: https://issues.apache.org/jira/browse/HADOOP-17727
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> We're now creating the *Dockerfile*s for different platforms. We need a way 
> to manage the packages in a clean way as maintaining the packages for all the 
> different environments becomes cumbersome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)





[GitHub] [hadoop] hadoop-yetus commented on pull request #3036: HDFS-15998. Fix NullPointException In listOpenFiles

2021-05-28 Thread GitBox


hadoop-yetus commented on pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#issuecomment-850439929


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3036/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 202 unchanged 
- 0 fixed = 204 total (was 202)  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 345m 35s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3036/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 438m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3036/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3036 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 87bebc9bb683 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 590e58460bf96f515ad0830410375f345058c60a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-17631) Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17631?focusedWorklogId=603530=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603530
 ]

ASF GitHub Bot logged work on HADOOP-17631:
---

Author: ASF GitHub Bot
Created on: 28/May/21 13:57
Start Date: 28/May/21 13:57
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on pull request #2977:
URL: https://github.com/apache/hadoop/pull/2977#issuecomment-850438046


   +1, nice tests as well. Maybe we could have an assert with more than one colon 
before the hyphen, and it should resolve the fallback option as well (or is it 
too far-fetched?).




Issue Time Tracking
---

Worklog Id: (was: 603530)
Time Spent: 0.5h  (was: 20m)

> Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when 
> restrictSystemProps=true 
> --
>
> Key: HADOOP-17631
> URL: https://issues.apache.org/jira/browse/HADOOP-17631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When Configuration reads in resources with a restricted parser, it skips 
> evaluating system ${env. } vars. But it also skips evaluating fallbacks
> As a result, a property like
> ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} ends up evaluating as 
> ${env.LOCAL_DIRS:-${hadoop.tmp.dir}}
> It should instead fall back to the "env var unset" option of 
> ${hadoop.tmp.dir}. This allows for configs (like for s3a buffer dirs) which 
> are usable in restricted mode as well as unrestricted deployments.
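
The intended fallback semantics mirror bash parameter expansion, which the ${env.VAR:-FALLBACK} syntax is modeled on; a quick shell illustration (the paths are made up):

```shell
#!/usr/bin/env bash
# Shell analogue of the ${env.VAR:-FALLBACK} config syntax: use the
# environment variable when set, otherwise the fallback. Paths are made up.

unset LOCAL_DIRS
echo "${LOCAL_DIRS:-/tmp/hadoop-buffer}"   # prints /tmp/hadoop-buffer

LOCAL_DIRS=/data/yarn/local
echo "${LOCAL_DIRS:-/tmp/hadoop-buffer}"   # prints /data/yarn/local
```

The bug is that in restricted mode Hadoop skips the whole expression instead of still taking the "unset" branch.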



--
This message was sent by Atlassian Jira
(v8.3.4#803005)






[jira] [Work logged] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17152?focusedWorklogId=603504=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603504
 ]

ASF GitHub Bot logged work on HADOOP-17152:
---

Author: ASF GitHub Bot
Created on: 28/May/21 13:14
Start Date: 28/May/21 13:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3061:
URL: https://github.com/apache/hadoop/pull/3061#issuecomment-850410656


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 50s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 186m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3061/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3061 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 437b75a81723 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7c4887cb1083814a9218cbbe03991d3c408d5468 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3061/1/testReport/ |
   | Max. process+thread count | 1234 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3061/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
 


[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603489=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603489
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 12:08
Start Date: 28/May/21 12:08
Worklog Time Spent: 10m 
  Work Description: sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // 
conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net",
 "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", 
"https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ``` 
   
   with the storage account in West US, everything seems to work correctly. If 
I comment out one or more configs, e.g. client-secret or client-id, the error 
message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` 
or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the 
same applies to other configs.




Issue Time Tracking
---

Worklog Id: (was: 603489)
Time Spent: 3.5h  (was: 3h 20m)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}
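
The proposed behavior — failing fast with the name of the missing property — can be sketched in shell; `require_conf` and the property values are illustrative, not the ABFS implementation:

```shell
#!/usr/bin/env bash
# Illustrative only: resolve a configuration value or fail with a message
# naming the missing key, mirroring the proposed
# ConfigurationPropertyNotFoundException behavior.

require_conf() {
  local key="$1" value="$2"
  if [ -z "$value" ]; then
    echo "Configuration property $key not found." >&2
    return 1
  fi
  echo "$value"
}

require_conf "fs.azure.account.oauth2.client.id" "my-client-id"   # prints my-client-id
```

The point is that the error names the exact key, rather than surfacing a bare NullPointerException on clientSecret.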



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603488=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603488
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 12:08
Start Date: 28/May/21 12:08
Worklog Time Spent: 10m 
  Work Description: sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // 
conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net",
 "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", 
"https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ``` 
   
   with the storage account in West US, everything seems to work correctly. If 
I comment out one or more configs, e.g. client-secret or client-id, the error 
message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` 
or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the 
same applies to other configs.




Issue Time Tracking
---

Worklog Id: (was: 603488)
Time Spent: 3h 20m  (was: 3h 10m)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", 
> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // 
> conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net",
>  "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] sadikovi edited a comment on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If 
I comment out one or more configs, e.g. client-secret or client-id, the error 
message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] sadikovi edited a comment on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If I comment out one or more configs, e.g. client-secret or client-id, the error message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.








[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603486=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603486
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 12:07
Start Date: 28/May/21 12:07
Worklog Time Spent: 10m 
  Work Description: sadikovi commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.fs._
   
   val conf = spark.sessionState.newHadoopConf
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If I comment out one or more configs, e.g. client-secret or client-id, the error message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.




Issue Time Tracking
---

Worklog Id: (was: 603486)
Time Spent: 3h  (was: 2h 50m)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603487=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603487
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 12:07
Start Date: 28/May/21 12:07
Worklog Time Spent: 10m 
  Work Description: sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If I comment out one or more configs, e.g. client-secret or client-id, the error message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.




Issue Time Tracking
---

Worklog Id: (was: 603487)
Time Spent: 3h 10m  (was: 3h)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[GitHub] [hadoop] sadikovi edited a comment on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


sadikovi edited a comment on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.conf._
   import org.apache.hadoop.fs._
   
   val conf = new Configuration()
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If I comment out one or more configs, e.g. client-secret or client-id, the error message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.








[GitHub] [hadoop] sadikovi commented on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


sadikovi commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850372026


   Tested commit c8ed0cc25bd094240b6274df447bdc33aa93546c. Ran the following 
code 
   
   ```scala
   import org.apache.hadoop.fs._
   
   val conf = spark.sessionState.newHadoopConf
   
   conf.set("fs.azure.account.auth.type", "OAuth")
   conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
   conf.set("fs.azure.account.oauth2.client.id", "")
   // conf.set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "")
   // conf.set("fs.azure.account.oauth2.client.secret", "")
   conf.set("fs.azure.account.oauth2.client.endpoint", "https://login.microsoftonline.com/")
   
   val path = new Path("abfss://@.dfs.core.windows.net/")
   val fs = path.getFileSystem(conf)
   fs.getFileStatus(path)
   ```
   
   with the storage account in West US, everything seems to work correctly. If I comment out one or more configs, e.g. client-secret or client-id, the error message is as follows:
   ```
   TokenAccessProviderException: Unable to load OAuth token provider class.
   ...
   Caused by: ConfigurationPropertyNotFoundException: Configuration property 
fs.azure.account.oauth2.client.secret not found.
   ...
   ```
   
   The code works when commenting out `fs.azure.account.oauth2.client.secret` or `fs.azure.account.oauth2.client.secret..dfs.core.windows.net`; the same applies to other configs.








[jira] [Commented] (HADOOP-17343) Upgrade aws-java-sdk to 1.11.901

2021-05-28 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353305#comment-17353305
 ] 

Steve Loughran commented on HADOOP-17343:
-

HADOOP-17735 and its PR are the latest; please check it out and verify it works.

Do bear in mind:
* the shaded jackson used by AWS is only used in its own code, and that code does not do the arbitrary object deserialization which is the known jackson issue. It may be showing up on your audits, but it's not an actual vulnerability.
* there's a hadoop-aws/testing.md doc which provides the runbook for qualifying an update. You are free to provide backport PRs once that one is in, but you do get to invest an afternoon per cherry-pick rerunning all the tests, including the manual ones.

> Upgrade aws-java-sdk to 1.11.901
> 
>
> Key: HADOOP-17343
> URL: https://issues.apache.org/jira/browse/HADOOP-17343
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Dongjoon Hyun
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Upgrade AWS SDK to most recent version






[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=603482=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603482
 ]

ASF GitHub Bot logged work on HADOOP-17725:
---

Author: ASF GitHub Bot
Created on: 28/May/21 11:23
Start Date: 28/May/21 11:23
Worklog Time Spent: 10m 
  Work Description: sadikovi commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850349549


   I concur; the issue can be easily reproduced in unit tests, and the fix is fairly straightforward. Let me run a manual test with this commit in my dev environment.




Issue Time Tracking
---

Worklog Id: (was: 603482)
Time Spent: 2h 50m  (was: 2h 40m)

> Improve error message for token providers in ABFS
> -
>
> Key: HADOOP-17725
> URL: https://issues.apache.org/jira/browse/HADOOP-17725
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.0
>Reporter: Ivan
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> It would be good to improve error messages for token providers in ABFS. 
> Currently, when a configuration key is not found or mistyped, the error is 
> not very clear on what went wrong. It would be good to indicate that the key 
> was required but not found in Hadoop configuration when creating a token 
> provider.
> For example, when running the following code:
> {code:java}
> import org.apache.hadoop.conf._
> import org.apache.hadoop.fs._
> val conf = new Configuration()
> conf.set("fs.azure.account.auth.type", "OAuth")
> conf.set("fs.azure.account.oauth.provider.type", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
> conf.set("fs.azure.account.oauth2.client.id", "my-client-id")
> // conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", "my-secret")
> conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint")
> val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/")
> val fs = path.getFileSystem(conf)
> fs.getFileStatus(path){code}
> The following exception is thrown:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: UncheckedExecutionException: java.lang.NullPointerException: 
> clientSecret
> ...
> Caused by: NullPointerException: clientSecret {code}
> which does not tell what configuration key was not loaded.
>  
> IMHO, it would be good if the exception was something like this:
> {code:java}
> TokenAccessProviderException: Unable to load OAuth token provider class.
> ...
> Caused by: ConfigurationPropertyNotFoundException: Configuration property 
> fs.azure.account.oauth2.client.secret not found. {code}






[GitHub] [hadoop] sadikovi commented on pull request #3041: HADOOP-17725. Improve error message for token providers in ABFS

2021-05-28 Thread GitBox


sadikovi commented on pull request #3041:
URL: https://github.com/apache/hadoop/pull/3041#issuecomment-850349549


   I concur; the issue can be easily reproduced in unit tests, and the fix is fairly straightforward. Let me run a manual test with this commit in my dev environment.





-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17152:

Labels: pull-request-available  (was: )

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> guava's Lists class provides some wrappers to java ArrayList and LinkedList.
> Replacing the method calls throughout the code can be invasive because guava 
> offers some APIs that do not exist in java util. This Jira is the task of 
> implementing those missing APIs in hadoop common in a step toward getting rid 
> of guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
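A minimal sketch of a few of the wrappers listed above. This is illustrative only: the final class location in Hadoop may differ from the org.apache.hadoop.util.unguava.Lists name given in the issue, and only a subset of the methods is shown.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedList;

// Sketch of a guava-free Lists wrapper; mirrors a subset of the
// signatures from the issue description.
final class Lists {
  private Lists() {}

  // Equivalent of guava's Lists.newArrayList().
  public static <E> ArrayList<E> newArrayList() {
    return new ArrayList<>();
  }

  // Varargs form; sized up front to avoid resizing.
  @SafeVarargs
  public static <E> ArrayList<E> newArrayList(E... elements) {
    ArrayList<E> list = new ArrayList<>(elements.length);
    list.addAll(Arrays.asList(elements));
    return list;
  }

  // Iterator form, which java.util has no direct constructor for.
  public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
    ArrayList<E> list = new ArrayList<>();
    while (elements.hasNext()) {
      list.add(elements.next());
    }
    return list;
  }

  public static <E> LinkedList<E> newLinkedList() {
    return new LinkedList<>();
  }
}
```

The iterator and varargs overloads are the interesting ones: they are the guava conveniences with no one-line java.util equivalent, which is why a drop-in wrapper keeps the migration mechanical.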






[jira] [Work logged] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17152?focusedWorklogId=603470=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603470
 ]

ASF GitHub Bot logged work on HADOOP-17152:
---

Author: ASF GitHub Bot
Created on: 28/May/21 10:06
Start Date: 28/May/21 10:06
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #3061:
URL: https://github.com/apache/hadoop/pull/3061


   




Issue Time Tracking
---

Worklog Id: (was: 603470)
Remaining Estimate: 0h
Time Spent: 10m

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> guava's Lists class provides some wrappers to java ArrayList and LinkedList.
> Replacing the method calls throughout the code can be invasive because guava 
> offers some APIs that do not exist in java util. This Jira is the task of 
> implementing those missing APIs in hadoop common in a step toward getting rid 
> of guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)






[GitHub] [hadoop] virajjasani opened a new pull request #3061: HADOOP-17152. Provide Hadoop's own Lists utility to reduce dependency on Guava

2021-05-28 Thread GitBox


virajjasani opened a new pull request #3061:
URL: https://github.com/apache/hadoop/pull/3061


   








[GitHub] [hadoop] hadoop-yetus commented on pull request #3060: HDFS-16046. TestBalancerProcedureScheduler and TestDistCpProcedure timeout.

2021-05-28 Thread GitBox


hadoop-yetus commented on pull request #3060:
URL: https://github.com/apache/hadoop/pull/3060#issuecomment-850272204


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   6m 35s |  |  hadoop-federation-balance in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3060/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3060 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 717a8026fcbe 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a2a99fa906b010605e3e350b4520880e4de3e4aa |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3060/1/testReport/ |
   | Max. process+thread count | 542 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-federation-balance U: 
hadoop-tools/hadoop-federation-balance |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3060/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-17738) getContentSummary return incorrect filecount

2021-05-28 Thread philipse (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353025#comment-17353025
 ] 

philipse commented on HADOOP-17738:
---

The file names have the following format:
{code:java}
1-0-1518105600982.text
10-0-1518105600716.text
{code}

> getContentSummary return incorrect filecount
> 
>
> Key: HADOOP-17738
> URL: https://issues.apache.org/jira/browse/HADOOP-17738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.7
> Environment: HDP:2.7.7
>Reporter: philipse
>Priority: Minor
>
> Hi team
> I got a strange result when gathering HDFS statistics; the test process is 
> shown below.
> Any advice would be appreciated. Thanks in advance.
> {code:java}
> 1. hdfs dfs -count /data/BaseData/Log/mq/2018/02/09/ shows we have 100 files:
> 1  100 9689234070 /data/BaseData/Log/mq/2018/02/09/
> 2. hdfs dfs -ls /data/BaseData/Log/mq/2018/02/09/ shows only 98 items:
> Found 98 items
> 3. hdfs dfs -cp /data/BaseData/Log/mq/2018/02/09/* /data/BaseData/Log/mq_test/2018/02/09/
> 4. hdfs dfs -count /data/BaseData/Log/mq_test/2018/02/09/ shows 98 items too:
> 1   98 9689234070 /data/dpdcadmin/gf13871/test20210528
> 5. hdfs dfs -ls /data/BaseData/Log/mq_test/2018/02/09/ shows 98 items:
> Found 98 items
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17738) getContentSummary return incorrect filecount

2021-05-28 Thread philipse (Jira)
philipse created HADOOP-17738:
-

 Summary: getContentSummary return incorrect filecount
 Key: HADOOP-17738
 URL: https://issues.apache.org/jira/browse/HADOOP-17738
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 2.7.7
 Environment: HDP:2.7.7
Reporter: philipse


Hi team

I got a strange result when gathering HDFS statistics; the test process is 
shown below.

Any advice would be appreciated. Thanks in advance.
{code:java}
1. hdfs dfs -count /data/BaseData/Log/mq/2018/02/09/ shows we have 100 files:
1  100 9689234070 /data/BaseData/Log/mq/2018/02/09/

2. hdfs dfs -ls /data/BaseData/Log/mq/2018/02/09/ shows only 98 items:
Found 98 items

3. hdfs dfs -cp /data/BaseData/Log/mq/2018/02/09/* /data/BaseData/Log/mq_test/2018/02/09/

4. hdfs dfs -count /data/BaseData/Log/mq_test/2018/02/09/ shows 98 items too:
1   98 9689234070 /data/dpdcadmin/gf13871/test20210528

5. hdfs dfs -ls /data/BaseData/Log/mq_test/2018/02/09/ shows 98 items:
Found 98 items

{code}
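One way a recursive count can exceed a shallow listing is content nested below the top level. The sketch below is a local-filesystem analogue in Python (not the actual HDFS code path; the nested directory and file names are hypothetical) that reproduces the 100-vs-98 pattern from the steps above:

```python
import os
import tempfile

def count_vs_list(path):
    # Recursive file count, analogous to `hdfs dfs -count`.
    recursive = sum(len(files) for _, _, files in os.walk(path))
    # Shallow count of plain files, analogous to the file rows of `hdfs dfs -ls`.
    shallow = sum(1 for entry in os.scandir(path) if entry.is_file())
    return recursive, shallow

root = tempfile.mkdtemp()
# 98 files at the top level, named like the reported log files.
for i in range(98):
    open(os.path.join(root, f"{i}-0-1518105600982.text"), "w").close()
# 2 extra files one level down push the recursive count to 100.
nested = os.path.join(root, "nested")
os.mkdir(nested)
for i in range(2):
    open(os.path.join(nested, f"extra-{i}.text"), "w").close()

print(count_vs_list(root))  # (100, 98)
```

Whether nested content, snapshots, or a stale count explains the gap in the actual report is unknown; the sketch only illustrates that the two commands measure different things (`-count` is recursive, `-ls` lists a single level).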






[GitHub] [hadoop] aajisaka opened a new pull request #3060: HDFS-16046. TestBalancerProcedureScheduler and TestDistCpProcedure timeout.

2021-05-28 Thread GitBox


aajisaka opened a new pull request #3060:
URL: https://github.com/apache/hadoop/pull/3060


   JIRA: HDFS-16046





[GitHub] [hadoop] haiyang1987 commented on pull request #3036: HDFS-15998. Fix NullPointException In listOpenFiles

2021-05-28 Thread GitBox


haiyang1987 commented on pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#issuecomment-850184216


   add unit tests.
   
   @jojochuang Please take a look, Thanks!





[jira] [Work logged] (HADOOP-17590) ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-05-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17590?focusedWorklogId=603406=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-603406
 ]

ASF GitHub Bot logged work on HADOOP-17590:
---

Author: ASF GitHub Bot
Created on: 28/May/21 06:34
Start Date: 28/May/21 06:34
Worklog Time Spent: 10m 
  Work Description: snehavarma edited a comment on pull request #3026:
URL: https://github.com/apache/hadoop/pull/3026#issuecomment-843994638


   a. HNS account + OAuth config
   b. HNS account + Shared Key config
   c. Non-HNS account + SharedKey config
   d. AppendBlob+HNS+Oauth config
   Failures seen: testReadAndWriteWithDifferentBufferSizesAndSeek, 
ITestAbfsFileSystemContractDistCp, ITestAbfsFileSystemContractSecureDistCp, 
TestAbfsStreamOps with appendblob, testBlobBackCompatibility, readRandom & 
WasbAbfsCompatibility with a non-HNS account; all are being tracked via JIRAs.
   
   
   Appendblob-HNS-OAuth
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Failures: 
 
ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89
 The actual value of 99 was not equal to the expected value
   
   Tests run: 565, Failures: 1, Errors: 0, Skipped: 98
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 2, Skipped: 74
   
   HNS-OAuth
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 565, Failures: 0, Errors: 0, Skipped: 98
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 2, Skipped: 50
   
   HNS-SharedKey
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 ITestAzureBlobFileSystemInfiniteLease.testAcquireRetry:324 » TestTimedOut 
test...

   Tests run: 565, Failures: 0, Errors: 1, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 ITestAzureBlobFileSystemBackCompat.testBlobBackCompat:51 » Storage The 
account...
 ITestAzureBlobFileSystemRandomRead.testRandomRead:125 » Azure 
com.microsoft.az...
 ITestWasbAbfsCompatibility.testDir:144 » Azure 
com.microsoft.azure.storage.Sto...
 ITestWasbAbfsCompatibility.testListFileStatus:75 » Azure 
com.microsoft.azure.s...
 ITestWasbAbfsCompatibility.testReadFile:105 » Azure 
com.microsoft.azure.storag...

   Tests run: 565, Failures: 0, Errors: 5, Skipped: 285
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   




Issue Time Tracking
---

Worklog Id: (was: 603406)
Time Spent: 1h  (was: 50m)

> ABFS: Introduce Lease Operations with Append to provide single writer 
> semantics
> ---
>
> Key: HADOOP-17590
> URL: https://issues.apache.org/jira/browse/HADOOP-17590
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The lease operations will 

[GitHub] [hadoop] snehavarma edited a comment on pull request #3026: HADOOP-17590 ABFS: Introduce Lease Operations with Append to provide single writer semantics

2021-05-28 Thread GitBox


snehavarma edited a comment on pull request #3026:
URL: https://github.com/apache/hadoop/pull/3026#issuecomment-843994638


   a. HNS account + OAuth config
   b. HNS account + Shared Key config
   c. Non-HNS account + SharedKey config
   d. AppendBlob+HNS+Oauth config
   Failures seen: testReadAndWriteWithDifferentBufferSizesAndSeek, 
ITestAbfsFileSystemContractDistCp, ITestAbfsFileSystemContractSecureDistCp, 
TestAbfsStreamOps with appendblob, testBlobBackCompatibility, readRandom & 
WasbAbfsCompatibility with a non-HNS account; all are being tracked via JIRAs.
   
   
   Appendblob-HNS-OAuth
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Failures: 
 
ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89
 The actual value of 99 was not equal to the expected value
   
   Tests run: 565, Failures: 1, Errors: 0, Skipped: 98
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 2, Skipped: 74
   
   HNS-OAuth
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 565, Failures: 0, Errors: 0, Skipped: 98
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 2, Skipped: 50
   
   HNS-SharedKey
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 ITestAzureBlobFileSystemInfiniteLease.testAcquireRetry:324 » TestTimedOut 
test...

   Tests run: 565, Failures: 0, Errors: 1, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 97, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 ITestAzureBlobFileSystemBackCompat.testBlobBackCompat:51 » Storage The 
account...
 ITestAzureBlobFileSystemRandomRead.testRandomRead:125 » Azure 
com.microsoft.az...
 ITestWasbAbfsCompatibility.testDir:144 » Azure 
com.microsoft.azure.storage.Sto...
 ITestWasbAbfsCompatibility.testListFileStatus:75 » Azure 
com.microsoft.azure.s...
 ITestWasbAbfsCompatibility.testReadFile:105 » Azure 
com.microsoft.azure.storag...

   Tests run: 565, Failures: 0, Errors: 5, Skipped: 285
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   





[jira] [Updated] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-28 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17714:
-
Summary: ABFS: testBlobBackCompatibility, testRandomRead & 
WasbAbfsCompatibility tests fail when triggered with default configs  (was: 
ABFS: testBlobBackCompatibility & WasbAbfsCompatibility tests fail when 
triggered with default configs)

> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
> with default configs as http is not enabled on gen2 accounts by default.
>  
> Options to fix it:
> the tests' config should enforce https by default,
> or the tests should be modified not to execute http requests
>  
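For the first option, a minimal sketch of a test-side override could look like the fragment below (assuming the standard hadoop-azure key `fs.azure.always.use.https`; verify the key name against the hadoop-azure version in use):

{code:xml}
<!-- in the azure test configuration (e.g. azure-auth-keys.xml) -->
<property>
  <!-- Force https for abfs:// URIs; gen2 accounts reject plain http by default. -->
  <name>fs.azure.always.use.https</name>
  <value>true</value>
</property>
{code}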






[jira] [Updated] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-05-28 Thread Sneha Varma (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Varma updated HADOOP-17714:
-
Description: 
testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
when triggered with default configs as http is not enabled on gen2 accounts by 
default.

 

Options to fix it:

the tests' config should enforce https by default,

or the tests should be modified not to execute http requests

 

  was:
testBlobBackCompatibility & WasbAbfsCompatibility tests fail when triggered 
with default configs as http is not enabled on gen2 accounts by default.

 

Options to fix it:

the tests' config should enforce https by default,

or the tests should be modified not to execute http requests

 


> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
> when triggered with default configs as http is not enabled on gen2 accounts 
> by default.
>  
> Options to fix it:
> the tests' config should enforce https by default,
> or the tests should be modified not to execute http requests
>  


