[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771692#comment-16771692
 ] 

Hadoop QA commented on HADOOP-16077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 3s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 5s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16077 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959195/HADOOP-16077-09.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvn

[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771695#comment-16771695
 ] 

Adam Antal commented on HADOOP-15843:
-

Thanks [~ste...@apache.org]. I'll also write a full test suite validating the fix.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is that the bucket isn't there.
> Proposed: catch the FNFE, treat it as special, and return exit code "44", 
> "not found".






[jira] [Created] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16121:
--

 Summary: Cannot build in dev docker environment
 Key: HADOOP-16121
 URL: https://issues.apache.org/jira/browse/HADOOP-16121
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.3.0
 Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
Reporter: lqjacklee
Assignee: Steve Loughran


Steps performed:

1. Run the docker daemon.

2. Run ./start-build-env.sh.

3. Run mvn clean package -DskipTests.

Response from the command line:

[ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
its dependencies could not be resolved: Failed to read artifact descriptor for 
org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not transfer 
artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 from/to 
central (https://repo.maven.apache.org/maven2): 
/home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
 (No such file or directory) -> [Help 1]

Attempted solutions:

a. sudo chmod -R 775 ${USER_HOME}/.m2/

b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2

After trying these, the build still fails.

c. sudo mvn clean package -DskipTests works, but run that way, won't Maven 
download the files (pom, jar) all over again?






[GitHub] lujiefsi opened a new pull request #498: HDFS-14216. NullPointerException happens in NamenodeWebHdfs

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #498: HDFS-14216. NullPointerException 
happens in NamenodeWebHdfs
URL: https://github.com/apache/hadoop/pull/498
 
 
   I have created the jira 
[HDFS-14216](https://jira.apache.org/jira/browse/HDFS-14216) to describe the 
problem. Hope for review and merge!





[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771712#comment-16771712
 ] 

Ayush Saxena commented on HADOOP-16077:
---

Fixed the whitespace issue in v9.

[~brahmareddy] can you take a look? This should be helpful for users to get 
the storage policy of all the files in a directory in a single go rather than 
checking them one by one. :)

 

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[GitHub] lujiefsi opened a new pull request #499: MAPREDUCE-7178. NPE happens while YarnChild shudown

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #499: MAPREDUCE-7178. NPE happens while 
YarnChild shudown
URL: https://github.com/apache/hadoop/pull/499
 
 
   I have created the jira 
[MAPREDUCE-7178](https://jira.apache.org/jira/browse/MAPREDUCE-7178) to 
describe the problem. Hope for review and merge!





[GitHub] lujiefsi opened a new pull request #500: YARN-9238. Allocate on previous or removed or non existent application attempt

2019-02-19 Thread GitBox
lujiefsi opened a new pull request #500: YARN-9238. Allocate on previous or 
removed or non existent application attempt
URL: https://github.com/apache/hadoop/pull/500
 
 
   I have created the jira 
[YARN-9238](https://jira.apache.org/jira/browse/YARN-9238) to describe the 
problem. Hope for review and merge!





[jira] [Updated] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16112:
-
Status: Patch Available  (was: Open)

> After exist the baseTrashPath's subDir, delete the subDir leads to don't 
> modify baseTrashPath
> -
>
> Key: HADOOP-16112
> URL: https://issues.apache.org/jira/browse/HADOOP-16112
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.2.0
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16112.001.patch
>
>
> There is a race condition in TrashPolicyDefault#moveToTrash:
> {code:java}
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here; the result doesn't
>   // meet expectations. For example, suppose
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b exists. When
>   // deleting /user/u_sunlisheng/b/a, if existsFilePath has been deleted,
>   // the result is
>   // /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.
>   // So when existsFilePath is deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> {code}
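
One possible guard along the lines the reporter suggests (illustrative only, not the actual patch): only rewrite baseTrashPath if existsFilePath still exists, otherwise leave it alone and retry.

{code:java}
// Illustrative sketch, not the committed fix: if another thread deleted
// existsFilePath in the meantime, leave baseTrashPath unchanged and retry.
if (fs.exists(existsFilePath)) {
  baseTrashPath = new Path(baseTrashPath.toString().replace(
      existsFilePath.toString(), existsFilePath.toString() + Time.now()));
  trashPath = new Path(baseTrashPath, trashPath.getName());
}
--i; // retry, ignore current failure
continue;
{code}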






[jira] [Updated] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-19 Thread Praveen Krishna (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Praveen Krishna updated HADOOP-16114:
-
Fix Version/s: 2.7.6
   3.1.2
 Release Note: The above patch will resolve the race condition
   Attachment: HADOOP-16114-001.patch
   Status: Patch Available  (was: Open)

[~ste...@apache.org] Can you please review it?

> NetUtils#canonicalizeHost gives different value for same host
> -
>
> Key: HADOOP-16114
> URL: https://issues.apache.org/jira/browse/HADOOP-16114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 3.1.2, 2.7.6
>Reporter: Praveen Krishna
>Priority: Minor
> Fix For: 3.1.2, 2.7.6
>
> Attachments: HADOOP-16114-001.patch
>
>
> NetUtils#canonicalizeHost uses ConcurrentHashMap#putIfAbsent to add an 
> entry to the cache:
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> If two different threads invoke this method for the first time (so the 
> cache is empty) and SecurityUtil#getByName()#getHostName gives two 
> different values for the same host, only one fqHost is added to the cache, 
> and an invalid fqHost is handed to one of the threads. That can make some 
> APIs, e.g. `FileSystem#checkPath`, fail on first use even if the path is in 
> the given file system. It might be better to modify the above method to 
> this:
>  
> {code:java}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.putIfAbsent(host, fqHost);
> fqHost = canonicalizedHostCache.get(host);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
> }
> {code}
>  
> So even if the other thread got a different host name, it will be updated 
> to the cached value.
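
Since {{ConcurrentHashMap#putIfAbsent}} already returns the previously mapped value (or null), the extra get can also be folded into one call; a minimal equivalent sketch:

{code:java}
private static String canonicalizeHost(String host) {
  // check if the host has already been canonicalized
  String fqHost = canonicalizedHostCache.get(host);
  if (fqHost == null) {
    try {
      fqHost = SecurityUtil.getByName(host).getHostName();
      // putIfAbsent returns the value that won the race, if any;
      // adopting it keeps every caller consistent with the cache.
      String winner = canonicalizedHostCache.putIfAbsent(host, fqHost);
      if (winner != null) {
        fqHost = winner;
      }
    } catch (UnknownHostException e) {
      fqHost = host;
    }
  }
  return fqHost;
}
{code}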






[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771804#comment-16771804
 ] 

Steve Loughran commented on HADOOP-11127:
-

bq.  But in my experience, things that fail because of missing or broken 
winutils are usually trying to set folder or file permissions.

yes, despite the fact that most people using winutils are trying to get spark 
to work locally on their laptop, rather than deploy a kerberized yarn cluster. 
Even there, I'd like to fall back to the java APIs where possible, as it's what 
stopped me bringing up a kerberized mini-yarn cluster in my HADOOP-14556 tests. 

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Alan Burlison
>Priority: Major
> Attachments: HADOOP-11064.003.patch, proposal.01.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.






[jira] [Assigned] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16121:
---

Assignee: (was: Steve Loughran)

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Priority: Minor
>
> Steps performed:
>  
> 1. Run the docker daemon.
> 2. Run ./start-build-env.sh.
> 3. Run mvn clean package -DskipTests.
>  
> Response from the command line:
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> Attempted solutions:
> a. sudo chmod -R 775 ${USER_HOME}/.m2/
> b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these, the build still fails.
>  
> c. sudo mvn clean package -DskipTests works, but run that way, won't Maven 
> download the files (pom, jar) all over again?






[jira] [Comment Edited] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771817#comment-16771817
 ] 

Steve Loughran edited comment on HADOOP-16077 at 2/19/19 10:57 AM:
---

If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; LocatedFileStatus contains an array of 
BlockLocations, which are meant to contain the block locations and storage 
types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates reduce marshalling & the time the NN is 
locked
* on object stores: it gives the option of switching to more efficient path 
enumeration over treewalks. S3A does this & delivers O(files/1000) listings 
irrespective of the directory tree depth.

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.

The trouble, of course, is that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could do something like have it 
support both forms of list & recurse, and make the new recursion an option to 
switch to; if you ask for storage levels, you must explicitly ask for the new 
recurse option.

Maybe a separate "listFiles" command would be the strategy.

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose the archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.


was (Author: ste...@apache.org):
If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; LocatedFileStatus contains an array of 
BlockLocations, which are meant to contain the block locations and storage 
types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates reduce marshalling & the time the NN is 
locked
* on object stores: it gives the option of switching to more efficient path 
enumeration over treewalks. S3A does this & delivers O(files/1000) listings 
irrespective of the directory tree depth.

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.

The trouble, of course, is that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could do something like have it 
support both forms of list & recurse, and make the new recursion an option to 
switch to; if you ask for storage levels, you must explicitly ask for the new 
recurse option.

Maybe a separate "deepLs" command would be the strategy.

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose the archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771817#comment-16771817
 ] 

Steve Loughran commented on HADOOP-16077:
-

If you call {{FileSystem.listFiles(path, recursive)}}, you get a 
{{RemoteIterator<LocatedFileStatus>}}; LocatedFileStatus contains an array of 
BlockLocations, which are meant to contain the block locations and storage 
types.

This is the best API for a recursive file listing as:

* on HDFS: bulk incremental updates reduce marshalling & the time the NN is 
locked
* on object stores: it gives the option of switching to more efficient path 
enumeration over treewalks. S3A does this & delivers O(files/1000) listings 
irrespective of the directory tree depth.

Now, that's a bigger leap for ls -R than just listing the storage type, but 
it'd be great to expose that operation in general, because ls -R is so 
inefficient here.

The trouble, of course, is that both Ls and LsR extend Command, which 
implements its treewalk recursively. Moving to a new iterator would be 
traumatic. Except maybe, just maybe, we could do something like have it 
support both forms of list & recurse, and make the new recursion an option to 
switch to; if you ask for storage levels, you must explicitly ask for the new 
recurse option.

Maybe a separate "deepLs" command would be the strategy.

Have a look at {{S3AUtils.applyLocatedFiles()}} if you want to see some fun 
with closures and iterating over a list of LocatedFileStatus entries. That 
could all be promoted into {{org.apache.hadoop.util.LambdaUtils}} or the new 
{{org.apache.hadoop.fs.impl}} package.


BTW: I'm thinking that we could have the object stores expose the archive 
status of files in the storage type, so things like AWS Glacier storage would 
be visible. Being able to list that here would be ideal.
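
For reference, a minimal sketch of that listing pattern; {{fs}} and {{path}} are assumed to be an initialized FileSystem and a directory Path:

{code:java}
import java.util.Arrays;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.RemoteIterator;

// Recursive listing through the incremental RemoteIterator API.
RemoteIterator<LocatedFileStatus> it = fs.listFiles(path, true);
while (it.hasNext()) {
  LocatedFileStatus status = it.next();
  for (BlockLocation block : status.getBlockLocations()) {
    // Each block reports the storage types of its replicas.
    System.out.println(status.getPath() + " -> "
        + Arrays.toString(block.getStorageTypes()));
  }
}
{code}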

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch, HADOOP-16077-05.patch, 
> HADOOP-16077-06.patch, HADOOP-16077-07.patch, HADOOP-16077-08.patch, 
> HADOOP-16077-09.patch
>
>







[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Status: Patch Available  (was: Open)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1, 2.8.4
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
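
A sketch of the proposed change (field names taken from S3AInputStream; the exact method body is assumed):

{code:java}
public synchronized long remainingInFile() {
  // Measure from the position the next read will actually use, so the
  // value stays correct after a lazy seek() that has not yet reopened
  // the stream.
  return this.contentLength - this.nextReadPos;
}
{code}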






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Status: Open  (was: Patch Available)

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1, 2.8.4
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Attachment: HADOOP-15870-005.patch

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Patch Available  (was: Open)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Updated] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15920:

Status: Open  (was: Patch Available)

> get patch for S3a nextReadPos(), through Yetus
> --
>
> Key: HADOOP-15920
> URL: https://issues.apache.org/jira/browse/HADOOP-15920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-001.diff, HADOOP-15870-002.patch, 
> HADOOP-15870-003.patch, HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>







[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771833#comment-16771833
 ] 

Steve Loughran commented on HADOOP-15870:
-

patch 005:
* Clarify the Gzip bug better in the markdown
* Remove the this. prefix on field/method references in the changed lines

Ran the HDFS, Azure wasb & abfs distcp tests and *all* the s3a tests: all 
were happy.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15870:

Attachment: HADOOP-15870-005.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Commented] (HADOOP-16121) Cannot build in dev docker environment

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771834#comment-16771834
 ] 

Steve Loughran commented on HADOOP-16121:
-

Build setups are your problem, I'm afraid; or take it up on the common-dev 
list. Please don't assign issues to me, thanks.

> Cannot build in dev docker environment
> --
>
> Key: HADOOP-16121
> URL: https://issues.apache.org/jira/browse/HADOOP-16121
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
> Environment: Darwin lqjacklee-MacBook-Pro.local 18.2.0 Darwin Kernel 
> Version 18.2.0: Mon Nov 12 20:24:46 PST 2018; 
> root:xnu-4903.231.4~2/RELEASE_X86_64 x86_64
>Reporter: lqjacklee
>Priority: Minor
>
> Steps performed:
>  
> 1. Run the docker daemon.
> 2. Run ./start-build-env.sh.
> 3. Run mvn clean package -DskipTests.
>  
> Response from the command line:
>  
> [ERROR] Plugin org.apache.maven.plugins:maven-surefire-plugin:2.17 or one of 
> its dependencies could not be resolved: Failed to read artifact descriptor 
> for org.apache.maven.plugins:maven-surefire-plugin:jar:2.17: Could not 
> transfer artifact org.apache.maven.plugins:maven-surefire-plugin:pom:2.17 
> from/to central (https://repo.maven.apache.org/maven2): 
> /home/liu/.m2/repository/org/apache/maven/plugins/maven-surefire-plugin/2.17/maven-surefire-plugin-2.17.pom.part.lock
>  (No such file or directory) -> [Help 1] 
>  
> Attempted solutions:
> a. sudo chmod -R 775 ${USER_HOME}/.m2/
> b. sudo chown -R ${USER_NAME} ${USER_HOME}/.m2
>  
> After trying these, the build still fails.
>  
> c. sudo mvn clean package -DskipTests works, but run that way, won't Maven 
> download the files (pom, jar) all over again?






[jira] [Created] (HADOOP-16122) Re-login for multiple Hadoop users without updating global static UGI attributes

2019-02-19 Thread chendihao (JIRA)
chendihao created HADOOP-16122:
--

 Summary: Re-login for multiple Hadoop users without updating 
global static UGI attributes
 Key: HADOOP-16122
 URL: https://issues.apache.org/jira/browse/HADOOP-16122
 Project: Hadoop Common
  Issue Type: Bug
  Components: auth
Reporter: chendihao


In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
the HDFS files. The service runs multi-threaded, creating one independent UGI 
object for each user and using that UGI to create the Hadoop FileSystem 
object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so 
we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
the Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` uses the static User and static Subject 
objects to check the authentication and re-login. In fact, we want to 
re-login with the current User and Subject instead of the global static ones.

 

Because of this issue, we can support multiple Hadoop users logging in with 
their own keytabs, but they cannot re-login when their tickets expire.
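
A minimal sketch of the per-user pattern described above (the principal and keytab path are placeholders):

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch: one independent UGI per user, renewed and used via doAs. */
static FileSystem fileSystemFor(String principal, String keytab)
    throws Exception {
  // Independent login; the re-login must renew *this* ticket,
  // not the global static login user.
  UserGroupInformation ugi =
      UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
  // Call periodically before the Kerberos ticket expires:
  ugi.checkTGTAndReloginFromKeytab();
  // All HDFS access runs as that user.
  return ugi.doAs((PrivilegedExceptionAction<FileSystem>) () ->
      FileSystem.get(new Configuration()));
}
{code}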






[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users without using global static UGI users

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple Hadoop users without using 
global static UGI users  (was: Re-login for multiple Hadoop users without 
updating global static UGI attributes)

> Re-login from keytab for multiple Hadoop users without using global static 
> UGI users
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access 
> HDFS with their keytabs. The users have different Hadoop users and 
> permissions for the HDFS files. The service runs multi-threaded, creating 
> one independent UGI object for each user and using that UGI to create the 
> Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, 
> so we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` uses the static User and static 
> Subject objects to check the authentication and re-login. In fact, we want 
> to re-login with the current User and Subject instead of the global static 
> ones.
>  
> Because of this issue, we can support multiple Hadoop users logging in with 
> their own keytabs, but they cannot re-login when their tickets expire.






[jira] [Updated] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15843:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is that the bucket isn't there.
> Proposed: catch the FNFE, treat it as special, and return exit code "44", 
> "not found".






[jira] [Commented] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771843#comment-16771843
 ] 

Steve Loughran commented on HADOOP-16104:
-

LGTM & +1 from me.

Any comments from [~tmarquardt] or [~DanielZhou]?

[~iwasakims]: w.r.t. DTs, the s3a one is a bit overambitious in that it 
actually implements session- and role-based DTs. For ABFS I'm fixing up the 
plugin points to support something similar, but I don't have an 
implementation (yet). It's mostly new tests and the passing down of the URI 
of the FS, so that the DT issuer can issue a token for a specific URI and the 
authenticator can look it up.

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16104.001.patch
>
>
> When you run the abfs tests with a namespace-enabled accounts, all the wasb 
> tests fail "don't yet work with namespace-enabled accounts". This should be 
> downgraded to a test skip, somehow






[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users without using global static UGI users

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Description: 
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
the HDFS files. The service runs multi-threaded, creating one independent UGI 
object for each user and using that UGI to create the Hadoop FileSystem 
object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so 
we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
the Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` re-logins with the wrong user instead 
of the one from the expected UGI object.

 

Because of this issue, we can support multiple Hadoop users logging in with 
their own keytabs, but they cannot re-login when their tickets expire.

  was:
In our scenario, we have a service that allows multiple users to access HDFS 
with their keytabs. The users have different Hadoop users and permissions for 
the HDFS files. The service runs multi-threaded, creating one independent UGI 
object for each user and using that UGI to create the Hadoop FileSystem 
object to read/write HDFS.

 

Since we have multiple Hadoop users in the same process, we have to use 
`loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
`loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, so 
we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before 
the Kerberos ticket expires.

 

The issue is that `reloginFromKeytab` uses the static User and static Subject 
objects to check the authentication and re-login. In fact, we want to 
re-login with the current User and Subject instead of the global static ones.

 

Because of this issue, we can support multiple Hadoop users logging in with 
their own keytabs, but they cannot re-login when their tickets expire.


> Re-login from keytab for multiple Hadoop users without using global static 
> UGI users
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access 
> HDFS with their keytabs. The users have different Hadoop users and 
> permissions for the HDFS files. The service runs multi-threaded, creating 
> one independent UGI object for each user and using that UGI to create the 
> Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, 
> so we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` re-logins with the wrong user instead 
> of the one from the expected UGI object.
>  
> Because of this issue, we can support multiple Hadoop users logging in with 
> their own keytabs, but they cannot re-login when their tickets expire.






[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771840#comment-16771840
 ] 

Steve Loughran commented on HADOOP-15843:
-

No worries; I've just +1'd and committed the latest patch, as it was happy for me.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is that the bucket isn't there.
> Proposed: catch the FNFE, treat it as special, and return exit code "44", 
> "not found".






[jira] [Commented] (HADOOP-16114) NetUtils#canonicalizeHost gives different value for same host

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771849#comment-16771849
 ] 

Hadoop QA commented on HADOOP-16114:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 8s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16114 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12959223/HADOOP-16114-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 94291e66c693 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 588b4c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15934/testReport/ |
| Max. process+thread count | 1367 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15934/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> NetU

[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users does not work

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple Hadoop users does not work  
(was: Re-login from keytab for multiple Hadoop users not works)

> Re-login from keytab for multiple Hadoop users does not work
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access 
> HDFS with their keytabs. The users have different Hadoop users and 
> permissions for the HDFS files. The service runs multi-threaded, creating 
> one independent UGI object for each user and using that UGI to create the 
> Hadoop FileSystem object to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` will not do the re-login automatically, 
> so we have to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` 
> before the Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` re-logins with the wrong user instead 
> of the one from the expected UGI object.
>  
> Because of this issue, we can support multiple Hadoop users logging in with 
> their own keytabs, but they cannot re-login when their tickets expire.






[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771855#comment-16771855
 ] 

Hudson commented on HADOOP-15843:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15994 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15994/])
HADOOP-15843. s3guard bucket-info command to not print a stack trace on 
(stevel: rev 1e0ae6ed15f55f1dc64d2b9044eb2a84fc5c6837)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md


> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you run {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is that the bucket isn't there.
> Proposed: catch the FNFE, treat it as special, and return exit code "44", 
> "not found".
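A sketch of the proposed pattern, not the committed patch; {{inspectBucket}} 
is a hypothetical stand-in for the real bucket probe:

{code}
import java.io.FileNotFoundException;

public final class BucketInfoSketch {
  static final int EXIT_NOT_FOUND = 44;  // the "not found" code proposed above

  static int run(String bucket) {
    try {
      inspectBucket(bucket);  // hypothetical stand-in for the real probe
      return 0;
    } catch (FileNotFoundException e) {
      // All the caller needs to know: the bucket isn't there. No stack trace.
      System.err.println("Bucket not found: " + bucket);
      return EXIT_NOT_FOUND;
    }
  }

  static void inspectBucket(String bucket) throws FileNotFoundException {
    // placeholder: the real command examines the bucket and its S3Guard state
    throw new FileNotFoundException(bucket);
  }
}
{code}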






[jira] [Updated] (HADOOP-16122) Re-login from keytab for multiple Hadoop users not works

2019-02-19 Thread chendihao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao updated HADOOP-16122:
---
Summary: Re-login from keytab for multiple Hadoop users not works  (was: 
Re-login from keytab for multiple Hadoop users without using global static UGI 
users)

> Re-login from keytab for multiple Hadoop users not works
> 
>
> Key: HADOOP-16122
> URL: https://issues.apache.org/jira/browse/HADOOP-16122
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Reporter: chendihao
>Priority: Major
>
> In our scenario, we have a service that allows multiple users to access HDFS 
> with their keytabs. The users have different Hadoop users and permissions for 
> the HDFS files they access. The service runs multi-threaded, creates one 
> independent UGI object per user, and uses that UGI to create the Hadoop 
> FileSystem object used to read/write HDFS.
>  
> Since we have multiple Hadoop users in the same process, we have to use 
> `loginUserFromKeytabAndReturnUGI` instead of `loginUserFromKeytab`. 
> `loginUserFromKeytabAndReturnUGI` does not re-login automatically, so we have 
> to call `checkTGTAndReloginFromKeytab` or `reloginFromKeytab` before the 
> Kerberos ticket expires.
>  
> The issue is that `reloginFromKeytab` re-logins as the wrong user rather than 
> the one belonging to the expected UGI object.
>  
> Because of this issue, multiple Hadoop users can log in with their own 
> keytabs, but they cannot re-login once their tickets expire.






[jira] [Created] (HADOOP-16123) Lack of protoc

2019-02-19 Thread lqjacklee (JIRA)
lqjacklee created HADOOP-16123:
--

 Summary: Lack of protoc 
 Key: HADOOP-16123
 URL: https://issues.apache.org/jira/browse/HADOOP-16123
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: lqjacklee
Assignee: Steve Loughran


While building the source code, do the steps below:

1. Run the Docker daemon.

2. ./start-build-env.sh

3. sudo mvn clean install -DskipTests -Pnative

The build fails with:

[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
'protoc --version' did not return a version -> 

[Help 1]

However, `whereis protoc` finds the binary:

liu@a65d187055f9:~/hadoop$ whereis protoc
protoc: /opt/protobuf/bin/protoc

The PATH value is:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin

and protoc itself runs:

liu@a65d187055f9:~/hadoop$ protoc --version
libprotoc 2.5.0






[jira] [Updated] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-19 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16057:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}






[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771890#comment-16771890
 ] 

Adam Antal commented on HADOOP-15843:
-

Thanks [~ste...@apache.org] for the commit. I ran the tests against ireland 
and got some DT errors, but I did not configure it, so they're expected. The 
associated tests are passing for me. Also updated HADOOP-16057.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> When you run {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is that the bucket isn't there.
> Proposed: catch the FNFE, treat it as special, and return exit code "44", 
> "not found".






[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2019-02-19 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771897#comment-16771897
 ] 

Gabor Bota commented on HADOOP-15999:
-

Thanks for the review [~ste...@apache.org]!

Docs: It's HADOOP-15780, but I can do the docs here and we can resolve that 
issue separately without a patch.

Metrics at S3AFs: HADOOP-15779, but I will do the relevant part in this JIRA.

I'll fix all the other issues.

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, 
> out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations on the data can't reasonably be done with S3Guard involved. For 
> example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
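A minimal sketch of that reconciliation idea, assuming illustrative names 
({{OutOfBandReconciler}}, {{pick}}) rather than the actual S3Guard internals:

{code}
import org.apache.hadoop.fs.FileStatus;

final class OutOfBandReconciler {
  // Pick between the S3 probe and the MetadataStore record; either may be
  // null when only one source knows the path.
  static FileStatus pick(FileStatus fromS3, FileStatus fromStore) {
    if (fromS3 == null) {
      return fromStore;
    }
    if (fromStore == null) {
      return fromS3;  // out-of-band create: trust S3
    }
    // Out-of-band overwrite: the newer modification time wins. As the
    // description warns, this assumes the two timestamps are comparable.
    return fromS3.getModificationTime() >= fromStore.getModificationTime()
        ? fromS3 : fromStore;
  }
}
{code}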






[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-003.patch

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * the createFile() builder
> * the following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.
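A sketch of the shape of that fix, not the committed patch; 
{{ChecksummedFilter}} and {{wrapWithCrc()}} are hypothetical stand-ins for the 
checksumming subclass's internals. The point is that the flag/ChecksumOpt 
variants route through the subclass's own CRC-generating create() instead of 
relaying to the inner FS:

{code}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

class ChecksummedFilter extends FilterFileSystem {

  // The local CRC-generating create; in the real checksummed FS this is
  // where the checksum-writing output stream is constructed.
  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      boolean overwrite, int bufferSize, short replication, long blockSize,
      Progressable progress) throws IOException {
    return wrapWithCrc(super.create(f, permission, overwrite, bufferSize,
        replication, blockSize, progress));
  }

  // Previously relayed straight to the inner FS, skipping CRCs; now routed
  // through the local create above.
  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      EnumSet<CreateFlag> flags, int bufferSize, short replication,
      long blockSize, Progressable progress,
      Options.ChecksumOpt checksumOpt) throws IOException {
    return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
        bufferSize, replication, blockSize, progress);
  }

  private FSDataOutputStream wrapWithCrc(FSDataOutputStream out) {
    return out;  // placeholder for the checksum-writing wrapper
  }
}
{code}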






[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)

patch 001: patch 002 with the mapreduce change pulled out. Ran that test 
locally, all was happy

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * the createFile() builder
> * the following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.






[jira] [Work stopped] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15833 stopped by Steve Loughran.
---
> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.






[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771902#comment-16771902
 ] 

Steve Loughran commented on HADOOP-15999:
-

+1 for merging docs & code. That metrics JIRA is big enough that it should stay 
separate, so don't worry about it here.

In which case, all that should be needed here is docs, style changes, and a 
rerun of the tests.

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999.001.patch, HADOOP-15999.002.patch, 
> out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations on the data can't reasonably be done with S3Guard involved. For 
> example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.






[jira] [Commented] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771906#comment-16771906
 ] 

Steve Loughran commented on HADOOP-15833:
-

Not actually seeing this for a while, so closing as "cannot reproduce". Reopen 
as needed.

> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.






[jira] [Assigned] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15833:
---

Assignee: (was: Steve Loughran)

> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.






[jira] [Resolved] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15833.
-
Resolution: Cannot Reproduce

> Intermittent failures of some S3A tests with S3Guard in parallel test runs
> --
>
> Key: HADOOP-15833
> URL: https://issues.apache.org/jira/browse/HADOOP-15833
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: Screen Shot 2018-10-09 at 15.33.35.png
>
>
> intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in 
> parallel runs. They don't seem to fail in sequential mode.






[jira] [Updated] (HADOOP-16105) WASB in secure mode does not set connectingUsingSAS

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16105:

Affects Version/s: 3.2.0
   3.0.3

> WASB in secure mode does not set connectingUsingSAS
> ---
>
> Key: HADOOP-16105
> URL: https://issues.apache.org/jira/browse/HADOOP-16105
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.2.0, 3.0.3, 2.8.5, 3.1.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16105-001.patch, HADOOP-16105-002.patch
>
>
> If you run WASB in secure mode, it doesn't set {{connectingUsingSAS}} to 
> true, which can break things.






[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771900#comment-16771900
 ] 

Steve Loughran commented on HADOOP-15843:
-

bq. got some DT errors, 

?? 

Is this something related to assumed roles? They should all be downgrading; if 
that's not happening, it's a regression from HADOOP-14556.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15843-001.patch, HADOOP-15843-03.patch, 
> HADOOP-15843.002.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".






[jira] [Commented] (HADOOP-16057) IndexOutOfBoundsException in ITestS3GuardToolLocal

2019-02-19 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771889#comment-16771889
 ] 

Adam Antal commented on HADOOP-16057:
-

HADOOP-15843 has been reverted and recommitted. The test case is not even in 
the repo anymore, and the other associated tests pass (validated against 
ireland).

> IndexOutOfBoundsException in ITestS3GuardToolLocal
> --
>
> Key: HADOOP-16057
> URL: https://issues.apache.org/jira/browse/HADOOP-16057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Major
>
> A new test from HADOOP-15843 is failing: {{testDestroyNoArgs}}; one arg too 
> short in the command line.
> Test run with {{ -Ds3guard -Ddynamodb}}
> {code}
> [ERROR] 
> testDestroyNoArgs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)  
> Time elapsed: 0.761 s  <<< ERROR!
> java.lang.IndexOutOfBoundsException: toIndex = 1
>   at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
>   at java.util.ArrayList.subList(ArrayList.java:996)
>   at org.apache.hadoop.fs.shell.CommandFormat.parse(CommandFormat.java:89)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseArgs(S3GuardTool.java:371)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Destroy.run(S3GuardTool.java:626)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:399)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testDestroyNoArgs$4(AbstractS3GuardToolTestBase.java:403)
> {code}






[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771911#comment-16771911
 ] 

Steve Loughran commented on HADOOP-15847:
-

Catching up with this.

I couldn't find that bit of ScaleTestBase or the option 
fs.s3a.s3guard.ddb.table.scale.capacity.limit anywhere; I think that diff is 
either against a very old version of the code or is a diff between two 
intermediate patches. Can you do a diff from trunk...HEAD for the full patch? 
Thanks.

* If a new config option is added for testing, it must go into 
{{org.apache.hadoop.fs.s3a.S3ATestConstants}}, with something in testing.md to 
mention it.
* The IDE shouldn't be converting a single static import to a .*: check your 
rules or strip those changes from patches.
* That deleteTable call should be in a finally clause in the test to guarantee 
it always happens (a minimal sketch follows this comment).

Yes, we do need that cleanup.
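A minimal sketch of that last point, with hypothetical helpers 
({{createTable}}, {{deleteTable}}) rather than the real test code:

{code}
import org.junit.Test;

public class ITestConcurrentTableSketch {
  @Test
  public void testConcurrentTableCreations() throws Exception {
    Object table = createTable();  // hypothetical: provisions r/w capacity 1
    try {
      // ... the concurrent-creation assertions go here ...
    } finally {
      deleteTable(table);  // guaranteed cleanup, even when the test fails
    }
  }

  private Object createTable() { return new Object(); }  // stand-in
  private void deleteTable(Object table) { }             // stand-in
}
{code}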


> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15847-001.patch, HADOOP-15847-002.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a 
> region, presumably from an interrupted test. Luckily 
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could 
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less 
> often






[jira] [Created] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Adam Antal (JIRA)
Adam Antal created HADOOP-16124:
---

 Summary: Extend documentation in testing.md about endpoint 
constants
 Key: HADOOP-16124
 URL: https://issues.apache.org/jira/browse/HADOOP-16124
 Project: Hadoop Common
  Issue Type: Improvement
  Components: hadoop-aws
Affects Versions: 3.2.0
Reporter: Adam Antal
Assignee: Adam Antal


Since HADOOP-14190 we have had shortcuts for the endpoints in the core-site.xml 
in hadoop-aws. This is useful to know when someone comes across testing in 
hadoop-aws, so I suggest adding this little addition to testing.md.






[jira] [Updated] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16124:

Status: Patch Available  (was: Open)

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for the endpoints in the 
> core-site.xml in hadoop-aws. This is useful to know when someone comes across 
> testing in hadoop-aws, so I suggest adding this little addition to testing.md.






[jira] [Commented] (HADOOP-15920) get patch for S3a nextReadPos(), through Yetus

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771914#comment-16771914
 ] 

Hadoop QA commented on HADOOP-15920:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15920 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d521c0a1f77 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15936/artifact/out

[jira] [Updated] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-16124:

Attachment: HADOOP-16124.001.patch

> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for the endpoints in the 
> core-site.xml in hadoop-aws. This is useful to know when someone comes across 
> testing in hadoop-aws, so I suggest adding this little addition to testing.md.






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771938#comment-16771938
 ] 

Hadoop QA commented on HADOOP-15870:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
54s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
26s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
48s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 35s{color} | {color:orange} root: The patch generated 3 new + 10 unchanged - 
0 fixed = 13 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
31s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-15870 |
| GITHUB PR | https://github.com/apache/hadoop/pull/433 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9d30ffbb659 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / a060e8c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|

[jira] [Commented] (HADOOP-16124) Extend documentation in testing.md about endpoint constants

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771949#comment-16771949
 ] 

Hadoop QA commented on HADOOP-16124:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
30m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16124 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959243/HADOOP-16124.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux e845f12fb9e5 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15938/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |





> Extend documentation in testing.md about endpoint constants
> ---
>
> Key: HADOOP-16124
> URL: https://issues.apache.org/jira/browse/HADOOP-16124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: hadoop-aws
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Trivial
> Attachments: HADOOP-16124.001.patch
>
>
> Since HADOOP-14190 we have had shortcuts for the endpoints in the 
> core-site.xml in hadoop-aws. This is useful to know when someone comes across 
> testing in hadoop-aws, so I suggest adding this little addition to testing.md.






[jira] [Commented] (HADOOP-16112) After exist the baseTrashPath's subDir, delete the subDir leads to don't modify baseTrashPath

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771937#comment-16771937
 ] 

Hadoop QA commented on HADOOP-16112:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 4 new + 9 unchanged - 
0 fixed = 13 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}237m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959077/HADOOP-16112.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 

[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: (was: HADOOP-16107-003.patch)

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * the createFile() builder
> * the following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.






[GitHub] elek opened a new pull request #501: HDDS-1089. Disable OzoneFSStorageStatistics for hadoop versions older than 2.8

2019-02-19 Thread GitBox
elek opened a new pull request #501: HDDS-1089. Disable 
OzoneFSStorageStatistics for hadoop versions older than 2.8
URL: https://github.com/apache/hadoop/pull/501
 
 
   HDDS-1033 introduced OzoneFSStorageStatistics for OzoneFileSystem. It uses 
StorageStatistics, which was introduced in HADOOP-13065 (available from 
Hadoop 2.8/3.0).
   
   Using an older Hadoop (for example Hadoop 2.7, which is included in the 
Spark distributions) is no longer possible, even when using the isolated 
class loader (introduced in HDDS-922).
   
   Fortunately it can be fixed (see the sketch after this list):
# We can support null in the storageStatistics field by checking everywhere 
before calling it.
# We can create a new constructor of OzoneClientAdapterImpl that does not use 
OzoneFSStorageStatistics: if OzoneFSStorageStatistics is not in the 
method/constructor signature, we don't need to load it.
# We can check the availability of HADOOP-13065, and if the classes are not 
on the classpath, skip the initialization of OzoneFSStorageStatistics.
   
   See: https://issues.apache.org/jira/browse/HDDS-1089
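A minimal sketch of option 3, assuming nothing about the final patch: probe 
for the HADOOP-13065 class reflectively and only initialize the statistics 
when it is present on the classpath.

{code}
final class StorageStatisticsProbe {
  static boolean available() {
    try {
      // StorageStatistics only exists on Hadoop 2.8+ / 3.x classpaths.
      Class.forName("org.apache.hadoop.fs.StorageStatistics");
      return true;
    } catch (ClassNotFoundException | NoClassDefFoundError e) {
      return false;  // e.g. the Hadoop 2.7 bundled with Spark: skip stats
    }
  }
}
{code}

The adapter would then construct OzoneFSStorageStatistics only when 
available() returns true.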





[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Open  (was: Patch Available)

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to the local create calls, not to the 
> inner FS.
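
As a rough illustration of that fix (a sketch, not the attached patch): the 
checksumming layer overrides the same signature and routes it through its own 
local create(), so a CRC file is written alongside the data. ChecksumOpt 
handling is elided here:

{code}
  @Override
  public FSDataOutputStream create(final Path f,
      final FsPermission permission,
      final EnumSet<CreateFlag> flags,
      final int bufferSize,
      final short replication,
      final long blockSize,
      final Progressable progress,
      final Options.ChecksumOpt checksumOpt) throws IOException {
    // Relay to the local checksummed create(), not super.create(), so the
    // CRC logic of this filesystem is applied rather than the inner FS's.
    return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
        bufferSize, replication, blockSize, progress);
  }
{code}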



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Comment: was deleted

(was: patch 001: patch 002 with the mapreduce change pulled out. Ran that test 
locally, all was happy)

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Attachment: HADOOP-16107-003.patch

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek opened a new pull request #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-19 Thread GitBox
elek opened a new pull request #502: HDDS-919. Enable prometheus endpoints for 
Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502
 
 
   HDDS-846 provides a new metrics endpoint which publishes the available 
Hadoop metrics in a Prometheus-friendly format via a new servlet.
   
   Unfortunately it's enabled only on the scm/om side. It would be great to 
enable it in the Ozone/HDDS datanodes as well, on the web server of the HDDS 
REST endpoint.
   
   See: https://issues.apache.org/jira/browse/HDDS-919
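
As a hedged sketch of the enablement, assuming the configuration key and the 
/prom servlet path introduced by HDDS-846 (both worth verifying against the 
version in use):

{code}
// Assumption: "hdds.prometheus.endpoint.enabled" and the /prom path are the
// HDDS-846 names; check them against the Ozone release being targeted.
OzoneConfiguration conf = new OzoneConfiguration();
conf.setBoolean("hdds.prometheus.endpoint.enabled", true);
// Prometheus would then scrape http://<datanode-host>:<http-port>/prom
{code}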


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771967#comment-16771967
 ] 

Hadoop QA commented on HADOOP-16107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  2s{color} 
| {color:red} root generated 1 new + 1491 unchanged - 0 fixed = 1492 total (was 
1491) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959239/HADOOP-16107-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 725f7b52e2fc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/diff-compile-javac-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15937/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test R

[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-19 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771969#comment-16771969
 ] 

Ben Roling commented on HADOOP-15625:
-

bq. although I wouldn't expect it to be seen so often as to be offensive to 
users of such third-party stores (assuming such stores actually exist).

This is sort of embarrassing.  I don't know exactly what I was thinking when I 
wrote that.  If GetObject never returns an eTag for some third-party store and 
we logged a warning whenever that happened, then anyone using that store would 
see a warning on every single file read.  Obviously that would look stupid.

It does feel like we will need some form of configuration if we're worried 
about third-party stores not supporting eTags (such as not returning them on 
GetObject or not supporting withMatchingETagConstraint()).  I'll just go ahead 
and add some configuration around this in my next version of the patch.  I'm 
still waiting on the feedback about the Exception type though.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch, 
> HADOOP-15625-003.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or perhaps even go from new data back 
> to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
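
A sketch of the proposed check against the AWS SDK v1 client that S3A uses; 
withMatchingETagConstraint() makes GetObject return null when the stored etag 
no longer matches, which maps naturally onto the IOE described above (class 
and method names here are illustrative):

{code}
import java.io.IOException;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

/** Illustrative sketch: fail the read if the object changed since open. */
class EtagCheckingReopen {
  private final AmazonS3 s3;
  private final String bucket;
  private final String key;
  private final String etagAtOpen; // cached from the first HEAD/GET

  EtagCheckingReopen(AmazonS3 s3, String bucket, String key,
      String etagAtOpen) {
    this.s3 = s3;
    this.bucket = bucket;
    this.key = key;
    this.etagAtOpen = etagAtOpen;
  }

  /** Re-issue the GET that a seek() triggers, pinned to the open-time etag. */
  S3Object reopen(long offset) throws IOException {
    GetObjectRequest request = new GetObjectRequest(bucket, key)
        .withRange(offset)
        .withMatchingETagConstraint(etagAtOpen);
    S3Object object = s3.getObject(request);
    if (object == null) {
      // Constraint unmet: the remote file changed during the read.
      throw new IOException("ETag of s3://" + bucket + "/" + key
          + " changed during read; etag at open was " + etagAtOpen);
    }
    return object;
  }
}
{code}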



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16107:

Status: Patch Available  (was: Open)

Patch 003

* review changes to minimise diff
* tag the new protected static create-helper methods as 
@LimitedPrivate("Filesystems"). 
* tested hadoop-aws; all happy
* tested mapreduce TestJobCounters (which found the problem): all happy

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> The following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16120) Lazily allocate KMS delegation tokens

2019-02-19 Thread Ruslan Dautkhanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Dautkhanov updated HADOOP-16120:
---
Description: 
We noticed that HDFS clients talk to KMS even when they try to access 
unencrypted databases. Is there a way to make HDFS clients talk to KMS 
servers *only* when they need access to encrypted data? Since we will be 
encrypting only one database (and 50+ other much more critical production 
databases will not be encrypted), we want to limit any outage to encrypted 
data if KMS is down for maintenance or for some other reason.

In other words, it would be great if KMS delegation tokens were allocated 
lazily - on the first request for encrypted data.

This could be a non-default option to lazily allocate KMS delegation tokens, 
to improve availability of non-encrypted data.

 

  was:
We noticed that HDFS clients talk to KMS even when they try to access 
unencrypted databases. Is there a way to make HDFS clients talk to KMS 
servers *only* when they need access to encrypted data? Since we will be 
encrypting only one database (and 50 other databases will not be encrypted), 
we want to limit any outage to encrypted data if KMS is down for maintenance 
or for some other reason.

In other words, it would be great if KMS delegation tokens were allocated 
lazily - on the first request for encrypted data.


> Lazily allocate KMS delegation tokens
> -
>
> Key: HADOOP-16120
> URL: https://issues.apache.org/jira/browse/HADOOP-16120
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Ruslan Dautkhanov
>Priority: Major
>
> We noticed that HDFS clients talk to KMS even when they try to access 
> unencrypted databases. Is there a way to make HDFS clients talk to KMS 
> servers *only* when they need access to encrypted data? Since we will be 
> encrypting only one database (and 50+ other much more critical production 
> databases will not be encrypted), we want to limit any outage to encrypted 
> data if KMS is down for maintenance or for some other reason.
> In other words, it would be great if KMS delegation tokens were allocated 
> lazily - on the first request for encrypted data.
> This could be a non-default option to lazily allocate KMS delegation tokens, 
> to improve availability of non-encrypted data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772060#comment-16772060
 ] 

Hadoop QA commented on HADOOP-16107:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m  2s{color} 
| {color:red} root generated 1 new + 1491 unchanged - 0 fixed = 1492 total (was 
1491) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 23 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959251/HADOOP-16107-003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56adabf15dfb 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e0ae6e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15939/artifact/out/diff-compile-javac-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15939/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15939/artifact/out/patch-unit-hadoop-common

[jira] [Commented] (HADOOP-16120) Lazily allocate KMS delegation tokens

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772164#comment-16772164
 ] 

Wei-Chiu Chuang commented on HADOOP-16120:
--

Hi,

KMS delegation tokens are issued when an application invokes the 
FileSystem#addDelegationTokens() API. An application typically invokes this API 
because the delegation tokens may be used later. For example, a MapReduce 
client invokes it, so that the DTs can be passed along to mapper and reducer. 
And typically it's not possible to know if you would ever access an encryption 
zone a priori.
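
For reference, a minimal sketch of that eager, up-front collection via the 
standard API (the "yarn" renewer is just an example value):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

/** Sketch: the token collection a MapReduce client performs at submit time. */
public class CollectTokens {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Credentials credentials = new Credentials();
    // HDFS *and* KMS delegation tokens come back here, before it is known
    // whether the job will ever touch an encryption zone.
    Token<?>[] tokens = fs.addDelegationTokens("yarn", credentials);
    System.out.println("Collected " + tokens.length + " delegation tokens");
  }
}
{code}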

> Lazily allocate KMS delegation tokens
> -
>
> Key: HADOOP-16120
> URL: https://issues.apache.org/jira/browse/HADOOP-16120
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Ruslan Dautkhanov
>Priority: Major
>
> We noticed that HDFS clients talk to KMS even when they try to access 
> unencrypted databases. Is there a way to make HDFS clients talk to KMS 
> servers *only* when they need access to encrypted data? Since we will be 
> encrypting only one database (and 50+ other much more critical production 
> databases will not be encrypted), we want to limit any outage to encrypted 
> data if KMS is down for maintenance or for some other reason.
> In other words, it would be great if KMS delegation tokens were allocated 
> lazily - on the first request for encrypted data.
> This could be a non-default option to lazily allocate KMS delegation tokens, 
> to improve availability of non-encrypted data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16120) Lazily allocate KMS delegation tokens

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772164#comment-16772164
 ] 

Wei-Chiu Chuang edited comment on HADOOP-16120 at 2/19/19 6:12 PM:
---

Hi Ruslan, thanks for reporting the issue.

KMS delegation tokens are issued when an application invokes the 
FileSystem#addDelegationTokens() API. An application typically invokes this API 
because the delegation tokens may be used later. For example, a MapReduce 
client invokes it, so that the DTs can be passed along to mapper and reducer. 
And typically it's not possible to know if you would ever access an encryption 
zone a priori.


was (Author: jojochuang):
Hi,

KMS delegation tokens are issued when an application invokes the 
FileSystem#addDelegationTokens() API. An application typically invokes this API 
because the delegation tokens may be used later. For example, a MapReduce 
client invokes it, so that the DTs can be passed along to mapper and reducer. 
And typically it's not possible to know if you would ever access an encryption 
zone a priori.

> Lazily allocate KMS delegation tokens
> -
>
> Key: HADOOP-16120
> URL: https://issues.apache.org/jira/browse/HADOOP-16120
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.8.5, 3.1.2
>Reporter: Ruslan Dautkhanov
>Priority: Major
>
> We noticed that HDFS clients talk to KMS even when they try to access 
> unencrypted databases. Is there a way to make HDFS clients talk to KMS 
> servers *only* when they need access to encrypted data? Since we will be 
> encrypting only one database (and 50+ other much more critical production 
> databases will not be encrypted), we want to limit any outage to encrypted 
> data if KMS is down for maintenance or for some other reason.
> In other words, it would be great if KMS delegation tokens were allocated 
> lazily - on the first request for encrypted data.
> This could be a non-default option to lazily allocate KMS delegation tokens, 
> to improve availability of non-encrypted data.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16119:
-
Issue Type: New Feature  (was: Bug)

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> Per discussion on common-dev; the text is copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16119) KMS on Hadoop RPC Engine

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16119:


Assignee: Wei-Chiu Chuang

> KMS on Hadoop RPC Engine
> 
>
> Key: HADOOP-16119
> URL: https://issues.apache.org/jira/browse/HADOOP-16119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> Per discussion on common-dev; the text is copied here for ease of reference.
> https://lists.apache.org/thread.html/0e2eeaf07b013f17fad6d362393f53d52041828feec53dcddff04808@%3Ccommon-dev.hadoop.apache.org%3E
> {noformat}
> Thanks all for the inputs,
> To offer additional information (while Daryn is working on his stuff),
> optimizing RPC encryption opens up another possibility: migrating KMS
> service to use Hadoop RPC.
> Today's KMS uses HTTPS + REST API, much like webhdfs. It has very
> undesirable performance (a few thousand ops per second) compared to
> NameNode. Unfortunately for each NameNode namespace operation you also need
> to access KMS too.
> Migrating KMS to Hadoop RPC greatly improves its performance (if
> implemented correctly), and RPC encryption would be a prerequisite. So
> please keep that in mind when discussing the Hadoop RPC encryption
> improvements. Cloudera is very interested to help with the Hadoop RPC
> encryption project because a lot of our customers are using at-rest
> encryption, and some of them are starting to hit KMS performance limit.
> This whole "migrating KMS to Hadoop RPC" was Daryn's idea. I heard this
> idea in the meetup and I am very thrilled to see this happening because it
> is a real issue bothering some of our customers, and I suspect it is the
> right solution to address this tech debt.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-02-19 Thread Michael Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772244#comment-16772244
 ] 

Michael Miller commented on HADOOP-11223:
-

Would it simplify things to make this class package-private and then add a 
static method to Configuration that creates the unmodifiable object? 

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Michael Miller
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch, HADOOP-11223.002.patch, 
> HADOOP-11223.003.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::<clinit>()
> # org.apache.hadoop.security.SecurityUtil::<clinit>()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::<clinit>()
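
A minimal sketch of the {{Configuration::getDefault()}} idea (class and method 
names hypothetical, not taken from the attached patches): a single pre-parsed 
instance whose mutators refuse writes.

{code}
import org.apache.hadoop.conf.Configuration;

/**
 * Hypothetical sketch: one shared default Configuration, parsed once, that
 * static initializers can read without triggering another XML parse, and
 * that refuses modification. (Other mutators that bypass set() would need
 * the same treatment in a real implementation.)
 */
public final class ReadOnlyDefaultConfiguration extends Configuration {
  private static final ReadOnlyDefaultConfiguration INSTANCE =
      new ReadOnlyDefaultConfiguration();

  private ReadOnlyDefaultConfiguration() {
    super(true); // load the default resources exactly once
  }

  public static Configuration getDefault() {
    return INSTANCE;
  }

  @Override
  public void set(String name, String value, String source) {
    // Configuration.set(name, value) delegates here, so this blocks both.
    throw new UnsupportedOperationException(
        "read-only configuration: cannot set " + name);
  }
}
{code}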



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx closed pull request #494: HDDS-1085 : Create an OM API to serve snapshots to Recon server.

2019-02-19 Thread GitBox
avijayanhwx closed pull request #494: HDDS-1085 : Create an OM API to serve 
snapshots to Recon server.
URL: https://github.com/apache/hadoop/pull/494
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772262#comment-16772262
 ] 

Steve Loughran commented on HADOOP-16068:
-

+rendered on github: 
https://github.com/steveloughran/hadoop/blob/abfs/HADOOP-16068-Delegation/hadoop-tools/hadoop-azure/src/site/markdown/abfs.md

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per-FS 
> auth.
> +add a stub DT plugin for testing; verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

Patch 008: more on the doc, including

* details on setting that HADOOP_OPTIONAL_TOOLS env var
* anchors for all the ## level titles; some of the ### ones
* a bit more on setup

I haven't yet played with all the auth mechanisms to understand how they work; 
reviews and comments welcome.

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per-FS 
> auth.
> +add a stub DT plugin for testing; verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Open  (was: Patch Available)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per-FS 
> auth.
> +add a stub DT plugin for testing; verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Attachment: HADOOP-16068-008.patch

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per-FS 
> auth.
> +add a stub DT plugin for testing; verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx opened a new pull request #503: Fix findbugs issues in HDDS-1085.

2019-02-19 Thread GitBox
avijayanhwx opened a new pull request #503: Fix findbugs issues in HDDS-1085.
URL: https://github.com/apache/hadoop/pull/503
 
 
   Fixing findbugs issues in HDDS-1085. If all the issues are solved, I will 
create a JIRA and attach the patch.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772308#comment-16772308
 ] 

Hadoop QA commented on HADOOP-16068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 77 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959316/HADOOP-16068-008.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 0c71fbe7ca7c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 02d04bd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15940/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15940/testReport/ |
| Max. process+thread count | 295 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Conso

[jira] [Created] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)
Lukas Majercak created HADOOP-16125:
---

 Summary: Support multiple bind users in LdapGroupsMapping
 Key: HADOOP-16125
 URL: https://issues.apache.org/jira/browse/HADOOP-16125
 Project: Hadoop Common
  Issue Type: New Feature
  Components: common, security
Reporter: Lukas Majercak
Assignee: Lukas Majercak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx closed pull request #503: HDDS-1139 : Fix findbugs issues in HDDS-1085.

2019-02-19 Thread GitBox
avijayanhwx closed pull request #503: HDDS-1139 : Fix findbugs issues in 
HDDS-1085.
URL: https://github.com/apache/hadoop/pull/503
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx opened a new pull request #504: HDDS-1139 : Fix findbugs issues caused by HDDS-1085.

2019-02-19 Thread GitBox
avijayanhwx opened a new pull request #504: HDDS-1139 : Fix findbugs issues 
caused by HDDS-1085.
URL: https://github.com/apache/hadoop/pull/504
 
 
   Fixing findbugs issues caused by HDDS-1085.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Description: 
Currently, LdapGroupsMapping supports only a single bind user for connecting 
to LDAP. This can be problematic if that user's password needs to be reset. 

The proposal is to support multiple such users and switch between them if 
necessary; more info is in GroupsMapping.md / core-default.xml in the patches.
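
As a rough illustration of the failover idea (names hypothetical, not the 
patch itself): keep an ordered list of bind credentials and move to the next 
pair when a bind is rejected.

{code}
import java.util.List;

import javax.naming.AuthenticationException;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;

/** Hypothetical sketch: rotate through bind users when one fails to bind. */
class BindUserFailover {
  /** One configured bind user; both values would come from configuration. */
  static class BindUser {
    final String principal;
    final String password;

    BindUser(String principal, String password) {
      this.principal = principal;
      this.password = password;
    }
  }

  /** Assumed abstraction over the actual LDAP context creation. */
  interface LdapConnector {
    DirContext bind(String principal, String password) throws NamingException;
  }

  private final List<BindUser> bindUsers;

  BindUserFailover(List<BindUser> bindUsers) {
    this.bindUsers = bindUsers;
  }

  DirContext connect(LdapConnector connector) throws NamingException {
    NamingException last = new NamingException("no bind users configured");
    for (BindUser user : bindUsers) {
      try {
        return connector.bind(user.principal, user.password);
      } catch (AuthenticationException e) {
        // Likely a mid-rotation password reset; try the next user.
        last = e;
      }
    }
    throw last;
  }
}
{code}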

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
>
> Currently, LdapGroupsMapping supports only a single bind user for connecting 
> to LDAP. This can be problematic if that user's password needs to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info is in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Attachment: HADOOP-16125.001.patch

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user for connecting 
> to LDAP. This can be problematic if that user's password needs to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info is in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16125 started by Lukas Majercak.
---
> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user for connecting 
> to LDAP. This can be problematic if that user's password needs to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info is in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Status: Patch Available  (was: In Progress)

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user for connecting 
> to LDAP. This can be problematic if that user's password needs to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info is in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772320#comment-16772320
 ] 

Wei-Chiu Chuang commented on HADOOP-15967:
--

+1 will commit soon

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement. One thing that's 
> missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by the client, e.g. generate_eek, 
> decrypt_eek, reencrypt_eek
> # optionally specify the number of threads sending KMS requests.
> Filing this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]
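
A bare-bones sketch of requirements 1 and 2 for the generate_eek case, 
assuming a reachable KMS at the URI below and an existing key (both 
placeholder values):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

/** Sketch only: a standalone client timing generate_eek against a KMS. */
public class KmsGenerateEekBench {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint and key name; no NameNode or DataNode involved.
    URI kmsUri = new URI("kms://http@kms-host:9600/kms");
    String keyName = "benchkey";
    int ops = 10_000;

    KeyProvider provider = KeyProviderFactory.get(kmsUri, new Configuration());
    KeyProviderCryptoExtension kms =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);

    long start = System.nanoTime();
    for (int i = 0; i < ops; i++) {
      kms.generateEncryptedKey(keyName);
    }
    double seconds = (System.nanoTime() - start) / 1e9;
    System.out.printf("%d generate_eek ops in %.1fs (%.0f ops/s)%n",
        ops, seconds, ops / seconds);
  }
}
{code}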



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772330#comment-16772330
 ] 

Lukas Majercak commented on HADOOP-16125:
-

Add DummyLdapCtxFactory.reset() in patch002.

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user for connecting 
> to LDAP. This can be problematic if that user's password needs to be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info is in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Attachment: HADOOP-16125.002.patch

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user when 
> connecting to LDAP. This can be problematic if that user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772336#comment-16772336
 ] 

Íñigo Goiri commented on HADOOP-16125:
--

As noted in the description, if we have a single user and we are changing the 
password, there will be a window (while the new secret is being propagated) 
during which auth will fail.
A common solution for this is to have two users and change one at a time.
This change will enable that approach.

Regarding [^HADOOP-16125.001.patch], LGTM.
The refactor for the user/password makes LdapGroupsMapping cleaner too.

Let's see what Yetus says.
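
For reference, a hypothetical core-site.xml sketch of what a two-bind-user 
setup could look like. The property names here are illustrative, not 
authoritative; the actual names are documented in GroupsMapping.md / 
core-default.xml in the patches.

{code}
<!-- Illustrative sketch only: two bind users, rotated one at a time. -->
<property>
  <name>hadoop.security.group.mapping.ldap.bind.users</name>
  <value>alias1,alias2</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.users.alias1.bind.user</name>
  <value>cn=bind1,ou=users,dc=example,dc=com</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.users.alias1.bind.password.file</name>
  <value>/etc/hadoop/ldap-bind1.password</value>
</property>
<!-- ...and the same two properties for alias2... -->
{code}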

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user when 
> connecting to LDAP. This can be problematic if that user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16055) Upgrade AWS SDK to 1.11.271 in branch-2

2019-02-19 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772347#comment-16772347
 ] 

t oo commented on HADOOP-16055:
---

Is the 2.8 branch not getting the v1.11.271 SDK like the title says?

> Upgrade AWS SDK to 1.11.271 in branch-2
> ---
>
> Key: HADOOP-16055
> URL: https://issues.apache.org/jira/browse/HADOOP-16055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 2.10.0, 2.9.3
>
> Attachments: HADOOP-16055-branch-2-01.patch, 
> HADOOP-16055-branch-2.8-01.patch, HADOOP-16055-branch-2.8-02.patch, 
> HADOOP-16055-branch-2.9-01.patch
>
>
> Per HADOOP-13794, we must exclude the JSON license.
> The upgrade will contain incompatible changes; however, the license issue is 
> much more important.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-16126:


 Summary: ipc.Client.stop() may sleep too long to wait for all 
connections
 Key: HADOOP-16126
 URL: https://issues.apache.org/jira/browse/HADOOP-16126
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


{code}
//Client.java
  public void stop() {
...
// wait until all connections are closed
while (!connections.isEmpty()) {
  try {
Thread.sleep(100);
  } catch (InterruptedException e) {
  }
}
...
  }
{code}
In the code above, the sleep time is 100ms.  We found that simply changing the 
sleep time to 10ms could improve a Hive job's running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772404#comment-16772404
 ] 

Hadoop QA commented on HADOOP-16125:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  2s{color} 
| {color:red} root generated 2 new + 1491 unchanged - 0 fixed = 1493 total (was 
1491) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 16 new + 22 unchanged - 2 fixed = 38 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.security.LdapGroupsMapping$BindUserInfo defines equals 
and uses Object.hashCode()  At LdapGroupsMapping.java:Object.hashCode()  At 
LdapGroupsMapping.java:[lines 906-909] |
|  |  Should org.apache.hadoop.security.LdapGroupsMapping$BindUserInfo be a 
_static_ inner class?  At LdapGroupsMapping.java:inner class?  At 
LdapGroupsMapping.java:[lines 895-914] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959325/HADOOP-16125.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6

[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772406#comment-16772406
 ] 

Hadoop QA commented on HADOOP-16125:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  6s{color} 
| {color:red} root generated 2 new + 1491 unchanged - 0 fixed = 1493 total (was 
1491) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 16 new + 22 unchanged - 2 fixed = 38 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-common-project/hadoop-common generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.security.LdapGroupsMapping$BindUserInfo defines equals 
and uses Object.hashCode()  At LdapGroupsMapping.java:Object.hashCode()  At 
LdapGroupsMapping.java:[lines 906-909] |
|  |  Should org.apache.hadoop.security.LdapGroupsMapping$BindUserInfo be a 
_static_ inner class?  At LdapGroupsMapping.java:inner class?  At 
LdapGroupsMapping.java:[lines 895-914] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12959325/HADOOP-16125.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7

[jira] [Updated] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-16125:

Attachment: HADOOP-16125.003.patch

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, 
> HADOOP-16125.003.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user when 
> connecting to LDAP. This can be problematic if that user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16125) Support multiple bind users in LdapGroupsMapping

2019-02-19 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772418#comment-16772418
 ] 

Lukas Majercak commented on HADOOP-16125:
-

Patch 003 fixes the findbugs/checkstyle/whitespace issues.
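
For context, the two findbugs warnings from the QA run above are typically 
resolved along these lines (an illustrative sketch, not the actual patch; the 
field names are assumed):

{code}
// Make the inner class static and give it a hashCode() consistent with
// equals(), addressing both findbugs warnings.
private static class BindUserInfo {
  private final String user;
  private final String password;

  BindUserInfo(String user, String password) {
    this.user = user;
    this.password = password;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof BindUserInfo)) {
      return false;
    }
    BindUserInfo other = (BindUserInfo) o;
    return user.equals(other.user) && password.equals(other.password);
  }

  @Override
  public int hashCode() {
    return 31 * user.hashCode() + password.hashCode();
  }
}
{code}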

> Support multiple bind users in LdapGroupsMapping
> 
>
> Key: HADOOP-16125
> URL: https://issues.apache.org/jira/browse/HADOOP-16125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-16125.001.patch, HADOOP-16125.002.patch, 
> HADOOP-16125.003.patch
>
>
> Currently, LdapGroupsMapping supports only a single bind user when 
> connecting to LDAP. This can be problematic if that user's password needs to 
> be reset. 
> The proposal is to support multiple such users and switch between them if 
> necessary; more info in GroupsMapping.md / core-default.xml in the patches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15914) hadoop jar command has no help argument

2019-02-19 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772422#comment-16772422
 ] 

Daniel Templeton commented on HADOOP-15914:
---

LGTM.  [~aw], any concerns?  Otherwise I'll +1 and commit.

> hadoop jar command has no help argument
> ---
>
> Key: HADOOP-15914
> URL: https://issues.apache.org/jira/browse/HADOOP-15914
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-15914.000.patch
>
>
> {{hadoop jar --help}} and {{hadoop jar help}} commands show outputs like this:
> {noformat}
> WARNING: Use "yarn jar" to launch YARN applications.
> JAR does not exist or is not a normal file: /root/--help
> {noformat}
> Only when called with no arguments ({{hadoop jar}}) do we get the usage text, 
> but even in that case we get:
> {noformat}
> WARNING: Use "yarn jar" to launch YARN applications.
> RunJar jarFile [mainClass] args...
> {noformat}
> Here RunJar is wrapped by the hadoop script (so it should not be displayed).
> {{hadoop --help}} displays the following:
> {noformat}
> jar  run a jar file. NOTE: please use "yarn jar" to launch YARN 
> applications, not this command.
> {noformat}
> which is fine, but {{CommandsManual.md}} gives a bit more information about 
> the usage of this command:
> {noformat}
> Usage: hadoop jar <jar> [mainClass] args...
> {noformat}
> My suggestion is to add a {{--help}} option to the {{hadoop jar}} command 
> that would display this message.
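
A minimal sketch of the kind of check that could implement this in 
RunJar.main() (illustrative only; the attached patch is the authoritative 
change):

{code}
// Hypothetical sketch: short-circuit on a help flag before the first
// argument is treated as a jar path.
public static void main(String[] args) throws Throwable {
  if (args.length > 0
      && ("--help".equals(args[0]) || "-h".equals(args[0]))) {
    System.err.println("Usage: hadoop jar <jar> [mainClass] args...");
    System.exit(0);
  }
  new RunJar().run(args);
}
{code}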



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx commented on issue #504: HDDS-1139 : Fix findbugs issues caused by HDDS-1085.

2019-02-19 Thread GitBox
avijayanhwx commented on issue #504: HDDS-1139 : Fix findbugs issues caused by 
HDDS-1085.
URL: https://github.com/apache/hadoop/pull/504#issuecomment-465353620
 
 
   All findbugs issues caused by HDDS-1085 have been fixed. The integration test 
TestOzoneConfigurationFields has also passed in this run.
   
   cc @elek @bharatviswa504 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15967) KMS Benchmark Tool

2019-02-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15967:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~ghuangups] for the patch!

> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only a KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by the client, e.g. generate_eek, 
> decrypt_eek, reencrypt_eek.
> # optionally specify the number of threads sending KMS requests.
> Filing this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15967) KMS Benchmark Tool

2019-02-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772443#comment-16772443
 ] 

Hudson commented on HADOOP-15967:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16001 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16001/])
HADOOP-15967. KMS Benchmark Tool. Contributed by George Huang. (weichiu: rev 
0525d85d57763a0078bdaf9b08d36909f3c6ae2e)
* (add) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/KMSBenchmark.java


> KMS Benchmark Tool
> --
>
> Key: HADOOP-15967
> URL: https://issues.apache.org/jira/browse/HADOOP-15967
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: George Huang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15967.001.patch, HADOOP-15967.002.patch, 
> HADOOP-15967.003.patch
>
>
> We've been working on several pieces of KMS improvement work. One thing 
> that's missing so far is a good benchmark tool for KMS, similar to 
> NNThroughputBenchmark.
> Some requirements I have in mind:
> # it should be a standalone benchmark tool, requiring only a KMS and a 
> benchmark client. No NameNode or DataNode should be involved.
> # specify the type of KMS request sent by the client, e.g. generate_eek, 
> decrypt_eek, reencrypt_eek.
> # optionally specify the number of threads sending KMS requests.
> Filing this jira to gather more requirements. Thoughts? [~knanasi] [~xyao] 
> [~daryn]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16772452#comment-16772452
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16126:
--

Tried to change the sleep to wait-notify.  However, found some race conditions 
such as:
- a new connection could be put after stop() has been called.
- stop() can be called twice.

Therefore, we will just change the sleep time here, and then fix the race 
conditions and change to wait-notify in a separate JIRA.
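
For reference, a wait/notify shape for this loop might look like the sketch 
below. This is illustrative only: connectionsLock and onConnectionRemoved() 
are invented names, and the two races above would still need separate fixes.

{code}
// Illustrative sketch only. The connection-removal path would have to call
// onConnectionRemoved(); the races noted above are not handled here.
private final Object connectionsLock = new Object();

void onConnectionRemoved() {
  synchronized (connectionsLock) {
    connectionsLock.notifyAll();
  }
}

public void stop() {
  synchronized (connectionsLock) {
    while (!connections.isEmpty()) {
      try {
        connectionsLock.wait(1000);  // bounded wait guards lost wakeups
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}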

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job's running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16126:
-
Attachment: c16126_20190219.patch

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job's running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-19 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-16126:
-
Status: Patch Available  (was: Open)

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: c16126_20190219.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job's running time by 10x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


