[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2017-12-11 Thread Harshakiran Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287119#comment-16287119
 ] 

Harshakiran Reddy commented on HADOOP-13590:


Hi [~xiaochen], after this commit I am getting the below exception roughly 5 
hours before my user's TGT expires.

{noformat}
bin> ./hdfs dfs -ls /
2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
encountered while running the renewal command for principal_name. (TGT end 
time:1513070122000, renewalFailures: 
org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
renewing credentials

at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
at org.apache.hadoop.util.Shell.run(Shell.java:887)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
at 
org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
at java.lang.Thread.run(Thread.java:745)
bin> klist
Ticket cache: FILE:/tmp/krb5cc_20024
Default principal: principal_name

Valid starting     Expires            Service principal
12/11/17 17:15:25  12/12/17 17:15:22  principal_name
bin> date
Tue Dec 12 12:40:16 CST 2017
{noformat}



> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch, 
> HADOOP-13590.09.patch, HADOOP-13590.10.patch, HADOOP-13590.branch-2.01.patch
>
>
> The UGI has a background thread to renew the TGT. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something temporarily goes wrong and results in an IOE, then even if the 
> problem later recovers, no further renewal will be done and the client will 
> eventually fail to authenticate. We should retry with best effort, until the 
> TGT expires, in the hope that the error recovers before then.
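
For illustration, a minimal sketch of the retry-until-expiry behavior described 
above (illustrative names only; this is not the actual UserGroupInformation code):

{code:java}
// Hypothetical sketch, not the actual UserGroupInformation code.
// renewTgt(), sleepUntilNextRenewal() and sleepBeforeRetry() are
// illustrative helpers (renewTgt() declared to throw IOException).
void renewalLoop(javax.security.auth.kerberos.KerberosTicket tgt)
    throws InterruptedException {
  final long tgtEndTime = tgt.getEndTime().getTime();
  while (System.currentTimeMillis() < tgtEndTime) {
    try {
      renewTgt();               // e.g. run kinit -R or a JAAS relogin
      sleepUntilNextRenewal();  // wait until the next scheduled renewal
    } catch (java.io.IOException ioe) {
      // Previously the thread terminated here; instead, back off and
      // retry, hoping a transient error clears before the TGT expires.
      sleepBeforeRetry();
    }
  }
  // Give up only once the TGT itself has expired.
}
{code}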






[jira] [Created] (HADOOP-15110) Objects are getting logged when we got exception from AutoRenewalThreadForUserCreds

2017-12-11 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HADOOP-15110:
--

 Summary: Objects are getting logged when we got exception from 
AutoRenewalThreadForUserCreds
 Key: HADOOP-15110
 URL: https://issues.apache.org/jira/browse/HADOOP-15110
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha2, 2.8.0
Reporter: Harshakiran Reddy


*Scenario*:
-

While running the renewal command for a principal, the log prints the raw 
object references for *renewalFailures* and *renewalFailuresTotal*:

{noformat}
bin> ./hdfs dfs -ls /
2017-12-12 12:31:50,910 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2017-12-12 12:31:52,312 WARN security.UserGroupInformation: Exception 
encountered while running the renewal command for principal_name. (TGT end 
time:1513070122000, renewalFailures: 
org.apache.hadoop.metrics2.lib.MutableGaugeInt@1bbb43eb,renewalFailuresTotal: 
org.apache.hadoop.metrics2.lib.MutableGaugeLong@424a0549)
ExitCodeException exitCode=1: kinit: KDC can't fulfill requested option while 
renewing credentials

at org.apache.hadoop.util.Shell.runCommand(Shell.java:994)
at org.apache.hadoop.util.Shell.run(Shell.java:887)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
at 
org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:1067)
at java.lang.Thread.run(Thread.java:745)
{noformat}

*Expected Result*:
It should log a user-understandable value instead of the object reference.
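
For illustration, a minimal sketch of what the fixed log line could look like 
(variable names here are hypothetical; MutableGaugeInt and MutableGaugeLong do 
expose a value() accessor):

{code:java}
// Hypothetical sketch: log the gauge values via value(), not the objects'
// default toString(); principalName and tgtEndTime are illustrative names.
LOG.warn("Exception encountered while running the renewal command for {}."
    + " (TGT end time:{}, renewalFailures: {}, renewalFailuresTotal: {})",
    principalName, tgtEndTime,
    renewalFailures.value(),        // MutableGaugeInt.value() -> int
    renewalFailuresTotal.value());  // MutableGaugeLong.value() -> long
{code}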






[jira] [Commented] (HADOOP-15080) Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on Cat-x "json-lib"

2017-12-11 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287095#comment-16287095
 ] 

SammiChen commented on HADOOP-15080:


Hi Andrew,


One simple question. I see there are branches trunk, branch-3.0 and 
branch-3.0.0.

So I assume 3.1.0 is for trunk, 3.0.0 is for branch-3.0.0, and 3.0.1 is for 
branch-3.0.

Or is 3.0.1 for branch-3.0 not correct? And if so, why?


Bests,
Sammi



> Aliyun OSS: update oss sdk from 2.8.1 to 2.8.3 to remove its dependency on 
> Cat-x "json-lib"
> ---
>
> Key: HADOOP-15080
> URL: https://issues.apache.org/jira/browse/HADOOP-15080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>Assignee: SammiChen
>Priority: Blocker
> Fix For: 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: HADOOP-15080-branch-3.0.0.001.patch, 
> HADOOP-15080-branch-3.0.0.002.patch
>
>
> Cat-X dependency on org.json via derived json-lib. OSS SDK has a dependency 
> on json-lib. In LEGAL-245, the org.json library (from which json-lib may be 
> derived) is released under a 
> [category-x|https://www.apache.org/legal/resolved.html#json] license.






[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286954#comment-16286954
 ] 

genericqa commented on HADOOP-15109:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:
 The patch generated 1 new + 46 unchanged - 1 fixed = 47 total (was 47) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 56s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapreduce.v2.TestUberAM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901567/HADOOP-15109.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8c5b05437ccf 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2316f52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13814/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13814/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
 |
|  

[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-12-11 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286959#comment-16286959
 ] 

Bharat Viswanadham commented on HADOOP-9747:


[~daryn]
Thanks for comments.
{quote}
Intentional. If a specific ticket cache is defined, it must be used. It's wrong 
to set a property for one of the locations to look in and then specify the 
default cache, which means it might find a ticket cache somewhere other than 
the one specifically defined. Not to mention a system property has the same 
thread-safety issues as the statics I removed.
{quote}

Got it. If KRB5CCNAME is set, the patch now uses useCcache with that location:


{code:java}
put(LoginParam.TICKET_CACHE, System.getenv("KRB5CCNAME")); // line 1801

if (ticketCache != null) { // line 1970
  options.put("useCcache", prependFileAuthority(ticketCache));
}
{code}


If KRB5CCNAME is not set, i.e. ticketCache is null, then useDefaultCcache is 
set to true and used instead:

{code:java}
final String ticketCache = params.get(LoginParam.TICKET_CACHE); // line 1952
if (ticketCache != null) {
  options.put("useCcache", prependFileAuthority(ticketCache));
} else {
  options.put("useDefaultCcache", "true");
}
{code}

  
So, is this the way the issue has been addressed in the patch?



> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286947#comment-16286947
 ] 

zhoutai.zt commented on HADOOP-15109:
-

Thanks [~ajayydv].  

Another way to generate a bounded random long:
{code:java}
ThreadLocalRandom.current().nextLong(fileSize)
{code}
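
For context, a sketch of how {{nextOffset}} could look with that change 
(based on the snippet quoted below in the issue description; not the attached 
patch):

{code:java}
// Requires java.util.concurrent.ThreadLocalRandom.
private long nextOffset(long current) {
  if (skipSize == 0)
    // nextLong(bound) keeps the full long range; the old (int) cast
    // overflows to a negative bound for files of 2GB and larger.
    return ThreadLocalRandom.current().nextLong(fileSize);
  if (skipSize > 0)
    return (current < 0) ? 0 : (current + bufferSize + skipSize);
  // skipSize < 0
  return (current < 0) ? Math.max(0, fileSize - bufferSize)
                       : Math.max(0, current + skipSize);
}
{code}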


> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch, Screen Shot 2017-12-11 at 
> 3.17.22 PM.png
>
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if (skipSize == 0)
>     return rnd.nextInt((int) fileSize);
>   if (skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize)
>                        : Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int) fileSize becomes negative and Random.nextInt throws 
> IllegalArgumentException("n must be positive").






[jira] [Commented] (HADOOP-14993) AliyunOSS: Override listFiles and listLocatedStatus

2017-12-11 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286948#comment-16286948
 ] 

Genmao Yu commented on HADOOP-14993:


[~Sammi] OK, let me run a test.

> AliyunOSS: Override listFiles and listLocatedStatus 
> 
>
> Key: HADOOP-14993
> URL: https://issues.apache.org/jira/browse/HADOOP-14993
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0, 3.1.0, 3.0.1
>
> Attachments: HADOOP-14993.001.patch, HADOOP-14993.002.patch, 
> HADOOP-14993.003.patch
>
>
> Do a bulk listing of all entries under a path in one single operation; there 
> is no need to recursively walk the directory tree (see the sketch after this 
> description).
> Updates:
> - override listFiles and listLocatedStatus by using bulk listing
> - some minor updates in hadoop-aliyun index.md
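
For illustration, a minimal, self-contained sketch of the flat-listing idea 
(the ObjectStore interface and all names here are hypothetical stand-ins, not 
the hadoop-aliyun code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a recursive listing served by one flat, prefix-based
// scan of the object store, instead of walking the directory tree level by
// level.
class FlatListingSketch {
  interface ObjectStore {
    // One bulk call: every key that starts with the prefix, in one operation.
    List<String> listKeysWithPrefix(String prefix);
  }

  static List<String> listFilesRecursive(ObjectStore store, String path) {
    // "dir/" as a key prefix matches every descendant in a single scan,
    // so no per-directory round trips are needed.
    String prefix = path.endsWith("/") ? path : path + "/";
    return new ArrayList<>(store.listKeysWithPrefix(prefix));
  }
}
{code}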






[jira] [Commented] (HADOOP-15006) Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS

2017-12-11 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286869#comment-16286869
 ] 

Aaron Fabbri commented on HADOOP-15006:
---

Thanks for the reminder. Will try to take a look this week.

> Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS
> ---
>
> Key: HADOOP-15006
> URL: https://issues.apache.org/jira/browse/HADOOP-15006
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3, kms
>Reporter: Steve Moist
>Priority: Minor
> Attachments: S3-CSE Proposal.pdf
>
>
> This is for the proposal to introduce Client Side Encryption to S3 in such a 
> way that it can leverage HDFS transparent encryption, use the Hadoop KMS to 
> manage keys, use the `hdfs crypto` command line tools to manage encryption 
> zones in the cloud, and enable distcp to copy from HDFS to S3 (and 
> vice-versa) with data still encrypted.






[jira] [Commented] (HADOOP-9129) ViewFs does not validate internal names in the mount table

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286727#comment-16286727
 ] 

genericqa commented on HADOOP-9129:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 2 new + 84 unchanged - 
0 fixed = 86 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemDelegation |
|   | hadoop.fs.viewfs.TestFcPermissionsLocalFs |
|   | hadoop.fs.viewfs.TestViewFsURIs |
|   | hadoop.fs.viewfs.TestViewfsFileStatus |
|   | hadoop.fs.viewfs.TestViewFsTrash |
|   | hadoop.fs.viewfs.TestFcCreateMkdirLocalFs |
|   | hadoop.fs.viewfs.TestFcMainOperationsLocalFs |
|   | hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport |
|   | 

[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286729#comment-16286729
 ] 

Ajay Kumar commented on HADOOP-15109:
-

[~zhoutai.zt], attached a patch to address the issue. Tested it locally with a 
5GB file for random reads.

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch
>
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if (skipSize == 0)
>     return rnd.nextInt((int) fileSize);
>   if (skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize)
>                        : Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int) fileSize becomes negative and Random.nextInt throws 
> IllegalArgumentException("n must be positive").






[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15109:

Attachment: Screen Shot 2017-12-11 at 3.17.22 PM.png

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch, Screen Shot 2017-12-11 at 
> 3.17.22 PM.png
>
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if (skipSize == 0)
>     return rnd.nextInt((int) fileSize);
>   if (skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize)
>                        : Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int) fileSize becomes negative and Random.nextInt throws 
> IllegalArgumentException("n must be positive").






[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1628#comment-1628
 ] 

genericqa commented on HADOOP-15085:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 
530 unchanged - 1 fixed = 530 total (was 531) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
0s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901551/HADOOP-15085.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c59b2f2754ef 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 00129c5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13812/testReport/ 

[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286713#comment-16286713
 ] 

Jim Brennan commented on HADOOP-15085:
--

Looks like this one is ready.
Please review.


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HADOOP-15085
> URL: https://issues.apache.org/jira/browse/HADOOP-15085
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, 
> HADOOP-15085.003.patch, HADOOP-15085.004.patch
>
>
> There are a few places in hadoop-common that are closing an output stream 
> with IOUtils.cleanupWithLogger like this:
> {code}
> try {
>   ...write to outStream...
> } finally {
>   IOUtils.cleanupWithLogger(LOG, outStream);
> }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.
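
For illustration, a minimal sketch of the try-with-resources form the 
description recommends (a generic java.nio example, not a specific 
hadoop-common call site):

{code:java}
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: try-with-resources closes the stream automatically and lets an
// IOException from close() propagate instead of being logged and swallowed.
class CloseSketch {
  static void write(byte[] data) throws java.io.IOException {
    Path out = Paths.get("/tmp/example");          // illustrative path
    try (OutputStream outStream = Files.newOutputStream(out)) {
      outStream.write(data);
    } // a close() failure surfaces here as an IOException to the caller
  }
}
{code}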






[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15109:

Attachment: HADOOP-15109.001.patch

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch
>
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if (skipSize == 0)
>     return rnd.nextInt((int) fileSize);
>   if (skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize)
>                        : Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int) fileSize becomes negative and Random.nextInt throws 
> IllegalArgumentException("n must be positive").






[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15109:

Status: Patch Available  (was: Open)

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch
>
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if (skipSize == 0)
>     return rnd.nextInt((int) fileSize);
>   if (skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize)
>                        : Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int) fileSize becomes negative and Random.nextInt throws 
> IllegalArgumentException("n must be positive").






[jira] [Commented] (HADOOP-15107) Prove the correctness of the new committers, or fix where they are not correct

2017-12-11 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286673#comment-16286673
 ] 

Ryan Blue commented on HADOOP-15107:


For the definition of correctness, I think we will need two, based on the 
possible failures that are handled. Task-level failure tolerance: the committer 
can handle any task failure, including during task commit. Job-level failure 
tolerance: the committer can handle failure during job commit. The contribution 
of the multi-part committer is that it handles task-level failure without a 
copy and minimizes the impact of job-level failure. But it doesn't guarantee 
job-level failure tolerance if the job commit fails.

> Prove the correctness of the new committers, or fix where they are not correct
> --
>
> Key: HADOOP-15107
> URL: https://issues.apache.org/jira/browse/HADOOP-15107
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I'm writing the paper on the committers, one which, being a proper 
> paper, requires me to show that the committers work.
> # define the requirements of a "Correct" committed job (this applies to the 
> FileOutputCommitter too)
> # show that the Staging committer meets these requirements (most of this is 
> implicit in that it uses the V1 FileOutputCommitter to marshal .pendingset 
> lists from committed tasks to the final destination, where they are read and 
> committed)
> # show the magic committer also works.
> I'm now not sure that the magic committer works.






[jira] [Commented] (HADOOP-15006) Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS

2017-12-11 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286574#comment-16286574
 ] 

Steve Moist commented on HADOOP-15006:
--

Hey [~fabbri] and [~steve_l], any traction on this?

> Encrypt S3A data client-side with Hadoop libraries & Hadoop KMS
> ---
>
> Key: HADOOP-15006
> URL: https://issues.apache.org/jira/browse/HADOOP-15006
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3, kms
>Reporter: Steve Moist
>Priority: Minor
> Attachments: S3-CSE Proposal.pdf
>
>
> This is for the proposal to introduce Client Side Encryption to S3 in such a 
> way that it can leverage HDFS transparent encryption, use the Hadoop KMS to 
> manage keys, use the `hdfs crypto` command line tools to manage encryption 
> zones in the cloud, and enable distcp to copy from HDFS to S3 (and 
> vice-versa) with data still encrypted.






[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-12-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286627#comment-16286627
 ] 

Daryn Sharp commented on HADOOP-9747:
-

Trying to get this patch up, but found out Ranger is doing some amazingly 
dubious things to/with the UGI. I can't believe it actually works w/o identity 
race conditions.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Commented] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286631#comment-16286631
 ] 

genericqa commented on HADOOP-14209:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HADOOP-14209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12865580/HADOOP-14209-branch-2-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9b27e4c169c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 7b3c64a |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13813/testReport/ |
| Max. process+thread count | 156 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13813/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Steve Moist
>Assignee: Steve Moist
>Priority: Trivial
>  Labels: newbie
>

[jira] [Commented] (HADOOP-14209) Remove @Ignore from valid S3a test.

2017-12-11 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286572#comment-16286572
 ] 

Steve Moist commented on HADOOP-14209:
--

Is this still something that needs to be done?

> Remove @Ignore from valid S3a test.
> ---
>
> Key: HADOOP-14209
> URL: https://issues.apache.org/jira/browse/HADOOP-14209
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Steve Moist
>Assignee: Steve Moist
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-14209-001.patch, HADOOP-14209-branch-2-001.patch
>
>
> The class org.apache.hadoop.fs.s3a.ITestS3AEncryptionAlgorithmValidation is 
> ignored through the @Ignore annotation, this should be removed as it is a 
> valid test class.  This was a minor mistake introduced during development of 
> HADOOP-13075.






[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-15085:
-
Attachment: HADOOP-15085.004.patch

Updated patch to fix checkstyle issues.

> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HADOOP-15085
> URL: https://issues.apache.org/jira/browse/HADOOP-15085
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, 
> HADOOP-15085.003.patch, HADOOP-15085.004.patch
>
>
> There are a few places in hadoop-common that are closing an output stream 
> with IOUtils.cleanupWithLogger like this:
> {code}
> try {
>   ...write to outStream...
> } finally {
>   IOUtils.cleanupWithLogger(LOG, outStream);
> }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.






[jira] [Updated] (HADOOP-9129) ViewFs does not validate internal names in the mount table

2017-12-11 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-9129:
---
Target Version/s:   (was: )
  Status: Patch Available  (was: Open)

> ViewFs does not validate internal names in the mount table
> --
>
> Key: HADOOP-9129
> URL: https://issues.apache.org/jira/browse/HADOOP-9129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Hanisha Koneru
> Attachments: HADOOP-9129.001.patch
>
>
> Currently, there is no explicit validation of {{ViewFs}} internal names in 
> the mount table during initialization.






[jira] [Updated] (HADOOP-9129) ViewFs does not validate internal names in the mount table

2017-12-11 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-9129:
---
Attachment: HADOOP-9129.001.patch

For a start, attached patch v01, which validates that the mount table source 
entry path is a valid HDFS path.


> ViewFs does not validate internal names in the mount table
> --
>
> Key: HADOOP-9129
> URL: https://issues.apache.org/jira/browse/HADOOP-9129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Chris Nauroth
>Assignee: Hanisha Koneru
> Attachments: HADOOP-9129.001.patch
>
>
> Currently, there is no explicit validation of {{ViewFs}} internal names in 
> the mount table during initialization.






[jira] [Commented] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286393#comment-16286393
 ] 

genericqa commented on HADOOP-15085:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-common-project: The patch generated 2 new 
+ 530 unchanged - 1 fixed = 532 total (was 531) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
53s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901522/HADOOP-15085.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 66f6d27381fd 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 312ceeb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HADOOP-15085) Output streams closed with IOUtils suppressing write errors

2017-12-11 Thread Jim Brennan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-15085:
-
Attachment: HADOOP-15085.003.patch

Thanks for the comments.
I've addressed them in the attached patch.


> Output streams closed with IOUtils suppressing write errors
> ---
>
> Key: HADOOP-15085
> URL: https://issues.apache.org/jira/browse/HADOOP-15085
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Jim Brennan
> Attachments: HADOOP-15085.001.patch, HADOOP-15085.002.patch, 
> HADOOP-15085.003.patch
>
>
> There are a few places in hadoop-common that are closing an output stream 
> with IOUtils.cleanupWithLogger like this:
> {code}
> try {
>   ...write to outStream...
> } finally {
>   IOUtils.cleanupWithLogger(LOG, outStream);
> }
> {code}
> This suppresses any IOException that occurs during the close() method which 
> could lead to partial/corrupted output without throwing a corresponding 
> exception.  The code should either use try-with-resources or explicitly close 
> the stream within the try block so the exception thrown during close() is 
> properly propagated as exceptions during write operations are.






[jira] [Assigned] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HADOOP-15109:
---

Assignee: Ajay Kumar

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if(skipSize == 0)
>     return rnd.nextInt((int)(fileSize));
>   if(skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize) :
>       Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, the cast 
> (int)(fileSize) overflows (to zero for a file of exactly 4 GB), causing 
> Random.nextInt to throw IllegalArgumentException("n must be positive").



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Status: Open  (was: Patch Available)

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Priority: Minor
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if(skipSize == 0)
>     return rnd.nextInt((int)(fileSize));
>   if(skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize) :
>       Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, the cast 
> (int)(fileSize) overflows (to zero for a file of exactly 4 GB), causing 
> Random.nextInt to throw IllegalArgumentException("n must be positive").



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Status: Patch Available  (was: Open)

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Priority: Minor
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if(skipSize == 0)
>     return rnd.nextInt((int)(fileSize));
>   if(skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize) :
>       Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, the cast 
> (int)(fileSize) overflows (to zero for a file of exactly 4 GB), causing 
> Random.nextInt to throw IllegalArgumentException("n must be positive").



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285797#comment-16285797
 ] 

Jianfei Jiang edited comment on HADOOP-15108 at 12/11/17 11:50 AM:
---

Has anyone run this case successfully? If you have, could you please tell me 
how you did it.

Based on my debugging and the error log message, there may be something wrong 
with the code below. The two favoredNodes both target the local host, and the 
file /tmp.txt appears to run into a lease conflict.

{code:java}
DFSTestUtil.createFile(cluster.getFileSystem(0), filePath, false, 1024,
    totalUsedSpace / numOfDatanodes, DEFAULT_BLOCK_SIZE,
    (short) numOfDatanodes, 0, false, favoredNodes);
{code}

In the preliminary patch, I changed the two datanodes in the cluster to one, 
and the testcase then runs successfully. Only one favoredNode remains, so there 
is no conflict. In my opinion, the testcase still reaches its goal when given 
only one node from the beginning, though I am not certain about it.
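
A hypothetical sketch of that single-node variant; the favored-node construction below is my assumption about the test's surroundings, not the actual patch:
{code:java}
// One datanode and one favored node (java.net.InetSocketAddress), so the
// write pipeline cannot pick the same local target twice.
InetSocketAddress[] singleFavoredNode = new InetSocketAddress[] {
    cluster.getDataNodes().get(0).getXferAddress() };
DFSTestUtil.createFile(cluster.getFileSystem(0), filePath, false, 1024,
    totalUsedSpace, DEFAULT_BLOCK_SIZE,
    (short) 1, 0, false, singleFavoredNode);
{code}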

The following is the error log:

2017-12-11 18:45:54,063 [PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715]] INFO  
datanode.DataNode (BlockReceiver.java:run(1497)) - PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715] terminating
2017-12-11 18:46:02,292 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1957)) - Shutting down the Mini HDFS Cluster
2017-12-11 18:46:02,293 [DataStreamer for file /tmp.txt] WARN  
hdfs.DataStreamer (DataStreamer.java:run(843)) - DataStreamer Exception
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy26.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1882)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1685)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:733)
2017-12-11 18:46:02,298 [main] ERROR hdfs.DFSClient 
(DFSClient.java:closeAllFilesBeingWritten(602)) - Failed to close file: 
/tmp.txt with inode: 16386
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 

[jira] [Updated] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15108:
---
Status: Open  (was: Patch Available)

> Testcase TestBalancer#testBalancerWithPinnedBlocks always fails
> ---
>
> Key: HADOOP-15108
> URL: https://issues.apache.org/jira/browse/HADOOP-15108
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
> Attachments: HADOOP-15108.000.patch
>
>
> When running the testcases without any code changes, the function 
> testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried 
> Ubuntu 16.04 and Red Hat 7, so the failure does not seem tied to a particular 
> Linux environment. I am not sure whether there is a bug in this case or 
> whether I used the wrong environment and settings. Could anyone give some 
> advice?
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.134 sec  <<< ERROR!
> java.lang.Exception: test timed out after 100000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285797#comment-16285797
 ] 

Jianfei Jiang edited comment on HADOOP-15108 at 12/11/17 11:45 AM:
---

Based on my debugging and the error log message, there may be something wrong 
with the code below. The two favoredNodes both target the local host, and the 
file /tmp.txt appears to run into a lease conflict.

{code:java}
DFSTestUtil.createFile(cluster.getFileSystem(0), filePath, false, 1024,
    totalUsedSpace / numOfDatanodes, DEFAULT_BLOCK_SIZE,
    (short) numOfDatanodes, 0, false, favoredNodes);
{code}

When I change the two datanodes in the cluster to one, as shown in my patch, 
the testcase runs successfully. Only one favoredNode remains, so there is no 
conflict. In my opinion, the testcase still reaches its goal when given only 
one node from the beginning, though I am not certain about it.

The following is the error log:

2017-12-11 18:45:54,063 [PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715]] INFO  
datanode.DataNode (BlockReceiver.java:run(1497)) - PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715] terminating
2017-12-11 18:46:02,292 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1957)) - Shutting down the Mini HDFS Cluster
2017-12-11 18:46:02,293 [DataStreamer for file /tmp.txt] WARN  
hdfs.DataStreamer (DataStreamer.java:run(843)) - DataStreamer Exception
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy26.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1882)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1685)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:733)
2017-12-11 18:46:02,298 [main] ERROR hdfs.DFSClient 
(DFSClient.java:closeAllFilesBeingWritten(602)) - Failed to close file: 
/tmp.txt with inode: 16386
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 

[jira] [Updated] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15108:
---
Status: Patch Available  (was: Open)

> Testcase TestBalancer#testBalancerWithPinnedBlocks always fails
> ---
>
> Key: HADOOP-15108
> URL: https://issues.apache.org/jira/browse/HADOOP-15108
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
> Attachments: HADOOP-15108.000.patch
>
>
> When running the testcases without any code changes, the function 
> testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried 
> Ubuntu 16.04 and Red Hat 7, so the failure does not seem tied to a particular 
> Linux environment. I am not sure whether there is a bug in this case or 
> whether I used the wrong environment and settings. Could anyone give some 
> advice?
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.134 sec  <<< ERROR!
> java.lang.Exception: test timed out after 100000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jianfei Jiang updated HADOOP-15108:
---
Attachment: HADOOP-15108.000.patch

> Testcase TestBalancer#testBalancerWithPinnedBlocks always fails
> ---
>
> Key: HADOOP-15108
> URL: https://issues.apache.org/jira/browse/HADOOP-15108
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
> Attachments: HADOOP-15108.000.patch
>
>
> When running the testcases without any code changes, the function 
> testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried 
> Ubuntu 16.04 and Red Hat 7, so the failure does not seem tied to a particular 
> Linux environment. I am not sure whether there is a bug in this case or 
> whether I used the wrong environment and settings. Could anyone give some 
> advice?
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.134 sec  <<< ERROR!
> java.lang.Exception: test timed out after 100000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285797#comment-16285797
 ] 

Jianfei Jiang commented on HADOOP-15108:


Based on my debugging and the error log message, there may be something wrong 
with the code below. The two favoredNodes both target the local host, and the 
file /tmp.txt appears to run into a lease conflict.
{code:java}
DFSTestUtil.createFile(cluster.getFileSystem(0), filePath, false, 1024,
    totalUsedSpace / numOfDatanodes, DEFAULT_BLOCK_SIZE,
    (short) numOfDatanodes, 0, false, favoredNodes);
{code}

When I change the two favoredNodes to one, as shown in my patch, the testcase 
runs successfully. I am not certain whether this change affects the original 
purpose of this test.

2017-12-11 18:45:54,063 [PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715]] INFO  
datanode.DataNode (BlockReceiver.java:run(1497)) - PacketResponder: 
BP-197616310-127.0.1.1-1512989063241:blk_1073741827_1003, 
type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[127.0.0.1:37715] terminating
2017-12-11 18:46:02,292 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1957)) - Shutting down the Mini HDFS Cluster
2017-12-11 18:46:02,293 [DataStreamer for file /tmp.txt] WARN  
hdfs.DataStreamer (DataStreamer.java:run(843)) - DataStreamer Exception
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy26.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1031)
at 
org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1882)
at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1685)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:733)
2017-12-11 18:46:02,298 [main] ERROR hdfs.DFSClient 
(DFSClient.java:closeAllFilesBeingWritten(602)) - Failed to close file: 
/tmp.txt with inode: 16386
java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1484)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy25.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:495)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 

[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Priority: Minor  (was: Major)

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Priority: Minor
>
> TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if(skipSize == 0)
>     return rnd.nextInt((int)(fileSize));
>   if(skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize) :
>       Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, the cast 
> (int)(fileSize) overflows (to zero for a file of exactly 4 GB), causing 
> Random.nextInt to throw IllegalArgumentException("n must be positive").



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15109:
---

 Summary: TestDFSIO -read -random doesn't work on file sized 4GB
 Key: HADOOP-15109
 URL: https://issues.apache.org/jira/browse/HADOOP-15109
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0-beta1
Reporter: zhoutai.zt


TestDFSIO -read -random throws IllegalArgumentException on 4GB file. The cause 
is:

{code:java}
private long nextOffset(long current) {
  if(skipSize == 0)
    return rnd.nextInt((int)(fileSize));
  if(skipSize > 0)
    return (current < 0) ? 0 : (current + bufferSize + skipSize);
  // skipSize < 0
  return (current < 0) ? Math.max(0, fileSize - bufferSize) :
      Math.max(0, current + skipSize);
}
{code}

When {color:#d04437}_fileSize_{color} exceeds the signed int range, the cast 
(int)(fileSize) overflows (to zero for a file of exactly 4 GB), causing 
Random.nextInt to throw IllegalArgumentException("n must be positive").




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15108) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2017-12-11 Thread Jianfei Jiang (JIRA)
Jianfei Jiang created HADOOP-15108:
--

 Summary: Testcase TestBalancer#testBalancerWithPinnedBlocks always 
fails
 Key: HADOOP-15108
 URL: https://issues.apache.org/jira/browse/HADOOP-15108
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0-beta1
Reporter: Jianfei Jiang


When running the testcases without any code changes, the function 
testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried 
Ubuntu 16.04 and Red Hat 7, so the failure does not seem tied to a particular 
Linux environment. I am not sure whether there is a bug in this case or whether 
I used the wrong environment and settings. Could anyone give some advice?

---
Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
---
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
  Time elapsed: 100.134 sec  <<< ERROR!
java.lang.Exception: test timed out after 100000 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org