[jira] [Resolved] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-10859.
--
Resolution: Duplicate

Thanks for the pointer, Xiao. HDFS-10716 indeed solves the problem.

> TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky 
> in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10716) In Balancer, the target task should be removed when its size < 0.

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486436#comment-15486436
 ] 

Zhe Zhang commented on HDFS-10716:
--

I backported this to branch-2.7.

> In Balancer, the target task should be removed when its size < 0.
> -
>
> Key: HDFS-10716
> URL: https://issues.apache.org/jira/browse/HDFS-10716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10716.001.patch, failing.log
>
>
> In HDFS-10602, we found a failing case where the balancer always moves data 
> between the same 2 DNs, so it can never finish. While debugging this, I found 
> what appears to be a bug in how pending blocks are chosen in 
> {{Dispatcher.Source.chooseNextMove}}.
> The code:
> {code}
> private PendingMove chooseNextMove() {
>   for (Iterator<Task> i = tasks.iterator(); i.hasNext();) {
>     final Task task = i.next();
>     final DDatanode target = task.target.getDDatanode();
>     final PendingMove pendingBlock = new PendingMove(this, task.target);
>     if (target.addPendingBlock(pendingBlock)) {
>       // target is not busy, so do a tentative block allocation
>       if (pendingBlock.chooseBlockAndProxy()) {
>         long blockSize = pendingBlock.reportedBlock.getNumBytes(this);
>         incScheduledSize(-blockSize);
>         task.size -= blockSize;
>         // If the size of bytes that need to be moved was reduced to less
>         // than 0, the task should also be removed.
>         if (task.size == 0) {
>           i.remove();
>         }
>         return pendingBlock;
>         //...
> {code}
> The value of task.size is assigned in {{Balancer#matchSourceWithTargetToMove}}:
> {code}
> long size = Math.min(source.availableSizeToMove(), target.availableSizeToMove());
> final Task task = new Task(target, size);
> {code}
> This value depends on the source and target nodes, and it cannot always be 
> reduced to exactly 0 while choosing pending blocks. As a result, the balancer 
> keeps moving data to the target node even after the number of bytes to move 
> has already dropped below 0. That makes the cluster imbalanced again and 
> triggers the next balancer iteration.
> We can optimize this as the title suggests; I think it can speed up the 
> balancer.
> See the logs of the failing case, or HDFS-10602 (focus on the change records 
> for the target node's scheduled size; that is debug info I added, like this):
> {code}
> 2016-08-01 16:51:57,492 [pool-51-thread-1] INFO  balancer.Dispatcher (Dispatcher.java:chooseNextMove(799)) - TargetNode: 58794, bytes scheduled to move, after: -67, before: 33
> {code}
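The effect described above can be illustrated with a minimal, self-contained sketch (hypothetical task-size arithmetic, not the actual Dispatcher code): with the strict `== 0` check, a task whose remaining size jumps below zero in one move is never removed, while a `<= 0` check, as the issue title suggests, removes it.

```java
public class TaskRemovalSketch {
    // Simulates one chooseNextMove iteration for a single task: the chosen
    // block's size is subtracted, and the task is removed when the remaining
    // size hits the threshold. 'strictEquality' mirrors the quoted
    // "task.size == 0" check; false uses "task.size <= 0" instead.
    static int remainingTasksAfterMove(long taskSize, long blockSize,
                                       boolean strictEquality) {
        long size = taskSize - blockSize;
        boolean done = strictEquality ? size == 0 : size <= 0;
        return done ? 0 : 1;
    }

    public static void main(String[] args) {
        // Matches the quoted log line: 33 bytes scheduled, a larger block
        // is moved, so the remaining size goes to -67.
        System.out.println(remainingTasksAfterMove(33, 100, true));  // 1: task is stuck
        System.out.println(remainingTasksAfterMove(33, 100, false)); // 0: task removed
    }
}
```

With the strict check the stuck task keeps scheduling moves to an already over-filled target, which is exactly the endless two-DN ping-pong the description reports.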






[jira] [Updated] (HDFS-10716) In Balancer, the target task should be removed when its size < 0.

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10716:
-
Fix Version/s: 2.7.4

> In Balancer, the target task should be removed when its size < 0.
> -
>
> Key: HDFS-10716
> URL: https://issues.apache.org/jira/browse/HDFS-10716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10716.001.patch, failing.log
>
>
> In HDFS-10602, we found a failing case where the balancer always moves data 
> between the same 2 DNs, so it can never finish. While debugging this, I found 
> what appears to be a bug in how pending blocks are chosen in 
> {{Dispatcher.Source.chooseNextMove}}.
> The code:
> {code}
> private PendingMove chooseNextMove() {
>   for (Iterator<Task> i = tasks.iterator(); i.hasNext();) {
>     final Task task = i.next();
>     final DDatanode target = task.target.getDDatanode();
>     final PendingMove pendingBlock = new PendingMove(this, task.target);
>     if (target.addPendingBlock(pendingBlock)) {
>       // target is not busy, so do a tentative block allocation
>       if (pendingBlock.chooseBlockAndProxy()) {
>         long blockSize = pendingBlock.reportedBlock.getNumBytes(this);
>         incScheduledSize(-blockSize);
>         task.size -= blockSize;
>         // If the size of bytes that need to be moved was reduced to less
>         // than 0, the task should also be removed.
>         if (task.size == 0) {
>           i.remove();
>         }
>         return pendingBlock;
>         //...
> {code}
> The value of task.size is assigned in {{Balancer#matchSourceWithTargetToMove}}:
> {code}
> long size = Math.min(source.availableSizeToMove(), target.availableSizeToMove());
> final Task task = new Task(target, size);
> {code}
> This value depends on the source and target nodes, and it cannot always be 
> reduced to exactly 0 while choosing pending blocks. As a result, the balancer 
> keeps moving data to the target node even after the number of bytes to move 
> has already dropped below 0. That makes the cluster imbalanced again and 
> triggers the next balancer iteration.
> We can optimize this as the title suggests; I think it can speed up the 
> balancer.
> See the logs of the failing case, or HDFS-10602 (focus on the change records 
> for the target node's scheduled size; that is debug info I added, like this):
> {code}
> 2016-08-01 16:51:57,492 [pool-51-thread-1] INFO  balancer.Dispatcher (Dispatcher.java:chooseNextMove(799)) - TargetNode: 58794, bytes scheduled to move, after: -67, before: 33
> {code}






[jira] [Commented] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486429#comment-15486429
 ] 

Yiqun Lin commented on HDFS-10856:
--

Thanks [~ajisakaa] for the commit. I will attach a patch for HADOOP-13598 soon.

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>     (((monotonicNow() - nextBlockReportTime + blockReportIntervalMs) /
>         blockReportIntervalMs)) * blockReportIntervalMs;
> {code}
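Plugging the comment's example times into the quoted formula shows how the integer division keeps the next report aligned to the original schedule offset even after a long delay. A sketch with times in seconds for readability (the real code uses monotonic milliseconds; `advance` and `hms` are hypothetical helpers, not BPServiceActor methods):

```java
public class NextBlockReportSketch {
    // The quoted scheduling arithmetic: integer division rounds down, so the
    // schedule jumps forward by a whole number of intervals, preserving the
    // original offset within the hour.
    static long advance(long nextBlockReportTime, long now, long interval) {
        return nextBlockReportTime
            + ((now - nextBlockReportTime + interval) / interval) * interval;
    }

    // Convert h:m:s of day into seconds, to mirror the comment's examples.
    static long hms(long h, long m, long s) { return h * 3600 + m * 60 + s; }

    public static void main(String[] args) {
        long interval = 3600;              // 1-hour interval from the old comment
        long scheduled = hms(9, 20, 14);   // the report was due at 9:20:14

        // Case 1, normal: current time 9:20:18 -> next report at 10:20:14.
        System.out.println(advance(scheduled, hms(9, 20, 18), interval) == hms(10, 20, 14));   // true

        // Case 2, delayed: current time 11:35:43 -> next report at 12:20:14,
        // i.e. the missed slots are skipped but the :20:14 offset is kept.
        System.out.println(advance(scheduled, hms(11, 35, 43), interval) == hms(12, 20, 14));  // true
    }
}
```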






[jira] [Commented] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486404#comment-15486404
 ] 

Akira Ajisaka commented on HDFS-10856:
--

{quote}
bq. This should be done in a separate jira.
+1 for this; it will help users avoid these errors when creating patches.
{quote}
Filed HADOOP-13598.

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>     (((monotonicNow() - nextBlockReportTime + blockReportIntervalMs) /
>         blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Commented] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486400#comment-15486400
 ] 

Hudson commented on HDFS-10856:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10431 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10431/])
HDFS-10856. Update the comment of (aajisaka: rev f0876b8b60c19aa25e0417ac0f419a3a82bf210b)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>     (((monotonicNow() - nextBlockReportTime + blockReportIntervalMs) /
>         blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Updated] (HDFS-10860) Switch HttpFS to use Jetty

2016-09-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10860:
-
Description: 
The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
other good options, I would propose switching to {{Jetty 9}} for the following 
reasons:
* Easier migration. Both Tomcat and Jetty are based on {{Servlet Containers}}, 
so we don't have to change client code that much. It would require more work to 
switch to {{JAX-RS}}.
* Well established.
* Good performance and scalability.

Other alternatives:
* Jersey + Grizzly
* Tomcat 8

Your opinions will be greatly appreciated.

  was:
The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
other good options, I would propose switching to {{Jetty 9}} for the following 
reasons:
* Easier migration. Both Tomcat and Jetty are based on {{Servlet Containers}}, 
so we don't have change client code that much. It would require more work to 
switch to {{JAX-RS}}.
* Well established.
* Good performance and scalability.

Other alternatives:
* Jersey + Grizzly
* Tomcat 8

Your opinions will be greatly appreciated.


> Switch HttpFS to use Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Commented] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486380#comment-15486380
 ] 

Zhe Zhang commented on HDFS-10744:
--

I verified that all reported failures pass locally. In backporting, all 
conflicts I resolved were due to the {{FSDirectory#resolvePath}} changes 
touching code that does not exist in branch-2.7. I left notes on the related 
JIRAs that anyone backporting them to branch-2.7 should pay attention to 
HDFS-10744. And since this change updates the signature of {{resolvePath}}, 
it is hard to miss without failing the build.

I also made another pass over the trickier changes in {{FSDirectory}}. They 
all applied cleanly in the backport, and I believe the branch-2.7 patch makes 
the same logical changes as the branch-2.8 patch.

Pinging [~daryn] and [~kihwal] to take another look at the branch-2.7 patch. If 
there's no objection I will commit tomorrow night. Thanks!

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744-branch-2.7.patch, HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.
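The byte[][]-based approach the description proposes can be sketched roughly as follows (a hypothetical helper, not the actual FSDirectory code): split the path into byte[][] components once up front, so later resolution never round-trips through String.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class PathComponentsSketch {
    // Splits an HDFS-style path into byte[][] components in a single pass,
    // avoiding the String -> byte[][] -> String -> byte[][] conversions the
    // issue describes. Hypothetical illustration only.
    static byte[][] toComponents(String path) {
        List<byte[]> components = new ArrayList<>();
        for (String part : path.split("/")) {
            if (!part.isEmpty()) {  // skip the empty leading-slash component
                components.add(part.getBytes(StandardCharsets.UTF_8));
            }
        }
        return components.toArray(new byte[0][]);
    }

    public static void main(String[] args) {
        byte[][] c = toComponents("/user/hadoop/file");
        System.out.println(c.length);                                  // 3
        System.out.println(new String(c[1], StandardCharsets.UTF_8));  // hadoop
    }
}
```

An INodesInPath-like structure could then be built directly from these components without the last two conversions the description calls unnecessary.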






[jira] [Updated] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10856:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution.

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>     (((monotonicNow() - nextBlockReportTime + blockReportIntervalMs) /
>         blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Commented] (HDFS-10378) FSDirAttrOp#setOwner throws ACE with misleading message

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486345#comment-15486345
 ] 

Hadoop QA commented on HDFS-10378:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 34s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.TestRenameWhileOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10378 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12828174/HDFS-10378.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 0216869c5847 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/16730/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16730/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16730/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSDirAttrOp#setOwner throws ACE with misleading message
> ---
>
> Key: HDFS-10378
> URL: https://issues.

[jira] [Updated] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-10856:
-
Component/s: documentation

LGTM, +1. The test failure is unrelated.

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>     (((monotonicNow() - nextBlockReportTime + blockReportIntervalMs) /
>         blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Commented] (HDFS-10657) testAclCLI.xml setfacl test should expect mask r-x

2016-09-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486309#comment-15486309
 ] 

Yongjun Zhang commented on HDFS-10657:
--

Committed to trunk, branch-2 and branch-2.8.

Thanks [~jzhuge] for the contribution, and [~vinayrpet] for the input.




> testAclCLI.xml setfacl test should expect mask r-x
> --
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10657.001.patch
>
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
>   ...
>   <comparator>
>     <type>SubstringComparator</type>
>     <expected-output>mask::rwx</expected-output>
>   </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just substring. Other 
> comparators in {{testAclCLI.xml}} have the same problem.
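The difference between the two matching strategies can be sketched as follows (hypothetical helpers, not the actual CLI-test comparator classes): substring matching accepts {{mask::rwx}} because it occurs inside the {{default:mask::rwx}} line, while whole-line matching correctly rejects it.

```java
import java.util.List;

public class ExactLineMatchSketch {
    // Passes if the expected string occurs anywhere in any output line,
    // which is how the false positive described above slips through.
    static boolean substringMatch(List<String> output, String expected) {
        return output.stream().anyMatch(line -> line.contains(expected));
    }

    // Passes only if some output line equals the expected string exactly,
    // so a "default:" prefix can no longer satisfy a non-default entry.
    static boolean exactLineMatch(List<String> output, String expected) {
        return output.stream().anyMatch(line -> line.equals(expected));
    }

    public static void main(String[] args) {
        List<String> getfacl = List.of("mask::r-x", "default:mask::rwx");
        System.out.println(substringMatch(getfacl, "mask::rwx"));  // true: false pass
        System.out.println(exactLineMatch(getfacl, "mask::rwx"));  // false: catches the bug
    }
}
```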






[jira] [Updated] (HDFS-10657) testAclCLI.xml setfacl test should expect mask r-x

2016-09-12 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10657:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> testAclCLI.xml setfacl test should expect mask r-x
> --
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10657.001.patch
>
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
>   ...
>   <comparator>
>     <type>SubstringComparator</type>
>     <expected-output>mask::rwx</expected-output>
>   </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just substring. Other 
> comparators in {{testAclCLI.xml}} have the same problem.






[jira] [Created] (HDFS-10860) Switch HttpFS to use Jetty

2016-09-12 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10860:
-

 Summary: Switch HttpFS to use Jetty
 Key: HDFS-10860
 URL: https://issues.apache.org/jira/browse/HDFS-10860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 2.6.0
Reporter: John Zhuge
Assignee: John Zhuge


The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
other good options, I would propose switching to {{Jetty 9}} for the following 
reasons:
* Easier migration. Both Tomcat and Jetty are based on {{Servlet Containers}}, 
so we don't have to change client code that much. It would require more work to 
switch to {{JAX-RS}}.
* Well established.
* Good performance and scalability.

Other alternatives:
* Jersey + Grizzly
* Tomcat 8

Your opinions will be greatly appreciated.






[jira] [Commented] (HDFS-10657) testAclCLI.xml setfacl test should expect mask r-x

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486274#comment-15486274
 ] 

Hudson commented on HDFS-10657:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10430 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10430/])
HDFS-10657. testAclCLI.xml setfacl test should expect mask r-x. (John (yzhang: rev d2466ec3e01b5ef2a0bde738232c5ad6d2d956eb)
* (add) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/cli/util/ExactLineComparator.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml


> testAclCLI.xml setfacl test should expect mask r-x
> --
>
> Key: HDFS-10657
> URL: https://issues.apache.org/jira/browse/HDFS-10657
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-10657.001.patch
>
>
> The following test case should expect {{mask::r-x}} ACL entry instead of 
> {{mask::rwx}}:
> {code:xml}
>   <description>setfacl : check inherit default ACL to dir</description>
>   <test-commands>
>     <command>-fs NAMENODE -mkdir /dir1</command>
>     <command>-fs NAMENODE -setfacl -m default:user:charlie:r-x,default:group:admin:rwx /dir1</command>
>     <command>-fs NAMENODE -mkdir /dir1/dir2</command>
>     <command>-fs NAMENODE -getfacl /dir1/dir2</command>
>   </test-commands>
>   ...
>   <comparator>
>     <type>SubstringComparator</type>
>     <expected-output>mask::rwx</expected-output>
>   </comparator>
> {code}
> But why does it pass? Because the comparator type is {{SubstringComparator}} 
> and it matches the wrong line {{default:mask::rwx}} in the output of 
> {{getfacl}}:
> {noformat}
> # file: /dir1/dir2
> # owner: jzhuge
> # group: supergroup
> user::rwx
> user:charlie:r-x
> group::r-x
> group:admin:rwx   #effective:r-x
> mask::r-x
> other::r-x
> default:user::rwx
> default:user:charlie:r-x
> default:group::r-x
> default:group:admin:rwx
> default:mask::rwx
> default:other::r-x
> {noformat}
> The comparator should match the entire line instead of just substring. Other 
> comparators in {{testAclCLI.xml}} have the same problem.






[jira] [Commented] (HDFS-10837) Standardize serializiation of WebHDFS DirectoryListing

2016-09-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486236#comment-15486236
 ] 

Xiao Chen commented on HDFS-10837:
--

Thanks Andrew for revving! 
The feature is covered by a broader test, so no new tests are needed for the 
smaller changes here. +1 on patch 3.

> Standardize serialization of WebHDFS DirectoryListing
> --
>
> Key: HDFS-10837
> URL: https://issues.apache.org/jira/browse/HDFS-10837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.9.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-10837.001.patch, hdfs-10837.002.patch, 
> hdfs-10837.003.patch
>
>
> HDFS-10784 introduced a batched listing API to WebHDFS. However, the API 
> response doesn't follow the format of other WebHDFS calls. Let's standardize 
> it, and also document the schema.






[jira] [Updated] (HDFS-10378) FSDirAttrOp#setOwner throws ACE with misleading message

2016-09-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10378:
--
Attachment: HDFS-10378.003.patch

Patch 003:
* Fix the unit test bugs

> FSDirAttrOp#setOwner throws ACE with misleading message
> ---
>
> Key: HDFS-10378
> URL: https://issues.apache.org/jira/browse/HDFS-10378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10378-unit.patch, HDFS-10378.001.patch, 
> HDFS-10378.002.patch, HDFS-10378.003.patch
>
>
> Calling {{setOwner}} as a non-super user does trigger 
> {{AccessControlException}}, but the message "Permission denied. 
> user=user1967821757 is not the owner of inode=child" is misleading. The 
> expected message is: "Non-super user cannot change owner".
> Output of patched unit test {{TestPermission.testFilePermission}}:
> {noformat}
> 2016-05-06 16:45:44,915 [main] INFO  security.TestPermission 
> (TestPermission.java:testFilePermission(280)) - GOOD: got 
> org.apache.hadoop.security.AccessControlException: Permission denied. 
> user=user1967821757 is not the owner of inode=child1
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:273)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:250)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1642)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1626)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1595)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setOwner(FSDirAttrOp.java:88)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setOwner(FSNamesystem.java:1717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setOwner(NameNodeRpcServer.java:835)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setOwner(ClientNamenodeProtocolServerSideTranslatorPB.java:481)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:665)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2417)
> {noformat}
> Will upload the unit test patch shortly.
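The distinction the report asks for can be sketched minimally. The method name and structure below are hypothetical stand-ins, not the actual {{FSDirAttrOp}} code:

```java
// Hypothetical sketch of the desired check ordering in setOwner: answer the
// superuser question first, so the denial message names the real reason
// instead of complaining about inode ownership.
class SetOwnerMessageSketch {

    // Returns the denial message, or null if the change would be permitted.
    static String checkSetOwner(boolean callerIsSuperUser) {
        if (!callerIsSuperUser) {
            // Changing a file's owner is a superuser-only operation, so the
            // message should say so.
            return "Non-super user cannot change owner";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(checkSetOwner(false)); // Non-super user cannot change owner
        System.out.println(checkSetOwner(true));  // null
    }
}
```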






[jira] [Updated] (HDFS-10378) FSDirAttrOp#setOwner throws ACE with misleading message

2016-09-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10378:
--
Status: Patch Available  (was: In Progress)

> FSDirAttrOp#setOwner throws ACE with misleading message
> ---
>
> Key: HDFS-10378
> URL: https://issues.apache.org/jira/browse/HDFS-10378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10378-unit.patch, HDFS-10378.001.patch, 
> HDFS-10378.002.patch, HDFS-10378.003.patch
>
>






[jira] [Commented] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486128#comment-15486128
 ] 

Hadoop QA commented on HDFS-10856:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828159/HDFS-10856.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8798f86532bb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16728/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16728/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16728/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> 

[jira] [Updated] (HDFS-10378) FSDirAttrOp#setOwner throws ACE with misleading message

2016-09-12 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10378:
--
Summary: FSDirAttrOp#setOwner throws ACE with misleading message  (was: 
FSDirAttrOp#setOwner throws ACE with wrong message)

> FSDirAttrOp#setOwner throws ACE with misleading message
> ---
>
> Key: HDFS-10378
> URL: https://issues.apache.org/jira/browse/HDFS-10378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10378-unit.patch, HDFS-10378.001.patch, 
> HDFS-10378.002.patch
>
>






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486056#comment-15486056
 ] 

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 379 unchanged - 7 fixed = 382 total (was 386) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828153/HDFS-10301.014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 25ecab498dc4 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16726/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16726/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16726/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16726/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> ---

[jira] [Commented] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486054#comment-15486054
 ] 

Hadoop QA commented on HDFS-10599:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10599 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828155/HDFS-10599.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 10d258c87402 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16727/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16727/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16727/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachme

[jira] [Commented] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486029#comment-15486029
 ] 

Hadoop QA commented on HDFS-10562:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 67 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828162/HDFS-10562.003.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 137807f6aa70 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16729/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16729/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch, 
> HDFS-10562.003.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-12 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486024#comment-15486024
 ] 

Virajith Jalaparti commented on HDFS-10636:
---

Hi [~eddyxu],
That is a very valid concern. Our solution is to have a {{ProvidedReplica}} 
class that inherits from {{ReplicaInfo}}, and then have 
{{ProvidedFinalizedReplica}} inherit from {{ProvidedReplica}}. Beyond that, we 
will need a hierarchy for {{ProvidedReplica}} that mirrors the hierarchy for 
{{LocalReplica}} in the patch. This is somewhat forced on us because different 
classes implicitly implement the replica state machine (for example, 
{{ReplicaInfo::getBytesOnDisk()}} must resolve correctly in {{FsDatasetImpl}}). 
Thus, we will also need {{ProvidedReplicaInPipeline}}, 
{{ProvidedReplicaUnderRecovery}}, and others that inherit from 
{{ProvidedReplica}}. 

One way to avoid implementing such shadow classes for {{ProvidedReplica}} would 
be to explicitly implement a state machine for the replicas. That would be a 
much more disruptive change, requiring a significant amount of new code and 
changes to several parts of {{FsDatasetImpl}} and {{FsVolumeImpl}}. The current 
patch keeps most of the code paths intact with relatively minor changes 
to the APIs of {{ReplicaInfo}}.
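The hierarchy described above can be sketched skeletally as follows. These are illustrative declarations only; the real classes carry far more state and behavior, and the placeholder bodies are assumptions:

```java
// Skeletal sketch of the class hierarchy discussed above.
abstract class ReplicaInfo {
    // Each state-specific subclass must resolve this correctly, because
    // FsDatasetImpl relies on it -- the replica state machine is implicit
    // in the class hierarchy.
    abstract long getBytesOnDisk();
}

// Replicas stored on locally attached storage (java.io.File today).
abstract class LocalReplica extends ReplicaInfo { }

// Replicas whose data and metadata live on external (provided) storage.
abstract class ProvidedReplica extends ReplicaInfo { }

// The provided hierarchy mirrors the local one, one subclass per replica
// state; the method bodies here are placeholders.
class ProvidedFinalizedReplica extends ProvidedReplica {
    long getBytesOnDisk() { return 0; }
}
class ProvidedReplicaInPipeline extends ProvidedReplica {
    long getBytesOnDisk() { return 0; }
}
class ProvidedReplicaUnderRecovery extends ProvidedReplica {
    long getBytesOnDisk() { return 0; }
}

class ReplicaHierarchySketch {
    public static void main(String[] args) {
        // Any provided replica state is usable wherever a ReplicaInfo is
        // expected, which is what keeps FsDatasetImpl's code paths intact.
        ReplicaInfo r = new ProvidedFinalizedReplica();
        System.out.println(r instanceof ProvidedReplica); // true
    }
}
```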

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch, HDFS-10636.008.patch, 
> HDFS-10636.009.patch, HDFS-10636.010.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Attachment: HDFS-10562.003.patch

Fixed the issue pointed out by [~arpitagarwal]

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch, 
> HDFS-10562.003.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10837) Standardize serializiation of WebHDFS DirectoryListing

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15486006#comment-15486006
 ] 

Hadoop QA commented on HDFS-10837:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
54s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10837 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828148/hdfs-10837.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49a82685a4d3 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16725/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16725/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Standardize serializiation of WebH

[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485971#comment-15485971
 ] 

Hadoop QA commented on HDFS-10824:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 200 unchanged - 6 fixed = 205 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10824 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828147/HDFS-10824.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e7bd3e626e59 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16724/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16724/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16724/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16724/console |
| Powered by | Apache Yetus 0.4.0-SNAPS

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-12 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485956#comment-15485956
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

[~arpitagarwal] In the latest patch, the BR lease is removed when 
{{context.getTotalRpcs() == context.getCurRpc() + 1}}. If BRs are processed out 
of order or interleaved, the BR lease for the DN will be removed before all of 
the BRs from that DN are processed. So I have modified the {{checkLease}} method 
in {{BlockReportLeaseManager}} to return true when {{node.leaseId == 0}}. Please 
let me know if you see any issues with this approach.
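
A rough sketch of the described change (the {{DatanodeStub}} type and field 
names here are stand-ins for illustration, not the real 
{{BlockReportLeaseManager}} code):

```java
// Hypothetical stand-in for the per-datanode lease state.
class DatanodeStub {
  long leaseId; // 0 means the lease has already been removed
  DatanodeStub(long leaseId) { this.leaseId = leaseId; }
}

public class CheckLeaseSketch {
  // Sketch of the modified checkLease: if the lease was already removed
  // (leaseId == 0) because the final RPC of an interleaved report was
  // processed first, still accept the remaining storage reports rather
  // than rejecting them.
  static boolean checkLease(DatanodeStub node, long reportLeaseId) {
    if (node.leaseId == 0) {
      return true; // lease gone, but the report is still legitimate
    }
    return node.leaseId == reportLeaseId;
  }

  public static void main(String[] args) {
    System.out.println(checkLease(new DatanodeStub(0), 7)); // lease removed
    System.out.println(checkLease(new DatanodeStub(7), 7)); // lease matches
    System.out.println(checkLease(new DatanodeStub(7), 8)); // lease mismatch
  }
}
```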

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Fix For: 2.7.4
>
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out while sending a block 
> report, and then sends the block report again. The NameNode, processing these 
> two reports at the same time, can interleave the processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombies. Replicas from zombie storages 
> are immediately removed, causing missing blocks.






[jira] [Updated] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10856:
-
Attachment: (was: HDFS-10856.001.patch)

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>   (((monotonicNow() - nextBlockReportTime + 
> blockReportIntervalMs) /
>   blockReportIntervalMs)) * blockReportIntervalMs;
> {code}
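
The scheduling arithmetic quoted above can be checked with a small standalone 
sketch (the class and method names here are illustrative, not the actual 
{{BPServiceActor}} code):

```java
public class NextBlockReportSketch {
  // Same arithmetic as the quoted snippet: advance nextBlockReportTime by
  // however many whole intervals have elapsed, so it lands in the future.
  static long next(long nextBlockReportTime, long now, long intervalMs) {
    return nextBlockReportTime
        + ((now - nextBlockReportTime + intervalMs) / intervalMs) * intervalMs;
  }

  public static void main(String[] args) {
    long hour = 60L * 60 * 1000;
    // Last report due at t=1h (e.g. 9:20:14 in the comment's example).
    // Normal case: now just past the due time (9:20:18) -> next at t=2h.
    System.out.println(next(hour, hour + 4_000, hour) == 2 * hour);
    // Delayed case: now at roughly 11:35:43 -> next at 12:20:14, i.e. t=4h.
    System.out.println(next(hour, 3 * hour + 15 * 60_000 + 29_000, hour) == 4 * hour);
  }
}
```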






[jira] [Updated] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10856:
-
Attachment: HDFS-10856.001.patch

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>   (((monotonicNow() - nextBlockReportTime + 
> blockReportIntervalMs) /
>   blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Commented] (HDFS-10856) Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport

2016-09-12 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485954#comment-15485954
 ] 

Yiqun Lin commented on HDFS-10856:
--

Thanks [~ajisakaa] for the suggestions. 
{quote}
I'm thinking we should add eol=lf in the following lines to avoid such error
{quote}
You are right. By default, when I create a patch on Windows, it prints the 
following warning:
{code}
warning: LF will be replaced by CRLF in .../BPServiceActor.java
{code}
{quote}
This should be done in a separate jira.
{quote}
+1 for this; it will help users avoid these errors when creating patches.

I made the change {{*.java   text diff=java  eol=lf}} and then ran the 
dos2unix command on the output patch file. Hope my latest patch will be 
successful, :).

> Update the comment of BPServiceActor$Scheduler#scheduleNextBlockReport
> --
>
> Key: HDFS-10856
> URL: https://issues.apache.org/jira/browse/HDFS-10856
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-10856.001.patch
>
>
> Now the interval is 6 hours by default.
> {code:title=BPServiceActor$Scheduler#scheduleNextBlockReport}
> /* say the last block report was at 8:20:14. The current report
>  * should have started around 9:20:14 (default 1 hour interval).
>  * If current time is :
>  *   1) normal like 9:20:18, next report should be at 10:20:14
>  *   2) unexpected like 11:35:43, next report should be at 12:20:14
>  */
> nextBlockReportTime +=
>   (((monotonicNow() - nextBlockReportTime + 
> blockReportIntervalMs) /
>   blockReportIntervalMs)) * blockReportIntervalMs;
> {code}






[jira] [Commented] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485945#comment-15485945
 ] 

Anu Engineer commented on HDFS-10562:
-

Thanks for catching that, will do. 

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Commented] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485936#comment-15485936
 ] 

Arpit Agarwal commented on HDFS-10562:
--

+1. A minor comment:
{code}
212  *max disk errors* - The configuration used for this move step. 
currently it will
213  report it as zero, since the user interface to control these values 
per step
214  is not in place. It is a future work item. The default or the command 
line 
215  value specified in plan command is used for this value. 
{code}

The sample output shows {{"maxDiskErrors" : 5,}}. Should we fix the output to 
show zero to be consistent with the description?

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Commented] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485926#comment-15485926
 ] 

Hadoop QA commented on HDFS-10744:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 8s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 530 unchanged - 3 fixed = 533 total (was 533) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1595 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
38s{color} | {color:red} The patch has 103 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485915#comment-15485915
 ] 

Hadoop QA commented on HDFS-10843:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 364 unchanged - 7 fixed = 366 total (was 371) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 58m 
32s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10843 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828142/HDFS-10843.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c026c367bdf6 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16722/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16722/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16722/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Aff

[jira] [Updated] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10599:
--
Attachment: HDFS-10599.002.patch

> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10599.001.patch, HDFS-10599.002.patch
>
>
> DiskBalancer CLI functions are invoked directly instead of via the shell. 
> This is not representative of how end users use it. To provide good unit 
> test coverage, we need tests where the DiskBalancer CLI is invoked via the 
> shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485898#comment-15485898
 ] 

Hadoop QA commented on HDFS-10744:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 530 unchanged - 3 fixed = 533 total (was 533) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1595 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
39s{color} | {color:red} The patch has 103 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c420dfe |
| JIRA Issue | HDFS-10744 |
| JIRA Patch URL | 
https://issue

[jira] [Commented] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485897#comment-15485897
 ] 

Manoj Govindassamy commented on HDFS-10599:
---

Thanks for the review, [~xiaobingo].

1. Yes, the Report command doesn't need the 'fs' option. But since 'fs' is a 
generic option, all Commands should work with it. Because DiskBalancer methods 
were run directly from the TestBalancerCommand unit tests, generic options 
weren't added during the run, and the Commands used to fail with unexpected 
option errors. So the intention of this jira is to make the test run 
DiskBalancer Commands just the way they run in a real command-line shell, via 
the DiskBalancerCLI main, and thereby have access to all generic options. I 
have added a unit test to prove that DiskBalancer Commands can be run with 
GenericOptions like "fs".

2. That's right. I removed the out parameter from DiskBalancerCLI#dispatch. 
Thanks for catching this.


Attaching the v002 patch with more test comments and with the redundant out 
argument removed.

> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10599.001.patch
>
>
> DiskBalancer CLI functions are invoked directly instead of via the shell. 
> This is not representative of how end users use it. To provide good unit 
> test coverage, we need tests where the DiskBalancer CLI is invoked via the 
> shell.






[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-12 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi updated HDFS-10301:
-
Attachment: HDFS-10301.014.patch

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Fix For: 2.7.4
>
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report; it 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombies. Replicas from zombie storages 
> are immediately removed, causing missing blocks.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-12 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485884#comment-15485884
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

Upon thorough investigation of the heartbeat logic, I have verified that 
unreported storages do get removed without any code change. The attached patch 
014 eliminates the state and the zombie-storage removal logic introduced in 
HDFS-7960.
I have added a unit test verifying that when a DN storage with blocks is 
removed, the storage is removed from the DatanodeDescriptor as well and does 
not linger forever. Unreported storages are marked as FAILED in the 
{{updateHeartbeatState}} method when {{checkFailedStorages}} is true. Thus, 
when a DN storage is removed, it will be marked as FAILED in the next 
heartbeat. The storage removal then happens in two steps (refer to steps 2 & 3 
in 
https://issues.apache.org/jira/browse/HDFS-10301?focusedCommentId=15427387&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15427387).
The test {{testRemovingStorageDoesNotProduceZombies}} introduced in HDFS-7960 
passes by reducing the heartbeat recheck interval so that the test doesn't 
time out. By default, the Heartbeat Manager removes blocks associated with 
failed storages every 5 minutes.
I have ignored {{testProcessOverReplicatedAndMissingStripedBlock}} in this 
patch. Please refer to HDFS-10854 for more details.
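The removal flow described above can be modeled roughly as follows. This is a 
simplified sketch, not actual HDFS code; the class name and the map-based 
state are assumptions, and only the method names echo the comment above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Simplified model of the described behavior: on a heartbeat that triggers
// checkFailedStorages, any known storage the DataNode no longer reports is
// marked FAILED; a later periodic pass (every 5 minutes by default) prunes
// FAILED storages. This mirrors the two-step removal only; it is not HDFS.
class DatanodeStorageModel {
    // storageId -> failed?
    final Map<String, Boolean> storages = new HashMap<>();

    void updateHeartbeatState(Set<String> reported, boolean checkFailedStorages) {
        for (String id : reported) {
            storages.put(id, false); // reported storages are healthy
        }
        if (checkFailedStorages) {
            for (String id : storages.keySet()) {
                if (!reported.contains(id)) {
                    storages.put(id, true); // unreported -> mark FAILED
                }
            }
        }
    }

    // Periodic pass by the heartbeat monitor: drop FAILED storages.
    void removeFailedStorages() {
        storages.values().removeIf(failed -> failed);
    }
}
```

Under this model, a storage that disappears from the DataNode's report is 
marked FAILED on the next qualifying heartbeat and dropped by the following 
pruning pass, which matches the observation that no extra zombie-tracking 
state is needed.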


> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Fix For: 2.7.4
>
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.branch-2.7.patch, 
> HDFS-10301.branch-2.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report; it 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from 
> different reports. This corrupts the blockReportId field, which makes the 
> NameNode think that some storages are zombies. Replicas from zombie storages 
> are immediately removed, causing missing blocks.






[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-12 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485831#comment-15485831
 ] 

Lei (Eddy) Xu commented on HDFS-10636:
--

Sorry for the late reply, [~chris] and [~virajith].

I think my previous thoughts on types might have put too many constraints on 
this and made things difficult. Apologies for that.

This patch itself looks OK to me: +0, for the reason described below.

{code}
abstract public class LocalReplica extends ReplicaInfo {
}

// and 

public class FinalizedReplica extends LocalReplica {
}
{code}

My concern is where {{ProvidedFinalizedReplica}} would sit in the class 
hierarchy in the future: should it inherit from {{FinalizedReplica}} or from 
{{LocalReplica}}? In such a case, is a {{ProvidedFinalizedReplica}} really a 
{{LocalReplica}}?
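For concreteness, the two candidate placements could be sketched like this. 
Everything except {{LocalReplica}} and {{FinalizedReplica}} is a hypothetical 
name for illustration; this is not the actual HDFS hierarchy.

```java
// Sketch of the two placements for a future ProvidedFinalizedReplica.
abstract class ReplicaInfo { }

abstract class LocalReplica extends ReplicaInfo { }

class FinalizedReplica extends LocalReplica { }

// Option A: inherit from FinalizedReplica. Finalized-state behavior is
// reused, but a replica on external storage then is-a LocalReplica, which
// is the semantic concern raised above.
class ProvidedFinalizedReplicaA extends FinalizedReplica { }

// Option B: a parallel branch under ReplicaInfo. The is-a problem goes
// away, but finalized-state behavior must be duplicated or factored into
// a shared interface.
abstract class ProvidedReplica extends ReplicaInfo { }

class ProvidedFinalizedReplicaB extends ProvidedReplica { }
```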



> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch, HDFS-10636.008.patch, 
> HDFS-10636.009.patch, HDFS-10636.010.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 






[jira] [Commented] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485790#comment-15485790
 ] 

Hadoop QA commented on HDFS-10562:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 66 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10562 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828146/HDFS-10562.002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f16324da63f9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 72dfb04 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16723/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16723/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Updated] (HDFS-10837) Standardize serialization of WebHDFS DirectoryListing

2016-09-12 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10837:
---
Attachment: hdfs-10837.003.patch

Thanks for reviewing, Xiao; new patch attached. I added the requested 
precondition checks and fixed the WebHDFS.md link (very thorough!).

> Standardize serialization of WebHDFS DirectoryListing
> --
>
> Key: HDFS-10837
> URL: https://issues.apache.org/jira/browse/HDFS-10837
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.9.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-10837.001.patch, hdfs-10837.002.patch, 
> hdfs-10837.003.patch
>
>
> HDFS-10784 introduced a batched listing API to WebHDFS. However, the API 
> response doesn't follow the format of other WebHDFS calls. Let's standardize 
> it, and also document the schema.






[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Attachment: HDFS-10824.001.patch

v002 is posted. It fixed test failures and check style issues.

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on 
> the real capacity. This can be reproduced by explicitly setting 
> storageCapacities and then calling 
> ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to compare 
> results. The following is the storage report for one node with two volumes 
> after setting the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|






[jira] [Comment Edited] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485776#comment-15485776
 ] 

Xiaobing Zhou edited comment on HDFS-10824 at 9/13/16 12:35 AM:


v001 is posted. It fixed test failures and check style issues.


was (Author: xiaobingo):
v002 is posted. It fixed test failures and check style issues.

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on 
> the real capacity. This can be reproduced by explicitly setting 
> storageCapacities and then calling 
> ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to compare 
> results. The following is the storage report for one node with two volumes 
> after setting the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|






[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485773#comment-15485773
 ] 

Chris Douglas commented on HDFS-10636:
--

The latest version of the patch lgtm, +1. Thanks [~virajith] for seeing this 
through.

[~eddyxu] do you have any other feedback on v10 before commit?

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch, HDFS-10636.008.patch, 
> HDFS-10636.009.patch, HDFS-10636.010.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Attachment: HDFS-10562.002.patch

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Target Version/s: 3.0.0-alpha2  (was: HDFS-1312)

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Attachment: (was: HDFS-10562.001.patch)

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: HDFS-1312
>
> Attachments: HDFS-10562-HDFS-1312.001.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Fix Version/s: (was: HDFS-1312)

> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10562-HDFS-1312.001.patch, HDFS-10562.002.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Updated] (HDFS-10562) DiskBalancer: update documentation on how to report issues and debug

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10562:

Attachment: HDFS-10562.001.patch

[~arpitagarwal] Thanks for the code review comments. I have updated the patch.


> DiskBalancer: update documentation on how to report issues and debug
> 
>
> Key: HDFS-10562
> URL: https://issues.apache.org/jira/browse/HDFS-10562
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: HDFS-1312
>
> Attachments: HDFS-10562-HDFS-1312.001.patch
>
>
> Add a section in the diskbalancer documentation on how to report issues and 
> how to debug diskbalancer usage.






[jira] [Commented] (HDFS-10850) getEZForPath should NOT throw FNF

2016-09-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485744#comment-15485744
 ] 

Andrew Wang commented on HDFS-10850:


Okay, from a quick look at the Hive code, one possible issue is with 
{{SemanticAnalyzer#getStagingDirectoryPathname}}, since it passes in a temp 
path that might not exist yet. The correct comparison would be with the parent 
dir instead.

[~daryn], do you have a stack trace we can look at?

Also, do we have any other "query" APIs that take a path but do *not* throw 
FNF? This seems semantically weird to me, which is why I'm wondering whether we 
can fix this in Hive instead.
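The parent-dir workaround amounts to something like the following sketch. 
{{EzProbe}} and the set-based existence check are illustrative stand-ins (a 
real implementation would use FileSystem#exists and the EZ query API); this is 
not Hive's actual code.

```java
import java.util.Set;

// When the path whose encryption zone we want may not exist yet (e.g. a
// staging dir that is only created later), fall back to probing its parent,
// which is the closest path that can be expected to exist.
class EzProbe {
    static String ezProbePath(String path, Set<String> existingPaths) {
        if (existingPaths.contains(path)) {
            return path; // path exists: compare its own EZ
        }
        int slash = path.lastIndexOf('/');
        return slash <= 0 ? "/" : path.substring(0, slash); // parent dir
    }
}
```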

> getEZForPath should NOT throw FNF
> -
>
> Key: HDFS-10850
> URL: https://issues.apache.org/jira/browse/HDFS-10850
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Rakesh R
>Priority: Blocker
>
> HDFS-9433 made an incompatible change to the semantics of getEZForPath. It 
> used to return the EZ of the closest ancestor path and never threw FNF. A 
> common use of getEZForPath is determining whether a file can be renamed or 
> must be copied due to mismatched EZs. Notably, this has broken Hive.






[jira] [Updated] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-10843:
---
Attachment: HDFS-10843.003.patch

It seems I accidentally uploaded a slightly out-of-date patch file as v002; 
v003 is the correct patch. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota
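The inconsistency above can be sketched in a few lines. This is a simplified model, not actual HDFS code: {{QuotaSizeModel}}, its methods, and the {{BlockState}} enum are illustrative names only, standing in for the quota accounting described in the report.

```java
// Simplified model of how the quota feature's cached size and the computed
// size can disagree for a block that is committed but not yet completed.
public class QuotaSizeModel {
    static final long BLOCK_SIZE = 8192;   // preferred (full) block size

    enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETED }

    // Cached size: conservatively a full block while under construction,
    // updated to the final length as soon as the block is committed.
    static long cachedSize(BlockState state, long finalLength) {
        return state == BlockState.UNDER_CONSTRUCTION ? BLOCK_SIZE : finalLength;
    }

    // Computed size: keeps using the full block size until the block is
    // COMPLETED -- the source of the disagreement in the committed window.
    static long computedSize(BlockState state, long finalLength) {
        return state == BlockState.COMPLETED ? finalLength : BLOCK_SIZE;
    }

    public static void main(String[] args) {
        long finalLength = 512;
        // While committed but not completed, the two sizes disagree,
        // mirroring the log message quoted above.
        System.out.println("Cached = " + cachedSize(BlockState.COMMITTED, finalLength)
            + " != Computed = " + computedSize(BlockState.COMMITTED, finalLength));
    }
}
```

Once the block reaches COMPLETED, both calculations agree again, which is why the error only appears in the window between commit and completion.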






[jira] [Commented] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485699#comment-15485699
 ] 

Hudson commented on HDFS-10858:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10429 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10429/])
HDFS-10858. FBR processing may generate incorrect (jing9: rev 
72dfb048a9a7be64b371b74478b90150bf300d35)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlockInFBR.java


> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
> Key: HDFS-10858
> URL: https://issues.apache.org/jira/browse/HDFS-10858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10858.000.patch
>
>
> In BlockManager#reportDiffSorted:
> {code}
> } else if (reportedState == ReplicaState.FINALIZED &&
>(storedBlock.findStorageInfo(storageInfo) == -1 ||
> corruptReplicas.isReplicaCorrupt(storedBlock, dn))) {
>   // Add replica if appropriate. If the replica was previously corrupt
>   // but now okay, it might need to be updated.
>   toAdd.add(new BlockInfoToAdd(storedBlock, replica));
> }
> {code}
> "new BlockInfoToAdd(storedBlock, replica)" is wrong because "replica" (i.e., 
> the reported block) is a reused object provided by BlockListAsLongs#iterator. 
> Later this object is reused by directly changing its ID/GS. Thus 
> {{addStoredBlock}} can get a wrong (reportedBlock, stored BlockInfo) mapping. 
> For EC the reported block is used to calculate the internal block index. Thus 
> the bug can completely corrupt the EC block group internal states.
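The reused-object pitfall can be shown in isolation. This is a minimal illustration, not HDFS code: the {{Block}} class and {{reusingIterable}} are hypothetical stand-ins for the single mutable object that {{BlockListAsLongs#iterator}} returns on every call to {{next()}}.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Storing the reference handed out by a reusing iterator, instead of a
// defensive copy, leaves every stored entry pointing at one object whose
// fields hold only the *last* iterated values.
public class ReusedIteratorPitfall {
    static class Block {
        long id;
        Block() {}
        Block(Block other) { this.id = other.id; }  // defensive copy
    }

    // Iterates ids 1..n but mutates and returns the same Block instance,
    // the way a compact long-encoded block list would.
    static Iterable<Block> reusingIterable(long n) {
        return () -> new Iterator<Block>() {
            final Block reused = new Block();
            long next = 1;
            public boolean hasNext() { return next <= n; }
            public Block next() { reused.id = next++; return reused; }
        };
    }

    static List<Long> collectIds(long n, boolean copy) {
        List<Block> stored = new ArrayList<>();
        for (Block b : reusingIterable(n)) {
            // The fix is the copy: without it, all entries share one object.
            stored.add(copy ? new Block(b) : b);
        }
        List<Long> ids = new ArrayList<>();
        for (Block b : stored) ids.add(b.id);
        return ids;
    }
}
```

With {{copy = false}}, {{collectIds(3, false)}} yields [3, 3, 3] rather than [1, 2, 3], which is exactly how the stored (reportedBlock, BlockInfo) pairs go wrong.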






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485687#comment-15485687
 ] 

Hadoop QA commented on HDFS-10843:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 388 unchanged - 6 fixed = 397 total (was 394) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
49s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10843 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828124/HDFS-10843.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 31998cefd804 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58ed4fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16718/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16718/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16718/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0

[jira] [Comment Edited] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-12 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485649#comment-15485649
 ] 

Erik Krogen edited comment on HDFS-10475 at 9/12/16 11:47 PM:
--

To get a mapping of operation -> lock time metrics we propose the following:
1. Move the logging/metrics logic into FSNamesystemLock rather than 
FSNamesystem to centralize logic and tracking. 
2. Add new methods, {{(read|write)Unlock(operation)}}, in which you specify a 
name for the current operation as you unlock (note that for metrics collecting 
the name is only needed on unlock). If an operation is not specified, a 
catch-all 'default' or 'other' operation would be used. We would manually add 
the name of the operation to the unlock call for those operations which we 
think are likely to contribute significantly to the overall lock hold time. 
This is a manual process since otherwise we would need to get a stack trace (to 
find the method name) on each call to {{unlock}} which may be prohibitively 
expensive.
3. Add a map of OperationName -> MutableRate metrics to FSNamesystemLock, all 
of which are also contained within a MetricsRegistry. Each time a lock is 
released, we look up the corresponding MutableRate and add a value for the lock 
hold time. We do not use the map within MetricsRegistry because it is 
synchronized and we do not want contention on this map to cause slowness around 
the FSNamesystem lock. 

The best type of map to use within FSNamesystemLock to hold the MutableRate 
metrics is tricky. Ideally we would use a Java 8 ConcurrentHashMap, using 
{{computeIfAbsent}} to create new MutableRate metrics objects and insert them 
into the registry whenever a new operation is encountered. However this 
functionality is not available in Java 7 and we would like to support older 
versions. Thus we propose using a regular HashMap (wrapped within a call to 
{{Collections.unmodifiableMap}}) which is initialized with all of the different 
operations at the time the FSNamesystemLock is created. This allows for 
lock-free access, but requires that we have a list of all the possible 
operations. So we suggest an Enum, e.g. FSNamesystemLockMetricOp, which lists 
all of the operations of interest to be supplied to the {{(read|write)Unlock}} 
calls. This would likely be a list of a few dozen operations of interest which 
are likely to be relatively expensive lock holders. Operations not listed 
within this Enum would be regarded as "other"/"default". 

We believe this is the right tradeoff between granularity of metrics, 
performance, and developer effort, but it is certainly not ideal in terms of 
manual effort required. We would be interested to hear any other ideas about 
how to make the metrics collection require less manual intervention. 


was (Author: xkrogen):
To get a mapping of operation -> lock time metrics we propose the following:
1. Move the logging/metrics logic into FSNamesystemLock rather than 
FSNamesystem to centralize logic and tracking. 
2. Add new methods, {{(read|write)Unlock(operation)}}, in which you specify a 
name for the current operation as you unlock (note that for metrics collecting 
the name is only needed on unlock). If an operation is not specified, a 
catch-all 'default' or 'other' operation would be used. We would manually add 
the name of the operation to the unlock call for those operations which we 
think are likely to contribute significantly to the overall lock hold time. 
This is a manual process since otherwise we would need to get a stack trace (to 
find the method name) on each call to {{unlock}} which may be prohibitively 
expensive.
3. FSNamesystemLock contains a map of OperationName -> MutableRate metrics, all 
of which are also contained within a MetricsRegistry. On each time a lock is 
released we look up the corresponding MutableRate and add a value for the lock 
hold time. We do not use the map within MetricsRegistry because it is 
synchronized and we do not want contention on this map to cause slowness around 
the FSNamesystem lock. 

The best type of map to use within FSNamesystemLock to hold the MutableRate 
metrics is tricky. Ideally we would use a Java 8 ConcurrentHashMap, using 
{{computeIfAbsent}} to create new MutableRate metrics objects and insert them 
into the registry whenever a new operation is encountered. However this 
functionality is not available in Java 7 and we would like to support older 
versions. Thus we propose using a regular HashMap (wrapped within a call to 
{{Collections.unmodifiableMap}}) which is initialized with all of the different 
operations at the time the FSNamesystemLock is created. This allows for 
lock-free access, but requires that we have a list of all the possible 
operations. So we suggest an Enum, e.g. FSNamesystemLockMetricOp, which lists 
all of the operations of interest to be supplied to the {{(read|write)Unlock}} calls.

[jira] [Updated] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10858:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Thanks for the review, [~andrew.wang]. The test failure was due to a 
port-binding conflict, and the test passes on my local machine. I've committed 
the patch.

> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
> Key: HDFS-10858
> URL: https://issues.apache.org/jira/browse/HDFS-10858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10858.000.patch
>
>
> In BlockManager#reportDiffSorted:
> {code}
> } else if (reportedState == ReplicaState.FINALIZED &&
>(storedBlock.findStorageInfo(storageInfo) == -1 ||
> corruptReplicas.isReplicaCorrupt(storedBlock, dn))) {
>   // Add replica if appropriate. If the replica was previously corrupt
>   // but now okay, it might need to be updated.
>   toAdd.add(new BlockInfoToAdd(storedBlock, replica));
> }
> {code}
> "new BlockInfoToAdd(storedBlock, replica)" is wrong because "replica" (i.e., 
> the reported block) is a reused object provided by BlockListAsLongs#iterator. 
> Later this object is reused by directly changing its ID/GS. Thus 
> {{addStoredBlock}} can get a wrong (reportedBlock, stored BlockInfo) mapping. 
> For EC the reported block is used to calculate the internal block index. Thus 
> the bug can completely corrupt the EC block group internal states.






[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-12 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485649#comment-15485649
 ] 

Erik Krogen commented on HDFS-10475:


To get a mapping of operation -> lock time metrics we propose the following:
1. Move the logging/metrics logic into FSNamesystemLock rather than 
FSNamesystem to centralize logic and tracking. 
2. Add new methods, {{(read|write)Unlock(operation)}}, in which you specify a 
name for the current operation as you unlock (note that for metrics collecting 
the name is only needed on unlock). If an operation is not specified, a 
catch-all 'default' or 'other' operation would be used. We would manually add 
the name of the operation to the unlock call for those operations which we 
think are likely to contribute significantly to the overall lock hold time. 
This is a manual process since otherwise we would need to get a stack trace (to 
find the method name) on each call to {{unlock}} which may be prohibitively 
expensive.
3. FSNamesystemLock contains a map of OperationName -> MutableRate metrics, all 
of which are also contained within a MetricsRegistry. Each time a lock is 
released, we look up the corresponding MutableRate and add a value for the lock 
hold time. We do not use the map within MetricsRegistry because it is 
synchronized and we do not want contention on this map to cause slowness around 
the FSNamesystem lock. 

The best type of map to use within FSNamesystemLock to hold the MutableRate 
metrics is tricky. Ideally we would use a Java 8 ConcurrentHashMap, using 
{{computeIfAbsent}} to create new MutableRate metrics objects and insert them 
into the registry whenever a new operation is encountered. However this 
functionality is not available in Java 7 and we would like to support older 
versions. Thus we propose using a regular HashMap (wrapped within a call to 
{{Collections.unmodifiableMap}}) which is initialized with all of the different 
operations at the time the FSNamesystemLock is created. This allows for 
lock-free access, but requires that we have a list of all the possible 
operations. So we suggest an Enum, e.g. FSNamesystemLockMetricOp, which lists 
all of the operations of interest to be supplied to the {{(read|write)Unlock}} 
calls. This would likely be a list of a few dozen operations of interest which 
are likely to be relatively expensive lock holders. Operations not listed 
within this Enum would be regarded as "other"/"default". 

We believe this is the right tradeoff between granularity of metrics, 
performance, and developer effort, but it is certainly not ideal in terms of 
manual effort required. We would be interested to hear any other ideas about 
how to make the metrics collection require less manual intervention. 
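The enum-keyed, pre-populated map can be sketched as follows. All names here are hypothetical (the real enum and operation names were still under discussion), and a plain {{LongAdder}} stands in for Hadoop's {{MutableRate}} so the sketch is self-contained; the point is the lock-free lookup on an unmodifiable map populated once at construction.

```java
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the proposed scheme: an enum of operations of interest, a map
// fully populated at construction and wrapped unmodifiable, so unlock-path
// lookups never contend on the map itself.
public class FSNamesystemLockMetrics {
    enum Op { GET_BLOCK_LOCATIONS, MKDIRS, DELETE, OTHER }

    private final Map<Op, LongAdder> holdTimes;

    public FSNamesystemLockMetrics() {
        Map<Op, LongAdder> m = new EnumMap<>(Op.class);
        for (Op op : Op.values()) {
            m.put(op, new LongAdder());  // one metric per known operation
        }
        holdTimes = Collections.unmodifiableMap(m);
    }

    // Called from (read|write)Unlock(operation); operations not named in the
    // enum fall through to the catch-all OTHER bucket.
    public void recordHold(Op op, long heldNanos) {
        holdTimes.get(op == null ? Op.OTHER : op).add(heldNanos);
    }

    public long totalHeldNanos(Op op) {
        return holdTimes.get(op).sum();
    }
}
```

Under Java 8 a {{ConcurrentHashMap.computeIfAbsent}} could replace the fixed enum, at the cost of the Java 7 compatibility the proposal wants to keep.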

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow up of the comment on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]
>  add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> namenode similar to what we have for slow write/network WARN/metrics on 
> datanode.






[jira] [Commented] (HDFS-10533) Make DistCpOptions class immutable

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485624#comment-15485624
 ] 

Hadoop QA commented on HDFS-10533:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
26 new + 321 unchanged - 53 fixed = 347 total (was 374) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-tools_hadoop-distcp generated 0 new + 46 
unchanged - 4 fixed = 46 total (was 50) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
23s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10533 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828130/HDFS-10533.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 78c753d151c5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a93f45 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16719/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16719/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16719/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>  

[jira] [Updated] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10744:
-
Attachment: HDFS-10744-branch-2.7.patch

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744-branch-2.7.patch, HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.
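A byte[][]-based resolution can be sketched in a few lines. This is an illustrative sketch, not the actual {{FSDirectory}} code: {{PathComponents.toComponents}} is a hypothetical helper showing the single string-to-byte[][] conversion that avoids the round trips described above.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Convert a path string straight into byte[][] components once, so the
// resolver never needs the byte[][] -> String -> byte[][] round trips.
public class PathComponents {
    static byte[][] toComponents(String path) {
        List<byte[]> parts = new ArrayList<>();
        for (String p : path.split("/")) {
            if (!p.isEmpty()) {  // skip the leading empty segment and "//"
                parts.add(p.getBytes(StandardCharsets.UTF_8));
            }
        }
        return parts.toArray(new byte[0][]);
    }
}
```

For example, "/a/b/c" resolves to three byte[] components that can feed an {{INodesInPath}}-style lookup directly.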






[jira] [Updated] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10744:
-
Status: Patch Available  (was: Reopened)

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744-branch-2.7.patch, HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.






[jira] [Reopened] (HDFS-10744) Internally optimize path component resolution

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reopened HDFS-10744:
--

Sorry for re-opening this. Triggering Jenkins for branch-2.7 patch.

> Internally optimize path component resolution
> -
>
> Key: HDFS-10744
> URL: https://issues.apache.org/jira/browse/HDFS-10744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10744-branch-2.7.patch, HDFS-10744.patch
>
>
> {{FSDirectory}}'s path resolution currently uses a mixture of string & 
> byte[][]  conversions, back to string, back to byte[][] for {{INodesInPath}}. 
>  Internally all path component resolution should be byte[][]-based as the 
> precursor to instantiating an {{INodesInPath}} w/o the last 2 unnecessary 
> conversions.






[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485593#comment-15485593
 ] 

Zhe Zhang commented on HDFS-10673:
--

Verified all reported failures pass locally. Pushed to branch-2.7.

> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[] - which is then 
> not used by the default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc.  All of this is only 
> used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.
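The on-demand idea for #2 can be sketched with a deferred path supplier. All names here ({{LazyPathCheck}}, {{buildPath}}, {{checkAccess}}) are hypothetical illustrations, not the FSPermissionChecker API; the point is that the expensive path string is only rendered when a check actually fails.

```java
import java.util.function.Supplier;

// Build the full path string lazily: the common (permitted) case never pays
// for walking the inode chain and concatenating names.
public class LazyPathCheck {
    static int pathsBuilt = 0;  // instrumentation for this illustration

    static String buildPath(String[] components) {
        pathsBuilt++;  // count how often the expensive work actually runs
        return "/" + String.join("/", components);
    }

    // Renders the path via the supplier only when access is denied.
    static void checkAccess(boolean permitted, Supplier<String> lazyPath) {
        if (!permitted) {
            throw new SecurityException("Permission denied: " + lazyPath.get());
        }
    }

    public static void main(String[] args) {
        String[] dir = {"user", "alice", "data"};
        checkAccess(true, () -> buildPath(dir));  // allowed: no string is built
    }
}
```

In the allowed case {{pathsBuilt}} stays at zero; only a denied check triggers the string construction, matching the "create paths on-demand for exceptions" suggestion.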






[jira] [Updated] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10673:
-
   Resolution: Fixed
Fix Version/s: 2.7.4
   Status: Resolved  (was: Patch Available)

> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[], which is then 
> not used by the default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc., all of which is only 
> used if there's an exception.
> The expense of #1 should only be incurred when the provider/enforcer feature 
> is in use.  For #2, paths should be created on-demand for exceptions.
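To illustrate the on-demand idea in #2, here is a minimal, hypothetical Java sketch (the names and structure are illustrative only, not the actual FSPermissionChecker code): the full path string is materialized from the byte[][] components only on the failure branch, so the common success path allocates nothing.

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Supplier;

public class LazyPathSketch {
    // Hypothetical access check: the path string is built only if access is denied.
    static void checkAccess(boolean allowed, Supplier<String> lazyPath) {
        if (!allowed) {
            // Expensive String construction happens only on this failure branch.
            throw new SecurityException("Permission denied: " + lazyPath.get());
        }
    }

    public static void main(String[] args) {
        byte[][] components = {
            "user".getBytes(StandardCharsets.UTF_8),
            "data".getBytes(StandardCharsets.UTF_8)
        };
        // Deferred join of the byte[][] components; nothing is allocated on success.
        Supplier<String> lazyPath = () -> {
            StringBuilder sb = new StringBuilder();
            for (byte[] c : components) {
                sb.append('/').append(new String(c, StandardCharsets.UTF_8));
            }
            return sb.toString();
        };
        checkAccess(true, lazyPath); // success path: no path string is built
        try {
            checkAccess(false, lazyPath);
            throw new AssertionError("expected SecurityException");
        } catch (SecurityException e) {
            if (!e.getMessage().equals("Permission denied: /user/data")) {
                throw new AssertionError("unexpected message: " + e.getMessage());
            }
        }
        System.out.println("lazy path demo ok");
    }
}
```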



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky in branch-2.7

2016-09-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485585#comment-15485585
 ] 

Xiao Chen commented on HDFS-10859:
--

Thanks Zhe!
Now I recall: the common method {{testUnknownDatanode}} is flaky, which turns 
out to be HDFS-10716. +1 on getting that into branch-2.7. :)

> TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky 
> in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485584#comment-15485584
 ] 

Hadoop QA commented on HDFS-10673:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 142 unchanged - 7 fixed = 142 total (was 149) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2202 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
54s{color} | {color:red} The patch has 83 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.Tes

[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485572#comment-15485572
 ] 

Hadoop QA commented on HDFS-10824:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 205 unchanged - 1 fixed = 213 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.net.TestNetworkTopology |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server

[jira] [Commented] (HDFS-10821) DiskBalancer: Report command support with multiple nodes

2016-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485566#comment-15485566
 ] 

Hudson commented on HDFS-10821:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10427 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10427/])
HDFS-10821. DiskBalancer: Report command support with multiple nodes. 
(aengineer: rev 8a93f45a80932a1ef62a6c20551e8cab95888fee)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSDiskbalancer.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java


> DiskBalancer: Report command support with multiple nodes
> 
>
> Key: HDFS-10821
> URL: https://issues.apache.org/jira/browse/HDFS-10821
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10821.001.patch, HDFS-10821.002.patch, 
> HDFS-10821.003.patch
>
>
> Since HDFS-10813 has been committed to trunk, we can use the {{getNodes}} 
> method to parse the nodes string and support multiple nodes with {{hdfs 
> diskbalancer}} subcommands (e.g. -report, -query). In this JIRA, we are focusing 
> on the subcommand {{-report}}.
> That means we can use the command {{hdfs diskbalancer -report -node}} to print 
> report info for one or more datanodes. A test input command (here I use UUIDs 
> to specify the datanodes):
> {code}
> hdfs diskbalancer -report -node 
> e05ade8e-fb28-42cf-9aa9-43e564c0ec99,38714337-84fb-4e35-9ea3-0bb47d6da700
> {code}
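For illustration, splitting the comma-separated -node value above into individual node identifiers can be sketched roughly as follows (a hypothetical stand-in, not the actual HDFS-10813 getNodes implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class NodeArgSketch {
    // Hypothetical parser for the -node argument: split on commas,
    // trim whitespace, and drop empty entries.
    static List<String> parseNodes(String arg) {
        List<String> nodes = new ArrayList<>();
        for (String part : arg.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                nodes.add(trimmed);
            }
        }
        return nodes;
    }

    public static void main(String[] args) {
        List<String> nodes = parseNodes(
            "e05ade8e-fb28-42cf-9aa9-43e564c0ec99,38714337-84fb-4e35-9ea3-0bb47d6da700");
        if (nodes.size() != 2) {
            throw new AssertionError("expected 2 node IDs, got " + nodes.size());
        }
        System.out.println("parsed " + nodes.size() + " nodes");
    }
}
```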



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10533) Make DistCpOptions class immutable

2016-09-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10533:
-
Attachment: HDFS-10533.004.patch

Rebase from {{trunk}} branch and resolved conflicts with [HADOOP-13587].

> Make DistCpOptions class immutable
> --
>
> Key: HDFS-10533
> URL: https://issues.apache.org/jira/browse/HDFS-10533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10533.000.patch, HDFS-10533.000.patch, 
> HDFS-10533.001.patch, HDFS-10533.002.patch, HDFS-10533.003.patch, 
> HDFS-10533.004.patch
>
>
> Currently the {{DistCpOptions}} class encapsulates all DistCp options, which 
> may be set from the command line (via the {{OptionsParser}}) or manually 
> (e.g. by constructing an instance and calling setters). As there are multiple 
> option fields and more (e.g. [HDFS-9868], [HDFS-10314]) to add, validating 
> them can be cumbersome. Ideally, the {{DistCpOptions}} object should be 
> immutable. The benefits are:
> # {{DistCpOptions}} is simpler and easier to use and share, plus it scales well
> # validation is automatic, e.g. a manually constructed {{DistCpOptions}} gets 
> validated before usage
> # the validation error message is well-defined and does not depend on the order 
> of setters
> This jira is to track the effort of making {{DistCpOptions}} immutable by 
> using a Builder pattern for creation.
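As a quick illustration of the Builder approach proposed here, a minimal sketch (field and method names are illustrative only, not the final DistCp API): the target object has only final fields, and validation runs once in build() before the immutable instance exists.

```java
public class DistCpOptionsSketch {
    // Immutable: all fields are final and set once by the Builder.
    private final boolean syncFolder;
    private final int maxMaps;

    private DistCpOptionsSketch(Builder b) {
        this.syncFolder = b.syncFolder;
        this.maxMaps = b.maxMaps;
    }

    public boolean isSyncFolder() { return syncFolder; }
    public int getMaxMaps() { return maxMaps; }

    public static class Builder {
        private boolean syncFolder = false;
        private int maxMaps = 20;

        public Builder withSyncFolder(boolean v) { this.syncFolder = v; return this; }
        public Builder withMaxMaps(int v) { this.maxMaps = v; return this; }

        // Validation happens in one place, regardless of setter order.
        public DistCpOptionsSketch build() {
            if (maxMaps <= 0) {
                throw new IllegalArgumentException("maxMaps must be positive");
            }
            return new DistCpOptionsSketch(this);
        }
    }

    public static void main(String[] args) {
        DistCpOptionsSketch opts = new Builder().withSyncFolder(true).build();
        if (!opts.isSyncFolder() || opts.getMaxMaps() != 20) {
            throw new AssertionError("unexpected option values");
        }
        System.out.println("builder demo ok");
    }
}
```

This is why the validation error message no longer depends on the order of setters: no object is observable until build() has validated the whole configuration.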



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485556#comment-15485556
 ] 

Hadoop QA commented on HDFS-10858:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 127 unchanged - 0 fixed = 130 total (was 127) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileCreationDelete |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828116/HDFS-10858.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4e221a08fe00 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58ed4fa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16716/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16716/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16716/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16716/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
>

[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485547#comment-15485547
 ] 

Hadoop QA commented on HDFS-10673:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 142 unchanged - 7 fixed = 142 total (was 149) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2202 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
54s{color} | {color:red} The patch has 83 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |

[jira] [Commented] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485544#comment-15485544
 ] 

Xiaobing Zhou commented on HDFS-10599:
--

Thanks [~manojg] for the work, and [~anu] for the review. The 001 patch looks 
good. Minor comments:
1. The report command doesn't need the fs option:
{code}
final String topReportArg = "5";
final String reportArgs = String.format("%s %s -%s -%s %s",
    "fs", cluster.getNameNode().getNameNodeAddressHostPortString(),
{code}

2. Since PrintStream is an instance member of DiskBalancerCLI, the 'out' 
parameter can be removed from DiskBalancerCLI#dispatch.


> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10599.001.patch
>
>
> The DiskBalancer CLI tests invoke CLI functions directly instead of going 
> through the shell, which is not representative of how end users use it. To 
> provide good unit test coverage, we need tests where the DiskBalancer CLI is 
> invoked via the shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10821) DiskBalancer: Report command support with multiple nodes

2016-09-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10821:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: (was: 2.9.0)
  3.0.0-alpha2
Target Version/s: 3.0.0-alpha2
  Status: Resolved  (was: Patch Available)

[~linyiqun] Thank you for the contribution. I have committed this to trunk.

> DiskBalancer: Report command support with multiple nodes
> 
>
> Key: HDFS-10821
> URL: https://issues.apache.org/jira/browse/HDFS-10821
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10821.001.patch, HDFS-10821.002.patch, 
> HDFS-10821.003.patch
>
>
> Since HDFS-10813 has been committed to trunk, we can use the {{getNodes}} 
> method to parse the nodes string and support multiple nodes with {{hdfs 
> diskbalancer}} subcommands (e.g. -report, -query). In this JIRA, we are focusing 
> on the subcommand {{-report}}.
> That means we can use the command {{hdfs diskbalancer -report -node}} to print 
> report info for one or more datanodes. A test input command (here I use UUIDs 
> to specify the datanodes):
> {code}
> hdfs diskbalancer -report -node 
> e05ade8e-fb28-42cf-9aa9-43e564c0ec99,38714337-84fb-4e35-9ea3-0bb47d6da700
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10859:
-
Summary: TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs 
are flaky in branch-2.7  (was: TestBalancer#testUnknownDatanodeSimple fails in 
branch-2.7)

> TestBalancer#testUnknownDatanodeSimple and testBalancerWithKeytabs are flaky 
> in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10821) DiskBalancer: Report command support with multiple nodes

2016-09-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485513#comment-15485513
 ] 

Anu Engineer commented on HDFS-10821:
-

+1, LGTM. [~linyiqun] Thanks for updating the patch. I will commit this shortly.

> DiskBalancer: Report command support with multiple nodes
> 
>
> Key: HDFS-10821
> URL: https://issues.apache.org/jira/browse/HDFS-10821
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.9.0
>
> Attachments: HDFS-10821.001.patch, HDFS-10821.002.patch, 
> HDFS-10821.003.patch
>
>
> Since HDFS-10813 has been committed to trunk, we can use the {{getNodes}} 
> method to parse the nodes string and support multiple nodes with {{hdfs 
> diskbalancer}} subcommands (e.g. -report, -query). In this JIRA, we are focusing 
> on the subcommand {{-report}}.
> That means we can use the command {{hdfs diskbalancer -report -node}} to print 
> report info for one or more datanodes. A test input command (here I use UUIDs 
> to specify the datanodes):
> {code}
> hdfs diskbalancer -report -node 
> e05ade8e-fb28-42cf-9aa9-43e564c0ec99,38714337-84fb-4e35-9ea3-0bb47d6da700
> {code}






[jira] [Comment Edited] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485496#comment-15485496
 ] 

Erik Krogen edited comment on HDFS-10843 at 9/12/16 10:35 PM:
--

v002 patch incorporates your suggestions, [~shv]. Tests now only start the 
minicluster once, and the logic for updating the directory size has been moved 
to {{FSDirectory}}. Also I have persisted iip wherever possible. 


was (Author: xkrogen):
v002 patch incorporates your suggestions, [~shv]. Tests now only start the 
minicluster once, and the logic for updating the directory size has been moved 
to {{FSDirectory}}. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota
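The committed-but-not-completed window described above can be sketched as a toy model (a hypothetical class, not the actual NameNode code; the 512/8192 constants mirror the log message quoted earlier):

```java
// Illustrative sketch of why the cached and computed sizes disagree while a
// block is committed but not yet complete. Not Hadoop's real accounting code.
public class QuotaSkew {
    enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

    static final long FULL_BLOCK = 8192;  // conservative full-block reservation
    static final long FINAL_SIZE = 512;   // actual bytes written

    // The cached size is updated to the final size as soon as the block commits.
    static long cachedSize(BlockState s) {
        return s == BlockState.UNDER_CONSTRUCTION ? FULL_BLOCK : FINAL_SIZE;
    }

    // The computed size keeps using the full block size until the block completes.
    static long computedSize(BlockState s) {
        return s == BlockState.COMPLETE ? FINAL_SIZE : FULL_BLOCK;
    }

    public static void main(String[] args) {
        // In the COMMITTED state the two accounting paths disagree:
        System.out.println(cachedSize(BlockState.COMMITTED));   // 512
        System.out.println(computedSize(BlockState.COMMITTED)); // 8192
    }
}
```

Once the block reaches COMPLETE, both paths agree again, which matches the observation that the inconsistency is transient.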






[jira] [Comment Edited] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485496#comment-15485496
 ] 

Erik Krogen edited comment on HDFS-10843 at 9/12/16 10:34 PM:
--

v002 patch incorporates your suggestions, [~shv]. Tests now only start the 
minicluster once, and the logic for updating the directory size has been moved 
to {{FSDirectory}}. 


was (Author: xkrogen):
v3 patch incorporates your suggestions, [~shv]. Tests now only start the 
minicluster once, and the logic for updating the directory size has been moved 
to {{FSDirectory}}. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Updated] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-12 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-10843:
---
Attachment: HDFS-10843.002.patch

v3 patch incorporates your suggestions, [~shv]. Tests now only start the 
minicluster once, and the logic for updating the directory size has been moved 
to {{FSDirectory}}. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485492#comment-15485492
 ] 

Manoj Govindassamy commented on HDFS-10599:
---

Thanks for the review [~anu].

> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10599.001.patch
>
>
> The DiskBalancer CLI tests invoke CLI functions directly instead of going 
> through the shell. This is not representative of how end users use it. To 
> provide good unit test coverage, we need tests where the DiskBalancer CLI is 
> invoked via the shell.
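The shell-level testing pattern described above can be sketched generically: spawn a real OS process and check its output, rather than calling the tool's methods in-process. In this sketch `echo` stands in for the real `hdfs diskbalancer ...` script, which is an assumption for illustration only:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ShellInvoke {
    // Runs a command as a real OS process and returns its first line of stdout.
    static String runAndRead(String... cmd) {
        try {
            Process p = new ProcessBuilder(cmd).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();
                p.waitFor();
                return line;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // "echo" stands in here for the real `hdfs diskbalancer ...` script;
        // a shell-level test would invoke the actual script the same way.
        System.out.println(runAndRead("echo", "diskbalancer", "-report"));
    }
}
```

Exercising the command through a process boundary also catches argument-parsing and exit-code bugs that direct method calls would miss.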






[jira] [Updated] (HDFS-8818) Allow Balancer to run faster

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8818:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch-2.7. Pre-existing flaky tests in {{TestBalancer}} are being 
addressed in HDFS-10859.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-8818-branch-2.7.00.patch, h8818_20150723.patch, 
> h8818_20150727.patch
>
>
> The Balancer is intentionally designed to run slowly so 
> that the balancing activities won't affect normal cluster activities and 
> running jobs.
> There are new use cases where a cluster admin may choose to balance the cluster 
> when the cluster load is low, or in a maintenance window. Therefore, we should 
> have an option to allow the Balancer to run faster.
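A hedged sketch of how an admin might speed balancing up in a maintenance window. `hdfs dfsadmin -setBalancerBandwidth` is a standard command; the `-D` property names below follow the knobs discussed around HDFS-8818 and should be checked against the release's hdfs-default.xml before use:

```shell
# Raise the per-datanode balancing bandwidth cap at runtime (bytes/sec).
hdfs dfsadmin -setBalancerBandwidth 104857600   # 100 MB/s

# Run the Balancer with more concurrent movers; property names are
# release-dependent and shown here as an illustration only.
hdfs balancer -threshold 5 \
  -Ddfs.datanode.balance.max.concurrent.moves=50 \
  -Ddfs.balancer.moverThreads=1000
```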






[jira] [Updated] (HDFS-8818) Allow Balancer to run faster

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8818:

Fix Version/s: 2.7.4

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-8818-branch-2.7.00.patch, h8818_20150723.patch, 
> h8818_20150727.patch
>
>
> The Balancer is intentionally designed to run slowly so 
> that the balancing activities won't affect normal cluster activities and 
> running jobs.
> There are new use cases where a cluster admin may choose to balance the cluster 
> when the cluster load is low, or in a maintenance window. Therefore, we should 
> have an option to allow the Balancer to run faster.






[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485459#comment-15485459
 ] 

Zhe Zhang commented on HDFS-10859:
--

Actually {{testBalancerWithKeytabs}} seems flaky too. I am getting the following on a CLI run:

{code}
Tests run: 26, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 283.892 sec 
<<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
testBalancerWithKeytabs(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  
Time elapsed: 39.283 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<-3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:975)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:116)


Results :

Failed tests: 
  
TestBalancer.testBalancerWithKeytabs:1571->access$000:116->testUnknownDatanode:975
 expected:<0> but was:<-3>
{code}

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Commented] (HDFS-10599) DiskBalancer: Execute CLI via Shell

2016-09-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485430#comment-15485430
 ] 

Anu Engineer commented on HDFS-10599:
-

+1, LGTM. I will commit this tomorrow to make sure that there are no other 
comments on this JIRA. [~manojg] Thank you for the contribution.


> DiskBalancer: Execute CLI via Shell 
> 
>
> Key: HDFS-10599
> URL: https://issues.apache.org/jira/browse/HDFS-10599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Manoj Govindassamy
> Attachments: HDFS-10599.001.patch
>
>
> The DiskBalancer CLI tests invoke CLI functions directly instead of going 
> through the shell. This is not representative of how end users use it. To 
> provide good unit test coverage, we need tests where the DiskBalancer CLI is 
> invoked via the shell.






[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485425#comment-15485425
 ] 

Zhe Zhang commented on HDFS-10859:
--

I think that one is flaky. See the last 2 Jenkins reports from HDFS-8818: one 
reported a {{testUnknownDatanodeSimple}} failure but the other didn't.

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Updated] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10859:
-
Affects Version/s: (was: 2.7)
   2.7.4

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7.4
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Commented] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485405#comment-15485405
 ] 

Andrew Wang commented on HDFS-10858:


+1 LGTM, nice find Jing!

> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
> Key: HDFS-10858
> URL: https://issues.apache.org/jira/browse/HDFS-10858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-10858.000.patch
>
>
> In BlockManager#reportDiffSorted:
> {code}
> } else if (reportedState == ReplicaState.FINALIZED &&
>(storedBlock.findStorageInfo(storageInfo) == -1 ||
> corruptReplicas.isReplicaCorrupt(storedBlock, dn))) {
>   // Add replica if appropriate. If the replica was previously corrupt
>   // but now okay, it might need to be updated.
>   toAdd.add(new BlockInfoToAdd(storedBlock, replica));
> }
> {code}
> "new BlockInfoToAdd(storedBlock, replica)" is wrong because "replica" (i.e., 
> the reported block) is a reused object provided by BlockListAsLongs#iterator. 
> Later this object is reused by directly changing its ID/GS. Thus 
> {{addStoredBlock}} can get a wrong (reportedBlock, storedBlockInfo) mapping. 
> For EC, the reported block is used to calculate the internal block index, so 
> this bug can completely corrupt the internal state of an EC block group.
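The reused-iterator-object pitfall described above can be reduced to a minimal sketch (hypothetical `Block` class, not Hadoop's actual types): storing a reference to an object that the iterator mutates in place makes every stored entry go stale, while copying before storing, roughly what the fix does, keeps each entry correct.

```java
import java.util.ArrayList;
import java.util.List;

public class ReusedObjectPitfall {
    static class Block {
        long id;
        Block(long id) { this.id = id; }
    }

    // Buggy pattern: stores the iterator's single reused object by reference.
    static List<Block> collectByReference(long... ids) {
        Block reused = new Block(0);
        List<Block> out = new ArrayList<>();
        for (long id : ids) {
            reused.id = id;      // the "iterator" advances by mutating in place
            out.add(reused);     // BUG: every entry aliases the same object
        }
        return out;
    }

    // Fixed pattern: snapshot the reused object before storing it.
    static List<Block> collectByCopy(long... ids) {
        Block reused = new Block(0);
        List<Block> out = new ArrayList<>();
        for (long id : ids) {
            reused.id = id;
            out.add(new Block(reused.id)); // copy, so later mutation is harmless
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(collectByReference(1, 2, 3).get(0).id); // 3 (stale!)
        System.out.println(collectByCopy(1, 2, 3).get(0).id);      // 1
    }
}
```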






[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485387#comment-15485387
 ] 

Hadoop QA commented on HDFS-8818:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 574 unchanged - 1 fixed = 581 total (was 575) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1967 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
51s{color} | {color:red} The patch 161 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestHttpsFileSystem |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_111 Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c420dfe |
| JIRA Issue | HDFS-8818 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2016-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485402#comment-15485402
 ] 

Hadoop QA commented on HDFS-8818:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 574 unchanged - 1 fixed = 581 total (was 575) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1967 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
47s{color} | {color:red} The patch 161 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes

[jira] [Comment Edited] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485348#comment-15485348
 ] 

Xiaobing Zhou edited comment on HDFS-10824 at 9/12/16 9:35 PM:
---

v000 is posted for review. The idea is to memorize the storage capacity 
settings so they can be re-applied after a restart.


was (Author: xiaobingo):
v000 is posted for review.

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on real 
> capacity. It can be reproduced by explicitly setting storageCapacities and 
> then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to 
> compare results. The following is the storage report for one node with two 
> volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|
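The fix idea from the comment above, remembering the configured capacities so they can be re-applied after a restart, can be sketched roughly as follows (a hypothetical helper class, not the actual MiniDFSCluster patch):

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityMemo {
    // datanode index -> configured per-volume capacities in bytes
    private final Map<Integer, long[]> remembered = new HashMap<>();

    void remember(int dnIndex, long[] perVolumeCapacities) {
        remembered.put(dnIndex, perVolumeCapacities.clone()); // defensive copy
    }

    // After a (simulated) restart, look the setting back up to re-apply it.
    long[] recall(int dnIndex) {
        long[] caps = remembered.get(dnIndex);
        return caps == null ? null : caps.clone();
    }

    public static void main(String[] args) {
        CapacityMemo memo = new CapacityMemo();
        // The 300 * 1024 value matches the capacity used in the report above.
        memo.remember(0, new long[] {300 * 1024, 300 * 1024});
        System.out.println(memo.recall(0)[0]); // 307200
    }
}
```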






[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Status: Patch Available  (was: Open)

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on the 
> real capacity. This can be reproduced by explicitly setting storageCapacities 
> and then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) 
> to compare the results. The following is the storage report for one node with 
> two volumes after setting the capacity to 300 * 1024. Apparently, the capacity 
> is not changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|
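For illustration, the failure mode described above — a builder option that is recorded but never applied at startup — can be sketched in a self-contained way. All names below (Cluster, storageCapacities, startBuggy/startFixed, DEFAULT_CAPACITY) are hypothetical stand-ins, not the actual MiniDFSCluster API:

```java
import java.util.Arrays;

// Minimal sketch of the reported bug class: a test-cluster builder records
// per-volume capacities, but start() never applies them, so the reported
// capacity stays at the default (as in the storage report quoted above).
public class IgnoredBuilderOption {
    static final long DEFAULT_CAPACITY = 998_164_971_520L; // value seen in the report

    static class Cluster {
        long[] requested;   // what the caller asked for
        long[] effective;   // what the simulated datanode actually reports

        Cluster storageCapacities(long[] caps) {
            this.requested = caps.clone();
            return this;
        }

        // Buggy start(): ignores the requested capacities entirely.
        Cluster startBuggy(int volumes) {
            effective = new long[volumes];
            Arrays.fill(effective, DEFAULT_CAPACITY);
            return this;
        }

        // Fixed start(): propagates the requested capacity to each volume.
        Cluster startFixed(int volumes) {
            effective = new long[volumes];
            for (int i = 0; i < volumes; i++) {
                effective[i] = (requested != null) ? requested[i] : DEFAULT_CAPACITY;
            }
            return this;
        }

        long totalCapacity() { return Arrays.stream(effective).sum(); }
    }

    public static void main(String[] args) {
        long[] caps = {300 * 1024, 300 * 1024};  // 300 KB per volume, as in the report
        long buggy = new Cluster().storageCapacities(caps).startBuggy(2).totalCapacity();
        long fixed = new Cluster().storageCapacities(caps).startFixed(2).totalCapacity();
        System.out.println("buggy total: " + buggy);  // 2 * DEFAULT_CAPACITY
        System.out.println("fixed total: " + fixed);  // 614400
    }
}
```

This only demonstrates why a recorded-but-unapplied option produces exactly the symptom above; the actual fix in HDFS-10824 is in how MiniDFSCluster wires the configured capacities to datanode storage.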






[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-12 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Attachment: HDFS-10824.000.patch

v000 is posted for review.

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on the 
> real capacity. This can be reproduced by explicitly setting storageCapacities 
> and then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) 
> to compare the results. The following is the storage report for one node with 
> two volumes after setting the capacity to 300 * 1024. Apparently, the capacity 
> is not changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|






[jira] [Commented] (HDFS-8898) Create API and command-line argument to get quota and quota usage without detailed content summary

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485346#comment-15485346
 ] 

Zhe Zhang commented on HDFS-8898:
-

Quick note that I'm working on backporting HDFS-10744 to branch-2.7; since this 
change is not in branch-2.7, the overlapping part won't be backported.

If someone plans to backport this to branch-2.7, please remember to add the 
optimization from HDFS-10744.

> Create API and command-line argument to get quota and quota usage without 
> detailed content summary
> --
>
> Key: HDFS-8898
> URL: https://issues.apache.org/jira/browse/HDFS-8898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Joep Rottinghuis
>Assignee: Ming Ma
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8898-2.patch, HDFS-8898-3.patch, HDFS-8898-4.patch, 
> HDFS-8898-5.patch, HDFS-8898-6.patch, HDFS-8898-branch-2.patch, 
> HDFS-8898.patch
>
>
> On large directory structures it takes significant time to iterate through 
> the file and directory counts recursively to get a complete ContentSummary.
> When you just want to check the quota on a higher-level directory, it would be 
> good to have an option to skip the file and directory counts.
> Moreover, currently one can only check the quota if you have access to all 
> the directories underneath. For example, if I have a large home directory 
> under /user/joep and I host some files for another user in a sub-directory, 
> the moment they create an unreadable sub-directory under my home I can no 
> longer check what my quota is. Understood that I cannot check the current 
> file counts unless I can iterate through all the usage, but for 
> administrative purposes it is nice to be able to get the current quota 
> setting on a directory without the need to iterate through and run into 
> permission issues on sub-directories.






[jira] [Updated] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10858:
-
Status: Patch Available  (was: Open)

> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
> Key: HDFS-10858
> URL: https://issues.apache.org/jira/browse/HDFS-10858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-10858.000.patch
>
>
> In BlockManager#reportDiffSorted:
> {code}
> } else if (reportedState == ReplicaState.FINALIZED &&
>(storedBlock.findStorageInfo(storageInfo) == -1 ||
> corruptReplicas.isReplicaCorrupt(storedBlock, dn))) {
>   // Add replica if appropriate. If the replica was previously corrupt
>   // but now okay, it might need to be updated.
>   toAdd.add(new BlockInfoToAdd(storedBlock, replica));
> }
> {code}
> "new BlockInfoToAdd(storedBlock, replica)" is wrong because "replica" (i.e., 
> the reported block) is a reused object provided by BlockListAsLongs#iterator: 
> the iterator later reuses the object by directly changing its ID/GS. As a 
> result, {{addStoredBlock}} can end up with a wrong (reportedBlock, 
> storedBlockInfo) mapping. For EC, the reported block is used to calculate the 
> internal block index, so the bug can completely corrupt the EC block group's 
> internal state.
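The bug class described above — storing a reference to a mutable object that the iterator will reuse — can be demonstrated in miniature. Block and reusingIterable below are illustrative stand-ins (not BlockInfoToAdd or BlockListAsLongs); the fix, as in the patch, is to copy the object before storing it:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Miniature reproduction of the object-reuse bug: an iterator that hands
// out one shared mutable object, and a consumer that keeps the reference.
public class ReusedIteratorObject {
    // A mutable "block" with an id, mimicking the reused replica object.
    static class Block {
        long id;
        Block(long id) { this.id = id; }
    }

    // Iterator that reuses a single Block instance, mutating its id in place.
    static Iterable<Block> reusingIterable(long[] ids) {
        return () -> new Iterator<Block>() {
            private final Block shared = new Block(0);
            private int i = 0;
            public boolean hasNext() { return i < ids.length; }
            public Block next() { shared.id = ids[i++]; return shared; }
        };
    }

    // Buggy: stores the shared reference; every kept element ends up
    // carrying the last id the iterator wrote.
    static List<Long> collectBuggy(long[] ids) {
        List<Block> kept = new ArrayList<>();
        for (Block b : reusingIterable(ids)) kept.add(b);   // no copy
        List<Long> out = new ArrayList<>();
        for (Block b : kept) out.add(b.id);
        return out;
    }

    // Fixed: makes a defensive copy before storing, so later mutation of
    // the shared object cannot corrupt what was recorded.
    static List<Long> collectFixed(long[] ids) {
        List<Block> kept = new ArrayList<>();
        for (Block b : reusingIterable(ids)) kept.add(new Block(b.id)); // copy
        List<Long> out = new ArrayList<>();
        for (Block b : kept) out.add(b.id);
        return out;
    }

    public static void main(String[] args) {
        long[] ids = {101, 102, 103};
        System.out.println("buggy: " + collectBuggy(ids));  // [103, 103, 103]
        System.out.println("fixed: " + collectFixed(ids));  // [101, 102, 103]
    }
}
```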






[jira] [Updated] (HDFS-10858) FBR processing may generate incorrect reportedBlock-blockGroup mapping

2016-09-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10858:
-
Attachment: HDFS-10858.000.patch

> FBR processing may generate incorrect reportedBlock-blockGroup mapping
> --
>
> Key: HDFS-10858
> URL: https://issues.apache.org/jira/browse/HDFS-10858
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-10858.000.patch
>
>
> In BlockManager#reportDiffSorted:
> {code}
> } else if (reportedState == ReplicaState.FINALIZED &&
>(storedBlock.findStorageInfo(storageInfo) == -1 ||
> corruptReplicas.isReplicaCorrupt(storedBlock, dn))) {
>   // Add replica if appropriate. If the replica was previously corrupt
>   // but now okay, it might need to be updated.
>   toAdd.add(new BlockInfoToAdd(storedBlock, replica));
> }
> {code}
> "new BlockInfoToAdd(storedBlock, replica)" is wrong because "replica" (i.e., 
> the reported block) is a reused object provided by BlockListAsLongs#iterator: 
> the iterator later reuses the object by directly changing its ID/GS. As a 
> result, {{addStoredBlock}} can end up with a wrong (reportedBlock, 
> storedBlockInfo) mapping. For EC, the reported block is used to calculate the 
> internal block index, so the bug can completely corrupt the EC block group's 
> internal state.






[jira] [Commented] (HDFS-8335) FSNamesystem should construct FSPermissionChecker only if permission is enabled

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485331#comment-15485331
 ] 

Zhe Zhang commented on HDFS-8335:
-

Quick note that I'm working on backporting HDFS-10744 to branch-2.7; since this 
change is not in branch-2.7, the overlapping part won't be backported.

If someone plans to backport this to branch-2.7, please remember to add the 
optimization from HDFS-10744.

> FSNamesystem should construct FSPermissionChecker only if permission is 
> enabled
> ---
>
> Key: HDFS-8335
> URL: https://issues.apache.org/jira/browse/HDFS-8335
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.8.0, 3.0.0-alpha1
>Reporter: David Bryson
>Assignee: Gabor Liptak
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8335.2.patch, HDFS-8335.3.patch, HDFS-8335.patch
>
>
> FSNamesystem (2.5.x) / FSDirStatAndListingOp (current trunk) getFileInfo and 
> getListingInt methods call getPermissionChecker() to construct an 
> FSPermissionChecker regardless of isPermissionEnabled(). When permission 
> checking is disabled, this leads to an unnecessary performance hit: 
> constructing a UserGroupInformation object that is never used.
> For example, in a stack dump taken while driving concurrent requests, the 
> handler threads all end up blocking.
> Here's the thread holding the lock:
> "IPC Server handler 9 on 9000" daemon prio=10 tid=0x7f78d8b9e800 
> nid=0x142f3 runnable [0x7f78c2ddc000]
>java.lang.Thread.State: RUNNABLE
> at java.io.FileInputStream.readBytes(Native Method)
> at java.io.FileInputStream.read(FileInputStream.java:272)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> - locked <0x0007d9b105c0> (a java.lang.UNIXProcess$ProcessPipeInputStream)
> at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at java.io.InputStreamReader.read(InputStreamReader.java:184)
> at java.io.BufferedReader.fill(BufferedReader.java:154)
> at java.io.BufferedReader.read1(BufferedReader.java:205)
> at java.io.BufferedReader.read(BufferedReader.java:279)
> - locked <0x0007d9b1a888> (a java.io.InputStreamReader)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
> at org.apache.hadoop.util.Shell.run(Shell.java:455)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
> at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:84)
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
> at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
> at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
> at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1474)
> - locked <0x0007a6df75f8> (a 
> org.apache.hadoop.security.UserGroupInformation)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:82)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3534)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4489)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4478)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:898)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:602)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.ipc.Server$Handler

[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485307#comment-15485307
 ] 

Xiao Chen commented on HDFS-10859:
--

Ah I see, testUnknownDatanodeSimple passed locally for me. FYI, I'm running 
locally on OS X with jdk1.7.79.

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485306#comment-15485306
 ] 

Zhe Zhang commented on HDFS-10859:
--

{{testBalancerWithKeytabs}} fails with {{java.lang.NoClassDefFoundError: 
jdbm/helper/CachePolicy}}.

{{testUnknownDatanodeSimple}} fails with the log I attached. Sorry about the 
confusion.

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Commented] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485302#comment-15485302
 ] 

Zhe Zhang commented on HDFS-8815:
-

Quick note that I'm working on backporting HDFS-10744 to branch-2.7; since this 
change is not in branch-2.7, the overlapping part won't be backported.

If someone plans to backport this to branch-2.7, please remember to add the 
optimization from HDFS-10744. 

> DFS getStoragePolicy implementation using single RPC call
> -
>
> Key: HDFS-8815
> URL: https://issues.apache.org/jira/browse/HDFS-8815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8815-001.patch, HDFS-8815-002.patch, 
> HDFS-8815-003.patch, HDFS-8815-004.patch
>
>
> HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
> implementation of the call requires two RPC calls, the first to fetch the 
> storage policy ID and the second to fetch the policy suite to map the policy 
> ID to a {{BlockStoragePolicySpi}}.
> Fix the implementation to require a single RPC call.






[jira] [Commented] (HDFS-10859) TestBalancer#testUnknownDatanodeSimple fails in branch-2.7

2016-09-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485303#comment-15485303
 ] 

Xiao Chen commented on HDFS-10859:
--

Sorry... what other failure?

> TestBalancer#testUnknownDatanodeSimple fails in branch-2.7
> --
>
> Key: HDFS-10859
> URL: https://issues.apache.org/jira/browse/HDFS-10859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover, test
>Affects Versions: 2.7
>Reporter: Zhe Zhang
>Priority: Minor
> Attachments: testUnknownDatanodeSimple-failure.log
>
>







[jira] [Commented] (HDFS-8983) NameNode support for protected directories

2016-09-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15485300#comment-15485300
 ] 

Zhe Zhang commented on HDFS-8983:
-

Quick note that I'm working on backporting HDFS-10744 to branch-2.7; since this 
change is not in branch-2.7, the overlapping part won't be backported.

If someone plans to backport this to branch-2.7, please remember to add the 
optimization from HDFS-10744. 

> NameNode support for protected directories
> --
>
> Key: HDFS-8983
> URL: https://issues.apache.org/jira/browse/HDFS-8983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8393.01.patch, HDFS-8393.02.patch, 
> HDFS-8983.03.patch, HDFS-8983.04.patch
>
>
> To protect important system directories from inadvertent deletion (e.g. 
> /Users) the NameNode can allow marking directories as _protected_. Such 
> directories cannot be deleted unless they are empty. 





