[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2015-11-15 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15006278#comment-15006278
 ] 

Walter Su commented on HDFS-9337:
-

{noformat}
curl -i -X POST 
"http://:/webhdfs/v1/?op=CONCAT&sources="
{noformat}

throws a user-unfriendly stack trace when {{sources}} is missing.

{noformat}
curl -i -X PUT "http://:/webhdfs/v1/?op=SETXATTR
  &xattr.name=&xattr.value=
  &flag="
{noformat}
throws an NPE when {{xattr.name}} is missing. Also, {{xattr.value}} is actually 
not required, so the documentation is wrong in this case.

Let's fix these issues together, shall we? I haven't found other NPEs. I'd 
appreciate it if you could re-check all the required params against the 
documentation.
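
For reference, a minimal sketch of the kind of up-front validation that would 
avoid these NPEs (a hypothetical helper, not the actual patch):

{code}
// Sketch only: validate a required WebHDFS parameter before use, failing
// with a descriptive error instead of an NPE deep inside the handler.
static String checkRequiredParam(final String value, final String name) {
  if (value == null || value.isEmpty()) {
    throw new IllegalArgumentException(
        "Required parameter \"" + name + "\" is missing");
  }
  return value;
}

// e.g. in the SETXATTR path (parameter names are illustrative):
//   String name = checkRequiredParam(xattrNameParam.getValue(), "xattr.name");
{code}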

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch, HDFS-9337_03.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT&snapshotname=SNAPSHOTNAME";
> {code}
> Null point exception will be thrown
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9432) WebHDFS: Some GET operations are wrongly documented as PUT operations

2015-11-15 Thread Walter Su (JIRA)
Walter Su created HDFS-9432:
---

 Summary: WebHDFS: Some GET operations are wrongly documented as 
PUT operations
 Key: HDFS-9432
 URL: https://issues.apache.org/jira/browse/HDFS-9432
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Walter Su
Priority: Minor


In 
[document|https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_an_XAttr]
{noformat}
curl -i -X PUT "http://:/webhdfs/v1/?op=GETXATTRS
  &encoding="
{noformat}

Actually it's a GET. If you issue it as a PUT, you get:
{noformat}
{"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Invalid
 value for webhdfs parameter \"op\": No enum constant 
org.apache.hadoop.hdfs.web.resources.PutOpParam.Op.GETXATTRS"}}
{noformat}
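
Presumably the corrected documentation entry would be the same command issued 
as a GET (placeholders as in the document):

{noformat}
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETXATTRS
  &encoding=<ENCODING>"
{noformat}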



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9403) Erasure coding: some EC tests are missing timeout

2015-11-15 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui reassigned HDFS-9403:
-

Assignee: GAO Rui

> Erasure coding: some EC tests are missing timeout
> -
>
> Key: HDFS-9403
> URL: https://issues.apache.org/jira/browse/HDFS-9403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Zhe Zhang
>Assignee: GAO Rui
>Priority: Minor
>
> The EC data writing pipeline is still being worked on, and bugs could cause 
> the program to hang. We should add a timeout to all tests involving striped 
> writing (a sketch follows below). I see at least the following:
> * {{TestErasureCodingPolicies}}
> * {{TestFileStatusWithECPolicy}}
> * {{TestDFSStripedOutputStream}}
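
For reference, a minimal sketch of the kind of timeout these tests would gain 
(class name and values are illustrative assumptions, not from the issue):

{code}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

// Sketch: fail a hung striped write instead of hanging the whole build.
public class TestStripedWriteTimeoutSketch {
  // Class-wide rule: every test in the class gets a 300s budget.
  @Rule
  public Timeout globalTimeout = new Timeout(300000);

  // ...or per method:
  @Test(timeout = 120000)
  public void testStripedWrite() throws Exception {
    // striped-write test body
  }
}
{code}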



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9428:
---
Status: Patch Available  (was: Open)

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9428.001.patch, HDFS-9428.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9428:
---
Attachment: HDFS-9428.002.patch

I added the fix in the wrong place... I've updated the patch.

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9428.001.patch, HDFS-9428.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9428:
---
Status: Open  (was: Patch Available)

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9428.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9428:
---
Attachment: HDFS-9428.001.patch

I attached 001.

The assertion failed because the condition was checked before all datanodes 
had processed their IBRs, which happen asynchronously to 
{{FSDataOutputStream#close}}. {{MiniDFSCluster#triggerHeartbeats}} can be used 
to wait for the IBRs to be processed.
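
A minimal sketch of that test-side fix ({{cluster}} and {{out}} are assumed 
test fixtures; this is not the attached patch):

{code}
// Sketch: the IBR for the finalized block is asynchronous to close(),
// so nudge the datanodes before asserting on replication state.
out.close();
cluster.triggerHeartbeats();  // MiniDFSCluster: let pending IBRs reach the NN
// ... safe to assert on block/replication state from here ...
{code}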

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9428.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9428) Fix intermittent failure of TestDNFencing.testQueueingWithAppend

2015-11-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9428:
---
Status: Patch Available  (was: Open)

> Fix intermittent failure of TestDNFencing.testQueueingWithAppend
> 
>
> Key: HDFS-9428
> URL: https://issues.apache.org/jira/browse/HDFS-9428
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9428.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9431) DistributedFileSystem#concat fails if the target path is relative.

2015-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005925#comment-15005925
 ] 

Hadoop QA commented on HDFS-9431:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 143m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestLease |
|   | hadoop.hdfs.TestMissingBlocksAlert |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs

[jira] [Commented] (HDFS-9375) Set balancer bandwidth for specific node

2015-11-15 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005910#comment-15005910
 ] 

Lin Yiqun commented on HDFS-9375:
-

Hi [~brahmareddy], does this patch need to be modified again, and if so, which 
parts still need changes? Thanks!

> Set balancer bandwidth for specific node
> 
>
> Key: HDFS-9375
> URL: https://issues.apache.org/jira/browse/HDFS-9375
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9375.001.patch, HDFS-9375.002.patch, 
> HDFS-9375.003.patch
>
>
> Even though the balancer has improved a lot, in some cases it is still slow. 
> For example, when the cluster is extended, the new nodes all need to balance 
> data from the existing nodes. To improve balancer speed we generally use the 
> {{setBalancerBandwidth}} command of {{dfsadmin}}, but that sets the same 
> bandwidth for every node. Obviously we could grant more bandwidth to the new 
> nodes, since they lack data, and let them start serving once they have 
> balanced enough data. So we can define a new clientDatanode interface to set 
> a specific node's bandwidth.
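
A hypothetical sketch of the proposed per-node API (the interface name, method, 
and dfsadmin flag shown here are illustrative, not an existing API):

{code}
// Hypothetical addition to the ClientDatanode interface: set the balancer
// bandwidth on a single datanode instead of broadcasting to all of them.
public interface ClientDatanodeProtocolSketch {
  /** Set the balancer bandwidth (bytes/sec) on this datanode only. */
  void setBalancerBandwidth(long bandwidthBytesPerSec) throws java.io.IOException;
}

// Hypothetical dfsadmin usage:
//   hdfs dfsadmin -setBalancerBandwidth -node <datanode:port> <bandwidth>
{code}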



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9381) When same block came for replication for Striped mode, we can move that block to PendingReplications

2015-11-15 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005870#comment-15005870
 ] 

Rakesh R commented on HDFS-9381:


Thanks [~umamaheswararao] for the good work.

bq. maybe we need to have an "inactive" pending replication list
It's an interesting idea to have an 'inactive' list and reduce the operations 
under the lock. Just adding one thought: IIUC, if an admin thinks 
{{ECRecoveryWorker}} takes more recovery time (it involves more CPU + IO) than 
normal block recovery, there is a high chance of them increasing the 
{{dfs.namenode.replication.pending.timeout-sec}} recovery window (defaults to 
5 secs). In that case it would be more likely to hit this situation.
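
For reference, that window is controlled by the following NameNode setting 
(the value shown is only an example of widening it):

{noformat}
<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.replication.pending.timeout-sec</name>
  <value>300</value>
</property>
{noformat}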

> When same block came for replication for Striped mode, we can move that block 
> to PendingReplications
> 
>
> Key: HDFS-9381
> URL: https://issues.apache.org/jira/browse/HDFS-9381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, namenode
>Affects Versions: 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9381.00.patch
>
>
> I noticed that we currently just return null if the block already exists in 
> pendingReplications in the replication flow for striped blocks.
> {code}
> if (block.isStriped()) {
>   if (pendingNum > 0) {
> // Wait the previous recovery to finish.
> return null;
>   }
> {code}
>  Here, if we just return null and neededReplications contains only a few 
> blocks (by default, fewer than numLiveNodes*2), the same blocks can be 
> picked again from neededReplications in the next loop, because we are not 
> removing the element from neededReplications. Since this replication 
> processing has to take the fsnamesystem lock, we may spend time 
> unnecessarily in every loop. 
> So my suggestion/improvement is:
>  Instead of just returning null, how about incrementing pendingReplications 
> for this block and removing it from neededReplications? Another point to 
> consider: to add a block into pendingReplications we generally need a 
> target, namely the node to which we issued the replication command. Later, 
> after the replication succeeds and the DN reports it, the block is removed 
> from pendingReplications in NN addBlock. 
>  Since this is a newly picked block from neededReplications, we would not 
> have selected a target yet. So which target should be passed to 
> pendingReplications if we add this block? One option I am thinking of is to 
> pass the srcNode itself as the target for this special condition. If the 
> block is really missed, the srcNode will not report it, so the block will 
> not be removed from pendingReplications; when it times out, it will be 
> considered for replication again, and at that time the regular replication 
> flow will find an actual target.
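
A rough sketch of that proposal (the surrounding names approximate the 
BlockManager code of that era and are assumptions, not the actual patch):

{code}
// Sketch only: instead of returning null and letting the scan re-pick the
// block, park it in pendingReplications with srcNode as a placeholder
// target and drop it from neededReplications.
if (block.isStriped() && pendingNum > 0) {
  pendingReplications.increment(block,
      new DatanodeDescriptor[] { srcNode });   // placeholder target
  neededReplications.remove(block, priority);  // stop re-picking it
  return null;  // on pending timeout it is retried with a real target
}
{code}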



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9431) DistributedFileSystem#concat fails if the target path is relative.

2015-11-15 Thread Kazuho Fujii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kazuho Fujii updated HDFS-9431:
---
Attachment: HDFS-9431.002.patch

Hi, [~liuml07]. Thanks for the comment.

I added some assertions to the test. I think this is enough, because the test 
only expects that the operation does not fail.

{quote}
the trunk code fails the newly added test because pathname is not a valid DFS 
filename?
{quote}

Yes. The {{getPathName}} method expects a URI with an absolute path.
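
A minimal sketch of the fix implied here, mirroring the second call in 
{{concat}} (not necessarily the committed patch):

{code}
// Sketch: qualify the target path before resolving it, the same way the
// source paths already are, so getPathName() sees an absolute URI.
Path absF = fixRelativePart(trg);          // make the target path absolute
dfs.concat(getPathName(absF), srcsStr);    // was: getPathName(trg)
{code}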

> DistributedFileSystem#concat fails if the target path is relative.
> --
>
> Key: HDFS-9431
> URL: https://issues.apache.org/jira/browse/HDFS-9431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Kazuho Fujii
>Assignee: Kazuho Fujii
> Attachments: HDFS-9431.001.patch, HDFS-9431.002.patch
>
>
> {{DistributedFileSystem#concat}} fails if the target path is relative.
> The method tries to send a relative path to DFSClient at the first call.
> bq.  dfs.concat(getPathName(trg), srcsStr);
> But, {{getPathName}} failed. It seems that {{trg}} should be {{absF}} like 
> the second call.
> bq.  dfs.concat(getPathName(absF), srcsStr);



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9413) getContentSummary() on standby should throw StandbyException

2015-11-15 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005821#comment-15005821
 ] 

Vinayakumar B commented on HDFS-9413:
-

IMO this should go to 2.7.2 as well. Agree?

> getContentSummary() on standby should throw StandbyException
> 
>
> Key: HDFS-9413
> URL: https://issues.apache.org/jira/browse/HDFS-9413
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9413-002.patch, HDFS-9413-003.patch, HDFS-9413.patch
>
>
> Currently when we call getContentSummary() on standby it will not throw 
> StandbyException.
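
A sketch of the kind of guard involved, assuming the usual FSNamesystem 
pattern (not necessarily the exact patch):

{code}
// Sketch: check the HA state before serving the read, so a standby
// NameNode rejects the call with StandbyException up front.
ContentSummary getContentSummary(final String src) throws IOException {
  checkOperation(OperationCategory.READ);  // throws StandbyException on standby
  // ... existing content-summary logic ...
}
{code}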



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)