[jira] [Assigned] (HDFS-6205) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-09 Thread sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sathish reassigned HDFS-6205:
-

Assignee: sathish

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HDFS-6205
 URL: https://issues.apache.org/jira/browse/HDFS-6205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor

 If users don't specify the perm of an acl when using the FsShell's setfacl 
 command, a fatal internal error ArrayIndexOutOfBoundsException will be thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in <aclSpec> : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
 <path>]|[--set <acl_spec> <path>]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6205) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-09 Thread sathish (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963878#comment-13963878
 ] 

sathish commented on HDFS-6205:
---

The setfacl command with the -m option takes includePermission as true by 
default, so if we don't provide the permissions it will throw the 
ArrayIndexOutOfBoundsException.
{code}
if (includePermission) {
  if (split.length <= index) {
    throw new HadoopIllegalArgumentException("Invalid <aclSpec> : "
        + aclStr);
  }
  String permission = split[index];
{code}

Since we are not giving the permission, the split result has only two 
elements, so the code accesses the third element (index 2) and throws the 
ArrayIndexOutOfBoundsException.
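
For illustration, a minimal standalone reproduction of that behavior (plain 
Java, independent of the Hadoop code): {{String.split}} drops trailing empty 
strings, so an entry with no permission component splits into only two elements.
{code}
public class AclSplitDemo {
  public static void main(String[] args) {
    // "user:bob:" splits into just ["user", "bob"]; the trailing empty
    // permission component is discarded by String.split
    String[] split = "user:bob:".split(":");
    System.out.println(split.length);  // prints 2
    System.out.println(split[2]);      // throws ArrayIndexOutOfBoundsException
  }
}
{code}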

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HDFS-6205
 URL: https://issues.apache.org/jira/browse/HDFS-6205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor

 If users don't specify the perm of an acl when using the FsShell's setfacl 
 command, a fatal internal error ArrayIndexOutOfBoundsException will be thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in <aclSpec> : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
 <path>]|[--set <acl_spec> <path>]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6194:


Attachment: HDFS-6194.patch

Attaching a patch.

 Create new tests for {{ByteRangeInputStream}}
 -

 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Attachments: HDFS-6194.patch


 HDFS-5570 removes old tests for {{ByteRangeInputStream}}, because the tests 
 are tightly coupled with hftp / hsftp. New tests need to be written 
 because the same class is also used by {{WebHdfsFileSystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6194:


Status: Patch Available  (was: Open)

 Create new tests for {{ByteRangeInputStream}}
 -

 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Attachments: HDFS-6194.patch


 HDFS-5570 removes old tests for {{ByteRangeInputStream}}, because the tests 
 are tightly coupled with hftp / hsftp. New tests need to be written 
 because the same class is also used by {{WebHdfsFileSystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963901#comment-13963901
 ] 

Hadoop QA commented on HDFS-6170:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639338/HDFS-6170.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  org.apache.hadoop.hdfs.TestPread

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6627//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6627//console

This message is automatically generated.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.patch


 WebImageViewer is created by HDFS-5978 but now supports only {{LISTSTATUS}} 
 operation. {{GETFILESTATUS}} operation is required for users to execute hdfs 
 dfs -ls webhdfs://foo on WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963927#comment-13963927
 ] 

Akira AJISAKA commented on HDFS-6170:
-

The test failures should be unrelated to the patch.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.patch


 WebImageViewer is created by HDFS-5978 but now supports only {{LISTSTATUS}} 
 operation. {{GETFILESTATUS}} operation is required for users to execute hdfs 
 dfs -ls webhdfs://foo on WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6205) FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is specified

2014-04-09 Thread sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sathish updated HDFS-6205:
--

Attachment: HDFS-6205.patch

Attaching a patch with a small fix. Please review it.

 FsShell setfacl can throw ArrayIndexOutOfBoundsException when no perm is 
 specified
 --

 Key: HDFS-6205
 URL: https://issues.apache.org/jira/browse/HDFS-6205
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: sathish
Priority: Minor
 Attachments: HDFS-6205.patch


 If users don't specify the perm of an acl when using the FsShell's setfacl 
 command, a fatal internal error ArrayIndexOutOfBoundsException will be thrown.
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob: /user/hdfs/td1
 -setfacl: Fatal internal error
 java.lang.ArrayIndexOutOfBoundsException: 2
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclEntry(AclEntry.java:285)
   at 
 org.apache.hadoop.fs.permission.AclEntry.parseAclSpec(AclEntry.java:221)
   at 
 org.apache.hadoop.fs.shell.AclCommands$SetfaclCommand.processOptions(AclCommands.java:260)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
 [root@hdfs-nfs ~]# 
 {code}
 An improvement would be if it returned something like this:
 {code}
 [root@hdfs-nfs ~]# hdfs dfs -setfacl -m user:bob:rww /user/hdfs/td1
 -setfacl: Invalid permission in <aclSpec> : user:bob:rww
 Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
 <path>]|[--set <acl_spec> <path>]
 [root@hdfs-nfs ~]# 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6213) TestDataNode failing on Jenkins runs due to DN web port in use

2014-04-09 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-6213:


 Summary: TestDataNode failing on Jenkins runs due to DN web port 
in use
 Key: HDFS-6213
 URL: https://issues.apache.org/jira/browse/HDFS-6213
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor


The {{TestDataNode}} test fails on some runs from the port 50075 being in use 
-the DN the test is creating is picking up the default web port, and if it is 
in use (previous test?) the test fails.





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6213) TestDataNodeConfig failing on Jenkins runs due to DN web port in use

2014-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-6213:
-

Description: 
The {{TestDataNodeConfig}} test fails on some runs from the port 50075 being in 
use -the DN the test is creating is picking up the default web port, and if it 
is in use (previous test?) the test fails.



  was:
The {{TestDataNode}} test fails on some runs from the port 50075 being in use 
-the DN the test is creating is picking up the default web port, and if it is 
in use (previous test?) the test fails.




 TestDataNodeConfig failing on Jenkins runs due to DN web port in use
 

 Key: HDFS-6213
 URL: https://issues.apache.org/jira/browse/HDFS-6213
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor

 The {{TestDataNodeConfig}} test fails on some runs from the port 50075 being 
 in use -the DN the test is creating is picking up the default web port, and 
 if it is in use (previous test?) the test fails.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6213) TestDataNode failing on Jenkins runs due to DN web port in use

2014-04-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963955#comment-13963955
 ] 

Steve Loughran commented on HDFS-6213:
--

Stack trace from a Jenkins build
{code}
java.net.BindException: Port in use: 0.0.0.0:50075
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:853)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:794)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:372)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:730)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:278)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1871)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1765)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1805)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1795)
at 
org.apache.hadoop.hdfs.TestDatanodeConfig.testMemlockLimit(TestDatanodeConfig.java:133)
{code}
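
One way to make such tests immune to the fixed default port is to bind the DN 
web server to an ephemeral port. A minimal sketch ({{dfs.datanode.http.address}} 
is the standard config key; whether this fits the test's setup is an assumption):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// ask the OS for a free port instead of relying on the default 50075
Configuration conf = new HdfsConfiguration();
conf.set(DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_KEY, "127.0.0.1:0");
{code}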

 TestDataNode failing on Jenkins runs due to DN web port in use
 --

 Key: HDFS-6213
 URL: https://issues.apache.org/jira/browse/HDFS-6213
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor

 The {{TestDataNode}} test fails on some runs from the port 50075 being in use 
 -the DN the test is creating is picking up the default web port, and if it is 
 in use (previous test?) the test fails.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6213) TestDataNodeConfig failing on Jenkins runs due to DN web port in use

2014-04-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-6213:
-

Summary: TestDataNodeConfig failing on Jenkins runs due to DN web port in 
use  (was: TestDataNode failing on Jenkins runs due to DN web port in use)

 TestDataNodeConfig failing on Jenkins runs due to DN web port in use
 

 Key: HDFS-6213
 URL: https://issues.apache.org/jira/browse/HDFS-6213
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor

 The {{TestDataNode}} test fails on some runs from the port 50075 being in use 
 -the DN the test is creating is picking up the default web port, and if it is 
 in use (previous test?) the test fails.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13963976#comment-13963976
 ] 

Steve Loughran commented on HDFS-6143:
--

Daryn,

Having spent time looking at traces of swift FS operations, the combination of 
Open+seek is ubiquitous, and it is expensive over long-distance links, 
especially with HTTP in the story.

But: we do expect {{open(path)}} to fail if it's not there -changing that is a 
major change in expectations.

What would make sense -long term- is a new operation {{openAt(Path, 
offset)}}. For any of the HTTP filesystems, this would do a GET from the offset 
at open time.

Short term, looking at {{ByteRangeInputStream}}, it's inefficient in that even 
for a single-byte forward seek ({{seek(getPos()+1)}}), it closes the connection 
and re-opens it, which adds the cost of setting up the connection and resets 
all flow control data on the channel. If you look at {{SwiftNativeInputStream}} 
you can see how it does read-ahead for short range seeks, which is a lot more 
efficient for any code that is reading and skipping ahead. Someone should think 
about doing that here as it would reduce the cost of those seeks.
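
For illustration, a rough sketch of the read-ahead idea (the threshold, fields, 
and {{reopen}} helper are assumptions, not the actual {{ByteRangeInputStream}} 
internals):
{code}
// skip small forward seeks on the open connection; only reopen the HTTP
// connection for backward or long-distance seeks
private static final int READAHEAD_LIMIT = 64 * 1024; // assumed threshold

public void seek(long targetPos) throws IOException {
  long skip = targetPos - getPos();
  if (skip > 0 && skip <= READAHEAD_LIMIT) {
    // cheap: read and discard bytes, keeping the connection alive
    org.apache.hadoop.io.IOUtils.skipFully(in, skip);
    pos = targetPos;
  } else if (skip != 0) {
    reopen(targetPos); // expensive: tear down and issue a new ranged GET
  }
}
{code}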

 WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
 paths
 

 Key: HDFS-6143
 URL: https://issues.apache.org/jira/browse/HDFS-6143
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Fix For: 2.5.0

 Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
 HDFS-6143-trunk-after-HDFS-5570.v02.patch, HDFS-6143.v01.patch, 
 HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
 HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handles 
 non-existing paths. 
 - 'open', does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound, it's deferred until next 
 read. It's counterintuitive and not how local FS or HDFS work. In POSIX you 
 get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-exitsing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964000#comment-13964000
 ] 

Hadoop QA commented on HDFS-6194:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639355/HDFS-6194.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6628//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6628//console

This message is automatically generated.

 Create new tests for {{ByteRangeInputStream}}
 -

 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Attachments: HDFS-6194.patch


 HDFS-5570 removes old tests for {{ByteRangeInputStream}}, because the tests 
 are tightly coupled with hftp / hsftp. New tests need to be written 
 because the same class is also used by {{WebHdfsFileSystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6169) Move the address in WebImageViewer

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964035#comment-13964035
 ] 

Hudson commented on HDFS-6169:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #534 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/534/])
HDFS-6169. Move the address in WebImageViewer. Contributed by Akira Ajisaka. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1585802)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


 Move the address in WebImageViewer
 --

 Key: HDFS-6169
 URL: https://issues.apache.org/jira/browse/HDFS-6169
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6169.2.patch, HDFS-6169.3.patch, HDFS-6169.4.patch, 
 HDFS-6169.5.patch, HDFS-6169.6.patch, HDFS-6169.7.patch, HDFS-6169.patch


 Move the endpoint of WebImageViewer from http://hostname:port/ to 
 http://hostname:port/webhdfs/v1/ to support {{hdfs dfs -ls}} to 
 WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6169) Move the address in WebImageViewer

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964140#comment-13964140
 ] 

Hudson commented on HDFS-6169:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1752 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1752/])
HDFS-6169. Move the address in WebImageViewer. Contributed by Akira Ajisaka. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1585802)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


 Move the address in WebImageViewer
 --

 Key: HDFS-6169
 URL: https://issues.apache.org/jira/browse/HDFS-6169
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6169.2.patch, HDFS-6169.3.patch, HDFS-6169.4.patch, 
 HDFS-6169.5.patch, HDFS-6169.6.patch, HDFS-6169.7.patch, HDFS-6169.patch


 Move the endpoint of WebImageViewer from http://hostname:port/ to 
 http://hostname:port/webhdfs/v1/ to support {{hdfs dfs -ls}} to 
 WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-5983:
-

Assignee: Ming Ma  (was: Chen He)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6169) Move the address in WebImageViewer

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964170#comment-13964170
 ] 

Hudson commented on HDFS-6169:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1726 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1726/])
HDFS-6169. Move the address in WebImageViewer. Contributed by Akira Ajisaka. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1585802)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


 Move the address in WebImageViewer
 --

 Key: HDFS-6169
 URL: https://issues.apache.org/jira/browse/HDFS-6169
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6169.2.patch, HDFS-6169.3.patch, HDFS-6169.4.patch, 
 HDFS-6169.5.patch, HDFS-6169.6.patch, HDFS-6169.7.patch, HDFS-6169.patch


 Move the endpoint of WebImageViewer from http://hostname:port/ to 
 http://hostname:port/webhdfs/v1/ to support {{hdfs dfs -ls}} to 
 WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964185#comment-13964185
 ] 

Chen He commented on HDFS-5983:
---

Hi Ming, I am reviewing your code now.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6214) Webhdfs has poor throughput for files >2GB

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6214:
-

 Summary: Webhdfs has poor throughput for files >2GB
 Key: HDFS-6214
 URL: https://issues.apache.org/jira/browse/HDFS-6214
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


For the DN's open call, jetty returns a Content-Length header for files <2GB, 
and uses chunking for files >2GB.  A bug in jetty's buffer handling results 
in a ~8X reduction in throughput.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6160) TestSafeMode occasionally fails

2014-04-09 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964250#comment-13964250
 ] 

Sanjay Radia commented on HDFS-6160:


+1

 TestSafeMode occasionally fails
 ---

 Key: HDFS-6160
 URL: https://issues.apache.org/jira/browse/HDFS-6160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Ted Yu
Assignee: Arpit Agarwal
 Attachments: HDFS-6160.01.patch


 From 
 https://builds.apache.org/job/PreCommit-HDFS-Build/6511//testReport/org.apache.hadoop.hdfs/TestSafeMode/testInitializeReplQueuesEarly/
  :
 {code}
 java.lang.AssertionError: expected:<13> but was:<0>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-6215:


Assignee: Kihwal Lee

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6215:


 Summary: Wrong error message for upgrade
 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee


UPGRADE is printed instead of -upgrade.

{panel}
File system image contains an old layout version -51.
An upgrade to version -56 is required.
Please restart NameNode with the -rollingUpgrade started option if a rolling
upgraded is already started; or restart NameNode with the UPGRADE to start a 
new upgrade.
{panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Comment: was deleted

(was: Minor but annoying.  Targeting for 2.4.1.)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Attachment: (was: HDFS-6215.patch)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee

 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Attachment: HDFS-6215.patch

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Status: Patch Available  (was: Open)

Minor but annoying.  Targeting for 2.4.1.

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Status: Open  (was: Patch Available)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee

 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Status: Patch Available  (was: Open)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Attachment: HDFS-6215.patch

Minor but annoying. Targeting for 2.4.1.

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6215:
-

Priority: Minor  (was: Major)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6216) Issues with webhdfs and http proxies

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6216:
-

 Summary: Issues with webhdfs and http proxies
 Key: HDFS-6216
 URL: https://issues.apache.org/jira/browse/HDFS-6216
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Umbrella jira for issues related to webhdfs functioning via a http proxy.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6217) Webhdfs PUT operations may not work via a http proxy

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6217:
-

 Summary: Webhdfs PUT operations may not work via a http proxy
 Key: HDFS-6217
 URL: https://issues.apache.org/jira/browse/HDFS-6217
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Most of webhdfs's PUT operations have no message body.  The HTTP/1.1 spec is 
fuzzy about how PUT requests with no body should be handled.  If the request 
does not specify chunking or Content-Length, the server _may_ consider the 
request to have no body.  However, popular proxies such as Apache Traffic 
Server will reject PUT requests with no body unless Content-Length: 0 is 
specified.
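
A minimal client-side sketch of the workaround (the endpoint URL is a 
placeholder; {{setFixedLengthStreamingMode(0)}} is the standard JDK way to 
force a {{Content-Length: 0}} header on a bodyless PUT):
{code}
import java.net.HttpURLConnection;
import java.net.URL;

URL url = new URL("http://nn.example.com:50070/webhdfs/v1/tmp/d?op=MKDIRS");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("PUT");
conn.setDoOutput(true);
conn.setFixedLengthStreamingMode(0); // sends Content-Length: 0, no chunking
conn.getOutputStream().close();      // empty body
int rc = conn.getResponseCode();     // proxies such as ATS now accept the PUT
conn.disconnect();
{code}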



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6218) Audit log should use true client IP for proxied webhdfs operations

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6218:
-

 Summary: Audit log should use true client IP for proxied webhdfs 
operations
 Key: HDFS-6218
 URL: https://issues.apache.org/jira/browse/HDFS-6218
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


When using a http proxy, it's not very useful for the audit log to contain the 
proxy's IP address.  Similar to proxy superusers, the NN should allow 
configuration of trusted proxy servers and use the X-Forwarded-For header when 
logging the client request.
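
For illustration, a hedged sketch of the header handling (the trusted-proxy 
set and the method itself are placeholders, not actual NN code or 
configuration):
{code}
import java.util.Set;
import javax.servlet.http.HttpServletRequest;

static String effectiveClientAddr(HttpServletRequest request,
    Set<String> trustedProxies) {
  String peer = request.getRemoteAddr();
  if (trustedProxies.contains(peer)) {
    String xff = request.getHeader("X-Forwarded-For");
    if (xff != null && !xff.isEmpty()) {
      // the first entry in the comma-separated list is the original client
      return xff.split(",")[0].trim();
    }
  }
  // not a trusted proxy (or no header): audit the direct peer address
  return peer;
}
{code}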



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6219) Proxy superuser configuration should use true client IP for address checks

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6219:
-

 Summary: Proxy superuser configuration should use true client IP 
for address checks
 Key: HDFS-6219
 URL: https://issues.apache.org/jira/browse/HDFS-6219
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Similar to HDFS-6218, the trusted proxies should use X-Forwarded-For when 
performing superuser checks so oozie can use webhdfs via a http proxy.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6220) Webhdfs should differentiate remote exceptions

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6220:
-

 Summary: Webhdfs should differentiate remote exceptions
 Key: HDFS-6220
 URL: https://issues.apache.org/jira/browse/HDFS-6220
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Webhdfs's validateResponse should use a distinct exception for json-decoded 
exceptions, separate from the one it uses for servlet exceptions.  Ex. a 
servlet-generated 404 with a json payload should be distinguishable from a 
http proxy or jetty generated 404.
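
A rough sketch of the distinction, assuming a {{HttpURLConnection}} response 
and a hypothetical {{parseJsonRemoteException}} helper:
{code}
int code = conn.getResponseCode();
String contentType = conn.getContentType();
if (code >= 400) {
  if (contentType != null && contentType.startsWith("application/json")) {
    // servlet-generated error carrying a json-encoded RemoteException
    throw parseJsonRemoteException(conn.getErrorStream()); // hypothetical
  }
  // proxy- or jetty-generated error with no json payload
  throw new java.io.IOException("Non-HDFS HTTP error " + code + ": "
      + conn.getResponseMessage());
}
{code}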



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964321#comment-13964321
 ] 

Jonathan Eagles commented on HDFS-6215:
---

+1. Thanks, Kihwal. It is indeed an annoyance. Committing to trunk, branch-2 
and branch-2.4.

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6160) TestSafeMode occasionally fails

2014-04-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6160:


  Resolution: Fixed
   Fix Version/s: 2.5.0
  3.0.0
Target Version/s: 2.5.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Sanjay! I committed it to trunk and branch-2.

 TestSafeMode occasionally fails
 ---

 Key: HDFS-6160
 URL: https://issues.apache.org/jira/browse/HDFS-6160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Ted Yu
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6160.01.patch


 From 
 https://builds.apache.org/job/PreCommit-HDFS-Build/6511//testReport/org.apache.hadoop.hdfs/TestSafeMode/testInitializeReplQueuesEarly/
  :
 {code}
 java.lang.AssertionError: expected:<13> but was:<0>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6160) TestSafeMode occasionally fails

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964341#comment-13964341
 ] 

Hudson commented on HDFS-6160:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5476/])
HDFS-6160. TestSafeMode occasionally fails. (Contributed by Arpit Agarwal) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586007)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


 TestSafeMode occasionally fails
 ---

 Key: HDFS-6160
 URL: https://issues.apache.org/jira/browse/HDFS-6160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Ted Yu
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6160.01.patch


 From 
 https://builds.apache.org/job/PreCommit-HDFS-Build/6511//testReport/org.apache.hadoop.hdfs/TestSafeMode/testInitializeReplQueuesEarly/
  :
 {code}
 java.lang.AssertionError: expected:<13> but was:<0>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at org.junit.Assert.assertEquals(Assert.java:456)
   at 
 org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HDFS-6215:
--

   Resolution: Fixed
Fix Version/s: 2.4.1
   2.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 3.0.0, 2.5.0, 2.4.1

 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964342#comment-13964342
 ] 

Hudson commented on HDFS-6215:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5476 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5476/])
HDFS-6215. Wrong error message for upgrade. (Kihwal Lee via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586011)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Attachments: HDFS-6215.patch


 UPGRADE is printed instead of -upgrade.
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the -rollingUpgrade started option if a rolling
 upgraded is already started; or restart NameNode with the UPGRADE to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Attachment: (was: HDFS-5983.patch)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Attachment: HDFS-5983.patch

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Attachment: HDFS-5983-updated.patch

+1, patch looks good to me. I made a small change to the patch format.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Attachment: (was: HDFS-5983-updated.patch)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Attachment: HDFS-5983-updated.patch

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964362#comment-13964362
 ] 

Chen He commented on HDFS-5983:
---

Kind reminder: next time, please take the JIRA before you start to work on it.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen from one of the precommit build of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6221) Webhdfs should recover from dead DNs

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6221:
-

 Summary: Webhdfs should recover from dead DNs
 Key: HDFS-6221
 URL: https://issues.apache.org/jira/browse/HDFS-6221
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


We've repeatedly observed the jetty acceptor thread silently dying in the DNs.  
The webhdfs servlet may also disappear and jetty returns non-json 404s.

One approach to make webhdfs more resilient to bad DNs is dfsclient-like 
fetching of block locations to directly access the DNs instead of relying on a 
NN redirect that may repeatedly send the client to the same faulty DN(s).
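
A rough sketch of that approach ({{openAt}} is a hypothetical stand-in for the code that would open a read on a single DN):

{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: fetch block locations like DFSClient does and fail over to the
// next replica's DN instead of re-following a possibly bad NN redirect.
static InputStream openWithFailover(FileSystem fs, Path path, long offset)
    throws IOException {
  BlockLocation[] blocks =
      fs.getFileBlockLocations(fs.getFileStatus(path), offset, 1);
  IOException last = null;
  for (String host : blocks[0].getHosts()) {
    try {
      return openAt(host, path, offset);  // hypothetical direct DN read
    } catch (IOException e) {
      last = e;  // this DN looks dead or faulty; try the next replica
    }
  }
  throw last != null ? last : new IOException("No datanodes for " + path);
}
{code}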



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6206) DFSUtil.substituteForWildcardAddress may throw NPE

2014-04-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6206:
--

   Resolution: Fixed
Fix Version/s: 2.4.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Arpit for reviewing the patch.

I have committed this.

 DFSUtil.substituteForWildcardAddress may throw NPE
 --

 Key: HDFS-6206
 URL: https://issues.apache.org/jira/browse/HDFS-6206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.4.1

 Attachments: h6206_20140408.patch


 InetSocketAddress.getAddress() may return null if the address is 
 unresolved.  In such a case, DFSUtil.substituteForWildcardAddress may throw an NPE.
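
Roughly, the guard looks like this (a sketch only, not the committed patch):

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;

// getAddress() returns null when the hostname is unresolved, so check for
// null before calling isAnyLocalAddress() on it.
static String substituteForWildcard(String configured, String defaultHost) {
  InetSocketAddress addr = NetUtils.createSocketAddr(configured);
  if (addr.getAddress() != null && addr.getAddress().isAnyLocalAddress()) {
    return defaultHost + ":" + addr.getPort();  // swap 0.0.0.0 for a real host
  }
  return configured;  // unresolved or already a concrete host: unchanged
}
{code}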



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6222) Remove background token renewer from webhdfs

2014-04-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6222:
-

 Summary: Remove background token renewer from webhdfs
 Key: HDFS-6222
 URL: https://issues.apache.org/jira/browse/HDFS-6222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The background token renewer is a source of problems for long-running daemons.  
Webhdfs should lazy fetch a new token when it receives an InvalidToken 
exception.
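
A minimal sketch of the lazy approach ({{WebHdfsOp}} and {{refetchDelegationToken}} are hypothetical stand-ins for webhdfs internals):

{code}
import java.io.IOException;
import org.apache.hadoop.security.token.SecretManager.InvalidToken;

// Sketch: no renewer thread; re-fetch the token only when the server says
// the current one is invalid, then retry the operation once.
interface WebHdfsOp<T> { T run() throws IOException; }

<T> T runWithLazyToken(WebHdfsOp<T> op) throws IOException {
  try {
    return op.run();
  } catch (InvalidToken e) {
    refetchDelegationToken();  // hypothetical: obtain a fresh token from the NN
    return op.run();           // single retry with the new token
  }
}
{code}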



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6208:


Attachment: HDFS-6208.1.patch

Here is a patch with these changes:
# {{FsDatasetCache}}: Close the block file and checksum file in the happy path, 
not just after a failure.  POSIX specs dictate that it's OK to close the file 
descriptor from which an mmap was obtained and then keep using the mmap, so 
this is fine (see the sketch below).
# {{MappableBlock}}: Close the {{FileChannel}} obtained for the block file and 
checksum file.  This diff looks big only because of indentation changes to 
introduce a try-finally block.
# {{TestCacheDirectives}}: At the end of every test, remove all cache 
directives and wait for 0 blocks cached.  This is done so that we know the 
mmaps are freed, and thus any underlying file locks are released, before 
starting another {{MiniDFSCluster}}.  I started out trying to hook this logic 
into the main {{DataNode}} shutdown flow, but it was going to be very brittle 
coordinating with all of the background caching and uncaching threads.  It's much 
more convenient to just rely on the OS cleanup at process shutdown.  This is 
only a problem for tests, where one process starts and stops many test clusters.
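
To illustrate point 1 with plain JDK calls (a sketch, not the patch itself):

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// The mapping survives closing the channel (and its fd), so the happy path
// can close eagerly instead of leaking descriptors until uncaching.
static MappedByteBuffer mapAndClose(String blockFile, long length)
    throws IOException {
  try (FileInputStream in = new FileInputStream(blockFile);
       FileChannel channel = in.getChannel()) {
    return channel.map(FileChannel.MapMode.READ_ONLY, 0, length);
  }  // fd released here; the MappedByteBuffer remains valid per POSIX mmap
}
{code}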

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6208:


Status: Patch Available  (was: Open)

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6204) TestRBWBlockInvalidation may fail

2014-04-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6204:
--

   Resolution: Fixed
Fix Version/s: 2.4.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Arpit for reviewing the patch.

I have committed this.

 TestRBWBlockInvalidation may fail
 -

 Key: HDFS-6204
 URL: https://issues.apache.org/jira/browse/HDFS-6204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.4.1

 Attachments: h6204_20140408.patch


 {code}
 java.lang.AssertionError: There should not be any replica in the 
 corruptReplicasMap expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN(TestRBWBlockInvalidation.java:137)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964387#comment-13964387
 ] 

Chris Nauroth commented on HDFS-6208:
-

I ran {{TestCacheDirectives}} successfully on Mac, Linux and Windows.  On 
Linux, I also ran manual end-to-end tests of caching and then doing zero-copy 
reads using a custom client app that I wrote.
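
For reference, the core of such a client is roughly the following (a sketch using the public zero-copy read API, not my actual test app):

{code}
import java.nio.ByteBuffer;
import java.util.EnumSet;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.ReadOption;
import org.apache.hadoop.io.ByteBufferPool;
import org.apache.hadoop.io.ElasticByteBufferPool;

// Read a whole stream with zero-copy; cached (mmap'd) blocks are served
// without copying, and SKIP_CHECKSUMS is acceptable because cached data
// was already verified at cache time.
static long readZeroCopy(FSDataInputStream in) throws Exception {
  ByteBufferPool pool = new ElasticByteBufferPool();
  long total = 0;
  ByteBuffer buf;
  while ((buf = in.read(pool, 1 << 20,
      EnumSet.of(ReadOption.SKIP_CHECKSUMS))) != null) {
    total += buf.remaining();
    in.releaseBuffer(buf);  // every zero-copy buffer must be released
  }
  return total;
}
{code}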

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6204) TestRBWBlockInvalidation may fail

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964392#comment-13964392
 ] 

Hudson commented on HDFS-6204:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5478/])
HDFS-6204. Fix TestRBWBlockInvalidation: change the last sleep to a loop. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586039)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java


 TestRBWBlockInvalidation may fail
 -

 Key: HDFS-6204
 URL: https://issues.apache.org/jira/browse/HDFS-6204
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.4.1

 Attachments: h6204_20140408.patch


 {code}
 java.lang.AssertionError: There should not be any replica in the 
 corruptReplicasMap expected:<0> but was:<1>
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.failNotEquals(Assert.java:647)
   at org.junit.Assert.assertEquals(Assert.java:128)
   at org.junit.Assert.assertEquals(Assert.java:472)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN(TestRBWBlockInvalidation.java:137)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6206) DFSUtil.substituteForWildcardAddress may throw NPE

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964393#comment-13964393
 ] 

Hudson commented on HDFS-6206:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5478/])
HDFS-6206. Fix NullPointerException in DFSUtil.substituteForWildcardAddress. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586034)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


 DFSUtil.substituteForWildcardAddress may throw NPE
 --

 Key: HDFS-6206
 URL: https://issues.apache.org/jira/browse/HDFS-6206
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.4.1

 Attachments: h6206_20140408.patch


 InetSocketAddress.getAddress() may return null if the address is 
 unresolved.  In such a case, DFSUtil.substituteForWildcardAddress may throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6223) Using the command setrep to set the replication factor to more than the number of datanodes with the -w parameter, the command gets into an infinite loop.

2014-04-09 Thread Jinghui Wang (JIRA)
Jinghui Wang created HDFS-6223:
--

 Summary: Using the command setrep to set the replication factor to 
more than the number of datanodes with the -w parameter, the command gets into 
an infinite loop.
 Key: HDFS-6223
 URL: https://issues.apache.org/jira/browse/HDFS-6223
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.3.0, 2.2.0, 2.1.1-beta
Reporter: Jinghui Wang
Assignee: Jinghui Wang


Using the command setrep to set the replication factor to more than the number 
of datanodes with the -w parameter gets into an infinite loop. When the -w 
parameter is present, the command enters the following code to wait for the 
replication factor to be met before exiting. But if the number of datanodes is 
less than the desired replication factor, the exit condition is never met. 

A proposed fix is to add a timeout parameter, so the command will wait for a 
certain amount of time or number of tries before exiting (see the sketch after 
the quoted loop below). 

  for(boolean done = false; !done; ) {
    BlockLocation[] locations = fs.getFileBlockLocations(status, 0, len);
    int i = 0;
    for(; i < locations.length &&
        locations[i].getHosts().length == rep; i++)
      if (!printWarning && locations[i].getHosts().length > rep) {
        System.out.println("\nWARNING: the waiting time may be long for "
            + "DECREASING the number of replication.");
        printWarning = true;
      }
    done = i == locations.length;

    if (!done) {
      System.out.print(".");
      System.out.flush();
      try {Thread.sleep(1);} catch (InterruptedException e) {}
    }
  }
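
A minimal sketch of the proposed bound, reusing the variables above; {{maxTries}} is a hypothetical new parameter derived from the timeout, and the warning printout is omitted for brevity:

{code}
// Sketch of the proposed fix: bound the wait instead of looping forever.
// Requires java.io.IOException; fs, status, len, rep come from the code above.
boolean done = false;
for (int tries = 0; !done && tries < maxTries; tries++) {
  BlockLocation[] locations = fs.getFileBlockLocations(status, 0, len);
  int i = 0;
  // same exit test as above: every block already has exactly rep hosts
  for (; i < locations.length && locations[i].getHosts().length == rep; i++);
  done = (i == locations.length);
  if (!done) {
    System.out.print(".");
    System.out.flush();
    try { Thread.sleep(1); } catch (InterruptedException e) {}
  }
}
if (!done) {
  throw new IOException("Replication wait exceeded " + maxTries + " tries");
}
{code}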




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964455#comment-13964455
 ] 

Haohui Mai commented on HDFS-6170:
--

{code}
+  private void compareFile(FileStatus expected, FileStatus status) {
{code}

It should be a private static method.

{code}
+  public String getFileStatus(String path) throws IOException {
{code}

It might be better to mark all methods and the {{FSImageLoader}} class as 
package-local instead of public.

Other than that the patch looks good to me.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for users 
 to execute "hdfs dfs -ls webhdfs://foo" against the WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964459#comment-13964459
 ] 

Mit Desai commented on HDFS-5983:
-

Reviewed the patch. LGTM
+1 (non-binding)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-5983:


Status: Patch Available  (was: Open)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964462#comment-13964462
 ] 

Mit Desai commented on HDFS-5983:
-

One note: you need to Submit the Patch once you upload it, in order to get the 
Hadoop QA comment. I just did that.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964464#comment-13964464
 ] 

Haohui Mai commented on HDFS-6194:
--

{code}
+  @VisibleForTesting
+  protected ByteRangeInputStream(URLOpener o, URLOpener r, boolean connect)
+  throws IOException {
+this.originalURL = o;
+this.resolvedURL = r;
+if (connect) {
+  getInputStream();
+}
+  }
+
{code}

Please remove it. See the discussion in HDFS-6143 for more details.

For {{MockByteRangeInputStream}} and {{MockHttpURLConnection}}, please write 
them with Mockito directly. The code should not need to use the {{spy()}} call 
at all.
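
For example, illustrative stubbing only (not the final test):

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.ByteArrayInputStream;
import java.net.HttpURLConnection;

// Stub the connection directly with mock(); no spy() over a real object.
static HttpURLConnection stubbedConnection() throws Exception {
  HttpURLConnection conn = mock(HttpURLConnection.class);
  when(conn.getResponseCode()).thenReturn(HttpURLConnection.HTTP_PARTIAL);
  when(conn.getHeaderField("Content-Range")).thenReturn("bytes 0-99/100");
  when(conn.getInputStream()).thenReturn(new ByteArrayInputStream(new byte[100]));
  return conn;
}
{code}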

 Create new tests for {{ByteRangeInputStream}}
 -

 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Attachments: HDFS-6194.patch


 HDFS-5570 removed the old tests for {{ByteRangeInputStream}}, because the tests 
 were tightly coupled with hftp / hsftp. New tests need to be written 
 because the same class is also used by {{WebHdfsFileSystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk

2014-04-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964483#comment-13964483
 ] 

Jing Zhao commented on HDFS-4114:
-

[~wheat9], there are still several TODOs in the current rebased patch. How do 
you plan to address them?

 Remove the BackupNode and CheckpointNode from trunk
 ---

 Key: HDFS-4114
 URL: https://issues.apache.org/jira/browse/HDFS-4114
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Suresh Srinivas
 Attachments: HDFS-4114.000.patch, HDFS-4114.patch


 Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
 BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964482#comment-13964482
 ] 

Hadoop QA commented on HDFS-5983:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12639424/HDFS-5983-updated.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6631//console

This message is automatically generated.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964487#comment-13964487
 ] 

Hadoop QA commented on HDFS-6215:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639416/HDFS-6215.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6629//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6629//console

This message is automatically generated.

 Wrong error message for upgrade
 ---

 Key: HDFS-6215
 URL: https://issues.apache.org/jira/browse/HDFS-6215
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Minor
 Fix For: 3.0.0, 2.5.0, 2.4.1

 Attachments: HDFS-6215.patch


 "UPGRADE" is printed instead of "-upgrade".
 {panel}
 File system image contains an old layout version -51.
 An upgrade to version -56 is required.
 Please restart NameNode with the "-rollingUpgrade started" option if a rolling
 upgraded is already started; or restart NameNode with the "UPGRADE" to start 
 a new upgrade.
 {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-09 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6224:
--

 Summary: Add a unit test to TestAuditLogger for file permissions 
passed to logAuditEvent
 Key: HDFS-6224
 URL: https://issues.apache.org/jira/browse/HDFS-6224
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor


Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
that during a setPermission operation the permission returned is the one that 
was just set, not the permission before the operation.
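
The shape of the check might be the following (a sketch against the {{AuditLogger}} interface, not the attached patch):

{code}
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

// Capture the FileStatus passed to logAuditEvent so the test can assert the
// audited permission is the one just set by setPermission.
public class PermissionCapturingAuditLogger implements AuditLogger {
  static volatile FsPermission lastPermission;

  @Override
  public void initialize(Configuration conf) {}

  @Override
  public void logAuditEvent(boolean succeeded, String userName,
      InetAddress addr, String cmd, String src, String dst, FileStatus stat) {
    if (stat != null) {
      lastPermission = stat.getPermission();
    }
  }
}
// In the test body, after fs.setPermission(path, new FsPermission((short) 0777)):
// assertEquals((short) 0777, PermissionCapturingAuditLogger.lastPermission.toShort());
{code}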



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-09 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964490#comment-13964490
 ] 

Kihwal Lee commented on HDFS-6203:
--

bq. For checking other NN's state, if we add the check into the 
transitionToActive method, we cannot still guarantee that the other NN will not 
transition to active after the checking. Thus I think the checking here will 
not be very useful.

We can make {{transitionToActive}} do the check by default when automatic 
fail-over is not used. If the other NN does not respond or is in the active 
state, the command will fail with a warning. At that point the user can reissue 
it with a force option, if s/he wants to.  I think this is a good preventive 
measure for avoiding an easy-to-make but fatal mistake.
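
Sketched roughly (the force flag name and the plumbing are assumptions, not the final patch):

{code}
import java.io.IOException;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

// Sketch of the default pre-check: refuse to transition if the other NN is
// already active or cannot be reached; a force flag skips the check.
static void checkOtherNamenode(HAServiceProtocol otherNn, boolean forced)
    throws IOException {
  if (forced) {
    return;  // the user explicitly accepted the risk
  }
  HAServiceState otherState;
  try {
    otherState = otherNn.getServiceStatus().getState();
  } catch (IOException e) {
    throw new IOException(
        "Cannot verify the other NN's state; refusing to transition", e);
  }
  if (otherState == HAServiceState.ACTIVE) {
    throw new IOException(
        "The other NN is already active; rerun with the force option");
  }
}
{code}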

 check other namenode's state before HAadmin transitionToActive
 --

 Key: HDFS-6203
 URL: https://issues.apache.org/jira/browse/HDFS-6203
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.3.0
Reporter: patrick white

 Current behavior is that the HAadmin -transitionToActive command will 
 complete the transition to Active even if the other namenode is already in 
 the Active state. This is not an allowed condition and should be handled by 
 fencing; however, setting both namenodes active can happen accidentally with 
 relative ease, especially in a production environment when performing manual 
 maintenance operations. 
 If this situation does occur it is very serious and will likely cause data 
 loss, or, best case, require a difficult recovery to avoid data loss.
 This is requesting an enhancement to haadmin's -transitionToActive command, 
 to have HAadmin check the Active state of the other namenode before 
 completing the transition. If the other namenode is Active, then fail the 
 request because the other NN is already active.
 Not sure if there is a scenario where both namenodes being Active is valid 
 or desired, but to maintain functional compatibility a 'force' parameter 
 could be added to override this check and allow the previous behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964491#comment-13964491
 ] 

Mit Desai commented on HDFS-5983:
-

[~airbots], [~mingma] : Can any of you regenerate the patch and attach it to 
make sure it applies successfully?

Mit

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-5983:


Status: Open  (was: Patch Available)

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-5983:
--

Affects Version/s: 2.2.0

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964509#comment-13964509
 ] 

Chen He commented on HDFS-5983:
---

Hi [~mitdesai]
This patch works on my machine and can be applied to trunk. I saw Hadoop trunk 
was broken this morning.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-09 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6224:
---

Attachment: HDFS-6224.001.patch

Simple unit test.

 Add a unit test to TestAuditLogger for file permissions passed to 
 logAuditEvent
 ---

 Key: HDFS-6224
 URL: https://issues.apache.org/jira/browse/HDFS-6224
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-6224.001.patch


 Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
 that during a setPermission operation the permission returned is the one that 
 was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6170:


Attachment: HDFS-6170.2.patch

Thanks [~wheat9] for the comment. Attaching a new patch.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.2.patch, HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for users 
 to execute "hdfs dfs -ls webhdfs://foo" against the WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6209) Fix flaky test TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK

2014-04-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6209:
--

Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.  I will commit this shortly.

 Fix flaky test 
 TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK
 --

 Key: HDFS-6209
 URL: https://issues.apache.org/jira/browse/HDFS-6209
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Attachments: HDFS-6209.01.patch


 The test depends on hard-coded port numbers being available. It should retry 
 if the chosen port is in use.
 Exception details below in a comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk

2014-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-4114:
-

Attachment: HDFS-4114.001.patch

 Remove the BackupNode and CheckpointNode from trunk
 ---

 Key: HDFS-4114
 URL: https://issues.apache.org/jira/browse/HDFS-4114
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Suresh Srinivas
 Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch


 Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
 BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964539#comment-13964539
 ] 

Arpit Agarwal commented on HDFS-6208:
-

+1

I verified the fix on Windows and Mac OS X. Thanks Chris!

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk

2014-04-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964540#comment-13964540
 ] 

Haohui Mai commented on HDFS-4114:
--

The v1 patch cleans up some of the TODOs. There are a couple of TODOs that require 
changes in the protobuf files. I'll clean them up in a subsequent jira.

 Remove the BackupNode and CheckpointNode from trunk
 ---

 Key: HDFS-4114
 URL: https://issues.apache.org/jira/browse/HDFS-4114
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Suresh Srinivas
 Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch


 Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
 BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6209) Fix flaky test TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK

2014-04-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6209:
--

   Resolution: Fixed
Fix Version/s: 2.4.1
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Arpit!

 Fix flaky test 
 TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK
 --

 Key: HDFS-6209
 URL: https://issues.apache.org/jira/browse/HDFS-6209
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 2.4.1

 Attachments: HDFS-6209.01.patch


 The test depends on hard-coded port numbers being available. It should retry 
 if the chosen port is in use.
 Exception details below in a comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6209) Fix flaky test TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK

2014-04-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964546#comment-13964546
 ] 

Arpit Agarwal commented on HDFS-6209:
-

Thanks for the review and commit Nicholas!

 Fix flaky test 
 TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK
 --

 Key: HDFS-6209
 URL: https://issues.apache.org/jira/browse/HDFS-6209
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 2.4.1

 Attachments: HDFS-6209.01.patch


 The test depends on hard-coded port numbers being available. It should retry 
 if the chosen port is in use.
 Exception details below in a comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6209) Fix flaky test TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964555#comment-13964555
 ] 

Hudson commented on HDFS-6209:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5480 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5480/])
HDFS-6209. TestValidateConfigurationSettings should use random ports.  
Contributed by Arpit Agarwal (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586079)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java


 Fix flaky test 
 TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK
 --

 Key: HDFS-6209
 URL: https://issues.apache.org/jira/browse/HDFS-6209
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 2.4.1

 Attachments: HDFS-6209.01.patch


 The test depends on hard-coded port numbers being available. It should retry 
 if the chosen port is in use.
 Exception details below in a comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-5693) Few NN metrics data points were collected via JMX when NN is under heavy load

2014-04-09 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma reassigned HDFS-5693:
-

Assignee: Ming Ma

 Few NN metrics data points were collected via JMX when NN is under heavy load
 -

 Key: HDFS-5693
 URL: https://issues.apache.org/jira/browse/HDFS-5693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HADOOP-5693.patch, HDFS-5693.patch


 JMX sometimes doesn’t return any value when NN is under heavy load. However, 
 that is when we would most like to get metrics to help diagnose the issue.
 When NN is under heavy load due to bad application or other reasons, it holds 
 FSNamesystem's writer lock for a long period of time. Many of the 
 FSNamesystem metrics require FSNamesystem's reader lock and thus can't be 
 processed.
 This is a special case to improve the overall NN concurrency.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964568#comment-13964568
 ] 

Ming Ma commented on HDFS-5983:
---

Thanks, folks. It seems Arpit Agarwal has just fixed 
https://issues.apache.org/jira/browse/HDFS-6160 in a different way.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964574#comment-13964574
 ] 

Arpit Agarwal commented on HDFS-5983:
-

Yes, this is the same issue, and [~mingma]'s diagnosis was spot on.

I searched for open Jiras on this failure and only saw HDFS-6160, my apologies 
for missing your work.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5983.
-

Resolution: Duplicate

Resolving since this should be fixed now.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5983) TestSafeMode#testInitializeReplQueuesEarly fails

2014-04-09 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964579#comment-13964579
 ] 

Mit Desai commented on HDFS-5983:
-

Already fixed by HDFS-6160, so closing it.

 TestSafeMode#testInitializeReplQueuesEarly fails
 

 Key: HDFS-5983
 URL: https://issues.apache.org/jira/browse/HDFS-5983
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Kihwal Lee
Assignee: Ming Ma
 Attachments: HDFS-5983-updated.patch, HDFS-5983.patch, testlog.txt


 It was seen in one of the precommit builds of HDFS-5962.  The test case 
 creates 15 blocks and then shuts down all datanodes. Then the namenode is 
 restarted with a low safe block threshold and one datanode is restarted. The 
 idea is that the initial block report from the restarted datanode will make 
 the namenode leave the safemode and initialize the replication queues.
 According to the log, the datanode reported 3 blocks, but slightly before 
 that the namenode did repl queue init with 1 block.  I will attach the log.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964612#comment-13964612
 ] 

Hadoop QA commented on HDFS-6208:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639427/HDFS-6208.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6630//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6630//console

This message is automatically generated.

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-09 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6225:


 Summary: Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
 Key: HDFS-6225
 URL: https://issues.apache.org/jira/browse/HDFS-6225
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
HDFS-2688. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6225:
-

Description: The class o.a.h.hdfs.server.common.UpgradeStatusReport has 
been dead since HDFS-2686. This jira proposes to remove it.  (was: The class 
o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since HDFS-2688. 
This jira proposes to remove it.)

 Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
 ---

 Key: HDFS-6225
 URL: https://issues.apache.org/jira/browse/HDFS-6225
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai

 The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
 HDFS-2686. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6225:
-

Attachment: HDFS-6225.000.patch

 Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
 ---

 Key: HDFS-6225
 URL: https://issues.apache.org/jira/browse/HDFS-6225
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6225.000.patch


 The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
 HDFS-2686. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6225:
-

Status: Patch Available  (was: Open)

 Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
 ---

 Key: HDFS-6225
 URL: https://issues.apache.org/jira/browse/HDFS-6225
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6225.000.patch


 The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
 HDFS-2686. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6214) Webhdfs has poor throughput for files >2GB

2014-04-09 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6214:
--

Attachment: HDFS-6214.patch

Jetty's chunked responses steal/reserve 12 bytes at the beginning of the 
buffer.  If you write the full buffer size, then 12 bytes spill over into 
another buffer, which again has 12 reserved bytes.  The solution is to write & 
flush the buffer size minus 12.  The difference is dramatic: 10MB/s before vs 
80MB/s after, which was probably hitting the network saturation point.

No test because it's rather difficult to write a performance test for big 
files.  We've been internally running with this change for months.
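
The effect of the fix, illustrated as a plain copy loop (a sketch, not the actual patch, which adjusts the webhdfs servlet's write path):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Write chunks of (buffer size - 12) so Jetty's 12 reserved chunk-header
// bytes never force a spill into a second, mostly empty buffer.
class ChunkedCopy {
  private static final int JETTY_RESERVED = 12;  // bytes Jetty reserves per chunk buffer

  static void copy(InputStream in, OutputStream out, int jettyBufferSize)
      throws IOException {
    byte[] chunk = new byte[jettyBufferSize - JETTY_RESERVED];
    int n;
    while ((n = in.read(chunk)) > 0) {
      out.write(chunk, 0, n);
      out.flush();  // push the chunk before Jetty starts a fresh buffer
    }
  }
}
{code}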

 Webhdfs has poor throughput for files >2GB
 --

 Key: HDFS-6214
 URL: https://issues.apache.org/jira/browse/HDFS-6214
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-6214.patch


 For the DN's open call, jetty returns a Content-Length header for files <2GB, 
 and uses chunking for files >2GB.  A bug in jetty's buffer handling results 
 in a ~8X reduction in throughput.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6214) Webhdfs has poor throughput for files >2GB

2014-04-09 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6214:
--

Status: Patch Available  (was: Open)

 Webhdfs has poor throughput for files >2GB
 --

 Key: HDFS-6214
 URL: https://issues.apache.org/jira/browse/HDFS-6214
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-6214.patch


 For the DN's open call, jetty returns a Content-Length header for files <2GB, 
 and uses chunking for files >2GB.  A bug in jetty's buffer handling results 
 in a ~8X reduction in throughput.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964629#comment-13964629
 ] 

Jing Zhao commented on HDFS-6203:
-

Thanks for the clarification, [~patwhitey2007] and [~kihwal].
bq. I think this is a good preventive measure for avoiding the easy-to-make but 
fatal mistake
Yes, this makes sense to me: doing a check at the beginning should be able to 
avoid the mistake in most of the normal cases. 

 check other namenode's state before HAadmin transitionToActive
 --

 Key: HDFS-6203
 URL: https://issues.apache.org/jira/browse/HDFS-6203
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.3.0
Reporter: patrick white

 Current behavior is that the HAadmin -transitionToActive command will 
 complete the transition to Active even if the other namenode is already in 
 the Active state. This is not an allowed condition and should be handled by 
 fencing; however, setting both namenodes active can happen accidentally with 
 relative ease, especially in a production environment when performing manual 
 maintenance operations. 
 If this situation does occur it is very serious and will likely cause data 
 loss, or, best case, require a difficult recovery to avoid data loss.
 This is requesting an enhancement to haadmin's -transitionToActive command, 
 to have HAadmin check the Active state of the other namenode before 
 completing the transition. If the other namenode is Active, then fail the 
 request because the other NN is already active.
 Not sure if there is a scenario where both namenodes being Active is valid 
 or desired, but to maintain functional compatibility a 'force' parameter 
 could be added to override this check and allow the previous behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13964675#comment-13964675
 ] 

Hadoop QA commented on HDFS-6170:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639445/HDFS-6170.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCacheDirectives
  org.apache.hadoop.hdfs.TestPread

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6632//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6632//console

This message is automatically generated.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.2.patch, HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for 
 users to run {{hdfs dfs -ls webhdfs://foo}} against WebImageViewer.
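
Once supported, the operation can be exercised in the WebHDFS style; a 
hypothetical probe of a locally running WebImageViewer (the host, port 5978, 
and path below are assumptions for illustration):
{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class GetFileStatusSketch {
  public static void main(String[] args) throws Exception {
    // Host, port, and path are assumptions for illustration only.
    URL url = new URL("http://127.0.0.1:5978/webhdfs/v1/?op=GETFILESTATUS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    // A success returns 200 with a FileStatus JSON body, mirroring the
    // WebHDFS GETFILESTATUS contract.
    System.out.println(conn.getResponseCode());
  }
}
{code}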



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-09 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6224:
---

Status: Patch Available  (was: Open)

A simple diff (the unit test addition) is attached.


 Add a unit test to TestAuditLogger for file permissions passed to 
 logAuditEvent
 ---

 Key: HDFS-6224
 URL: https://issues.apache.org/jira/browse/HDFS-6224
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HDFS-6224.001.patch


 Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
 that during a setPermission operation the permission returned is the one that 
 was just set, not the permission before the operation.
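
A hedged sketch of such a test, assuming the pluggable {{AuditLogger}} 
extension point; the class below is illustrative and is not the attached 
patch:
{code}
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

// Captures the permission the namenode hands to logAuditEvent so a test
// can assert it matches the permission that was just set.
public class PermissionCapturingLogger implements AuditLogger {
  static volatile FsPermission lastPermission;

  @Override
  public void initialize(Configuration conf) {}

  @Override
  public void logAuditEvent(boolean succeeded, String userName,
      InetAddress addr, String cmd, String src, String dst,
      FileStatus status) {
    if (status != null) {
      lastPermission = status.getPermission();
    }
  }
}
// In the test body: register the class via dfs.namenode.audit.loggers,
// call fs.setPermission(path, new FsPermission((short) 0777)), then assert
// lastPermission.toShort() == 0777 -- the new permission, not the old one.
{code}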



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964685#comment-13964685
 ] 

Haohui Mai commented on HDFS-6170:
--

+1

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6170.2.patch, HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for 
 users to run {{hdfs dfs -ls webhdfs://foo}} against WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6170:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.5.0

 Attachments: HDFS-6170.2.patch, HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for 
 users to run {{hdfs dfs -ls webhdfs://foo}} against WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964701#comment-13964701
 ] 

Hudson commented on HDFS-6170:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5482 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5482/])
HDFS-6170. Support GETFILESTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586152)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


 Support GETFILESTATUS operation in WebImageViewer
 -

 Key: HDFS-6170
 URL: https://issues.apache.org/jira/browse/HDFS-6170
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.5.0

 Attachments: HDFS-6170.2.patch, HDFS-6170.patch


 WebImageViewer was created by HDFS-5978 but currently supports only the 
 {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for 
 users to run {{hdfs dfs -ls webhdfs://foo}} against WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk

2014-04-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964713#comment-13964713
 ] 

Hadoop QA commented on HDFS-4114:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639451/HDFS-4114.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6633//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6633//console

This message is automatically generated.

 Remove the BackupNode and CheckpointNode from trunk
 ---

 Key: HDFS-4114
 URL: https://issues.apache.org/jira/browse/HDFS-4114
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Suresh Srinivas
 Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch


 Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the 
 BackupNode and CheckpointNode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6208:


   Resolution: Fixed
Fix Version/s: 2.4.1
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2 and branch-2.4.  Arpit, thank you for the 
review.

 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.4.1

 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.
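
As background, a generic Java pattern (not the patch itself): a 
{{MappedByteBuffer}} remains valid after the {{FileChannel}} that produced 
it is closed, so the channel, and with it the underlying file descriptor, 
can be released eagerly instead of being held for the lifetime of the 
mapping.
{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapAndCloseSketch {
  // Generic illustration, not the HDFS-6208 patch: map a block file and
  // close the channel immediately so its descriptor is not leaked.
  static MappedByteBuffer mapReadOnly(String path, long length)
      throws IOException {
    try (FileInputStream fis = new FileInputStream(path);
         FileChannel channel = fis.getChannel()) {
      // The mapping outlives the channel; closing here frees the fd.
      return channel.map(FileChannel.MapMode.READ_ONLY, 0, length);
    }
  }
}
{code}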



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964737#comment-13964737
 ] 

Hudson commented on HDFS-6208:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5483 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5483/])
HDFS-6208. DataNode caching can leak file descriptors. Contributed by Chris 
Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586154)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


 DataNode caching can leak file descriptors.
 ---

 Key: HDFS-6208
 URL: https://issues.apache.org/jira/browse/HDFS-6208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0, 2.4.1

 Attachments: HDFS-6208.1.patch


 In the DataNode, management of mmap'd/mlock'd block files can leak file 
 descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

