[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961652#comment-13961652
 ] 

Hadoop QA commented on HDFS-6143:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12638935/HDFS-6143-trunk-after-HDFS-5570.v01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6597//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6597//console

This message is automatically generated.

> WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
> paths
> 
>
> Key: HDFS-6143
> URL: https://issues.apache.org/jira/browse/HDFS-6143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
> HDFS-6143-trunk-after-HDFS-5570.v01.patch, HDFS-6143.v01.patch, 
> HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
> HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.
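The ENOENT-on-open behavior the description appeals to can be seen with plain {{java.io}}, where the constructor contacts the file system immediately. This is only an illustrative sketch, not WebHDFS code; the path below is a placeholder assumed not to exist.

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class EagerOpenDemo {
    // Returns true when opening the path fails immediately with
    // FileNotFoundException -- the POSIX-like fail-at-open behavior
    // the issue argues WebHdfsFileSystem.open should also have.
    static boolean failsAtOpen(String path) {
        try {
            new FileInputStream(path).close();
            return false;
        } catch (FileNotFoundException e) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder path that should not exist on a test machine.
        System.out.println(failsAtOpen("/no/such/dir/no-such-file-000"));
    }
}
```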



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6170:


Description: WebImageViewer was created by HDFS-5978 but currently supports only 
the {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for users 
to execute "hdfs dfs -ls webhdfs://foo" against WebImageViewer.  (was: WebImageViewer 
is created by HDFS-5978 but now supports only {{LISTSTATUS}} operation.)
   Priority: Major  (was: Minor)
   Assignee: Akira AJISAKA

> Support GETFILESTATUS operation in WebImageViewer
> -
>
> Key: HDFS-6170
> URL: https://issues.apache.org/jira/browse/HDFS-6170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
>
> WebImageViewer was created by HDFS-5978 but currently supports only the 
> {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for users 
> to execute "hdfs dfs -ls webhdfs://foo" against WebImageViewer.
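For context, a {{GETFILESTATUS}} call in the WebHDFS REST API is a plain HTTP GET under the {{/webhdfs/v1}} prefix. The sketch below only builds the request URL; the host and port are illustrative placeholders, not values from this issue.

```java
import java.net.URI;

public class WebHdfsUrls {
    // Builds the WebHDFS REST URL for a GETFILESTATUS request on 'path'
    // (path must start with '/'). Host and port are caller-supplied.
    static String getFileStatusUrl(String host, int port, String path) {
        return URI.create("http://" + host + ":" + port
                + "/webhdfs/v1" + path + "?op=GETFILESTATUS").toString();
    }

    public static void main(String[] args) {
        // Placeholder host/port for a WebImageViewer-style endpoint.
        System.out.println(getFileStatusUrl("foo", 5978, "/user/akira"));
    }
}
```

For an existing path the server answers with a JSON {{FileStatus}} object, the same shape a live NameNode returns.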





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961646#comment-13961646
 ] 

Hadoop QA commented on HDFS-6193:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12638942/HDFS-6193-branch-2.4.0.v01.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6598//console

This message is automatically generated.

> HftpFileSystem open should throw FileNotFoundException for non-existing paths
> -
>
> Key: HDFS-6193
> URL: https://issues.apache.org/jira/browse/HDFS-6193
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6193-branch-2.4.0.v01.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6169) Move the address in WebImageViewer

2014-04-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961647#comment-13961647
 ] 

Akira AJISAKA commented on HDFS-6169:
-

Thanks for the comment. I updated the patch to use a {{WebHdfsFileSystem}} 
instance to test.

> Move the address in WebImageViewer
> --
>
> Key: HDFS-6169
> URL: https://issues.apache.org/jira/browse/HDFS-6169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6169.2.patch, HDFS-6169.3.patch, HDFS-6169.4.patch, 
> HDFS-6169.5.patch, HDFS-6169.patch
>
>
> Move the endpoint of WebImageViewer from http://hostname:port/ to 
> http://hostname:port/webhdfs/v1/ so that {{hdfs dfs -ls}} works against 
> WebImageViewer.
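The move amounts to serving requests under the WebHDFS REST prefix instead of the server root, so a stock WebHDFS client can talk to the viewer unchanged. A sketch of the URL rewrite, with placeholder host and port:

```java
public class ViewerEndpoint {
    static final String WEBHDFS_PREFIX = "/webhdfs/v1";

    // Old layout: the viewer answered at the root, e.g. http://host:port/dir.
    static String oldUrl(String hostPort, String path) {
        return "http://" + hostPort + path;
    }

    // New layout: the same path nested under the WebHDFS REST prefix,
    // e.g. http://host:port/webhdfs/v1/dir, which 'hdfs dfs -ls' expects.
    static String newUrl(String hostPort, String path) {
        return "http://" + hostPort + WEBHDFS_PREFIX + path;
    }

    public static void main(String[] args) {
        System.out.println(newUrl("hostname:5978", "/dir"));
    }
}
```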





[jira] [Updated] (HDFS-6169) Move the address in WebImageViewer

2014-04-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6169:


Attachment: HDFS-6169.5.patch

> Move the address in WebImageViewer
> --
>
> Key: HDFS-6169
> URL: https://issues.apache.org/jira/browse/HDFS-6169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6169.2.patch, HDFS-6169.3.patch, HDFS-6169.4.patch, 
> HDFS-6169.5.patch, HDFS-6169.patch
>
>
> Move the endpoint of WebImageViewer from http://hostname:port/ to 
> http://hostname:port/webhdfs/v1/ so that {{hdfs dfs -ls}} works against 
> WebImageViewer.





[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Affects Version/s: (was: 2.3.0)
   2.4.0
   Status: Patch Available  (was: Open)

> HftpFileSystem open should throw FileNotFoundException for non-existing paths
> -
>
> Key: HDFS-6193
> URL: https://issues.apache.org/jira/browse/HDFS-6193
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6193-branch-2.4.0.v01.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143-branch-2.4.0.v01.patch

> WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
> paths
> 
>
> Key: HDFS-6143
> URL: https://issues.apache.org/jira/browse/HDFS-6143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6143-branch-2.4.0.v01.patch, 
> HDFS-6143-trunk-after-HDFS-5570.v01.patch, HDFS-6143.v01.patch, 
> HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, 
> HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Attachment: HDFS-6193-branch-2.4.0.v01.patch

> HftpFileSystem open should throw FileNotFoundException for non-existing paths
> -
>
> Key: HDFS-6193
> URL: https://issues.apache.org/jira/browse/HDFS-6193
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6193-branch-2.4.0.v01.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6143:


Attachment: HDFS-6143-trunk-after-HDFS-5570.v01.patch

[~wheat9], I am uploading a new patch for trunk, and will follow up with 
patches for branch-2.4.0. The change to ExceptionHandler was done specifically 
to address [~jingzhao]'s comments in the original patch.

> WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
> paths
> 
>
> Key: HDFS-6143
> URL: https://issues.apache.org/jira/browse/HDFS-6143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6143-trunk-after-HDFS-5570.v01.patch, 
> HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, 
> HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch, 
> HDFS-6143.v06.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961611#comment-13961611
 ] 

Gera Shegalov commented on HDFS-6193:
-

[~ste...@apache.org], thanks for following up. 

bq. Interesting that FileSystemContractBaseTest doesn't catch this

FileSystemContractBaseTest does not have a test for {{open}} on a 
non-existing path. Neither did {{TestHftpFileSystem}}. 
{{TestWebHdfsFileSystemContract.testOpenNonExistFile}} had an incorrect 
implementation that relied on {{read}} to fail.

bq. We could optimise any of the web filesystems by not doing that open (e.g., 
S3, s3n, swift) and waiting for the first seek. But we don't because things 
expect missing files to not be there.

Note that a seek for WebHdfs/Hftp is a client-only operation as well. Deferring 
the real open to a stream operation is misleading because the application 
presumes an open stream when issuing a stream operation.




> HftpFileSystem open should throw FileNotFoundException for non-existing paths
> -
>
> Key: HDFS-6193
> URL: https://issues.apache.org/jira/browse/HDFS-6193
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred until 
> the next read. This is counterintuitive and not how the local FS or HDFS work. 
> In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6181) Fix the wrong property names in NFS user guide

2014-04-06 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961601#comment-13961601
 ] 

Brandon Li commented on HDFS-6181:
--

Thank you, [~szetszwo]. I will commit the patch shortly.

> Fix the wrong property names in NFS user guide
> --
>
> Key: HDFS-6181
> URL: https://issues.apache.org/jira/browse/HDFS-6181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Attachments: HDFS-6181.002.patch, HDFS-6181.003.patch, HDFS-6181.patch
>
>
> A couple of property names in the NFS user guide are wrong and should be fixed 
> as follows:
> {noformat}
>  
> -  dfs.nfsgateway.keytab.file
> +  dfs.nfs.keytab.file
>/etc/hadoop/conf/nfsserver.keytab 
>  
>  
> -  dfs.nfsgateway.kerberos.principal
> +  dfs.nfs.kerberos.principal
>nfsserver/_h...@your-realm.com
>  
> {noformat}





[jira] [Updated] (HDFS-6181) Fix the wrong property names in NFS user guide

2014-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6181:
--

Hadoop Flags: Reviewed

+1 patch looks good.

> Fix the wrong property names in NFS user guide
> --
>
> Key: HDFS-6181
> URL: https://issues.apache.org/jira/browse/HDFS-6181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, nfs
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Attachments: HDFS-6181.002.patch, HDFS-6181.003.patch, HDFS-6181.patch
>
>
> A couple of property names in the NFS user guide are wrong and should be fixed 
> as follows:
> {noformat}
>  
> -  dfs.nfsgateway.keytab.file
> +  dfs.nfs.keytab.file
>/etc/hadoop/conf/nfsserver.keytab 
>  
>  
> -  dfs.nfsgateway.kerberos.principal
> +  dfs.nfs.kerberos.principal
>nfsserver/_h...@your-realm.com
>  
> {noformat}





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961567#comment-13961567
 ] 

Hadoop QA commented on HDFS-6180:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638926/HDFS-6180.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6596//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6596//console

This message is automatically generated.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, HDFS-6180.004.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab.  Some nodes are 
> also being double-reported in the deadnode and inservice sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.
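The mismatch is easy to check mechanically: for the cluster described above, {{NumLiveDataNodes}} + {{NumDeadDataNodes}} should equal 578, yet the bean reports 565 + 0. Below is a sketch that pulls the counters out of raw {{/jmx}} text with a regex; it is not a real JSON parser, and the fragment is abbreviated from the output quoted above.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JmxCounters {
    // Extracts an integer metric such as NumLiveDataNodes from raw /jmx text.
    static int metric(String jmx, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*(\\d+)")
                .matcher(jmx);
        if (!m.find()) {
            throw new IllegalArgumentException(name + " not found");
        }
        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        // Abbreviated fragment of the /jmx output quoted in the issue.
        String jmx = "\"NumLiveDataNodes\" : 565, \"NumDeadDataNodes\" : 0";
        int live = metric(jmx, "NumLiveDataNodes");
        int dead = metric(jmx, "NumDeadDataNodes");
        // A 578-node cluster with 13 dead nodes should report 565 + 13;
        // live + dead falling short of 578 signals the undercount.
        System.out.println(live + dead);
    }
}
```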





[jira] [Comment Edited] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961524#comment-13961524
 ] 

Haohui Mai edited comment on HDFS-6180 at 4/6/14 9:07 PM:
--

The v4 patch addresses [~szetszwo]'s comments.


was (Author: wheat9):
The v5 patch addresses [~szetszwo]'s comments.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, HDFS-6180.004.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab.  Some nodes are 
> also being double-reported in the deadnode and inservice sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961524#comment-13961524
 ] 

Haohui Mai commented on HDFS-6180:
--

The v5 patch addresses [~szetszwo]'s comments.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, HDFS-6180.004.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab.  Some nodes are 
> also being double-reported in the deadnode and inservice sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Updated] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-06 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6180:
-

Attachment: HDFS-6180.004.patch

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, HDFS-6180.004.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab.  Some nodes are 
> also being double-reported in the deadnode and inservice sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Updated] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-06 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6180:
-

Attachment: HDFS-6180.003.patch

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6180.000.patch, HDFS-6180.001.patch, 
> HDFS-6180.002.patch, HDFS-6180.003.patch, dn.log
>
>
> After bringing up a 578-node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but they showed up properly in the datanodes tab.  Some nodes are 
> also being double-reported in the deadnode and inservice sections (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Created] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-06 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6194:


 Summary: Create new tests for {{ByteRangeInputStream}}
 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


HDFS-5570 removes the old tests for {{ByteRangeInputStream}} because they are 
tightly coupled with hftp / hsftp. New tests need to be written because the same 
class is also used by {{WebHdfsFileSystem}}.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961515#comment-13961515
 ] 

Haohui Mai commented on HDFS-6143:
--

bq. The v05 patch did not apply because HDFS-5570 removed 
TestByteRangeInputStream. Was it intentional?

Thanks for catching it. The old test only covers the usage from 
{{HftpFileSystem}}. I'll file a jira to create new tests for 
{{ByteRangeInputStream}}; we don't need to address it in this jira.

> WebHdfsFileSystem open should throw FileNotFoundException for non-existing 
> paths
> 
>
> Key: HDFS-6143
> URL: https://issues.apache.org/jira/browse/HDFS-6143
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
> Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, 
> HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, 
> HDFS-6143.v05.patch, HDFS-6143.v06.patch
>
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred 
> until the next read. This is counterintuitive and not how the local FS or 
> HDFS work. In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.





[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961512#comment-13961512
 ] 

Haohui Mai commented on HDFS-6143:
--

{code}
+  public ByteRangeInputStream(URLOpener o, URLOpener r, boolean connect)
+  throws IOException {
+this(o, r);
+if (connect) {
+  getInputStream();
+}
+  }
{code}

{{WebHdfsFileSystem}} is the only user of {{ByteRangeInputStream}}. I think it 
is safe to change the original constructor instead of adding a new one.
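A minimal sketch of that eager-connect constructor, as suggested above (the {{Opener}} interface below stands in for {{URLOpener}} and is illustrative, not the committed code):

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch: fold the eager connect into the single constructor.
class EagerByteRangeStream {
  interface Opener {
    InputStream open() throws IOException;  // stand-in for URLOpener
  }

  private final InputStream in;

  // Connecting here means a missing path surfaces as FileNotFoundException
  // at construction (i.e. open()) time, not on the first read.
  EagerByteRangeStream(Opener opener) throws IOException {
    this.in = opener.open();
  }

  InputStream stream() {
    return in;
  }
}
```

With a single constructor there is no code path that defers the server round-trip, so callers cannot accidentally skip the existence check.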

{code}
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java
...
{code}

This should be unnecessary; based on my understanding, this code is only 
invoked on the server side.
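A hypothetical sketch of the server-side mapping the issue calls for, with the servlet status constants inlined for self-containment (illustrative, not the actual {{ExceptionHandler}} code):

```java
// Hypothetical sketch: translate FileNotFoundException into 404 (SC_NOT_FOUND)
// instead of 400 (SC_BAD_REQUEST), the HTTP analogue of POSIX ENOENT.
class ExceptionStatusSketch {
  static final int SC_BAD_REQUEST = 400;
  static final int SC_NOT_FOUND = 404;

  static int toStatus(Exception e) {
    if (e instanceof java.io.FileNotFoundException) {
      return SC_NOT_FOUND;   // missing path -> 404
    }
    return SC_BAD_REQUEST;   // other client errors keep 400
  }
}
```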









[jira] [Updated] (HDFS-353) DFSClient does not always throw a FileNotFound exception when a file could not be opened

2014-04-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-353:


Summary: DFSClient does not always throw a FileNotFound exception when a 
file could not be opened  (was: DFSClient could throw a FileNotFound exception 
when a file could not be opened)

> DFSClient does not always throw a FileNotFound exception when a file could 
> not be opened
> 
>
> Key: HDFS-353
> URL: https://issues.apache.org/jira/browse/HDFS-353
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Steve Loughran
>Assignee: Chu Tong
>Priority: Minor
>
> DFSClient.openInfo() throws an IOE when a file can't be found, that is, when 
> it has no blocks:
> [sf-startdaemon-debug] 09/02/16 12:38:47 [IPC Server handler 0 on 8012] INFO 
> mapred.TaskInProgress : Error from attempt_200902161238_0001_m_00_2: 
> java.io.IOException: Cannot open filename /tests/mrtestsequence/in/in.txt
> [sf-startdaemon-debug]at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1352)
> [sf-startdaemon-debug]at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1343)
> [sf-startdaemon-debug]at 
> org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:312)
> [sf-startdaemon-debug]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:177)
> [sf-startdaemon-debug]at 
> org.apache.hadoop.fs.FileSystem.open(FileSystem.java:347)
> I propose turning this into a FileNotFoundException, which is more specific 
> about the underlying problem. Including the full dfs URL would be useful too.
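A minimal sketch of the proposed change (the helper name is hypothetical): raise FileNotFoundException rather than a bare IOException, and put the full dfs URL in the message.

```java
import java.io.FileNotFoundException;

// Hypothetical helper: a more specific exception that carries the full URL,
// so callers can both catch the FNFE and see which path was missing.
class OpenFailure {
  static FileNotFoundException cannotOpen(String dfsUrl) {
    return new FileNotFoundException("Cannot open filename " + dfsUrl);
  }
}
```

Since FileNotFoundException extends IOException, existing catch blocks keep working while callers that care can catch the narrower type.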





[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-04-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961490#comment-13961490
 ] 

Steve Loughran commented on HDFS-6193:
--

linking to HADOOP-9361 and FS semantics.

Failing on the open if a file is not found is a core expectation of filesystems.

We could optimise any of the web filesystems (e.g. S3, s3n, swift) by not 
doing that open and waiting for the first seek. But we don't, because callers 
expect opening a missing file to fail.

It's interesting that FileSystemContractBaseTest doesn't catch this.
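A sketch of the kind of check the contract test could perform, with a stand-in {{Fs}} interface (illustrative only, not the actual FileSystemContractBaseTest API):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

class OpenContractCheck {
  interface Fs {
    void open(String path) throws IOException;  // stand-in for FileSystem.open
  }

  // True iff open() on a missing path fails eagerly with FileNotFoundException,
  // the core expectation described above.
  static boolean failsFastOnMissingPath(Fs fs, String missingPath) {
    try {
      fs.open(missingPath);
      return false;          // open "succeeded": error was deferred to read()
    } catch (FileNotFoundException e) {
      return true;           // ENOENT-style eager failure
    } catch (IOException e) {
      return false;          // failed, but with the wrong exception type
    }
  }
}
```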

> HftpFileSystem open should throw FileNotFoundException for non-existing paths
> -
>
> Key: HDFS-6193
> URL: https://issues.apache.org/jira/browse/HDFS-6193
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Blocker
>
> WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
> non-existing paths. 
> - 'open' does not really open anything, i.e., it does not contact the 
> server, and therefore cannot discover FileNotFound; the error is deferred 
> until the next read. This is counterintuitive and not how the local FS or 
> HDFS work. In POSIX you get ENOENT on open. 
> [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
>  is an example of code that is broken because of this.
> - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
> instead of SC_NOT_FOUND for non-existing paths.


