[jira] [Commented] (HDFS-6776) distcp from insecure cluster (source) to secure cluster (destination) doesn't work

2014-07-30 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080458#comment-14080458
 ] 

Travis Thompson commented on HDFS-6776:
---

Yes, but HADOOP-10016 covers H1 (secure) -> H2 (insecure), which, on that note, 
I've also tried, and it doesn't work either.  This shows that a secure cluster 
talking to an insecure one is flat-out broken in either direction, source or 
destination.
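A schematic sketch (not the actual Hadoop fix, and the type names here are invented) of the direction HDFS-6776 points in: when the remote side reports that security is disabled, "no delegation token" should be a normal answer rather than a fatal error, so the copy can proceed tokenless.

```java
// Schematic sketch only (not real Hadoop code): tolerate a tokenless
// insecure remote instead of aborting the whole distcp.
public class TokenFallbackSketch {

    /** Stand-in for what a remote WebHDFS endpoint could tell us. */
    static final class RemoteInfo {
        final boolean securityEnabled;
        final String token;               // null when the remote is insecure
        RemoteInfo(boolean securityEnabled, String token) {
            this.securityEnabled = securityEnabled;
            this.token = token;
        }
    }

    /**
     * Returns the remote's delegation token, or null when the remote runs
     * without security; only a secure remote that fails to hand out a
     * token is treated as an error.
     */
    static String tokenOrNull(RemoteInfo remote) {
        if (!remote.securityEnabled) {
            return null;                  // insecure remote: proceed tokenless
        }
        if (remote.token == null) {
            throw new IllegalStateException("secure remote returned no token");
        }
        return remote.token;
    }

    public static void main(String[] args) {
        System.out.println(tokenOrNull(new RemoteInfo(false, null)));  // null
        System.out.println(tokenOrNull(new RemoteInfo(true, "tok")));  // tok
    }
}
```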

> distcp from insecure cluster (source) to secure cluster (destination) doesn't 
> work
> --
>
> Key: HDFS-6776
> URL: https://issues.apache.org/jira/browse/HDFS-6776
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0, 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> Issuing the distcp command on the secure cluster side, trying to copy data 
> from the insecure cluster to the secure cluster, I see the following problem:
> {code}
> hadoopuser@yjc5u-1 ~]$ hadoop distcp webhdfs://:/tmp 
> hdfs://:8020/tmp/tmptgt
> 14/07/30 20:06:19 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[webhdfs://:/tmp], 
> targetPath=hdfs://:8020/tmp/tmptgt, targetPathExists=true}
> 14/07/30 20:06:19 INFO client.RMProxy: Connecting to ResourceManager at 
> :8032
> 14/07/30 20:06:20 WARN ssl.FileBasedKeyStoresFactory: The property 
> 'ssl.client.truststore.location' has not been set, no TrustStore will be 
> loaded
> 14/07/30 20:06:20 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Failed to get the token for hadoopuser, 
> user=hadoopuser
> 14/07/30 20:06:20 WARN security.UserGroupInformation: 
> PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
> cause:java.io.IOException: Failed to get the token for hadoopuser, 
> user=hadoopuser
> 14/07/30 20:06:20 ERROR tools.DistCp: Exception encountered 
> java.io.IOException: Failed to get the token for hadoopuser, user=hadoopuser
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:365)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:84)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:618)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:584)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1132)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:218)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:403)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:424)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:640)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:565)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
>   at 
> org.apache.hadoop.hdfs.web.W

[jira] [Updated] (HDFS-6776) distcp from insecure cluster to secure cluster doesn't work

2014-07-30 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6776:
--

Affects Version/s: 2.3.0

> distcp from  insecure cluster to secure cluster doesn't work
> 
>
> Key: HDFS-6776
> URL: https://issues.apache.org/jira/browse/HDFS-6776
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0, 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> More details to come.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6776) distcp from insecure cluster to secure cluster doesn't work

2014-07-30 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080393#comment-14080393
 ] 

Travis Thompson commented on HDFS-6776:
---

Also seeing this in 2.3.0, so it looks like it's been broken for a while!

With hdfs://
{noformat}
$ hadoop distcp -i /tmp/motd hdfs://insecure-namenode:9000/tmp/motd
14/07/30 23:45:20 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
ignoreFailures=true, maxMaps=20, sslConfigurationFile='null', 
copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/tmp/motd], 
targetPath=hdfs://insecure-namenode:9000/tmp/motd}
14/07/30 23:45:21 INFO client.RMProxy: Connecting to ResourceManager at 
secure-resourcemanager/xxx.xxx.xxx.xxx:8032
14/07/30 23:45:22 ERROR tools.DistCp: Exception encountered 
java.io.IOException: Failed on local exception: java.io.IOException: Server 
asks us to fall back to SIMPLE auth, but this client is configured to only 
allow secure connections.; Host Details : local host is: 
"secure-gateway/xxx.xxx.xxx.xxx"; destination host is: 
"insecure-namenode":9000; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1359)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1746)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1425)
at 
org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:69)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:79)
at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:80)
at 
org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:327)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:151)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:375)
Caused by: java.io.IOException: Server asks us to fall back to SIMPLE auth, but 
this client is configured to only allow secure connections.
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:734)
at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
at org.apache.hadoop.ipc.Client.call(Client.java:1377)
... 26 more
{noformat}
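For the hdfs:// RPC path, Hadoop does carry a client-side escape hatch for exactly this "fall back to SIMPLE auth" refusal, assuming a release that includes it (it exists in the 2.x line). Note this deliberately loosens the secure client's guarantee, so it is a workaround rather than the fix this JIRA pursues:

```xml
<!-- core-site.xml on the secure (client) side. Setting this weakens the
     "secure connections only" guarantee: the client may now downgrade to
     SIMPLE auth when a server asks it to. -->
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```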

With webhdfs://
{noformat}
$ hadoop distcp -i /tmp/motd webhdfs://insecure-namenode:50070/tmp/motd
14/07/31 00:27:42 INFO tools.DistCp: Input Options: 
DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
ignoreFailures=true, maxMaps=20, sslConfigurationFile='null', 
copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/tmp/motd], 
targetPath=webhdfs://insecure-namenode:50070/tmp/motd}
14/07/31 00:27:42 INFO client.RMProxy: Connecting to ResourceManager at 
secure-resourcemanager/xxx.xxx.xxx.xxx:8032
14/07/31 00:27:44 ERROR tools.DistCp: Exception encountered 
java.io.IOException: Failed to get the token for tthompso, user=tthompso
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  

[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-03 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959211#comment-13959211
 ] 

Travis Thompson commented on HDFS-6180:
---

Sorry, meant {{NumDeadDataNodes}}

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
> Attachments: dn.log
>
>
> After bringing up a 578 node cluster with 13 dead nodes, 0 were reported on 
> the new GUI, but showed up properly in the datanodes tab.  Some nodes are 
> also being double reported in the deadnode and inservice section (22 show up 
> dead, 565 show up alive, 9 duplicated nodes). 
> From /jmx (confirmed that it's the same in jconsole):
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 5477748687372288,
> "CapacityUsed" : 24825720407,
> "CapacityRemaining" : 5477723861651881,
> "TotalLoad" : 565,
> "SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
> "BlocksTotal" : 21065,
> "MaxObjects" : 0,
> "FilesTotal" : 25454,
> "PendingReplicationBlocks" : 0,
> "UnderReplicatedBlocks" : 0,
> "ScheduledReplicationBlocks" : 0,
> "FSState" : "Operational",
> "NumLiveDataNodes" : 565,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 1
>   },
> {noformat}
> I'm not going to include deadnode/livenodes because the list is huge, but 
> I've confirmed there are 9 nodes showing up in both deadnodes and livenodes.
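The figures quoted above are internally consistent with the duplication theory: 578 total minus 565 live leaves 13 genuinely dead, and the GUI's 22 "dead" minus the 9 nodes appearing in both lists is also 13. A tiny check, with the values copied from this report:

```java
// Sanity check on the counts reported in this issue: once the duplicated
// entries are discounted, the "dead" figures agree.
public class DeadNodeArithmetic {

    /** True when reportedDead minus duplicates matches total minus live. */
    static boolean consistent(int total, int live, int reportedDead, int duplicated) {
        return total - live == reportedDead - duplicated;
    }

    public static void main(String[] args) {
        // 578 nodes, 565 live, 22 reported dead, 9 duplicated -> 13 == 13
        System.out.println(consistent(578, 565, 22, 9));  // true
    }
}
```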





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-03 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959210#comment-13959210
 ] 

Travis Thompson commented on HDFS-6180:
---

On a side note, the JMX MBean for the dead node count is also broken; I can 
open another JIRA for this issue if desired.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Haohui Mai
> Attachments: dn.log
>
>





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-02 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958255#comment-13958255
 ] 

Travis Thompson commented on HDFS-6180:
---

I don't think so; these nodes were added with fresh installs, their IPs have 
never changed, and they've never been part of another HDFS instance.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957224#comment-13957224
 ] 

Travis Thompson commented on HDFS-6180:
---

After looking at it for a while, it seems like a bigger issue than initially 
thought.  The last digit of the last octet is being truncated.

Example:
|| LIVE NODE || LIVE & DEAD NODES ||
| datanode5464 (xxx.xxx.138.244) | datanode5391 (xxx.xxx.138.24) |
| datanode5486 (xxx.xxx.138.222) | datanode5392 (xxx.xxx.138.22) |
| datanode5477 (xxx.xxx.138.233) | datanode5393 (xxx.xxx.138.23) |
| datanode5601 (xxx.xxx.139.244) | datanode5526 (xxx.xxx.139.24) |

Starting {{datanode5464}} will force {{datanode5391}} to become both live and 
dead if it's running.  The order in which they pair up doesn't matter; the 
same effect happens in the end.
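A hypothetical illustration (not actual NameNode code) of the failure mode described above: if a node lookup key drops the last character of the IP, distinct hosts collide on one key, so one node's registration can silently displace another's. IPs are masked as in the table:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the truncation symptom: two distinct hosts
// collapse onto one map key once the IP loses its final character.
public class TruncatedKeySketch {

    /** Mimics the observed truncation: the last character of the IP is lost. */
    static String truncate(String ip) {
        return ip.substring(0, ip.length() - 1);
    }

    public static void main(String[] args) {
        Map<String, String> nodesByIp = new HashMap<>();
        nodesByIp.put("xxx.xxx.138.24", "datanode5391");   // really ends in .24
        // datanode5464 really ends in .244, but its truncated key collides:
        String shadowed = nodesByIp.put(truncate("xxx.xxx.138.244"), "datanode5464");
        System.out.println(shadowed);  // datanode5391: silently displaced
    }
}
```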

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956966#comment-13956966
 ] 

Travis Thompson commented on HDFS-6180:
---

Looks like I lied; it's the same 9 nodes that are getting duplicated every time.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956954#comment-13956954
 ] 

Travis Thompson commented on HDFS-6180:
---

Also note that the duplication starts at around 200 nodes.  Every time the 
cluster is started it reports 22 dead nodes (9 duplicated), but they're 
different nodes each time.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>





[jira] [Updated] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6180:
--

Attachment: dn.log

DN log of a node that is duplicated.  Also note: if I start just the NameNode, 
it reports 0 dead nodes even though none of the DataNodes are running.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
> Attachments: dn.log
>
>





[jira] [Commented] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956843#comment-13956843
 ] 

Travis Thompson commented on HDFS-6180:
---

Also, if a live node is shut down, the counter will go to 1 dead node.

> dead node count / listing is very broken in JMX and old GUI
> ---
>
> Key: HDFS-6180
> URL: https://issues.apache.org/jira/browse/HDFS-6180
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>





[jira] [Created] (HDFS-6180) dead node count / listing is very broken in JMX and old GUI

2014-04-01 Thread Travis Thompson (JIRA)
Travis Thompson created HDFS-6180:
-

 Summary: dead node count / listing is very broken in JMX and old 
GUI
 Key: HDFS-6180
 URL: https://issues.apache.org/jira/browse/HDFS-6180
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Travis Thompson


After bringing up a 578 node cluster with 13 dead nodes, 0 were reported on the 
new GUI, but showed up properly in the datanodes tab.  Some nodes are also 
being double reported in the deadnode and inservice section (22 show up dead, 
565 show up alive, 9 duplicated nodes). 

From /jmx (confirmed that it's the same in jconsole):
{noformat}
{
"name" : "Hadoop:service=NameNode,name=FSNamesystemState",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
"CapacityTotal" : 5477748687372288,
"CapacityUsed" : 24825720407,
"CapacityRemaining" : 5477723861651881,
"TotalLoad" : 565,
"SnapshotStats" : "{\"SnapshottableDirectories\":0,\"Snapshots\":0}",
"BlocksTotal" : 21065,
"MaxObjects" : 0,
"FilesTotal" : 25454,
"PendingReplicationBlocks" : 0,
"UnderReplicatedBlocks" : 0,
"ScheduledReplicationBlocks" : 0,
"FSState" : "Operational",
"NumLiveDataNodes" : 565,
"NumDeadDataNodes" : 0,
"NumDecomLiveDataNodes" : 0,
"NumDecomDeadDataNodes" : 0,
"NumDecommissioningDataNodes" : 0,
"NumStaleDataNodes" : 1
  },
{noformat}

I'm not going to include deadnode/livenodes because the list is huge, but I've 
confirmed there are 9 nodes showing up in both deadnodes and livenodes.





[jira] [Commented] (HDFS-6105) NN web UI for DN list loads the same jmx page multiple times.

2014-03-19 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13940982#comment-13940982
 ] 

Travis Thompson commented on HDFS-6105:
---

I've confirmed the patch is working on Hadoop 2.3.0.

> NN web UI for DN list loads the same jmx page multiple times.
> -
>
> Key: HDFS-6105
> URL: https://issues.apache.org/jira/browse/HDFS-6105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Kihwal Lee
>Assignee: Haohui Mai
> Attachments: HDFS-6105.000.patch, datanodes-tab.png
>
>
> When loading "Datanodes" page of the NN web UI, the same jmx query is made 
> multiple times. For a big cluster, that's a lot of data and overhead.





[jira] [Updated] (HDFS-6105) NN web UI for DN list loads the same jmx page multiple times.

2014-03-14 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6105:
--

Attachment: datanodes-tab.png

I can reproduce it in 2.3.0.  If I open the NN page with the Firefox console 
open and click on the "Datanodes" tab, 3 GETs are sent to http://nn/jmx in 
quick succession.  I've attached a screenshot of the Firefox console.  Using 
Firefox 27.0.1 on Mac OS X 10.8.5.

> NN web UI for DN list loads the same jmx page multiple times.
> -
>
> Key: HDFS-6105
> URL: https://issues.apache.org/jira/browse/HDFS-6105
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Kihwal Lee
> Attachments: datanodes-tab.png
>
>
> When loading "Datanodes" page of the NN web UI, the same jmx query is made 
> multiple times. For a big cluster, that's a lot of data and overhead.





[jira] [Commented] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-14 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935169#comment-13935169
 ] 

Travis Thompson commented on HDFS-6084:
---

Thanks everyone

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: HDFS-6084.1.patch.txt, HDFS-6084.2.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Updated] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6084:
--

Attachment: HDFS-6084.2.patch.txt

Version with all external links removed.

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-6084.1.patch.txt, HDFS-6084.2.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Commented] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-13 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13934031#comment-13934031
 ] 

Travis Thompson commented on HDFS-6084:
---

Yeah, having external links on intranet sites is usually looked down upon.  Or 
maybe we could force it to open in a new tab?  Either way, I don't think the 
page logo should link back to Apache: I click it constantly expecting to go 
back to the main Namenode page, only to remember that's not where it goes.

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-6084.1.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Commented] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-10 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13929064#comment-13929064
 ] 

Travis Thompson commented on HDFS-6084:
---

Patch with my original proposed solution.

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-6084.1.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Updated] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-10 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6084:
--

Status: Patch Available  (was: Open)

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-6084.1.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Updated] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-10 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-6084:
--

Attachment: HDFS-6084.1.patch.txt

> Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage
> 
>
> Key: HDFS-6084
> URL: https://issues.apache.org/jira/browse/HDFS-6084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-6084.1.patch.txt
>
>
> When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
> which feels unintuitive.  There's already a link at the bottom where it's 
> always been, which is reasonable.  I think that the title should go to the 
> main Namenode page, #tab-overview.  Suggestions?





[jira] [Created] (HDFS-6084) Namenode UI - "Hadoop" logo link shouldn't go to hadoop homepage

2014-03-10 Thread Travis Thompson (JIRA)
Travis Thompson created HDFS-6084:
-

 Summary: Namenode UI - "Hadoop" logo link shouldn't go to hadoop 
homepage
 Key: HDFS-6084
 URL: https://issues.apache.org/jira/browse/HDFS-6084
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Travis Thompson
Priority: Minor
 Attachments: HDFS-6084.1.patch.txt

When clicking the "Hadoop" title the user is taken to the Hadoop homepage, 
which feels unintuitive.  There's already a link at the bottom where it's 
always been, which is reasonable.  I think that the title should go to the main 
Namenode page, #tab-overview.  Suggestions?





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-20 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: HDFS-5935-4.patch

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch, 
> HDFS-5935-4.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-20 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: (was: HDFS-5935-4.patch)

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-20 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: HDFS-5935-4.patch

Update with combined if statement

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Commented] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-20 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907814#comment-13907814
 ] 

Travis Thompson commented on HDFS-5935:
---

Looks like you are right: [http://jsfiddle.net/GD4zN/]

I'll update the patch with the combined if.

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Commented] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-20 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907780#comment-13907780
 ] 

Travis Thompson commented on HDFS-5935:
---

The reason I didn't combine them is that I was worried it would throw an error 
checking for {{jqxhr.responseJSON.RemoteException}} when 
{{jqxhr.responseJSON}} is undefined.  If you don't think this is something to 
worry about, I can combine them.
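For what it's worth, JavaScript's {{&&}} operator short-circuits, so combining the two checks is safe: the right-hand property access never runs when the left side is falsy.  A minimal sketch (the function name and shape are illustrative, not the actual patch code):

```javascript
// Extract a RemoteException message from a jQuery XHR object, if any.
// jqxhr.responseJSON may be undefined when the response body isn't JSON,
// but '&&' stops evaluating before the second property access in that case.
function remoteExceptionMessage(jqxhr) {
  if (jqxhr.responseJSON && jqxhr.responseJSON.RemoteException) {
    return jqxhr.responseJSON.RemoteException.message;
  }
  return null;
}
```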

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Commented] (HDFS-5951) Provide diagnosis information in the Web UI

2014-02-20 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907764#comment-13907764
 ] 

Travis Thompson commented on HDFS-5951:
---

I have to agree with [~sureshms], especially since the WebUI is hitting the JMX 
REST API, which I think is a fairly common way to monitor these things with 
other tools like Nagios.  To me, it seems more like a convenience thing to 
expose useful diagnosis information, like the missing-files message, but not to 
try to replace things like Nagios checks.

> Provide diagnosis information in the Web UI
> ---
>
> Key: HDFS-5951
> URL: https://issues.apache.org/jira/browse/HDFS-5951
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-5951.000.patch, diagnosis-failure.png, 
> diagnosis-succeed.png
>
>
> HDFS should provide operation statistics in its UI. it can go one step 
> further by leveraging the information to diagnose common problems.





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: HDFS-5935-3.patch

Updated version with a {{default}} section.  Hopefully less confusing :)

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch, HDFS-5935-3.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Commented] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-13 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13900773#comment-13900773
 ] 

Travis Thompson commented on HDFS-5935:
---

[~wheat9] Yeah, we could do that, or set the default in the {{default:}} part 
of the switch statement.  Either way, I just prefer switch statements :)
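As a rough illustration of the switch-with-default shape being discussed (the status codes and message strings here are assumptions for the sketch, not the patch's exact text):

```javascript
// Map a WebHDFS HTTP status code to a user-facing browser error message,
// falling back to a generic message in the 'default' branch.
function browseErrorMessage(status) {
  switch (status) {
    case 401:
      return 'Authentication failed when trying to open the path.';
    case 403:
      return 'Permission denied when trying to open the path.';
    case 404:
      return 'Path does not exist.';
    default:
      return 'Failed to retrieve data; WebHDFS may be disabled.';
  }
}
```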

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: HDFS-5935-2.patch

Realized 404 is also thrown when trying to browse to a folder/file that doesn't 
exist.  Updated error message.

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch, HDFS-5935-2.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Status: Patch Available  (was: Open)

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5935:
--

Attachment: HDFS-5935-1.patch

First attempt.  What do you guys think of these error messages?  This will give 
a permission-denied error just like the current JSP browser or the CLI.

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5935-1.patch
>
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5949) New Namenode UI when trying to download a file, the browser doesn't know the file name

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5949:
--

Attachment: HDFS-5949-1.patch

> New Namenode UI when trying to download a file, the browser doesn't know the 
> file name
> --
>
> Key: HDFS-5949
> URL: https://issues.apache.org/jira/browse/HDFS-5949
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5949-1.patch
>
>
> When trying to download a file through the new Namenode UI FS Browser, the 
> browser doesn't know the name of the file because of a trailing slash.  For 
> instance, this URL is broken and Firefox picks a random name for it:
> {noformat}
> http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml/?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
> {noformat}
> But if you remove the trailing / on the file name, Firefox correctly picks up 
> the name of the file:
> {noformat}
> http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
> {noformat}





[jira] [Updated] (HDFS-5949) New Namenode UI when trying to download a file, the browser doesn't know the file name

2014-02-13 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5949:
--

Status: Patch Available  (was: Open)

> New Namenode UI when trying to download a file, the browser doesn't know the 
> file name
> --
>
> Key: HDFS-5949
> URL: https://issues.apache.org/jira/browse/HDFS-5949
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5949-1.patch
>
>
> When trying to download a file through the new Namenode UI FS Browser, the 
> browser doesn't know the name of the file because of a trailing slash.  For 
> instance, this URL is broken and Firefox picks a random name for it:
> {noformat}
> http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml/?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
> {noformat}
> But if you remove the trailing / on the file name, Firefox correctly picks up 
> the name of the file:
> {noformat}
> http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
> {noformat}





[jira] [Created] (HDFS-5949) New Namenode UI when trying to download a file, the browser doesn't know the file name

2014-02-13 Thread Travis Thompson (JIRA)
Travis Thompson created HDFS-5949:
-

 Summary: New Namenode UI when trying to download a file, the 
browser doesn't know the file name
 Key: HDFS-5949
 URL: https://issues.apache.org/jira/browse/HDFS-5949
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Travis Thompson
Priority: Minor


When trying to download a file through the new Namenode UI FS Browser, the 
browser doesn't know the name of the file because of a trailing slash.  For 
instance, this URL is broken and Firefox picks a random name for it:
{noformat}
http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml/?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
{noformat}
But if you remove the trailing / on the file name, Firefox correctly picks up 
the name of the file:
{noformat}
http://dn.example.com:70/webhdfs/v1/user/tthompso/test_examples/wordcount_in/core-site.xml?op=OPEN&delegation=TOKEN&namenoderpcaddress=namenode.example.com:9000&offset=0
{noformat}
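The shape of a fix can be sketched as normalizing the path before building the WebHDFS URL, so the browser can infer the file name; this is a hypothetical illustration, not the actual patch:

```javascript
// Build a WebHDFS OPEN-style URL, stripping any trailing slash from the
// file path first so the last path segment is the file name the browser sees.
function downloadUrl(host, path, params) {
  var cleanPath = path.replace(/\/+$/, ''); // drop trailing slash(es)
  return 'http://' + host + '/webhdfs/v1' + cleanPath + '?' + params;
}
```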





[jira] [Updated] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-12 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5934:
--

Attachment: HDFS-5934-3.patch

Whoops, my bad.  Fixed.

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5934-1.patch, HDFS-5934-2.patch, HDFS-5934-3.patch
>
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS, if I 
> click back one page, either the previous directory I was viewing, or the page 
> I was viewing before entering the FS browser.  Instead I am always taken back 
> to the previous page I was viewing before entering the FS browser.





[jira] [Commented] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-12 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899744#comment-13899744
 ] 

Travis Thompson commented on HDFS-5935:
---

Well, something else to consider for security is that you'll never get to this 
page if you have security enabled and your browser isn't set up or your ticket 
has expired, because, again, they're running on the same port.  I'll still 
handle those cases; I'm just not sure they'll ever get hit.

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Updated] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-12 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5934:
--

Attachment: HDFS-5934-2.patch

New patch to handle direct links.  Good catch, by the way.  Since the hash 
isn't changing, you still have to call {{browse_directory()}}, but only if we 
have a directory to browse to.

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5934-1.patch, HDFS-5934-2.patch
>
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS, if I 
> click back one page, either the previous directory I was viewing, or the page 
> I was viewing before entering the FS browser.  Instead I am always taken back 
> to the previous page I was viewing before entering the FS browser.





[jira] [Updated] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-12 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5934:
--

Status: Patch Available  (was: Open)

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5934-1.patch
>
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS, if I 
> click back one page, either the previous directory I was viewing, or the page 
> I was viewing before entering the FS browser.  Instead I am always taken back 
> to the previous page I was viewing before entering the FS browser.





[jira] [Updated] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-12 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson updated HDFS-5934:
--

Attachment: HDFS-5934-1.patch

Added a handler to watch for hash changes so that the back button works correctly.
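A minimal sketch of the hashchange approach (the identifiers here are illustrative, not the patch's actual names): the view is derived from the URL hash, so when the back button changes the hash, the handler re-renders the previous view.

```javascript
// Derive the view id from the URL hash, falling back to the overview tab
// when the hash is empty.
function viewFromHash(hash) {
  // '#tab-datanode' -> 'tab-datanode'
  return hash && hash !== '#' ? hash.substring(1) : 'tab-overview';
}

// In the browser this would be wired up roughly like:
// window.addEventListener('hashchange', function () {
//   showTab(viewFromHash(window.location.hash));
// });
```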

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
> Attachments: HDFS-5934-1.patch
>
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS, if I 
> click back one page, either the previous directory I was viewing, or the page 
> I was viewing before entering the FS browser.  Instead I am always taken back 
> to the previous page I was viewing before entering the FS browser.





[jira] [Assigned] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-11 Thread Travis Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Travis Thompson reassigned HDFS-5935:
-

Assignee: Travis Thompson

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Commented] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-11 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898624#comment-13898624
 ] 

Travis Thompson commented on HDFS-5934:
---

Yeah I can take a look at this issue, I should have some time this week.

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Priority: Minor
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS: if I 
> click back one page, I expect to be taken to either the previous directory I 
> was viewing or the page I was viewing before entering the FS browser.  
> Instead, I am always taken back to the previous page I was viewing before 
> entering the FS browser.





[jira] [Commented] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-11 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13898618#comment-13898618
 ] 

Travis Thompson commented on HDFS-5935:
---

It's actually returning a 403, not a 404.  I'll have to test with WebHDFS off 
to see what it throws, but I suspect it's not a 403.  Here's the curl output of 
that URL:

{noformat}
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 
> OpenSSL/0.9.8y zlib/1.2.5
> Host: eat1-fiasconn01.grid.linkedin.com:50070
> Accept: */*
> 
< HTTP/1.1 403 Forbidden
< Cache-Control: no-cache
< Expires: Thu, 01-Jan-1970 00:00:00 GMT
< Date: Wed, 12 Feb 2014 01:18:25 GMT
< Pragma: no-cache
< Date: Wed, 12 Feb 2014 01:18:25 GMT
< Pragma: no-cache
< Content-Type: application/json
< WWW-Authenticate: Negotiate 
YGoGCSqGSIb3EgECAgIAb1swWaADAgEFoQMCAQ+iTTBLoAMCARKiRARCRpa0/L2wNWawS8JZNkCuV/5IpHw3A87DxqA/m2N7yk5Q6cj5N0FwEtFLrL1AGoV5dP0bqoXK53HF3W2YoGxGQJ5b
< Set-Cookie: 
hadoop.auth="u=tthompso&p=tthom...@linkedin.biz&t=kerberos&e=1392203905735&s=6aXfyTZu5ebuUgZ/OqTZTJWPlwA=";Path=/
< Transfer-Encoding: chunked
< Server: Jetty(6.1.26)
< 
* Connection #0 to host (nil) left intact
{"RemoteException":{"exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException","message":"Permission
 denied: user=tthompso, access=READ_EXECUTE, 
inode=\"/system\":hdfs:yarn:drwxr-x--x"}}* Closing connection #0
{noformat}

I've found where this is happening in the source code and can take a look at it 
in the next few days.  It doesn't look like it'll be too hard to add smarter 
error handling.

> New Namenode UI FS browser should throw smarter error messages
> --
>
> Key: HDFS-5935
> URL: https://issues.apache.org/jira/browse/HDFS-5935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Priority: Minor
>
> When browsing using the new FS browser in the namenode, if I try to browse a 
> folder that I don't have permission to view, it throws the error:
> {noformat}
> Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: 
> Forbidden
> WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
> {noformat}
> The reason I'm not allowed to see /system is because I don't have permission, 
> not because WebHDFS is disabled.





[jira] [Created] (HDFS-5935) New Namenode UI FS browser should throw smarter error messages

2014-02-11 Thread Travis Thompson (JIRA)
Travis Thompson created HDFS-5935:
-

 Summary: New Namenode UI FS browser should throw smarter error 
messages
 Key: HDFS-5935
 URL: https://issues.apache.org/jira/browse/HDFS-5935
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.3.0
Reporter: Travis Thompson
Priority: Minor


When browsing using the new FS browser in the namenode, if I try to browse a 
folder that I don't have permission to view, it throws the error:

{noformat}
Failed to retreive data from /webhdfs/v1/system?op=LISTSTATUS, cause: Forbidden

WebHDFS might be disabled. WebHDFS is required to browse the filesystem.
{noformat}

The reason I'm not allowed to see /system is because I don't have permission, 
not because WebHDFS is disabled.





[jira] [Created] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-11 Thread Travis Thompson (JIRA)
Travis Thompson created HDFS-5934:
-

 Summary: New Namenode UI back button doesn't work as expected
 Key: HDFS-5934
 URL: https://issues.apache.org/jira/browse/HDFS-5934
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.3.0
Reporter: Travis Thompson
Priority: Minor


When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
will take me to the Datanodes page.  If I click my browser back button, it does 
not take me back to the overview page as one would expect.  This is true of 
choosing any tab.

Another example of the back button acting weird is when browsing HDFS: if I 
click back one page, I expect to be taken to either the previous directory I 
was viewing or the page I was viewing before entering the FS browser.  
Instead, I am always taken back to the previous page I was viewing before 
entering the FS browser.





[jira] [Commented] (HDFS-5569) WebHDFS should support a deny/allow list for data access

2013-12-02 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13837260#comment-13837260
 ] 

Travis Thompson commented on HDFS-5569:
---

[~cmccabe] I'm not sure how adding a new IP address would bypass this.  As the 
client I can add whatever IP address I want, but if it's not routable it won't 
work... and the filtering would be done on the server side, so it would be 
based on the source IP... so having an IP/host-based filter would be useful.

Also, on the Kerberos note: there is NO authorization in Kerberos, only 
authentication.  Kerberos only tells you who you are, not what you can do; 
it's up to the application layer to decide what to do with that information.
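The server-side filtering being argued for could be sketched as follows (a hypothetical illustration of httpd-style allow/deny matching, not any actual WebHDFS API; real rules would use proper CIDR parsing rather than the naive string-prefix match used here):

```javascript
// Hypothetical sketch: decide whether a request's source IP may use
// WebHDFS data transfer, given allow/deny prefix lists. Deny rules take
// precedence over allow rules, mirroring httpd's Deny-from semantics.
// NOTE: string prefixes are a crude stand-in for CIDR ranges
// ('10.2.' would also match '10.20.x.x' with real dotted-quad inputs).
function isAllowed(sourceIp, allowPrefixes, denyPrefixes) {
  var denied = denyPrefixes.some(function (p) {
    return sourceIp.indexOf(p) === 0;
  });
  var allowed = allowPrefixes.some(function (p) {
    return sourceIp.indexOf(p) === 0;
  });
  return allowed && !denied;
}
```

Because the check runs on the server against the connection's actual source address, a client claiming a different IP gains nothing, which is the point Travis makes above.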

> WebHDFS should support a deny/allow list for data access
> 
>
> Key: HDFS-5569
> URL: https://issues.apache.org/jira/browse/HDFS-5569
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Adam Faris
>  Labels: features
>
> Currently we can't restrict what networks are allowed to transfer data using 
> WebHDFS.  Obviously we can use firewalls to block ports, but this can be 
> complicated and problematic to maintain.  Additionally, because all the jetty 
> servlets run inside the same container, blocking access to jetty to prevent 
> WebHDFS transfers also blocks the other servlets running inside that same 
> jetty container.
> I am requesting a deny/allow feature be added to WebHDFS.  This is already 
> done with the Apache HTTPD server, and is what I'd like to see the deny/allow 
> list modeled after.   Thanks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)