[jira] [Commented] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985238#comment-13985238
 ] 

Hadoop QA commented on HDFS-6289:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642380/HDFS-6289.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6771//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6771//console

This message is automatically generated.

> HA failover can fail if there are pending DN messages for DNs which no longer 
> exist
> ---
>
> Key: HDFS-6289
> URL: https://issues.apache.org/jira/browse/HDFS-6289
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Critical
> Attachments: HDFS-6289.patch, HDFS-6289.patch
>
>
> In an HA setup, the standby NN may receive messages from DNs for blocks which 
> the standby NN is not yet aware of. It queues up these messages and replays 
> them when it next reads from the edit log or fails over. On a failover, all 
> of these pending DN messages must be processed successfully in order for the 
> failover to succeed. If one of these pending DN messages refers to a DN 
> storageId that no longer exists (because the DN with that transfer address 
> has been reformatted and has re-registered with the same transfer address) 
> then on transition to active the NN will not be able to process this DN 
> message and will suicide with an error like the following:
> {noformat}
> 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
> (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
> shutdown. Shutting down immediately.
> java.io.IOException: Cannot mark 
> blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
> 127.0.0.1:33324 does not exist
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985288#comment-13985288
 ] 

Uma Maheswara Rao G commented on HDFS-6302:
---

Thanks a lot Yi, for the patch.

Below are my initial comments on the patch.

{code}
+if (f1 != null) {
+  throw new IllegalStateException("Duplicated XAttrsFeature");
+}
{code}
How about using a preconditions check here, e.g. Preconditions.checkState?
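
For illustration, a minimal sketch of that suggestion (assuming Guava's Preconditions, which Hadoop already depends on, is importable here):
{code}
import com.google.common.base.Preconditions;

// Equivalent to the if/throw above: fails with an IllegalStateException
// carrying the same message if an XAttrsFeature was already added.
Preconditions.checkState(f1 == null, "Duplicated XAttrsFeature");
{code}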


{code}
+/**
+ * Feature for extends attributes.
+ */
{code}
Is this "Feature for extended attributes." ?

I agree that more higher-level tests will come as part of other patches, but 
some unit-level tests are still needed. How about adding a test like the one 
below for this change?
  {code}
  @Test
  public void testXattrFeature() {
    replication = 3;
    preferredBlockSize = 128 * 1024 * 1024;
    INodeFile inf = createINodeFile(replication, preferredBlockSize);
    List<XAttr> list = new ArrayList<XAttr>();
    list.add(new XAttr.Builder().setName("testxattrname")
        .setValue(new byte[] { 1, 2, 3 }).setNameSpace(NameSpace.USER).build());
    ImmutableList<XAttr> ls = ImmutableList.copyOf(list);
    XAttrFeature f = new XAttrFeature(ls);
    inf.addXAttrsFeature(f);
    XAttrFeature xAttrsFeature = inf.getXAttrsFeature();
    assertEquals("testxattrname", xAttrsFeature.getXAttrs().get(0).getName());
  }
  {code}

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in the fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985296#comment-13985296
 ] 

Fengdong Yu commented on HDFS-6299:
---

Sorry, my comments are a bit late.
{code}
+int prefixIndex = name.indexOf(".");
+if (prefixIndex == -1) {
+  throw new HadoopIllegalArgumentException("XAttr name must be prefixed with" +
+      " user/trusted/security/system which followed by '.'");
+} else if (prefixIndex == name.length() - 1) {
+  throw new HadoopIllegalArgumentException("XAttr name can not be empty.");
+}
{code}

It should be:
{code}
if (prefixIndex <= 0) {
{code}

Otherwise the code does wasted work before this exception is thrown.
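
As a sketch of the combined check (comments are mine; the exception message is copied from the diff above):
{code}
// prefixIndex == -1 : no '.' in the name   -> namespace prefix missing
// prefixIndex == 0  : name starts with '.' -> namespace prefix empty
// Checking <= 0 rejects both malformed cases up front.
if (prefixIndex <= 0) {
  throw new HadoopIllegalArgumentException("XAttr name must be prefixed with" +
      " user/trusted/security/system which followed by '.'");
}
{code}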

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985299#comment-13985299
 ] 

Fengdong Yu commented on HDFS-6299:
---

Can you add Javadoc in DFSClient?

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu reopened HDFS-6299:
---


I reopened this issue because I found more than two problems during my review.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985309#comment-13985309
 ] 

Uma Maheswara Rao G commented on HDFS-6299:
---

Thanks, Fengdong, for the reviews.
>Can you add Javadoc in DFSClient?
Actually, DFSClient is not a publicly exposed class, and the clear javadoc 
comments already live with the exposed API. DFSClient is essentially a core 
helper class that the client delegates to. Still, there is no harm in having 
javadoc there.

This issue is already committed. Feel free to file a new JIRA for the minor 
javadoc items, etc., and post a patch with test cases that catch any issues you 
notice. I will be happy to review and commit the patch.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6299.
---

Resolution: Fixed

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985312#comment-13985312
 ] 

Fengdong Yu commented on HDFS-6299:
---

1.
{code}
+  public byte[] getXAttr(String src, String name) throws IOException {
+    checkOpen();
+    try {
+      XAttr xAttr = buildXAttr(name, null);
+      List<XAttr> xAttrs = Lists.newArrayListWithCapacity(1);
+      xAttrs.add(xAttr);
+      List<XAttr> result = namenode.getXAttrs(src, xAttrs);
+      byte[] value = null;
+      if (result != null && result.size() > 0) {
+        XAttr a = result.get(0);
+        value = a.getValue();
+        if (value == null) {
+          value = new byte[0]; //xattr exists, but no value.
+        }
+      }
+      return value;
+    } catch(RemoteException re) {
{code}

It looks like you don't want to return null here, but if result is null or 
empty, it still returns null.
Also, use !result.isEmpty() instead of result.size() > 0.

2.
The client RPC interface is not symmetrical: there are getXAttr(), getXAttrs(), 
and setXAttr(), so there should also be a setXAttrs().

3.
Why is there no getXAttr() in ClientProtocol? We should allow getting just one 
xattr at a time.





> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985315#comment-13985315
 ] 

Fengdong Yu commented on HDFS-6299:
---

bq.Actually DFSClient is not publicly exposed one and the clear javadoc 
comments there with API. That is like a core helper class delegation from 
client perspective. No harm in having javadoc.

Yes, there is no harm, but it is the required code style. Please refer to other 
client methods in DFSClient, such as append().

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Reopened] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu reopened HDFS-6299:
---


This cannot be closed yet, so I am reopening it. The commit should be reverted, 
and these review comments should be addressed.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-04-30 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985319#comment-13985319
 ] 

Binglin Chang commented on HDFS-6308:
-

Related error log:

{code}
2014-04-28 05:18:19,700 TRACE ipc.ProtobufRpcEngine 
(ProtobufRpcEngine.java:invoke(197)) - 1418: Call -> /127.0.0.1:58789: 
getHdfsBlockLocations {tokens { identifier: "" password: "" kind: "" service: 
"" } tokens { identifier: "" password: "" kind: "" service: "" } blockPoolId: 
"BP-1664789652-67.195.138.24-1398662297553" blockIds: 1073741825 blockIds: 
1073741826}
2014-04-28 05:18:19,700 TRACE ipc.ProtobufRpcEngine 
(ProtobufRpcEngine.java:invoke(197)) - 1419: Call -> /127.0.0.1:45933: 
getHdfsBlockLocations {tokens { identifier: "" password: "" kind: "" service: 
"" } tokens { identifier: "" password: "" kind: "" service: "" } blockPoolId: 
"BP-1664789652-67.195.138.24-1398662297553" blockIds: 1073741825 blockIds: 
1073741826}
2014-04-28 05:18:19,701 TRACE ipc.ProtobufRpcEngine 
(ProtobufRpcEngine.java:invoke(211)) - 1418: Exception <- 
localhost/127.0.0.1:58789: getHdfsBlockLocations {java.net.ConnectException: 
Call From asf000.sp2.ygridcore.net/67.195.138.24 to localhost:58789 failed on 
connection exception: java.net.ConnectException: Connection refused; For more 
details see:  http://wiki.apache.org/hadoop/ConnectionRefused}
2014-04-28 05:18:19,701 INFO  ipc.Server (Server.java:doRead(762)) - Socket 
Reader #1 for port 45933: readAndProcess from client 127.0.0.1 threw exception 
[java.io.IOException: Connection reset by peer]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
at sun.nio.ch.IOUtil.read(IOUtil.java:171)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
at org.apache.hadoop.ipc.Server.channelRead(Server.java:2644)
at org.apache.hadoop.ipc.Server.access$2800(Server.java:133)
at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1517)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:753)
at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:627)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:598)
2014-04-28 05:18:19,702 TRACE ipc.ProtobufRpcEngine 
(ProtobufRpcEngine.java:invoke(211)) - 1419: Exception <- /127.0.0.1:45933: 
getHdfsBlockLocations {java.net.SocketTimeoutException: Call From 
asf000.sp2.ygridcore.net/67.195.138.24 to localhost:45933 failed on socket 
timeout exception: java.net.SocketTimeoutException: 1500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:56102 
remote=/127.0.0.1:45933]; For more details see:  
http://wiki.apache.org/hadoop/SocketTimeout}
2014-04-28 05:18:19,702 TRACE ipc.ProtobufRpcEngine 
(ProtobufRpcEngine.java:invoke(211)) - 1415: Exception <- 
localhost/127.0.0.1:45933: getHdfsBlockLocations 
{java.net.SocketTimeoutException: Call From 
asf000.sp2.ygridcore.net/67.195.138.24 to localhost:45933 failed on socket 
timeout exception: java.net.SocketTimeoutException: 1500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:56102 
remote=/127.0.0.1:45933]; For more details see:  
{code}

The socket read/write timeout is set to 1500 ms, and a timeout error is global 
(per connection): when a timeout occurs, all calls on that connection are 
marked as timed out. The expected behavior, however, is that the first call 
times out while the second call completes normally.

There is a simple fix: invoke the second call only after the connection is 
known to be closed (see the sketch below).

We can consider improving ipc.Client later to prevent this kind of corner case.
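
A minimal sketch of that workaround (hedged: the helper name and polling 
interval are mine, and it assumes ipc.Server#getNumOpenConnections; the 
attached patch may do this differently):
{code}
import org.apache.hadoop.ipc.Server;

// Poll until the server has dropped the first (timed-out) connection, so the
// second RPC is forced onto a fresh connection and cannot be marked with the
// previous per-connection timeout.
private static void waitForConnectionClose(Server server)
    throws InterruptedException {
  while (server.getNumOpenConnections() > 0) {
    Thread.sleep(100);
  }
}
{code}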




> TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
> 
>
> Key: HDFS-6308
> URL: https://issues.apache.org/jira/browse/HDFS-6308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>
> Found this on pre-commit build of HDFS-6261
> {code}
> java.lang.AssertionError: Expected one valid and one invalid volume
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-557) 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs

2014-04-30 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-557:
-

Fix Version/s: (was: 0.24.0)

> 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs
> 
>
> Key: HDFS-557
> URL: https://issues.apache.org/jira/browse/HDFS-557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Boris Shkolnik
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-557.patch, file_system_shell.pdf, 
> file_system_shell_2.pdf, hdfs_user_guide.pdf
>
>
> forest documentation is using bin/hadoop for dfsadmin command help instead of 
> bin/hdfs



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-04-30 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6308:


Assignee: Binglin Chang
  Status: Patch Available  (was: Open)

> TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
> 
>
> Key: HDFS-6308
> URL: https://issues.apache.org/jira/browse/HDFS-6308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>
> Found this on pre-commit build of HDFS-6261
> {code}
> java.lang.AssertionError: Expected one valid and one invalid volume
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-04-30 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6308:


Attachment: HDFS-6308.v1.patch

> TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
> 
>
> Key: HDFS-6308
> URL: https://issues.apache.org/jira/browse/HDFS-6308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-6308.v1.patch
>
>
> Found this on pre-commit build of HDFS-6261
> {code}
> java.lang.AssertionError: Expected one valid and one invalid volume
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-557) 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985335#comment-13985335
 ] 

Hadoop QA commented on HDFS-557:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12504395/HDFS-557.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6773//console

This message is automatically generated.

> 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs
> 
>
> Key: HDFS-557
> URL: https://issues.apache.org/jira/browse/HDFS-557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.23.0
>Reporter: Boris Shkolnik
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-557.patch, file_system_shell.pdf, 
> file_system_shell_2.pdf, hdfs_user_guide.pdf
>
>
> forest documentation is using bin/hadoop for dfsadmin command help instead of 
> bin/hdfs



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6304) Consolidate the logic of path resolution in FSDirectory

2014-04-30 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6304:
--

Hadoop Flags: Reviewed

+1 patch looks good.

> Consolidate the logic of path resolution in FSDirectory
> ---
>
> Key: HDFS-6304
> URL: https://issues.apache.org/jira/browse/HDFS-6304
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-10551.000.patch, HDFS-6304.000.patch
>
>
> Currently both FSDirectory and INodeDirectory provide helpers to resolve 
> paths to inodes. This jira proposes to move all these helpers into 
> FSDirectory to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985349#comment-13985349
 ] 

Uma Maheswara Rao G commented on HDFS-6299:
---

{quote}
2.
The client RPC interface is not symmetrical: there are getXAttr(), getXAttrs(), 
and setXAttr(), so there should also be a setXAttrs().
3.
Why is there no getXAttr() in ClientProtocol? We should allow getting just one 
xattr at a time.
{quote}
Please look at the ClientProtocol interface audience: it is an internal RPC 
mechanism API. There is no guideline that it must have API signatures 
symmetrical with the client-exposed API. See the create API in ClientProtocol: 
we have only one there, yet we expose many overloaded APIs to clients. Please 
also look at the review comments in HDFS-6258.

>please refer to other client methods in DFSClient, such as append()?
Please also look at the FileSystem#append doc. Both say the same thing; there 
is no additional information there. So the exposed API doc is the same, and we 
just delegate down to this core implementation.

Let me say it again: you can file a new JIRA with your comments where they are 
valid. This is branch development, and I don't see any critical issues in your 
comments. I am not against adding some developer doc in DFSClient, but at the 
same time I am not insisting on having it there.

For example, see setAcl, removeAcl, and:
{code}
public DFSInputStream open(String src) 
    throws IOException, UnresolvedLinkException {
  return open(src, dfsClientConf.ioBufferSize, true, null);
}
{code}


> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6309) Javadocs for Xattrs apis in DFSCliengt and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-6309:
-

 Summary: Javadocs for Xattrs apis in DFSCliengt and other minor 
fixups
 Key: HDFS-6309
 URL: https://issues.apache.org/jira/browse/HDFS-6309
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Uma Maheswara Rao G


Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6299.
---

Resolution: Fixed

I have created a JIRA. Let's discuss your comments there.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSCliengt and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6309:
--

Assignee: Yi Liu

> Javadocs for Xattrs apis in DFSCliengt and other minor fixups
> -
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSCliengt and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6309:
--

Affects Version/s: HDFS XAttrs (HDFS-2006)

> Javadocs for Xattrs apis in DFSCliengt and other minor fixups
> -
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSCliengt and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6309:
--

Priority: Minor  (was: Major)

> Javadocs for Xattrs apis in DFSCliengt and other minor fixups
> -
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
>Priority: Minor
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985351#comment-13985351
 ] 

Uma Maheswara Rao G edited comment on HDFS-6299 at 4/30/14 10:41 AM:
-

I have created a JIRA, HDFS-6309. Let's discuss your comments there.


was (Author: umamaheswararao):
I have created a JIRA. Let's discuss your comments there.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-6309:
--

Summary: Javadocs for Xattrs apis in DFSClient and other minor fixups  
(was: Javadocs for Xattrs apis in DFSCliengt and other minor fixups)

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
>Priority: Minor
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-04-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985368#comment-13985368
 ] 

Akira AJISAKA commented on HDFS-6293:
-

bq. It will be great if someone can come up with a standalone tool that allows 
dumping directory structure and content with, say, 1-2GB heap AND completes in 
comparable execution time.
That approach seems best; however, I'm okay with using more memory (e.g. 
10-20GB). I'm curious about [~vanzin]'s idea.
By the way,
bq. The 2.4.0 pb-fsimage does contain tokens, but OIV does not show any tokens.
I think that issue can be separated out. I'll create a JIRA to track it.

> Issues with OIV processing PB-based fsimages
> 
>
> Key: HDFS-6293
> URL: https://issues.apache.org/jira/browse/HDFS-6293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Priority: Blocker
> Attachments: Heap Histogram.html
>
>
> There are issues with OIV when processing fsimages in protobuf format. 
> Due to the internal layout changes introduced by the protobuf-based fsimage, 
> OIV consumes an excessive amount of memory.  We have tested with an fsimage with 
> about 140M files/directories. The peak heap usage when processing this image 
> in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
> the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
> heap (max new size was 1GB).  It should be possible to process any image with 
> the default heap size of 1.5GB.
> Another issue is the complete change of format/content in OIV's XML output.  
> I also noticed that the secret manager section has no tokens while there were 
> unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
> they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6310:
---

 Summary: PBImageXmlWriter should output information about 
Delegation Tokens
 Key: HDFS-6310
 URL: https://issues.apache.org/jira/browse/HDFS-6310
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA


Separated from HDFS-6293.
The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985388#comment-13985388
 ] 

Hudson commented on HDFS-6269:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #556 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/556/])
HDFS-6269. NameNode Audit Log should differentiate between webHDFS open and 
HDFS open. (Eric Payne via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591117)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> NameNode Audit Log should differentiate between webHDFS open and HDFS open.
> ---
>
> Key: HDFS-6269
> URL: https://issues.apache.org/jira/browse/HDFS-6269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6269-AuditLogWebOpen.txt, 
> HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt
>
>
> To enhance traceability, the NameNode audit log should use a different string 
> for open in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6310:


Attachment: HDFS-6319.patch

Attaching a patch. I verified locally that OIV outputs the Delegation Token 
information with this patch. Here is the output:
{code}
[SecretManagerSection XML; the tags were lost in the mail archive. The output
showed delegation key entries with their ids and expiry timestamps, and one
delegation token entry with owner "aajisaka" and renewer "JobTracker".]
{code}

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
> Attachments: HDFS-6319.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6310:


Assignee: Akira AJISAKA
  Status: Patch Available  (was: Open)

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6319.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-04-30 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985395#comment-13985395
 ] 

Akira AJISAKA commented on HDFS-6293:
-

Created HDFS-6310 for outputting the tokens, and attached a patch.

> Issues with OIV processing PB-based fsimages
> 
>
> Key: HDFS-6293
> URL: https://issues.apache.org/jira/browse/HDFS-6293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Priority: Blocker
> Attachments: Heap Histogram.html
>
>
> There are issues with OIV when processing fsimages in protobuf format. 
> Due to the internal layout changes introduced by the protobuf-based fsimage, 
> OIV consumes an excessive amount of memory.  We have tested with an fsimage with 
> about 140M files/directories. The peak heap usage when processing this image 
> in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
> the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
> heap (max new size was 1GB).  It should be possible to process any image with 
> the default heap size of 1.5GB.
> Another issue is the complete change of format/content in OIV's XML output.  
> I also noticed that the secret manager section has no tokens while there were 
> unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
> they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6310:


Attachment: (was: HDFS-6319.patch)

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6310.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6310:


Attachment: HDFS-6310.patch

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6310.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6302:
-

Attachment: HDFS-6302.1.patch

Thanks, Uma, for your review. I updated the patch to address your comments.

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in the fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985416#comment-13985416
 ] 

Hadoop QA commented on HDFS-6308:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642622/HDFS-6308.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6772//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6772//console

This message is automatically generated.

> TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
> 
>
> Key: HDFS-6308
> URL: https://issues.apache.org/jira/browse/HDFS-6308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-6308.v1.patch
>
>
> Found this on pre-commit build of HDFS-6261
> {code}
> java.lang.AssertionError: Expected one valid and one invalid volume
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985415#comment-13985415
 ] 

Yi Liu commented on HDFS-6299:
--

Hi [~azuryy], thanks for your comment.
{quote}
It should be:
{code}
if (prefixIndex <= 0) {
{code}
Otherwise the code does wasted work before this exception is thrown.
{quote}

{{prefixIndex == -1}} indicates that the '.' was not found. The logic is that 
the name should have a prefix and real content after the prefix. Is that the 
same as what you are thinking?

{quote}
It looks like you don't want to return null here, but if result is null or 
empty, it still returns null.
{quote}

Returning null is allowed; there is no doc that says null must not be returned. 
The comment in the code, 
{{//xattr exists, but no value.}} 
explains the contract: if the xattr doesn't exist, null is returned, but if the 
xattr exists with no value, an empty array is returned.
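
As an illustration of that contract (a sketch only; {{dfsClient}} stands for an 
existing DFSClient instance, and the signature follows the diff quoted earlier):
{code}
byte[] value = dfsClient.getXAttr("/file", "user.test");
if (value == null) {
  // the xattr "user.test" does not exist on /file
} else if (value.length == 0) {
  // the xattr exists but was set with no value
}
{code}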

{quote}
Also, use !result.isEmpty() instead of result.size() > 0.
{quote}
Right, {{isEmpty()}} is efficient, and I will add this improvement.

{quote}
The client RPC interface is not symmetrical: there are getXAttr(), getXAttrs(), 
and setXAttr(), so there should also be a setXAttrs().
{quote}
We discussed this during design and do not intend to support setXAttrs 
currently; it is close to POSIX setxattr, and there is no use case for 
setXAttrs at the moment. We can extend this in the future. Please see 
HADOOP-10520.

{quote}
Why is there no getXAttr() in ClientProtocol? We should allow getting just one 
xattr at a time.
{quote}
Agree with Uma.

{quote}
Can you add Javadoc in DFSClient?
{quote}
Agree with Uma. We can improve the javadoc for internal methods in another 
JIRA.



> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.patch
>
>
> This JIRA tracks the Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep a dummy implementation of the XAttr API of 
> ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6309:
-

Assignee: Charles Lamb  (was: Yi Liu)

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Charles Lamb
>Priority: Minor
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6309:
---

Attachment: HDFS-6309.1.patch

Here are some minor javadoc fixes and cleanups to XAttr.java.

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6309.1.patch
>
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-04-30 Thread Nikola Vujic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikola Vujic updated HDFS-5168:
---

Attachment: HDFS-5168.patch

> BlockPlacementPolicy does not work for cross node group dependencies
> 
>
> Key: HDFS-5168
> URL: https://issues.apache.org/jira/browse/HDFS-5168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Nikola Vujic
>Assignee: Nikola Vujic
>Priority: Critical
> Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
> HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
> HDFS-5168.patch
>
>
> Block placement policies do not work for cross rack/node group dependencies. 
> In practice this is needed when compute servers and storage fall into two 
> independent fault domains; in that case neither BlockPlacementPolicyDefault 
> nor BlockPlacementPolicyWithNodeGroup is able to provide proper block 
> placement.
> Let's suppose we have a Hadoop cluster with one rack containing two servers, 
> and we run 2 VMs per server. The node group topology for this cluster would be:
>  server1-vm1 -> /d1/r1/n1
>  server1-vm2 -> /d1/r1/n1
>  server2-vm1 -> /d1/r1/n2
>  server2-vm2 -> /d1/r1/n2
> This works fine as long as server and storage fall into the same fault 
> domain, but if the storage is in a different fault domain from the server, we 
> cannot handle that. For example, if the storage of server1-vm1 is in the 
> same fault domain as the storage of server2-vm1, then we must not place two 
> replicas on these two nodes even though they are in different node groups.
> Two possible approaches:
> - One approach would be to define cross rack/node group dependencies and to 
> use them when excluding nodes from the search space. This looks like the 
> cleanest way to fix this, as it requires only minor changes in the 
> BlockPlacementPolicy classes.
> - The other approach would be to allow nodes to fall into more than one node 
> group. When we choose a node to hold a replica, we have to exclude from the 
> search space all nodes from the node groups to which the chosen node belongs. 
> This approach may require major changes in NetworkTopology.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-5522) Datanode disk error check may be incorrectly skipped

2014-04-30 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-5522:


Assignee: Rushabh S Shah

> Datanode disk error check may be incorrectly skipped
> 
>
> Key: HDFS-5522
> URL: https://issues.apache.org/jira/browse/HDFS-5522
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Rushabh S Shah
>
> After HDFS-4581 and HDFS-4699, {{checkDiskError()}} is not called when 
> network errors occur while processing datanode requests.  This appears to 
> create problems when a disk is having trouble but is not failing I/O quickly. 
> If I/O hangs for a long time, a network read/write may time out first and the 
> peer may close the connection. Although the error was caused by a faulty 
> local disk, the disk check is not carried out in this case. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-04-30 Thread Nikola Vujic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985486#comment-13985486
 ] 

Nikola Vujic commented on HDFS-5168:


Hi [~szetszwo],

A new patch addressing your comments is attached.

I have changed everything you suggested except making the getRawMapping methods 
package-private. I left them private since we don't have any use case for a 
package-private getRawMapping at the moment.

> BlockPlacementPolicy does not work for cross node group dependencies
> 
>
> Key: HDFS-5168
> URL: https://issues.apache.org/jira/browse/HDFS-5168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Nikola Vujic
>Assignee: Nikola Vujic
>Priority: Critical
> Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
> HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
> HDFS-5168.patch
>
>
> Block placement policies do not work for cross rack/node group dependencies. 
> In practice this is needed when compute servers and storage fall into two 
> independent fault domains; in that case neither BlockPlacementPolicyDefault 
> nor BlockPlacementPolicyWithNodeGroup is able to provide proper block 
> placement.
> Let's suppose we have a Hadoop cluster with one rack containing two servers, 
> and we run 2 VMs per server. The node group topology for this cluster would be:
>  server1-vm1 -> /d1/r1/n1
>  server1-vm2 -> /d1/r1/n1
>  server2-vm1 -> /d1/r1/n2
>  server2-vm2 -> /d1/r1/n2
> This works fine as long as server and storage fall into the same fault 
> domain, but if the storage is in a different fault domain from the server, we 
> cannot handle that. For example, if the storage of server1-vm1 is in the 
> same fault domain as the storage of server2-vm1, then we must not place two 
> replicas on these two nodes even though they are in different node groups.
> Two possible approaches:
> - One approach would be to define cross rack/node group dependencies and to 
> use them when excluding nodes from the search space. This looks like the 
> cleanest way to fix this, as it requires only minor changes in the 
> BlockPlacementPolicy classes.
> - The other approach would be to allow nodes to fall into more than one node 
> group. When we choose a node to hold a replica, we have to exclude from the 
> search space all nodes from the node groups to which the chosen node belongs. 
> This approach may require major changes in NetworkTopology.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985521#comment-13985521
 ] 

Uma Maheswara Rao G commented on HDFS-6309:
---

Patch looks good to me.
Could you also consider the suggestion from [~azuryy] at HDFS-6299?

{quote}Another, try to use !result.isEmpty(), instead of 'result.size() > 
0'{quote}
Right, isEmpty() is more efficient, and I will add this improvement.

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6309.1.patch
>
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6311) TestLargeBlock#testLargeBlockSize : File /tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead of minReplication (=1)

2014-04-30 Thread Tony Reix (JIRA)
Tony Reix created HDFS-6311:
---

 Summary: TestLargeBlock#testLargeBlockSize : File 
/tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead 
of minReplication (=1)
 Key: HDFS-6311
 URL: https://issues.apache.org/jira/browse/HDFS-6311
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.4.0
 Environment: Virtual Box - Ubuntu 14.04 - x86_64
Reporter: Tony Reix


I'm testing HDFS 2.4.0 

Apache Hadoop HDFS: Tests run: 2650, Failures: 2, Errors: 2, Skipped: 99

I have the following error each time I launch my tests (3 tries).

Forking command line: /bin/sh -c cd 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs && 
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter2355654085353142996.jar
 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire983005167523288650tmp
 
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4328161716955453811297tmp

Running org.apache.hadoop.hdfs.TestLargeBlock

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.011 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 15.549 
sec  <<< ERROR!

org.apache.hadoop.ipc.RemoteException: File /tmp/TestLargeBlock/2147484160.dat 
could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and no node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2684)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2008)

at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985543#comment-13985543
 ] 

Hudson commented on HDFS-6269:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1747 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1747/])
HDFS-6269. NameNode Audit Log should differentiate between webHDFS open and 
HDFS open. (Eric Payne via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591117)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> NameNode Audit Log should differentiate between webHDFS open and HDFS open.
> ---
>
> Key: HDFS-6269
> URL: https://issues.apache.org/jira/browse/HDFS-6269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6269-AuditLogWebOpen.txt, 
> HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt
>
>
> To enhance traceability, the NameNode audit log should use a different string 
> for open in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985550#comment-13985550
 ] 

Uma Maheswara Rao G commented on HDFS-6302:
---

Thanks Yi for the update on the patch!
The patch almost looks good to me. I have a few nits left to handle.

1) You should also cover removeXattrFeature in the test.
2) Provide brief javadoc for the *XAttrFeature APIs in INode.

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6299:
---

Attachment: HDFS-6299.2.patch

This patch addresses Javadoc and exception message clarity as well as 
[~azuryy]'s comment about isEmpty() vs result.size() > 0.


> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.2.patch, HDFS-6299.patch
>
>
> This JIRA tracks Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep the dummy implementation for the XAttr API 
> of ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985565#comment-13985565
 ] 

Charles Lamb commented on HDFS-6302:


+1 from me. This looks straightforward.

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985569#comment-13985569
 ] 

Uma Maheswara Rao G commented on HDFS-6299:
---

Hi Charles, I meant for this patch to be uploaded along with HDFS-6309.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.2.patch, HDFS-6299.patch
>
>
> This JIRA tracks Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep the dummy implementation for the XAttr API 
> of ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985574#comment-13985574
 ] 

Fengdong Yu commented on HDFS-6299:
---

Yes, please upload the patch to HDFS-6309 and don't update this issue again. :)

{code}
+  if (result != null && result.isEmpty()) {
{code}

It should be:
{code}
  if (result != null && !result.isEmpty()) {
{code}


> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.2.patch, HDFS-6299.patch
>
>
> This JIRA tracks Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep the dummy implementation for the XAttr API 
> of ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985578#comment-13985578
 ] 

Hadoop QA commented on HDFS-6310:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642632/HDFS-6310.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6775//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6775//console

This message is automatically generated.

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6310.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6299) Protobuf for XAttr and client-side implementation

2014-04-30 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985580#comment-13985580
 ] 

Yi Liu commented on HDFS-6299:
--

Thanks Charles. It must have been a typo with "isEmpty()"; please upload the 
updated patch to HDFS-6309.

> Protobuf for XAttr and client-side implementation 
> --
>
> Key: HDFS-6299
> URL: https://issues.apache.org/jira/browse/HDFS-6299
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6299.2.patch, HDFS-6299.patch
>
>
> This JIRA tracks Protobuf for XAttr and the implementation of the XAttr 
> interfaces in DistributedFileSystem and DFSClient. 
> With this JIRA we may just keep the dummy implementation for the XAttr API 
> of ClientProtocol in NameNodeRpcServer



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6302:
-

Attachment: HDFS-6302.2.patch

Thanks Uma, Charles. I updated the patch.

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6309:
---

Attachment: HDFS-6309.2.patch

Fix typo in DFSClient.java

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6309.1.patch, HDFS-6309.2.patch
>
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985595#comment-13985595
 ] 

Uma Maheswara Rao G commented on HDFS-6302:
---

+1, the patch looks good to me. I will commit it shortly to the branch.

Note:
{code}
 return nsQuota == -1L && dsQuota == -1L?
  new INodeDirectoryAttributes.SnapshotCopy(name, permissions, null, 
modificationTime, null)
{code}
The above code crosses 80 characters per line; I will fix the wrapping while 
committing.
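
For reference, one way the fragment could be wrapped to stay under 80 
characters (with a space added before the '?'; the rest of the expression is 
omitted here, as in the snippet above):

{code}
return nsQuota == -1L && dsQuota == -1L ?
    new INodeDirectoryAttributes.SnapshotCopy(
        name, permissions, null, modificationTime, null)
{code}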

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory

2014-04-30 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985606#comment-13985606
 ] 

Yongjun Zhang commented on HDFS-6165:
-

Somehow the test run was not triggered by the previous upload; I uploaded the 
same version again to trigger it.

> "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory 
> --
>
> Key: HDFS-6165
> URL: https://issues.apache.org/jira/browse/HDFS-6165
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Minor
> Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
> HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
> HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch
>
>
> Given a directory owned by user A with WRITE permission containing an empty 
> directory owned by user B, it is not possible to delete user B's empty 
> directory with either "hdfs dfs -rm -r" or "hdfs dfs -rmdir", because the 
> current implementation requires FULL permission on the empty directory and 
> throws an exception otherwise. 
> On the other hand, on Linux, the "rm -r" and "rmdir" commands can remove an 
> empty directory as long as the parent directory has WRITE permission (and 
> the prefix components of the path have EXECUTE permission). Of the tested 
> OSes, some prompt the user for confirmation and some don't.
> Here's a reproduction:
> {code}
> [root@vm01 ~]# hdfs dfs -ls /user/
> Found 4 items
> drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
> drwxr-xr-x   - hdfssupergroup  0 2013-05-03 00:28 /user/hdfs
> drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
> drwxr-xr-x   - hdfssupergroup  0 2013-04-14 16:46 /user/hive
> [root@vm01 ~]# hdfs dfs -ls /user/userabc
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:54 /user/userabc/foo
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
> rm: Permission denied: user=userabc, access=ALL, 
> inode="/user/userabc/foo":hdfs:users:drwxr-xr-x
> {code}
> The super user can delete the directory.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
> Deleted /user/userabc/foo
> {code}
> The same is not true for files, however. They have the correct behavior.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> -rw-r--r--   1 hdfsusers  0 2013-05-03 02:11 
> /user/userabc/foo-file
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
> Deleted /user/userabc/foo-file
> {code}
> Using "hdfs dfs -rmdir" command:
> {code}
> bash-4.1$ hadoop fs -lsr /
> lsr: DEPRECATED: Please use 'ls -R' instead.
> drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
> drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
> drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
> drwxr-xr-x   - abcabc 0 2014-03-28 23:39 
> /user/usrabc/foo-empty1
> [root@vm01 usrabc]# su usrabc
> [usrabc@vm01 ~]$ hdfs dfs -rmdir /user/usrabc/foo-empty1
> rmdir: Permission denied: user=usrabc, access=ALL, 
> inode="/user/usrabc/foo-empty1":abc:abc:drwxr-xr-x
> {code}
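
For illustration, a tiny model (made-up names, not the FSPermissionChecker 
code) of the POSIX-style rule the description argues for: deleting an empty 
directory should depend on WRITE on the parent and EXECUTE on the path prefix, 
not on the target directory's own permissions.

{code}
// Illustrative model of the desired check; not HDFS code.
class EmptyDirDeleteRule {
  static boolean mayDelete(boolean writeOnParent, boolean execOnPathPrefix) {
    // The target directory's own permission bits do not matter when
    // deleting it while empty, matching Linux rm -r / rmdir semantics.
    return writeOnParent && execOnPathPrefix;
  }

  public static void main(String[] args) {
    System.out.println(mayDelete(true, true));   // true: delete allowed
    System.out.println(mayDelete(false, true));  // false: parent not writable
  }
}
{code}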



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6165) "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory

2014-04-30 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6165:


Attachment: HDFS-6165.006.patch

> "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory 
> --
>
> Key: HDFS-6165
> URL: https://issues.apache.org/jira/browse/HDFS-6165
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Minor
> Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
> HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
> HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch
>
>
> Given a directory owned by user A with WRITE permission containing an empty 
> directory owned by user B, it is not possible to delete user B's empty 
> directory with either "hdfs dfs -rm -r" or "hdfs dfs -rmdir", because the 
> current implementation requires FULL permission on the empty directory and 
> throws an exception otherwise. 
> On the other hand, on Linux, the "rm -r" and "rmdir" commands can remove an 
> empty directory as long as the parent directory has WRITE permission (and 
> the prefix components of the path have EXECUTE permission). Of the tested 
> OSes, some prompt the user for confirmation and some don't.
> Here's a reproduction:
> {code}
> [root@vm01 ~]# hdfs dfs -ls /user/
> Found 4 items
> drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
> drwxr-xr-x   - hdfssupergroup  0 2013-05-03 00:28 /user/hdfs
> drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
> drwxr-xr-x   - hdfssupergroup  0 2013-04-14 16:46 /user/hive
> [root@vm01 ~]# hdfs dfs -ls /user/userabc
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:54 /user/userabc/foo
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
> rm: Permission denied: user=userabc, access=ALL, 
> inode="/user/userabc/foo":hdfs:users:drwxr-xr-x
> {code}
> The super user can delete the directory.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
> Deleted /user/userabc/foo
> {code}
> The same is not true for files, however. They have the correct behavior.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> -rw-r--r--   1 hdfsusers  0 2013-05-03 02:11 
> /user/userabc/foo-file
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfsusers  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
> Deleted /user/userabc/foo-file
> {code}
> Using "hdfs dfs -rmdir" command:
> {code}
> bash-4.1$ hadoop fs -lsr /
> lsr: DEPRECATED: Please use 'ls -R' instead.
> drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
> drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
> drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
> drwxr-xr-x   - abcabc 0 2014-03-28 23:39 
> /user/usrabc/foo-empty1
> [root@vm01 usrabc]# su usrabc
> [usrabc@vm01 ~]$ hdfs dfs -rmdir /user/usrabc/foo-empty1
> rmdir: Permission denied: user=usrabc, access=ALL, 
> inode="/user/usrabc/foo-empty1":abc:abc:drwxr-xr-x
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985612#comment-13985612
 ] 

Charles Lamb commented on HDFS-6302:


I'm pretty sure there should also be a space before the ?.

i.e. s/-1L?/-1L ?/


> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985628#comment-13985628
 ] 

Yi Liu commented on HDFS-6302:
--

Charles, it would be better to have that, but it's the original code and we 
only added the last parameter.

> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985632#comment-13985632
 ] 

Hudson commented on HDFS-6269:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1773 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1773/])
HDFS-6269. NameNode Audit Log should differentiate between webHDFS open and 
HDFS open. (Eric Payne via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591117)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> NameNode Audit Log should differentiate between webHDFS open and HDFS open.
> ---
>
> Key: HDFS-6269
> URL: https://issues.apache.org/jira/browse/HDFS-6269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, webhdfs
>Affects Versions: 2.4.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6269-AuditLogWebOpen.txt, 
> HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt
>
>
> To enhance traceability, the NameNode audit log should use a different string 
> for open in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6302) Implement XAttr as a INode feature.

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6302.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks a lot, Yi, for the patch. 
Also thanks a lot, Charles, for the review!

I have just committed this patch to the branch!
Please note that I formatted the code I pasted above while committing.


> Implement XAttr as a INode feature.
> ---
>
> Key: HDFS-6302
> URL: https://issues.apache.org/jira/browse/HDFS-6302
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6302.1.patch, HDFS-6302.2.patch, HDFS-6302.patch
>
>
> XAttr is based on the INode feature (HDFS-5284).
> Persisting XAttrs in fsimage and edit log is handled by HDFS-6301.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-04-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6312:
-

 Summary: WebHdfs HA failover is broken on secure clusters
 Key: HDFS-6312
 URL: https://issues.apache.org/jira/browse/HDFS-6312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp
Priority: Blocker


When webhdfs does a failover, it blanks out the delegation token.  This will 
cause subsequent operations against the other NN to try to acquire a new 
token.  Tasks cannot acquire a token (they have no Kerberos credentials), so 
jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985657#comment-13985657
 ] 

Arpit Gupta commented on HDFS-6312:
---

[~daryn], in our testing with webhdfs + HA on a secure cluster we hit 
HADOOP-10519. I am curious what kind of job you ran that actually started 
running tasks.

> WebHdfs HA failover is broken on secure clusters
> 
>
> Key: HDFS-6312
> URL: https://issues.apache.org/jira/browse/HDFS-6312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Priority: Blocker
>
> When webhdfs does a failover, it blanks out the delegation token.  This will 
> cause subsequent operations against the other NN to try to acquire a new 
> token.  Tasks cannot acquire a token (they have no Kerberos credentials), so 
> jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6313) WebHdfs may use the wrong NN when configured for multiple HA NNs

2014-04-30 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6313:
-

 Summary: WebHdfs may use the wrong NN when configured for multiple 
HA NNs
 Key: HDFS-6313
 URL: https://issues.apache.org/jira/browse/HDFS-6313
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp


WebHdfs resolveNNAddr will return a union of the addresses of all 
HA-configured NNs, so the client may access the wrong NN.
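
A sketch of one possible fix direction, assuming addresses are keyed by 
nameservice; the names below are hypothetical, not the actual 
WebHdfsFileSystem code:

{code}
import java.net.InetSocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical helper, not the real resolveNNAddr implementation.
class NnAddressSelector {
  static List<InetSocketAddress> addrsForNameservice(
      String nameservice,
      Map<String, List<InetSocketAddress>> addrsByNameservice) {
    // Look up only the requested nameservice so a client configured with
    // several HA nameservices is never pointed at another nameservice's NN.
    List<InetSocketAddress> addrs = addrsByNameservice.get(nameservice);
    return addrs != null ? addrs : Collections.<InetSocketAddress>emptyList();
  }
}
{code}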



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6314) Test cases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6314:
-

Summary: Test cases for XAttrs  (was: Testcases for XAttrs)

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6314) Testcases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6314:


 Summary: Testcases for XAttrs
 Key: HDFS-6314
 URL: https://issues.apache.org/jira/browse/HDFS-6314
 Project: Hadoop HDFS
  Issue Type: Task
  Components: test
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)


Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving new 
checkpoint.
Tests XAttr for Snapshot, symlinks.
Tests XAttr for HA failover.
And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6314) Test cases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6314:
-

Issue Type: Sub-task  (was: Task)
Parent: HDFS-2006

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-04-30 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985707#comment-13985707
 ] 

Marcelo Vanzin commented on HDFS-6293:
--

[~ajisakaa] my modified parser does not have an upper bound on memory; because 
it still needs to load information about all inodes, it is still O(n) in the 
number of inodes in the image.

> Issues with OIV processing PB-based fsimages
> 
>
> Key: HDFS-6293
> URL: https://issues.apache.org/jira/browse/HDFS-6293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Priority: Blocker
> Attachments: Heap Histogram.html
>
>
> There are issues with OIV when processing fsimages in protobuf. 
> Due to the internal layout changes introduced by the protobuf-based fsimage, 
> OIV consumes an excessive amount of memory.  We have tested with an fsimage 
> with 
> about 140M files/directories. The peak heap usage when processing this image 
> in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
> the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
> heap (max new size was 1GB).  It should be possible to process any image with 
> the default heap size of 1.5GB.
> Another issue is the complete change of format/content in OIV's XML output.  
> I also noticed that the secret manager section has no tokens while there were 
> unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
> they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6258:
-

Description: 
Namenode Server-side storage for XAttrs: FSNamesystem and friends.
Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.

  was:This JIRA is to implement extended attributes in HDFS: support XAttrs 
from NameNode, implements XAttr APIs for DistributedFileSystem and so on.


> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-04-30 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985714#comment-13985714
 ] 

Daryn Sharp commented on HDFS-6312:
---

I found the bug by just looking at the code.  {{resetStateToFailOver}} nulls 
out the internal token and clears the init flag in {{TokenAspect}}.  The next 
operation that calls {{getDelegationToken()}} to build the auth params for the 
url will attempt to acquire a new token.

HADOOP-10519 is a different issue.  The HA support in webhdfs has many flaws.
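
For illustration, a contrast of the buggy and intended behaviors with made-up 
names (the real logic lives in {{TokenAspect}} and the webhdfs failover path):

{code}
// Illustration only; not the actual TokenAspect implementation.
class FailoverTokenState {
  private Object delegationToken;   // stand-in for Token<?>
  private boolean tokenInitialized;

  // Behavior described above: failover wipes the token, so the next
  // getDelegationToken() tries to fetch a new one, which a task without
  // Kerberos credentials cannot do.
  void resetStateToFailOverBuggy() {
    delegationToken = null;
    tokenInitialized = false;
  }

  // Fix direction: reset only connection state on failover and keep
  // presenting the same delegation token to the other NN.
  void resetStateToFailOverFixed() {
    // delegationToken and tokenInitialized intentionally left untouched
  }
}
{code}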

> WebHdfs HA failover is broken on secure clusters
> 
>
> Key: HDFS-6312
> URL: https://issues.apache.org/jira/browse/HDFS-6312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Priority: Blocker
>
> When webhdfs does a failover, it blanks out the delegation token.  This will 
> cause subsequent operations against the other NN to try to acquire a new 
> token.  Tasks cannot acquire a token (they have no Kerberos credentials), so 
> jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6258:
-

Attachment: HDFS-6258.patch

The new patch only includes NN server-side storage for XAttrs: {{FSNamesystem}} 
and friends. It also refines XAttrConfigFlag and AclConfigFlag into ConfigFlag.


> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6258:
-

Attachment: (was: HDFS-6258.patch)

> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.4.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6258:
-

Attachment: HDFS-6258.4.patch

> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.4.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6301) NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit log.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6301:
-

Attachment: HDFS-6301.patch

The patch contains updates for the comments of Chris and Uma in HDFS-6258:
1. An XAttr version is added to NamenodeLayoutVersion.
2. Tests that cover: 1) save xattrs, restart NN, assert xattrs are reloaded 
from the edit log; 2) save xattrs, create a new checkpoint, restart NN, assert 
xattrs are reloaded from the fsimage.
3. More javadoc for tests.

> NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit 
> log.
> 
>
> Key: HDFS-6301
> URL: https://issues.apache.org/jira/browse/HDFS-6301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6301.patch
>
>
> Store XAttrs in fsimage so that XAttrs are retained across NameNode restarts.
> Implement a new edit log opcode, {{OP_SET_XATTRS}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6301) NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit log.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6301 started by Yi Liu.

> NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit 
> log.
> 
>
> Key: HDFS-6301
> URL: https://issues.apache.org/jira/browse/HDFS-6301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6301.patch
>
>
> Store XAttrs in fsimage so that XAttrs are retained across NameNode restarts.
> Implement a new edit log opcode, {{OP_SET_XATTRS}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6314) Test cases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6314:
-

Attachment: HDFS-6314.patch

There are lots of test cases, so I created a separate patch to include part of 
them. The patch contains updates for the comments of Chris and Uma in 
HDFS-6258 about the test cases:

1. Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
new checkpoint.
2. Tests XAttr for Snapshot, symlinks.
3. Tests XAttr for HA failover.
4. More javadoc for tests.
5. Tests for setxattr API with no flags.
6. More...


> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6314.patch
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6314) Test cases for XAttrs

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6314 started by Yi Liu.

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6314.patch
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5147) Certain dfsadmin commands such as safemode do not interact with the active namenode in ha setup

2014-04-30 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985760#comment-13985760
 ] 

Jing Zhao commented on HDFS-5147:
-

Good idea, [~vinayrpet]! I will check it later following your ideas. Also 
please feel free to assign this jira to yourself if you want to work on this. 

> Certain dfsadmin commands such as safemode do not interact with the active 
> namenode in ha setup
> ---
>
> Key: HDFS-5147
> URL: https://issues.apache.org/jira/browse/HDFS-5147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>Assignee: Jing Zhao
>
> There are certain commands in dfsadmin that return the status of the first 
> namenode specified in the configs rather than interacting with the active 
> namenode.
> For example, issue
> hdfs dfsadmin -safemode get
> and it will return the status of the first namenode in the configs rather 
> than the active namenode.
> I think all dfsadmin commands should determine which is the active namenode 
> and do the operation on it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5147) Certain dfsadmin commands such as safemode do not interact with the active namenode in ha setup

2014-04-30 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5147:


Assignee: (was: Jing Zhao)

> Certain dfsadmin commands such as safemode do not interact with the active 
> namenode in ha setup
> ---
>
> Key: HDFS-5147
> URL: https://issues.apache.org/jira/browse/HDFS-5147
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Arpit Gupta
>
> There are certain commands in dfsadmin that return the status of the first 
> namenode specified in the configs rather than interacting with the active 
> namenode.
> For example, issue
> hdfs dfsadmin -safemode get
> and it will return the status of the first namenode in the configs rather 
> than the active namenode.
> I think all dfsadmin commands should determine which is the active namenode 
> and do the operation on it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6303:
-

Attachment: HDFS-6303.patch

Here is a separate patch for the HDFS implementation of the FileContext API 
for XAttrs, plus a test case. This is also in line with Uma's review comment 
in HDFS-6258.

> HDFS implementation of FileContext API for XAttrs.
> --
>
> Key: HDFS-6303
> URL: https://issues.apache.org/jira/browse/HDFS-6303
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6303.patch
>
>
> HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-04-30 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985788#comment-13985788
 ] 

Arpit Gupta commented on HDFS-6312:
---

Ah, I see. I was just curious whether we could add more tests to reach this 
issue. :)

> WebHdfs HA failover is broken on secure clusters
> 
>
> Key: HDFS-6312
> URL: https://issues.apache.org/jira/browse/HDFS-6312
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Priority: Blocker
>
> When webhdfs does a failover, it blanks out the delegation token.  This will 
> cause subsequent operations against the other NN to try to acquire a new 
> token.  Tasks cannot acquire a token (they have no Kerberos credentials), so 
> jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-30 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985795#comment-13985795
 ] 

Aaron T. Myers commented on HDFS-6289:
--

The latest test failure is just because of the following:

{noformat}
java.net.BindException: Port in use: localhost:50070
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:853)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:794)
{noformat}

I ran TestBlockRecovery several times on my box and it passes without issue.

I'm going to go ahead and commit this momentarily.

> HA failover can fail if there are pending DN messages for DNs which no longer 
> exist
> ---
>
> Key: HDFS-6289
> URL: https://issues.apache.org/jira/browse/HDFS-6289
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Critical
> Attachments: HDFS-6289.patch, HDFS-6289.patch
>
>
> In an HA setup, the standby NN may receive messages from DNs for blocks which 
> the standby NN is not yet aware of. It queues up these messages and replays 
> them when it next reads from the edit log or fails over. On a failover, all 
> of these pending DN messages must be processed successfully in order for the 
> failover to succeed. If one of these pending DN messages refers to a DN 
> storageId that no longer exists (because the DN with that transfer address 
> has been reformatted and has re-registered with the same transfer address) 
> then on transition to active the NN will not be able to process this DN 
> message and will suicide with an error like the following:
> {noformat}
> 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
> (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
> shutdown. Shutting down immediately.
> java.io.IOException: Cannot mark 
> blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
> 127.0.0.1:33324 does not exist
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6303 started by Yi Liu.

> HDFS implementation of FileContext API for XAttrs.
> --
>
> Key: HDFS-6303
> URL: https://issues.apache.org/jira/browse/HDFS-6303
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6303.patch
>
>
> HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-04-30 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985813#comment-13985813
 ] 

Haohui Mai commented on HDFS-6310:
--

Delegation tokens are intentionally left out in PBImageXmlWriter as they are 
sensitive information.
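
If token output were ever added, one possible middle ground, sketched here 
with hypothetical accessors rather than the real PBImageXmlWriter API, would 
be to emit only non-sensitive token metadata:

{code}
import java.io.PrintStream;

// Sketch with hypothetical accessors; not the actual OIV code.
class TokenXmlDumper {
  interface TokenInfo {
    String owner();
    String renewer();
    long expiryDate();
  }

  static void dump(PrintStream out, Iterable<TokenInfo> tokens) {
    out.println("<SecretManagerSection>");
    for (TokenInfo t : tokens) {
      out.println("  <token>");
      out.println("    <owner>" + t.owner() + "</owner>");
      out.println("    <renewer>" + t.renewer() + "</renewer>");
      out.println("    <expiryDate>" + t.expiryDate() + "</expiryDate>");
      // The token password/secret is deliberately never written out.
      out.println("  </token>");
    }
    out.println("</SecretManagerSection>");
  }
}
{code}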

> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6310.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
> option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-30 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-6289:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the reviews, Todd and Yongjun.

> HA failover can fail if there are pending DN messages for DNs which no longer 
> exist
> ---
>
> Key: HDFS-6289
> URL: https://issues.apache.org/jira/browse/HDFS-6289
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Critical
> Fix For: 2.5.0
>
> Attachments: HDFS-6289.patch, HDFS-6289.patch
>
>
> In an HA setup, the standby NN may receive messages from DNs for blocks which 
> the standby NN is not yet aware of. It queues up these messages and replays 
> them when it next reads from the edit log or fails over. On a failover, all 
> of these pending DN messages must be processed successfully in order for the 
> failover to succeed. If one of these pending DN messages refers to a DN 
> storageId that no longer exists (because the DN with that transfer address 
> has been reformatted and has re-registered with the same transfer address) 
> then on transition to active the NN will not be able to process this DN 
> message and will suicide with an error like the following:
> {noformat}
> 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
> (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
> shutdown. Shutting down immediately.
> java.io.IOException: Cannot mark 
> blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
> 127.0.0.1:33324 does not exist
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6298) XML based End-to-End test for getfattr and setfattr commands

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6298 started by Yi Liu.

> XML based End-to-End test for getfattr and setfattr commands
> 
>
> Key: HDFS-6298
> URL: https://issues.apache.org/jira/browse/HDFS-6298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6298.patch
>
>
> This JIRA is to add test cases via the CLI



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6298) XML based End-to-End test for getfattr and setfattr commands

2014-04-30 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6298:
-

Attachment: HDFS-6298.patch

This test case can be used to verify the whole functionality.

> XML based End-to-End test for getfattr and setfattr commands
> 
>
> Key: HDFS-6298
> URL: https://issues.apache.org/jira/browse/HDFS-6298
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6298.patch
>
>
> This JIRA is to add test cases via the CLI



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6309) Javadocs for Xattrs apis in DFSClient and other minor fixups

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985845#comment-13985845
 ] 

Uma Maheswara Rao G commented on HDFS-6309:
---

Thanks a lot Charles for the patch.

Patch looks good to me.
+1

I will commit the patch shortly to the branch!

> Javadocs for Xattrs apis in DFSClient and other minor fixups
> 
>
> Key: HDFS-6309
> URL: https://issues.apache.org/jira/browse/HDFS-6309
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Uma Maheswara Rao G
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6309.1.patch, HDFS-6309.2.patch
>
>
> Some javadoc improvements and minor comment fixups from HDFS-6299



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6286) adding a timeout setting for local read io

2014-04-30 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985849#comment-13985849
 ] 

Colin Patrick McCabe commented on HDFS-6286:


bq. 2) read(buf) inside DFSInputStream has a synchronized, that means if HBase 
scan() hits into sick disk or severe io contention, it will block all 
subsequent read(buf) requests, right? 

It will block all subsequent read requests on that stream, yes.  We don't 
guarantee thread-safety if you start using a stream from multiple threads 
anyway.

bq. 1) Hedged read does work on pread only currently, not against read(buf) 
operation, and HBase scan() will call into read(buf).

Yes, hedged reads only work for {{pread()}} now.  We ought to extend it to all 
forms of {{read()}}.  This will be a big latency win across the board, and not 
only for local reads.
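
For reference, the distinction in client code looks like this (a minimal sketch, 
assuming an already-open {{FSDataInputStream}} {{in}} on an HDFS file):

{code}
byte[] buf = new byte[4096];
// positional read (pread): stateless per call, so hedged requests work today
int n1 = in.read(0L, buf, 0, buf.length);
// stateful read: synchronized on the stream position, no hedging yet
in.seek(0L);
int n2 = in.read(buf, 0, buf.length);
{code}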

> adding a timeout setting for local read io
> --
>
> Key: HDFS-6286
> URL: https://issues.apache.org/jira/browse/HDFS-6286
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>
> Currently, if a write or remote read is issued against a sick disk, 
> DFSClient.hdfsTimeout can give the caller a guaranteed bound on the time to 
> return, but it doesn't work for local reads. Take an HBase scan for example:
> DFSInputStream.read -> readWithStrategy -> readBuffer -> 
> BlockReaderLocal.read -> dataIn.read -> FileChannelImpl.read
> If it hits a bad disk, the low-level read io can take tens of seconds, and 
> what's worse, "DFSInputStream.read" holds a lock the whole time.
> To my knowledge, there's no good mechanism to cancel a running read io 
> (please correct me if that's wrong), so my suggestion is to wrap the read 
> request in a future with a timeout; if the threshold is reached, we could 
> probably add the local node to the dead node list...
> Any thoughts?
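
A minimal sketch of the future-with-timeout idea described above 
({{blockReader}}, {{buf}} and {{readTimeoutMs}} are placeholders, not existing 
fields):

{code}
ExecutorService pool = Executors.newSingleThreadExecutor();
Future<Integer> f = pool.submit(new Callable<Integer>() {
  public Integer call() throws IOException {
    return blockReader.read(buf, 0, buf.length); // the potentially slow local io
  }
});
try {
  int nRead = f.get(readTimeoutMs, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
  f.cancel(true); // best effort; a read blocked in the kernel may not abort
  // treat the local replica as suspect and retry against a remote node
} catch (InterruptedException | ExecutionException e) {
  throw new IOException(e);
}
{code}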



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985864#comment-13985864
 ] 

Charles Lamb commented on HDFS-6258:


A few minor nits.

In XAttrPermissionFilter.java, please change the javadoc to read:

+/**
+ * There are 4 types for extended attribute :

There are four types of extended attributes defined by the
following namespaces:

USER -- extended user attributes: these can be assigned to files and
directories to store arbitrary additional information. The access
permissions for user attributes are defined by the file permission
bits.

TRUSTED -- trusted extended attributes: these are visible/accessible
only to/by the super user.

SECURITY -- extended security attributes: these are used by the HDFS
core for security purposes and are not available through admin/user
API.

SYSTEM -- extended system attributes: these are used by the HDFS
core and are not available through admin/user API.

Formatting nit, there's an extra newline right before the last } closing the 
class.

XAttrStorage.java:

+ * XAttrStorage is used to read and set xattrs for inode.

s/for/on an/


> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.4.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Namenode server-side storage for XAttrs

2014-04-30 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985877#comment-13985877
 ] 

Uma Maheswara Rao G commented on HDFS-6258:
---

Thanks a lot Charles for the review!

I also did my initial review on this patch, please find my comments below along 
with Charles comments.

DFSConfigKeys.java:
{code}
  public static final String  DFS_NAMENODE_XATTRS_MAX_LIMIT_KEY = 
"dfs.namenode.xattrs.max-limit";
+  public static final int DFS_NAMENODE_XATTRS_MAX_LIMIT_DEFAULT = 32;
{code}
Do we need to mention that this is a per-inode max limit?

Tests:
I think we need test cases verifying in-memory xattr updates and reading them 
back, i.e. a kind of integration test from the client side through to the 
XAttrFeature usage.
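
Something like this, say (a sketch assuming a MiniDFSCluster-backed {{fs}} and 
the FileSystem xattr API from this branch):

{code}
Path path = new Path("/testXAttrs");
fs.mkdirs(path);
fs.setXAttr(path, "user.a1", new byte[]{0x31, 0x32});
Map<String, byte[]> xattrs = fs.getXAttrs(path);
Assert.assertEquals(1, xattrs.size());
Assert.assertArrayEquals(new byte[]{0x31, 0x32}, xattrs.get("user.a1"));
{code}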

ConfigFlag.java:

How about having an NNConf, like DNConf, which loads the config at startup and 
has the check methods there?
{code}
/**
 * Simple class encapsulating all of the configuration that the DataNode
 * loads at startup time.
 */
@InterfaceAudience.Private
public class DNConf {
{code}
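
For example, something along these lines (a hypothetical sketch; the 
"dfs.namenode.xattrs.enabled" key is invented here for illustration):

{code}
/**
 * Hypothetical sketch: configuration the NameNode loads once at startup,
 * analogous to DNConf, with the feature checks kept in one place.
 */
@InterfaceAudience.Private
public class NNConf {
  private final boolean xattrsEnabled;
  private final int xattrMaxLimit;

  public NNConf(Configuration conf) {
    xattrsEnabled = conf.getBoolean("dfs.namenode.xattrs.enabled", true);
    xattrMaxLimit = conf.getInt(
        DFSConfigKeys.DFS_NAMENODE_XATTRS_MAX_LIMIT_KEY,
        DFSConfigKeys.DFS_NAMENODE_XATTRS_MAX_LIMIT_DEFAULT);
  }

  /** Throws if xattr support is disabled on this NameNode. */
  public void checkXAttrsConfigFlag() throws IOException {
    if (!xattrsEnabled) {
      throw new IOException("XAttrs are disabled on this NameNode");
    }
  }

  public int getXAttrMaxLimit() {
    return xattrMaxLimit;
  }
}
{code}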



{code}
+XAttrStorage.updateINodeXAttrs(inode, newXAttrs, snapshotId);
+
+return newXAttrs;
{code}

{code}
EnumSet<XAttrSetFlag> flag) throws IOException {
+
+assert hasWriteLock();
{code}
You don't need the empty "+" lines here.


FSDirectory.java

{code}
 if (xAttrs.size() > xAttrsLimit) {
+  throw new IOException("Operation fails, XAttrs of " +
+   "inode exceeds maximum limit of " + xAttrsLimit);
+}
{code}
Please remove tab characters above.


FSNamesystem.java:

{code}
 try {
+  XAttrPermissionFilter.checkPermissionForApi(pc, xAttr);
+} catch (AccessControlException e) {
+  logAuditEvent(false, "setXAttr", src);
+}
+checkOperation(OperationCategory.WRITE);
{code}
We need to rethrow the exception from here instead of continuing, right?
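
i.e., something like:

{code}
try {
  XAttrPermissionFilter.checkPermissionForApi(pc, xAttr);
} catch (AccessControlException e) {
  logAuditEvent(false, "setXAttr", src);
  throw e; // propagate the failure instead of falling through
}
{code}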


{code}
void removeXAttr(String src, XAttr xAttr) throws IOException {
+configFlag.checkXAttrsConfigFlag();
+HdfsFileStatus resultingStat = null;
+FSPermissionChecker pc = getPermissionChecker();
+try {
+  XAttrPermissionFilter.checkPermissionForApi(pc, xAttr);
+} catch (AccessControlException e) {
+  logAuditEvent(false, "removeXAttr", src);
+}
{code}
Same comment as above.


{code}
 try {
+  XAttrPermissionFilter.checkPermissionForApi(pc, xAttr);
+} catch (AccessControlException e) {
+  logAuditEvent(false, "setXAttr", src);
+}
+checkOperation(OperationCategory.WRITE);
+byte[][] pathComponents = 
FSDirectory.getPathComponentsForReservedPath(src);
+writeLock();
+try {
+  checkOperation(OperationCategory.WRITE);
+  checkNameNodeSafeMode("Cannot set XAttr on " + src);
+  src = FSDirectory.resolvePath(src, pathComponents, dir);
+  if (isPermissionEnabled) {
+checkPathAccess(pc, src, FsAction.WRITE);
+  }
{code}
Here checkPathAccess also throws AccessControlException, so shouldn't that one 
be considered for the failure audit log as well?


I will continue my review on this tomorrow. Thanks for the work on this patch, 
Yi!

> Namenode server-side storage for XAttrs
> ---
>
> Key: HDFS-6258
> URL: https://issues.apache.org/jira/browse/HDFS-6258
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
> HDFS-6258.4.patch, HDFS-6258.patch
>
>
> Namenode Server-side storage for XAttrs: FSNamesystem and friends.
> Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6301) NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit log.

2014-04-30 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985924#comment-13985924
 ] 

Charles Lamb commented on HDFS-6301:


Hi Yi,

In general this looks fine.

I noticed that you never say anywhere whether namespaces and xattr names are 
case sensitive or insensitive. That should be spelled out somewhere in the 
javadoc.

FSEditLog.java:

s/src =src/src = src/
"final" could be added to the decl ofSetXAttrsOp.

FSEditLogOp.java:

XAttrsEditLogUtil: when would an xattr not have a name? I see why 
HAS_VALUE_OFFSET is necessary (some xattrs are name only), but why do you need 
a flag for whether a name is present? Don't all xattrs have at least a name?

In general things like boolean hasName could benefit from final decls.

   if (this.opCode == OP_ADD) {
 AclEditLogUtil.write(aclEntries, out);
+XAttrsEditLogUtil.write(xAttrs, out);
 FSImageSerialization.writeString(clientName,out);
 FSImageSerialization.writeString(clientMachine,out);
 // write clientId and callId
@@ -542,6 +617,7 @@
   // clientname, clientMachine and block locations of last block.
   if (this.opCode == OP_ADD) {
 aclEntries = AclEditLogUtil.read(in, logVersion);
+xAttrs = XAttrsEditLogUtil.read(in, logVersion);
 this.clientName = FSImageSerialization.readString(in);
 this.clientMachine = FSImageSerialization.readString(in);
 // read clientId and callId

Is it possible for xAttrs (and aclEntries for that matter) to be uninit'd if 
opCode != OP_ADD? I'm looking at this.clientName and this.clientMachine and 
wondering why there isn't an equivalent for xAttrs (and aclEntries) in the else?

void readFields(DataInputStream in, int logVersion) throws IOException {
  XAttrEditLogProto p = XAttrEditLogProto.parseDelimitedFrom(
  (DataInputStream)in);

I don't understand why you need the (DataInputStream) cast above. Isn't it 
already known to be a DIS from the formal arg decl?
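
i.e., presumably it can just be:

{code}
void readFields(DataInputStream in, int logVersion) throws IOException {
  // the formal parameter is already a DataInputStream, so no cast is needed
  XAttrEditLogProto p = XAttrEditLogProto.parseDelimitedFrom(in);
}
{code}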

TestFSImageWithXAttr.java:

There's an extra newline just before the last } closing the class def.


> NameNode: persist XAttrs in fsimage and record XAttrs modifications to edit 
> log.
> 
>
> Key: HDFS-6301
> URL: https://issues.apache.org/jira/browse/HDFS-6301
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6301.patch
>
>
> Store XAttrs in fsimage so that XAttrs are retained across NameNode restarts.
> Implement a new edit log opcode, {{OP_SET_XATTRS}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-2856) Fix block protocol so that Datanodes don't require root or jsvc

2014-04-30 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-2856:


Attachment: HDFS-2856.prototype.patch

It's been a while since we've discussed this one, so here is a recap.  We (the 
names listed in the design doc) proposed introducing challenge-response 
authentication on DataTransferProtocol based on exchanging a digest calculated 
using the block access token as a shared secret.  This would establish mutual 
authentication between client and DataNode before tokens were exchanged, and 
thus it would remove the requirement to launch as root and bind to a privileged 
port.  There were a few rounds of feedback discussing exactly which pieces of 
data to feed into the digest calculation.  [~atm] also suggested folding this 
into the SASL handshake he had implemented for DataTransferProtocol encryption 
in HDFS-3637.

I'm attaching a prototype patch.  This is not intended to be committed.  It's 
just a high-level demonstration intended to revive discussion on this issue.

The suggestion to fold this into the SASL handshake makes sense, because we can 
rely on the existing DIGEST-MD5 mechanism to handle verifying the digests.  
This means the scope of this issue is about adding support for the full range 
of SASL QOPs on DataTransferProtocol.  We already support auth-conf, and now we 
need to add support for auth and auth-int.

The patch demonstrates this by hacking on the existing 
{{DataTransferEncryptor}} code.  I changed the configured QOP to auth and 
changed the password calculation to use the block access token's password + the 
target DataNode's UUID + a client-supplied request timestamp.  I tested this 
manually end-to-end.  (I needed to set {{dfs.encrypt.data.transfer}} to 
{{true}} to trigger the code, but it's not really encrypting.)  I ran tcpdump 
while reading a file, and I confirmed that the SASL negotiation is using auth 
for the QOP, no cipher parameter (so no encryption), and the block content is 
passed unencrypted on the wire.

Early feedback is welcome.  There is still a lot of work remaining: 
renegotiating SASL between multiple block ops with different tokens, 
reconciling all of this code against the existing HDFS-3637 code, actually 
removing the privileged port restriction, and automated tests.
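
For the curious, the password calculation described above amounts to roughly 
the following sketch (variable names and the exact encoding are assumptions, 
not the prototype's actual code):

{code}
import java.nio.charset.StandardCharsets;
import org.apache.commons.codec.binary.Base64;

// combine the block access token's password, the target DataNode's UUID, and
// a client-supplied request timestamp into the DIGEST-MD5 shared secret
String saslPassword =
    new String(Base64.encodeBase64(blockTokenPassword), StandardCharsets.UTF_8)
    + "/" + datanodeUuid + "/" + requestTimestamp;
{code}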

> Fix block protocol so that Datanodes don't require root or jsvc
> ---
>
> Key: HDFS-2856
> URL: https://issues.apache.org/jira/browse/HDFS-2856
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, security
>Reporter: Owen O'Malley
>Assignee: Chris Nauroth
> Attachments: Datanode-Security-Design.pdf, 
> Datanode-Security-Design.pdf, Datanode-Security-Design.pdf, 
> HDFS-2856.prototype.patch
>
>
> Since we send the block tokens unencrypted to the datanode, we currently 
> start the datanode as root using jsvc and get a secure (< 1024) port.
> If we have the datanode generate a nonce and send it on the connection, and 
> the client sends an hmac of the nonce back instead of the block token, it won't 
> reveal any secrets. Thus, we wouldn't require a secure port and would not 
> require root or jsvc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory

2014-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985954#comment-13985954
 ] 

Hadoop QA commented on HDFS-6165:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642655/HDFS-6165.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6777//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6777//console

This message is automatically generated.

> "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory 
> --
>
> Key: HDFS-6165
> URL: https://issues.apache.org/jira/browse/HDFS-6165
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Minor
> Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
> HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
> HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch
>
>
> Given a directory owned by user A with WRITE permission containing an empty 
> directory owned by user B, it is not possible to delete user B's empty 
> directory with either "hdfs dfs -rm -r" or "hdfs dfs -rmdir", because the 
> current implementation requires FULL permission on the empty directory and 
> throws an exception.
> On the other hand, on Linux the "rm -r" and "rmdir" commands can remove an 
> empty directory as long as the parent directory has WRITE permission (and each 
> prefix component of the path has EXECUTE permission). Of the tested OSes, some 
> prompt the user for confirmation and some don't.
> Here's a reproduction:
> {code}
> [root@vm01 ~]# hdfs dfs -ls /user/
> Found 4 items
> drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
> drwxr-xr-x   - hdfs supergroup  0 2013-05-03 00:28 /user/hdfs
> drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
> drwxr-xr-x   - hdfs supergroup  0 2013-04-14 16:46 /user/hive
> [root@vm01 ~]# hdfs dfs -ls /user/userabc
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> drwxr-xr-x   - hdfs users  0 2013-05-03 01:54 /user/userabc/foo
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfs users  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
> rm: Permission denied: user=userabc, access=ALL, 
> inode="/user/userabc/foo":hdfs:users:drwxr-xr-x
> {code}
> The super user can delete the directory.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
> Deleted /user/userabc/foo
> {code}
> The same is not true for files, however. They have the correct behavior.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps

[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-30 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985963#comment-13985963
 ] 

Mit Desai commented on HDFS-6230:
-

[~arpitagarwal] are you working on the jira?

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-30 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985971#comment-13985971
 ] 

Arpit Agarwal commented on HDFS-6230:
-

No I haven't started. Feel free to pick it up if you want.

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory

2014-04-30 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985972#comment-13985972
 ] 

Yongjun Zhang commented on HDFS-6165:
-

The failed test appears to be 
https://issues.apache.org/jira/browse/HADOOP-10062.


> "hdfs dfs -rm -r" and "hdfs -rmdir" commands can't remove empty directory 
> --
>
> Key: HDFS-6165
> URL: https://issues.apache.org/jira/browse/HDFS-6165
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Minor
> Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
> HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
> HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch
>
>
> Given a directory owned by user A with WRITE permission containing an empty 
> directory owned by user B, it is not possible to delete user B's empty 
> directory with either "hdfs dfs -rm -r" or "hdfs dfs -rmdir", because the 
> current implementation requires FULL permission on the empty directory and 
> throws an exception.
> On the other hand, on Linux the "rm -r" and "rmdir" commands can remove an 
> empty directory as long as the parent directory has WRITE permission (and each 
> prefix component of the path has EXECUTE permission). Of the tested OSes, some 
> prompt the user for confirmation and some don't.
> Here's a reproduction:
> {code}
> [root@vm01 ~]# hdfs dfs -ls /user/
> Found 4 items
> drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
> drwxr-xr-x   - hdfs supergroup  0 2013-05-03 00:28 /user/hdfs
> drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
> drwxr-xr-x   - hdfs supergroup  0 2013-04-14 16:46 /user/hive
> [root@vm01 ~]# hdfs dfs -ls /user/userabc
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> drwxr-xr-x   - hdfs users  0 2013-05-03 01:54 /user/userabc/foo
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfs users  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
> rm: Permission denied: user=userabc, access=ALL, 
> inode="/user/userabc/foo":hdfs:users:drwxr-xr-x
> {code}
> The super user can delete the directory.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
> Deleted /user/userabc/foo
> {code}
> The same is not true for files, however. They have the correct behavior.
> {code}
> [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
> [root@vm01 ~]# hdfs dfs -ls /user/userabc/
> Found 8 items
> drwx--   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
> drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
> drwx--   - userabc users  0 2013-05-03 01:06 
> /user/userabc/.staging
> drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
> drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
> -rw-r--r--   1 hdfs users  0 2013-05-03 02:11 
> /user/userabc/foo-file
> drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
> /user/userabc/maven_source
> drwxr-xr-x   - hdfs users  0 2013-05-03 01:40 
> /user/userabc/test-restore
> [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
> Deleted /user/userabc/foo-file
> {code}
> Using "hdfs dfs -rmdir" command:
> {code}
> bash-4.1$ hadoop fs -lsr /
> lsr: DEPRECATED: Please use 'ls -R' instead.
> drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
> drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
> drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
> drwxr-xr-x   - abc abc 0 2014-03-28 23:39 
> /user/usrabc/foo-empty1
> [root@vm01 usrabc]# su usrabc
> [usrabc@vm01 ~]$ hdfs dfs -rmdir /user/usrabc/foo-empty1
> rmdir: Permission denied: user=usrabc, access=ALL, 
> inode="/user/usrabc/foo-empty1":abc:abc:drwxr-xr-x
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-30 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai reassigned HDFS-6230:
---

Assignee: Mit Desai  (was: Arpit Agarwal)

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-30 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985985#comment-13985985
 ] 

Mit Desai commented on HDFS-6230:
-

Thanks! Taking it over

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6314) Test cases for XAttrs

2014-04-30 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986015#comment-13986015
 ] 

Charles Lamb commented on HDFS-6314:


FSXAttrBaseTest.java:

Add a newline before the package decl.
s/restarting NN/restarting the NN/
s/saving new/saving a new/

+   * Tests for creating xattr
+   * 1. create xattr using XAttrSetFlag.CREATE flag.
+   * 2. Assert exception of creating xattr which already exists.
+   * 3. Create multiple xattrs
+   * 4. Restart NN, save checkpoint scenarios.

Tests for creating xattrs
1. Create an xattr using XAttrSetFlag.CREATE
2. Create it again and expect an exception
3. Create multiple xattrs
4. Restart the NN in save checkpoint scenarios.

+   * Tests for replacing xattr
+   * 1. Replace xattr using XAttrSetFlag.REPLACE flag.
+   * 2. Assert exception of replacing xattr which does not exist.
+   * 3. Create multiple xattrs, and replace some.
+   * 4. Restart NN, save checkpoint scenarios.

* Tests for replacing xattrs
* 1. Replace an xattr using XAttrSetFlag.REPLACE.
* 2. Replace an xattr which doesn't exist and expect an exception
* 3. Create multiple xattrs and replace some.
* 4. Restart the NN in save checkpoint scenarios.

s/Tests for setting xattr/Tests for setting xattrs/
s/Tests for removing xattr/Tests for removing xattrs/

I noticed that the blank lines have whitespace in them. That should be removed. 
e.g. the blank line between 

+Assert.assertArrayEquals(value2, xattrs.get(name2));
+<--- THERE'S WHITESPACE HERE
+fs.removeXAttr(path, name1);
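
As a concrete illustration of items 1 and 2 in the create tests (a sketch; 
{{path}} and {{value1}} are assumed to exist, and the exact exception type is 
an assumption):

{code}
fs.setXAttr(path, "user.a1", value1, EnumSet.of(XAttrSetFlag.CREATE));
try {
  fs.setXAttr(path, "user.a1", value1, EnumSet.of(XAttrSetFlag.CREATE));
  Assert.fail("Creating an xattr which already exists should throw");
} catch (IOException e) {
  // expected: CREATE requires that the xattr does not yet exist
}
{code}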

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6314.patch
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-04-30 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6315:
-

Attachment: HDFS-6315.000.patch

> Decouple recording edit logs from FSDirectory
> 
>
> Key: HDFS-6315
> URL: https://issues.apache.org/jira/browse/HDFS-6315
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6315.000.patch
>
>
> Currently both FSNamesystem and FSDirectory record edit logs. This design 
> requires both FSNamesystem and FSDirectory to be tightly coupled together to 
> implement a durable namespace.
> This jira proposes to separate the responsibility of implementing the 
> namespace and providing durability with edit logs. Specifically, FSDirectory 
> implements the namespace (which should have no edit log operations), and 
> FSNamesystem implements durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-04-30 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6315:


 Summary: Decouple recording edit logs from FSDirectory
 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch

Currently both FSNamesystem and FSDirectory record edit logs. This design 
requires both FSNamesystem and FSDirectory to be tightly coupled together to 
implement a durable namespace.

This jira proposes to separate the responsibility of implementing the namespace 
and providing durability with edit logs. Specifically, FSDirectory implements 
the namespace (which should have no edit log operations), and FSNamesystem 
implements durability by recording the edit logs.
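
To make the proposed split concrete, a toy illustration (not HDFS code; all 
names here are invented):

{code}
import java.util.HashSet;
import java.util.Set;

class Namespace {                        // plays the role of FSDirectory
  private final Set<String> dirs = new HashSet<String>();
  boolean mkdir(String src) {
    return dirs.add(src);                // namespace mutation only, no logging
  }
}

class EditLog {
  void logMkDir(String src) {
    System.out.println("OP_MKDIR " + src);
  }
}

class NamesystemFacade {                 // plays the role of FSNamesystem
  private final Namespace dir = new Namespace();
  private final EditLog editLog = new EditLog();

  void mkdirs(String src) {
    if (dir.mkdir(src)) {
      editLog.logMkDir(src);             // durability recorded by the caller
    }
  }
}
{code}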



--
This message was sent by Atlassian JIRA
(v6.2#6252)

