[jira] [Created] (HDFS-10948) HDFS setOwner no-op should not generate an edit log transaction

2016-10-03 Thread Karthik Palaniappan (JIRA)
Karthik Palaniappan created HDFS-10948:
--

 Summary: HDFS setOwner no-op should not generate an edit log 
transaction
 Key: HDFS-10948
 URL: https://issues.apache.org/jira/browse/HDFS-10948
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.7.0
Reporter: Karthik Palaniappan
Priority: Minor


The HDFS setOwner RPC takes a path, an optional username, and an optional groupname 
(see ClientNamenodeProtocol.proto). If neither the username nor the groupname is set, 
the RPC is a no-op. However, an entry is still committed to the edit log, and 
when it is retrieved through getEditsFromTxid, both the username and the groupname 
are set to the empty string.
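
Here is a rough, untested sketch of how the stray transaction can be observed through 
the inotify API (which is backed by getEditsFromTxid); the NameNode URI and the path 
are illustrative, and reading the stream requires superuser privileges:

```
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class NoOpSetOwnerRepro {
  public static void main(String[] args) throws Exception {
    // Hypothetical NameNode URI; adjust for the cluster under test.
    HdfsAdmin admin = new HdfsAdmin(URI.create("hdfs://namenode:8020"),
        new Configuration());
    DFSInotifyEventInputStream events = admin.getInotifyEventStream();

    // ... issue a setOwner RPC on some path with neither username nor
    // groupname set (e.g. via a raw ClientNamenodeProtocol call) ...

    // The no-op still surfaces as a METADATA event, with the owner and
    // group reported as empty strings rather than null.
    EventBatch batch = events.take();
    for (Event e : batch.getEvents()) {
      if (e.getEventType() == Event.EventType.METADATA) {
        Event.MetadataUpdateEvent m = (Event.MetadataUpdateEvent) e;
        System.out.println("owner=" + m.getOwnerName()
            + " group=" + m.getGroupName());
      }
    }
  }
}
```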

This does not match the behavior of the setTimes RPC. There, atime and mtime are 
both required, and you can pass "-1" to indicate "don't change the time". If 
both are set to "-1", the RPC is a no-op, and appropriately no edit log 
transaction is created.
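
For comparison, here is a minimal, untested sketch of that convention from the client 
side, using the public FileSystem API (the path is illustrative):

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetTimesNoOp {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // -1 for both mtime and atime means "don't change the time", so this
    // call is a no-op and, as described above, writes no edit log transaction.
    fs.setTimes(new Path("/some/file"), -1L, -1L);
  }
}
```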

Here's an example of an (untested) patch to fix the issue:

```
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 3dc1c30..0728bc5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -77,6 +77,7 @@ static HdfsFileStatus setOwner(
     if (FSDirectory.isExactReservedName(src)) {
       throw new InvalidPathException(src);
     }
+    boolean changed = false;
     FSPermissionChecker pc = fsd.getPermissionChecker();
     INodesInPath iip;
     fsd.writeLock();
@@ -92,11 +93,13 @@ static HdfsFileStatus setOwner(
           throw new AccessControlException("User does not belong to " + group);
         }
       }
-      unprotectedSetOwner(fsd, src, username, group);
+      changed = unprotectedSetOwner(fsd, src, username, group);
     } finally {
       fsd.writeUnlock();
     }
-    fsd.getEditLog().logSetOwner(src, username, group);
+    if (changed) {
+      fsd.getEditLog().logSetOwner(src, username, group);
+    }
     return fsd.getAuditFileInfo(iip);
   }
 
@@ -287,22 +290,27 @@ static void unprotectedSetPermission(
     inode.setPermission(permissions, snapshotId);
   }
 
-  static void unprotectedSetOwner(
+  static boolean unprotectedSetOwner(
       FSDirectory fsd, String src, String username, String groupname)
       throws FileNotFoundException, UnresolvedLinkException,
       QuotaExceededException, SnapshotAccessControlException {
     assert fsd.hasWriteLock();
+    boolean status = false;
     final INodesInPath inodesInPath = fsd.getINodesInPath4Write(src, true);
     INode inode = inodesInPath.getLastINode();
     if (inode == null) {
       throw new FileNotFoundException("File does not exist: " + src);
     }
     if (username != null) {
+      status = true;
       inode = inode.setUser(username, inodesInPath.getLatestSnapshotId());
     }
     if (groupname != null) {
+      status = true;
       inode.setGroup(groupname, inodesInPath.getLatestSnapshotId());
     }
+
+    return status;
   }
 
   static boolean setTimes(
```






[jira] [Created] (HDFS-9878) Data transfer encryption with AES 192: Invalid key length.

2016-02-29 Thread Karthik Palaniappan (JIRA)
Karthik Palaniappan created HDFS-9878:
-

 Summary: Data transfer encryption with AES 192: Invalid key length.
 Key: HDFS-9878
 URL: https://issues.apache.org/jira/browse/HDFS-9878
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs-client
Affects Versions: 2.7.2
 Environment: OS: Ubuntu 14.04

/hadoop-2.7.2/bin$ uname -a
Linux wkstn-kpalaniappan 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 14:27:58 
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

/hadoop-2.7.2/bin$ java -version
java version "1.7.0_95"
OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)

Hadoop version: 2.7.2
Reporter: Karthik Palaniappan


Configuring AES 128 or AES 256 encryption 
(dfs.encrypt.data.transfer.cipher.key.bitlength = [128, 256]) works perfectly 
fine. Trying to use AES 192 generates this exception on the datanode:
16/02/29 17:34:10 ERROR datanode.DataNode: wkstn-kpalaniappan:50010:DataXceiver error processing unknown operation  src: /127.0.0.1:57237 dst: /127.0.0.1:50010
java.lang.IllegalArgumentException: Invalid key length.
        at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
        at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
        at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
        at org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
        at org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:396)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getEncryptedStreams(SaslDataTransferServer.java:178)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:110)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:193)
        at java.lang.Thread.run(Thread.java:745)

And this exception on the client:
/hadoop-2.7.2/bin$ ./hdfs dfs -copyFromLocal ~/.vimrc /vimrc
16/02/29 17:34:10 WARN hdfs.DFSClient: DataStreamer Exception
java.lang.IllegalArgumentException: Invalid key length.
        at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
        at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
        at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
        at org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
        at org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
        at org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1266)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
copyFromLocal: DataStreamer Exception: 

The issue is in the OpenSSL C code: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c#L204.

It asserts that the key length is 128 or 256 bits, but does not allow 192.
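
Here is a rough, untested sketch that exercises the same check outside a cluster, 
through the Hadoop crypto API directly; it assumes the native OpenSSL codec is the 
one selected (otherwise the pure-Java JCE codec is used and behaves differently):

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CryptoCodec;
import org.apache.hadoop.crypto.Encryptor;

public class Aes192KeyLength {
  public static void main(String[] args) throws Exception {
    // Defaults to AES/CTR/NoPadding; OpensslAesCtrCryptoCodec is preferred
    // when the native library is loaded.
    CryptoCodec codec = CryptoCodec.getInstance(new Configuration());
    Encryptor encryptor = codec.createEncryptor();
    byte[] iv = new byte[16];

    encryptor.init(new byte[16], iv);  // 128-bit key: accepted
    encryptor.init(new byte[32], iv);  // 256-bit key: accepted
    encryptor.init(new byte[24], iv);  // 192-bit key: "Invalid key length."
  }
}
```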


[jira] [Created] (HDFS-9302) WebHDFS throws NullPointerException if newLength is not provided

2015-10-23 Thread Karthik Palaniappan (JIRA)
Karthik Palaniappan created HDFS-9302:
-

 Summary: WebHDFS throws NullPointerException if newLength is not 
provided
 Key: HDFS-9302
 URL: https://issues.apache.org/jira/browse/HDFS-9302
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.1
 Environment: Centos6
Reporter: Karthik Palaniappan
Priority: Minor


$ curl -X POST "http://namenode:50070/webhdfs/v1/foo?op=truncate"
{"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}

We should change newLength to be a required parameter in the WebHDFS 
documentation 
(https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#New_Length),
 and throw an IllegalArgumentException if it isn't provided.
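
A hedged sketch of the kind of check that would do it, written as a standalone helper 
rather than a quote of the actual NamenodeWebHdfsMethods handler, and assuming the 
parameter arrives as a nullable Long (the NullPointerException presumably comes from 
unboxing that missing value today):

```
public class TruncateParamCheck {
  // Hypothetical helper: an absent newlength query parameter yields null, so
  // the handler can fail fast with a clear message instead of the NPE above.
  static long requiredNewLength(Long newLength) {
    if (newLength == null) {
      throw new IllegalArgumentException(
          "newlength is a required parameter for op=TRUNCATE");
    }
    return newLength;
  }
}
```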


