[jira] [Created] (HDFS-7298) HDFS may honor socket timeout configuration

2014-10-27 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-7298:
-

 Summary: HDFS may honor socket timeout configuration
 Key: HDFS-7298
 URL: https://issues.apache.org/jira/browse/HDFS-7298
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Guo Ruijing


DFS_CLIENT_SOCKET_TIMEOUT_KEY: HDFS socket read timeout
DFS_DATANODE_SOCKET_WRITE_TIMEOUT_KEY: HDFS socket write timeout 

HDFS should honor the socket timeout configuration in the following places. When the
configured value is 0 the timeout is disabled, so the per-target extension should not
be added in that case:

1. DataXceiver.java:

1) existing code (not expected):

   int timeoutValue = dnConf.socketTimeout
       + (HdfsServerConstants.READ_TIMEOUT_EXTENSION * targets.length);
   int writeTimeout = dnConf.socketWriteTimeout
       + (HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * targets.length);

2) proposed code:

   int timeoutValue = dnConf.socketTimeout > 0
       ? (dnConf.socketTimeout
          + HdfsServerConstants.READ_TIMEOUT_EXTENSION * targets.length)
       : 0;

   int writeTimeout = dnConf.socketWriteTimeout > 0
       ? (dnConf.socketWriteTimeout
          + HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * targets.length)
       : 0;

2. DFSClient.java:

existing code is expected:

  int getDatanodeWriteTimeout(int numNodes) {
    return (dfsClientConf.confTime > 0) ?
        (dfsClientConf.confTime
         + HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * numNodes) : 0;
  }

  int getDatanodeReadTimeout(int numNodes) {
    return dfsClientConf.socketTimeout > 0 ?
        (HdfsServerConstants.READ_TIMEOUT_EXTENSION * numNodes
         + dfsClientConf.socketTimeout) : 0;
  }

3. DataNode.java:

existing code is not expected:

  long writeTimeout = dnConf.socketWriteTimeout
      + HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * (targets.length - 1);

proposed code:

  long writeTimeout = dnConf.socketWriteTimeout > 0
      ? (dnConf.socketWriteTimeout
         + HdfsServerConstants.WRITE_TIMEOUT_EXTENSION * (targets.length - 1))
      : 0;
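
All three call sites apply the same rule, so it can be summarized in one helper. The
sketch below only illustrates the proposed behaviour (the class and method names are
assumptions, not existing HDFS code); the key point is that a configured timeout of 0
means "no timeout" (as with Socket.setSoTimeout(0)), so adding the per-target
extension would silently turn it into a small positive timeout.

  // Hedged sketch of the proposed rule; not existing HDFS code.
  final class TimeoutSketch {
    static int effectiveTimeout(int configuredTimeout, int extensionPerTarget, int numTargets) {
      // 0 (or a negative value) means the timeout is disabled; keep it disabled.
      if (configuredTimeout <= 0) {
        return 0;
      }
      // Otherwise scale the timeout with the pipeline length, as the existing code does.
      return configuredTimeout + extensionPerTarget * numTargets;
    }
  }

For example, DataXceiver could compute
effectiveTimeout(dnConf.socketTimeout, HdfsServerConstants.READ_TIMEOUT_EXTENSION, targets.length)
instead of the unconditional addition shown in 1) above.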



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7133) Support clearing namespace quota on /

2014-09-22 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-7133:
-

 Summary: Support clearing namespace quota on /
 Key: HDFS-7133
 URL: https://issues.apache.org/jira/browse/HDFS-7133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Guo Ruijing


existing implementation:

1. supports setting a namespace quota on /
2. does not support clearing the namespace quota on /, due to HDFS-1258

expected implementation:

support clearing the namespace quota on /
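
A hedged sketch of what the expected behaviour looks like from the client side
(assuming fs.defaultFS points at the HDFS cluster): the same call that
"hdfs dfsadmin -clrQuota /" issues, which is currently rejected for the root directory.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hdfs.DistributedFileSystem;
  import org.apache.hadoop.hdfs.protocol.HdfsConstants;

  public class ClearRootQuotaSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      try (DistributedFileSystem dfs =
               (DistributedFileSystem) new Path("/").getFileSystem(conf)) {
        // QUOTA_RESET clears the namespace quota; QUOTA_DONT_SET leaves the space quota unchanged.
        dfs.setQuota(new Path("/"), HdfsConstants.QUOTA_RESET, HdfsConstants.QUOTA_DONT_SET);
      }
    }
  }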



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-6869) Logfile mode in secure datanode is expected to be 644

2014-08-18 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-6869:
-

 Summary: Logfile mode in secure datanode is expected to be 644
 Key: HDFS-6869
 URL: https://issues.apache.org/jira/browse/HDFS-6869
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Guo Ruijing


Logfile mode in secure datanode is expected to be 644.

[root@centos64-2 hadoop-hdfs]# ll
total 136
-rw-r--r-- 1 hdfs hadoop 19455 Aug 18 22:40 
hadoop-hdfs-journalnode-centos64-2.log
-rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:38 
hadoop-hdfs-journalnode-centos64-2.out
-rw-r--r-- 1 hdfs hadoop 62316 Aug 18 22:40 hadoop-hdfs-namenode-centos64-2.log
-rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:38 hadoop-hdfs-namenode-centos64-2.out
-rw-r--r-- 1 hdfs hadoop 15485 Aug 18 22:39 hadoop-hdfs-zkfc-centos64-2.log
-rw-r--r-- 1 hdfs hadoop   718 Aug 18 22:39 hadoop-hdfs-zkfc-centos64-2.out
drwxr-xr-x 2 hdfs root    4096 Aug 18 22:38 hdfs
-rw-r--r-- 1 hdfs hadoop 0 Aug 18 22:38 hdfs-audit.log
-rw-r--r-- 1 hdfs hadoop 15029 Aug 18 22:40 SecurityAuth-hdfs.audit
[root@centos64-2 hadoop-hdfs]#
[root@centos64-2 hadoop-hdfs]#
[root@centos64-2 hadoop-hdfs]# ll hdfs/
total 52
-rw------- 1 hdfs hadoop 43526 Aug 18 22:41 hadoop-hdfs-datanode-centos64-2.log
 expected -rw-r--r-- (same as the namenode log)
-rw-r--r-- 1 root root 734 Aug 18 22:38 hadoop-hdfs-datanode-centos64-2.out
-rw------- 1 root root 343 Aug 18 22:38 jsvc.err
-rw------- 1 root root   0 Aug 18 22:38 jsvc.out
-rw------- 1 hdfs hadoop 0 Aug 18 22:38 SecurityAuth-hdfs.audit







--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6590) NullPointerException was generated when calling getBlockLocalPathInfo

2014-06-22 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-6590:
-

 Summary: NullPointerException was generated when calling 
getBlockLocalPathInfo
 Key: HDFS-6590
 URL: https://issues.apache.org/jira/browse/HDFS-6590
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.2.0
Reporter: Guo Ruijing


2014-06-11 20:34:40.240119, p43949, th140725562181728, ERROR cannot setup block 
reader for Block: [block pool ID: BP-1901161041-172.28.1.251-1402542341112 
block ID 1073741926_1102] on Datanode: sdw3(172.28.1.3).
RpcHelper.h: 74: HdfsIOException: Unexpected exception: when unwrap the rpc 
remote exception java.lang.NullPointerException, 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1014)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:6373)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
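
A hedged sketch of the kind of defensive check that would turn this
NullPointerException into a descriptive error, assuming (not confirmed by this
report) that the NPE comes from dereferencing a null local-path lookup inside
DataNode.getBlockLocalPathInfo; the types below are stand-ins, not HDFS classes.

  import java.io.IOException;

  final class LocalPathInfoSketch {
    static final class PathInfo {
      final String blockPath;
      final String metaPath;
      PathInfo(String blockPath, String metaPath) {
        this.blockPath = blockPath;
        this.metaPath = metaPath;
      }
    }

    /** Report a missing replica as an IOException instead of an NPE. */
    static PathInfo checkedLookup(PathInfo info, String blockId) throws IOException {
      if (info == null || info.blockPath == null) {
        throw new IOException("No local path info for block " + blockId
            + " on this datanode; the replica may be missing or not finalized");
      }
      return info;
    }
  }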



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6454) Configurable Block Placement Policy

2014-05-28 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-6454:
-

 Summary: Configurable Block Placement Policy
 Key: HDFS-6454
 URL: https://issues.apache.org/jira/browse/HDFS-6454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Guo Ruijing


In the existing implementation, the block placement priority is localhost / remote
rack / local rack / random.
In BlockPlacementPolicyDefault, the network topology is /rack/host.
In BlockPlacementPolicyWithNodeGroup, the network topology is /rack/nodegroup/host.

This JIRA proposes making the block placement priority configurable, for example:

<property>
  <name>dfs.block.replicator.priority</name>
  <value>0, 2, 1, *</value>
  <description>
    default network topology is /level2/level1
    nodegroup network topology is /level3/level2/level1. choose priority can
    be 0 (localhost), 3 (remote rack), 2 (local rack), * (any host)
  </description>
</property>

Another example: when one VM hosts several dockers/containers, the network topology
can be /rack/nodegroup/container/host. In this case, the block replicator priority
can be 0 (localhost), 4 (remote rack), 3 (local rack), * (any host).
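
A hedged sketch of how the proposed property could be read (the property name, the
comma-separated format, and the "*" wildcard come from the proposal above and do not
exist in Hadoop today):

  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;

  public class PlacementPrioritySketch {
    /** Sentinel meaning "any host" for the "*" entry. */
    public static final int ANY_HOST = -1;

    /** Parse the proposed dfs.block.replicator.priority value into a priority list. */
    public static List<Integer> parsePriorities(Configuration conf) {
      // The default "0, 2, 1, *" follows the /rack/host example above.
      String raw = conf.get("dfs.block.replicator.priority", "0, 2, 1, *");
      List<Integer> priorities = new ArrayList<>();
      for (String token : raw.split(",")) {
        token = token.trim();
        priorities.add("*".equals(token) ? ANY_HOST : Integer.parseInt(token));
      }
      return priorities;
    }
  }

A placement policy would then walk this list in order when choosing targets, falling
back to the next level whenever the current one has no usable node.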



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6087) Unify HDFS write/append/truncate

2014-03-11 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-6087:
-

 Summary: Unify HDFS write/append/truncate
 Key: HDFS-6087
 URL: https://issues.apache.org/jira/browse/HDFS-6087
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Guo Ruijing
 Attachments: HDFS Design Proposal.pdf

In the existing implementation, an HDFS file can be appended and an HDFS block can be
reopened for append. This design introduces complexity, including lease recovery. If
we instead design HDFS blocks to be immutable, append and truncate become very
simple. The idea is that an HDFS block is immutable once it has been committed to the
namenode; if a block has not been committed yet, it is the HDFS client's
responsibility to re-add its contents under a new block ID.
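
A hedged sketch of the client-side append flow under this model (the interfaces and
method names below are illustrations only, not HDFS APIs): committed blocks are never
reopened; an uncommitted tail block is abandoned and its bytes are re-added, together
with the new data, under a new block ID.

  import java.io.IOException;
  import java.util.Arrays;

  interface BlockSketch {
    boolean isCommitted();                    // committed to the namenode -> immutable
    byte[] readContents() throws IOException;
  }

  interface WriterSketch {
    BlockSketch lastBlock();                              // null if the file is empty
    void abandonBlock(BlockSketch b) throws IOException;  // drop an uncommitted block
    void addNewBlock(byte[] contents) throws IOException; // new block ID, write, commit
  }

  final class AppendSketch {
    /** Append data without ever reopening a committed block. */
    static void append(WriterSketch writer, byte[] data) throws IOException {
      BlockSketch last = writer.lastBlock();
      if (last == null || last.isCommitted()) {
        // Committed (immutable) tail: the appended bytes simply start a new block.
        writer.addNewBlock(data);
        return;
      }
      // Uncommitted tail: re-add its bytes plus the new data under a new block ID.
      byte[] tail = last.readContents();
      byte[] merged = Arrays.copyOf(tail, tail.length + data.length);
      System.arraycopy(data, 0, merged, tail.length, data.length);
      writer.abandonBlock(last);
      writer.addNewBlock(merged);
    }
  }

Truncate would follow the same shape: copy the retained prefix of the affected block
into a new block instead of shrinking the committed block in place.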



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-5765) Append to original snapshotted files was broken

2014-01-14 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-5765:
-

 Summary: Append to original snapshotted files was broken
 Key: HDFS-5765
 URL: https://issues.apache.org/jira/browse/HDFS-5765
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Guo Ruijing






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)