[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283195#comment-14283195
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7353:
---

Then how about calling it AbstractByteBufferErasureCoder or 
AbstractByteErasureCoder?  It is not clear that Raw means byte.  And the 
ErasureCoder class you mentioned should be called BlockGroupErasureCoder.

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS and XOR. Such an API can be implemented by utilizing 
 various libraries, such as the Intel ISA-L library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7640:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~brandonli] for the 
contribution.

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283273#comment-14283273
 ] 

Haohui Mai commented on HDFS-7640:
--

+1. I'll commit it shortly.

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283299#comment-14283299
 ] 

Lei (Eddy) Xu commented on HDFS-6673:
-

[~wheat9] Thank you for reviewing it.

By {{IN || parent_id || localName}}, do you mean concatenating {{inode}}, {{parent 
inode}} and INode {{localName}} as the key in LevelDB? In this case, since INode is 
the prefix of the key, is the order of keys still determined by inode?


 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch


 The new oiv tool, which is designed for Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283389#comment-14283389
 ] 

Hadoop QA commented on HDFS-6673:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693192/HDFS-6673.005.patch
  against trunk revision 0a2d3e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9271//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9271//console

This message is automatically generated.

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch


 The new oiv tool, which is designed for Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283179#comment-14283179
 ] 

Brandon Li commented on HDFS-7640:
--

Thank you, Yongjun.

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283180#comment-14283180
 ] 

Hadoop QA commented on HDFS-7640:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693171/HDFS-7640.001.patch
  against trunk revision 4a44508.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9270//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9270//console

This message is automatically generated.

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283234#comment-14283234
 ] 

Yi Liu commented on HDFS-7637:
--

Thanks [~jingzhao] and [~clamb] for review and commit.

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}, but it should be 
 {{/.reserved/}}. For example, if some other directory name has the prefix 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.
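 
 For illustration, a minimal sketch of the stricter check the description calls 
 for (assuming the constant stays {{/.reserved}}; this is illustrative, not 
 necessarily the committed patch):
 {code}
 public static boolean isReservedName(String src) {
   // Match "/.reserved" itself or any path under it, but not e.g. "/.reservedpath".
   return src.equals(DOT_RESERVED_PATH_PREFIX)
       || src.startsWith(DOT_RESERVED_PATH_PREFIX + "/");
 }
 {code}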



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6576) Datanode log is generating at root directory in security mode

2015-01-19 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore reassigned HDFS-6576:


Assignee: surendra singh lilhore

 Datanode log is generating at root directory in security mode
 -

 Key: HDFS-6576
 URL: https://issues.apache.org/jira/browse/HDFS-6576
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore
Priority: Minor
 Attachments: HDFS-6576.patch


 In the hadoop-env.sh script we export HADOOP_SECURE_DN_LOG_DIR, but the 
 export statement for HADOOP_LOG_DIR in the line above it is commented out. 
 If HADOOP_LOG_DIR is not exported in the user environment, then the 
 HADOOP_SECURE_DN_LOG_DIR env variable is exported with the value / and the DN 
 logs to the root directory.
 {noformat}
 # Where log files are stored.  $HADOOP_HOME/logs by default.
 #export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
 # Where log files are stored in the secure data environment.
 export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
 {noformat}
 I think we should comment out this line as well.
 hadoop-daemon.sh already handles the case where HADOOP_SECURE_DN_LOG_DIR 
 or HADOOP_LOG_DIR is empty:
 it assigns the value of HADOOP_SECURE_DN_LOG_DIR to HADOOP_LOG_DIR, and after 
 that it checks whether HADOOP_LOG_DIR is empty; if so, the HADOOP_LOG_DIR 
 env variable is exported with the value $HADOOP_PREFIX/logs.
 {noformat}
 # Determine if we're starting a secure datanode, and if so, redefine 
 appropriate variables
 if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then
   export HADOOP_PID_DIR=$HADOOP_SECURE_DN_PID_DIR
   export HADOOP_LOG_DIR=$HADOOP_SECURE_DN_LOG_DIR
   export HADOOP_IDENT_STRING=$HADOOP_SECURE_DN_USER
   starting_secure_dn=true
 fi
 if [ "$HADOOP_IDENT_STRING" = "" ]; then
   export HADOOP_IDENT_STRING="$USER"
 fi
 # get log directory
 if [ "$HADOOP_LOG_DIR" = "" ]; then
   export HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
 fi
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7640:
-
Status: Patch Available  (was: Open)

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283291#comment-14283291
 ] 

Hudson commented on HDFS-7640:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6892 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6892/])
HDFS-7640. print NFS Client in the NFS log. Contributed by Brandon Li. (wheat9: 
rev 5e5e35b1856293503124b77d5d4998a4d8e83082)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java


 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7638:
-
   Resolution: Fixed
Fix Version/s: (was: 2.7.0)
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it's only for the audit log. 
 If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283290#comment-14283290
 ] 

Hudson commented on HDFS-7638:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6892 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6892/])
HDFS-7638: Small fix and few refinements for FSN#truncate. (yliu) (yliu: rev 
5a6c084f074990a1f412475b147fd4f040b57d57)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it's only for the audit log. 
 If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7568) Support immutability (Write-once-read-many) in HDFS

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283316#comment-14283316
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7568:
---

  Administrator or owner of the directory can change immutability of directory 
  to off to make it a regular directory.

If immutability can be turned off, does it still satisfy the regulatory 
compliance requirement?

 Looks like the first type of immutability is similar to setting read-only  
 {{r-\-r-\-r--}} permission on a directory recursively? ...

I guess you probably mean r-xr-xr-x.  Otherwise, file read is not allowed.  For 
r-xr-xr-x directories, append to a file is allowed.

 Support immutability (Write-once-read-many) in HDFS
 ---

 Key: HDFS-7568
 URL: https://issues.apache.org/jira/browse/HDFS-7568
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 2.7.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 Many regulatory compliance requirements require storage to support WORM 
 functionality to protect sensitive data from being modified or deleted. This 
 jira proposes adding that feature to HDFS.
 See the following comment for more description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Attachment: HDFS-3443-006.patch

Attaching a patch that adds the boolean check for every RPC in NameNode.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services startup to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw an NPE.
 {noformat}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283156#comment-14283156
 ] 

Yongjun Zhang commented on HDFS-7640:
-

Hi [~brandonli],

Good message improvement. I went through your patch and it looks good to me. 
Thanks.




 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3689) Add support for variable length block

2015-01-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3689:

Attachment: HDFS-3689.007.patch

Thanks for the review, Nicholas! Updated the patch to address the comments.

bq. We also need to enforce the same replication in concat if we don't want to 
update disk quota.

The new patch just updates the diskspace quota usage after the concat.

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
 HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, 
 HDFS-3689.007.patch


 Currently HDFS supports fixed length blocks. Supporting variable length blocks 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283136#comment-14283136
 ] 

Kai Zheng commented on HDFS-7353:
-

Hi [~szetszwo],

Thanks for your review. The reason the erasure coder classes have Raw in their 
names, as the JIRA's title indicates, is that they perform encoding/decoding at 
the lowest byte level. If you take a look at the codec JIRA (HDFS-7337) and the 
bundle of prototype code attached there, we do have ErasureCoder classes, which 
perform encoding/decoding against a BlockGroup. Generally an ErasureCoder needed 
by an ErasureCodec can use one or more RawErasureCoders to do the real work. I'm 
working on the breakdown and this patch is the first piece. I will address your 
comments when updating the patch. Please let me know if you have more questions.
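
For illustration, a minimal Java sketch of the layering described above. The 
method names and signatures here are assumptions for illustration only, not the 
API defined by the HDFS-7353 patch:
{code}
// Illustrative sketch: a "raw" coder works purely at the byte level on
// equal-sized chunks, independent of blocks or block groups.
public interface RawErasureCoder {

  /** Encode the data chunks into the given parity chunks. */
  void encode(byte[][] dataChunks, byte[][] parityChunks);

  /** Reconstruct the erased chunks from the surviving chunks. */
  void decode(byte[][] survivingChunks, int[] erasedIndexes,
      byte[][] recoveredChunks);

  /** Number of data units in the coding scheme, e.g. 6 for RS(6,3). */
  int getNumDataUnits();

  /** Number of parity units in the coding scheme, e.g. 3 for RS(6,3). */
  int getNumParityUnits();
}
{code}
A block-group level ErasureCoder (the HDFS-7337 side) would then map a 
BlockGroup onto such chunks and delegate the actual math to one or more 
RawErasureCoder instances.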

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS and XOR. Such an API can be implemented by utilizing 
 various libraries, such as the Intel ISA-L library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7640:
-
Attachment: HDFS-7640.001.patch

 print NFS Client in the NFS log
 ---

 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Attachments: HDFS-7640.001.patch


 Currently hdfs-nfs logs do not have any information about nfs clients.
 When multiple clients are using nfs, it becomes hard to distinguish which 
 request came from which client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283272#comment-14283272
 ] 

Haohui Mai commented on HDFS-6673:
--

The current patch stores the map from {{inodeid}} to {{parent}} in LevelDB in 
the first pass; then, in the second pass, it iterates over all inodes in the 
fsimage and prints out the results.

However, given the fact that (1) LevelDB stores the KV pair in sorted order on 
the disk, and (2) the inodes are stored in random orders in the fsimage, the 
scheme requires one seek per file. It makes more sense to adopt the scheme 
demonstrated in HDFS-6293, that is, using {{IN || parent_id || localName}} as 
the key. That way it requires at most one seek per directory instead of one 
seek per file.
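
For illustration, a minimal sketch of how such a composite key could be laid out 
(hypothetical helper, not the actual patch); since LevelDB sorts keys as byte 
strings, a fixed prefix plus a big-endian parent id keeps all entries of one 
directory adjacent on disk:
{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class InodeKey {
  // Assumed section marker standing in for the "IN" prefix.
  private static final byte INODE_PREFIX = 'I';

  /** Build a key of the form prefix || parent_id || localName. */
  public static byte[] toKey(long parentId, String localName) {
    byte[] name = localName.getBytes(StandardCharsets.UTF_8);
    return ByteBuffer.allocate(1 + Long.BYTES + name.length)
        .put(INODE_PREFIX)
        .putLong(parentId)   // big-endian, so keys group and sort by parent id
        .put(name)
        .array();
  }
}
{code}
With this layout, listing a directory becomes a single range scan over one 
contiguous key range, which is what gives at most one seek per directory.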

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch


 The new oiv tool, which is designed for Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283403#comment-14283403
 ] 

Hadoop QA commented on HDFS-3689:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693194/HDFS-3689.007.patch
  against trunk revision 0a2d3e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-nfs:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9272//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9272//console

This message is automatically generated.

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
 HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, 
 HDFS-3689.007.patch


 Currently HDFS supports fixed length blocks. Supporting variable length blocks 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283152#comment-14283152
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3689:
---

- For concat,
-* We also need to enforce the same replication in concat if we don't want to 
update disk quota.
-* Let's move the code for checking src parent directory to verifySrcFiles.  We 
should print out the path when creating an IllegalArgumentException.
-* In addition, could you also check if debug is enabled in 
FSDirConcatOp.concat?  Otherwise, it will compute the srcs string even if debug 
is disabled (see the sketch after this list).
- Unintentional format change in PBHelper.convertEditsResponse(..)?  There is a 
long line.
- Let's also change WebHDFS to support append to a new block.  We may do it 
separately.
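
For the debug-guard point, a minimal sketch of the usual pattern (illustrative 
only; LOG, target and srcs are stand-ins, not the actual FSDirConcatOp fields):
{code}
// Only build the potentially large srcs string when debug logging is enabled.
if (LOG.isDebugEnabled()) {
  LOG.debug("concat: target=" + target
      + " srcs=" + java.util.Arrays.toString(srcs));
}
{code}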


 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
 HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch


 Currently HDFS supports fixed length blocks. Supporting variable length blocks 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6673) Add Delimited format supports for PB OIV tool

2015-01-19 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6673:

Attachment: HDFS-6673.005.patch

Updated the patch to fix test failures and findbugs warnings.

 Add Delimited format supports for PB OIV tool
 -

 Key: HDFS-6673
 URL: https://issues.apache.org/jira/browse/HDFS-6673
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Attachments: HDFS-6673.000.patch, HDFS-6673.001.patch, 
 HDFS-6673.002.patch, HDFS-6673.003.patch, HDFS-6673.004.patch, 
 HDFS-6673.005.patch


 The new oiv tool, which is designed for Protobuf fsimage, lacks a few 
 features supported in the old {{oiv}} tool. 
 This task adds support for the _Delimited_ processor to the oiv tool. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3689) Add support for variable length block

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283222#comment-14283222
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3689:
---

- Let's rename prepareFileForWrite to prepareFileForAppend.
- Need default for inotify.proto

 Add support for variable length block
 -

 Key: HDFS-3689
 URL: https://issues.apache.org/jira/browse/HDFS-3689
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, 
 HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, 
 HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch


 Currently HDFS supports fixed length blocks. Supporting variable length blocks 
 will allow new use cases and features to be built on top of HDFS. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283238#comment-14283238
 ] 

Kai Zheng commented on HDFS-7353:
-

I agree the naming might not be very clear or concrete, but I think the 
abstraction has some benefits: it's flexible and has the potential to contain 
more APIs, some of which I'm going to add.

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS and XOR. Such an API can be implemented by utilizing 
 various libraries, such as the Intel ISA-L library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7338) Reed-Solomon codes using Intel ISA-L library

2015-01-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283239#comment-14283239
 ] 

Kai Zheng commented on HDFS-7338:
-

Thanks! The new summary and description clarify a lot.

 Reed-Solomon codes using Intel ISA-L library
 

 Key: HDFS-7338
 URL: https://issues.apache.org/jira/browse/HDFS-7338
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-EC
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide an RS codec implementation using the Intel ISA-L library 
 for encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7634) Lazy persist (memory) file should not support truncate currently

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7634:
-
Attachment: HDFS-7634.002.patch

Thanks [~shv] for the review. You are right, the first assignment is unnecessary 
and I was going to remove it; it was removed as part of HDFS-7638.

So we just need to rebase the patch. Please look at the new patch, thanks.

 Lazy persist (memory) file should not support truncate currently
 

 Key: HDFS-7634
 URL: https://issues.apache.org/jira/browse/HDFS-7634
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7634.001.patch, HDFS-7634.002.patch


 Similar to {{append}}, lazy persist (memory) files should not support 
 truncate currently. Quoting the reason from the HDFS-6581 design doc:
 {quote}
 Appends to files created with the LAZY_PERSIST flag will not be allowed in the 
 initial implementation to avoid the complexity of keeping in-memory and 
 on-disk replicas in sync on a given DataNode.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7640:


 Summary: print NFS Client in the NFS log
 Key: HDFS-7640
 URL: https://issues.apache.org/jira/browse/HDFS-7640
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial


Currently hdfs-nfs logs do not have any information about nfs clients.
When multiple clients are using nfs, it becomes hard to distinguish which 
request came from which client.
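
For illustration, the kind of log line this improvement is after (the variable 
names below are stand-ins for whatever the RpcProgramNfs3 handler has available; 
this is not the actual patch):
{code}
// Include the remote client address in each per-request log line.
if (LOG.isDebugEnabled()) {
  LOG.debug("NFS READ request from client " + remoteAddress
      + " for fileId " + fileId);
}
{code}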



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283269#comment-14283269
 ] 

Yi Liu commented on HDFS-7638:
--

Thanks [~jingzhao] for the review. The test failure is not related; the test 
runs successfully in my local env. Will commit shortly.

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it's only for the audit log. 
 If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283386#comment-14283386
 ] 

Vinayakumar B commented on HDFS-3443:
-

bq. Hi Vinay, I do not oppose the idea of using lock. But it seems not easy to 
get it right as some unit tests still failing. Also, it will be harder for 
changing the code later on. Why not adding a boolean for indicating namenode 
starting up? It looks like a straightforward solution to me.
Thanks for the clarification [~szetszwo]. I am fine with using the boolean option.
I will try to post a patch with the boolean changes soon.
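
For illustration, a minimal sketch of the boolean approach being discussed 
(hypothetical field and method names, not the actual HDFS-3443 patch):
{code}
// Set to false once the NameNode has finished starting its services.
private volatile boolean startupInProgress = true;

// Called at the top of each NameNode RPC method instead of relying on a lock.
private void checkNNStartup() throws IOException {
  if (startupInProgress) {
    throw new IOException("NameNode is still starting up; please retry.");
  }
}
{code}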

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services startup to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw an NPE.
 {noformat}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-19 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282267#comment-14282267
 ] 

Yi Liu commented on HDFS-3107:
--

I have done some functionality tests in my cluster, and did some code review 
from my point of view (I'm not very good at the snapshot part, so I just did a 
basic review for that part :)). 

+1 (non-binding) for merging into branch-2. It looks good to me, except for some 
small nits, which I filed as HDFS-7634 and HDFS-7638. Certainly we can file 
follow-ups if I find new issues.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation), the reverse operation of 
 append, which makes upper-layer applications use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7637:
-
Status: Patch Available  (was: Open)

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}, but it should be 
 {{/.reserved/}}. For example, if some other directory name has the prefix 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7638:
-
Status: Patch Available  (was: Open)

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it's only for the audit log. 
 If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7353:

Status: Patch Available  (was: Open)

Submitted an initial patch.

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS and XOR. Such an API can be implemented by utilizing 
 various libraries, such as the Intel ISA-L library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7353:

Attachment: HDFS-7353-v1.patch

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different coding 
 algorithms such as RS and XOR. Such an API can be implemented by utilizing 
 various libraries, such as the Intel ISA-L library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7638:
-
Attachment: HDFS-7638.001.patch

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it's only for the audit log. 
 If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282279#comment-14282279
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693012/HDFS-3443-004.patch
  against trunk revision 24315e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestStartup
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.server.namenode.TestFsLimits

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9262//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9262//console

This message is automatically generated.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised, start ZKFC and allow the 
 active services startup to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw an NPE.
 {noformat}
 void startActiveServices() throws IOException {
   LOG.info("Starting services required for active state");
   writeLock();
   try {
     FSEditLog editLog = dir.fsImage.getEditLog();

     if (!editLog.isOpenForWrite()) {
       // During startup, we're already open for write during initialization.
       editLog.initJournalsForWrite();
       // May need to recover
       editLog.recoverUnclosedStreams();

       LOG.info("Catching up to latest edits from old active before " +
           "taking over writer role in edits logs.");
       editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at 

[jira] [Updated] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7637:
-
Attachment: HDFS-7637.001.patch

The fix is obvious, so no new test is needed.
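
For illustration, a minimal self-contained sketch of the corrected check (this 
is not the actual FSDirectory code; the class name here exists only for the 
example):
{code}
public class ReservedPathCheck {
  private static final String DOT_RESERVED_PATH_PREFIX = "/.reserved";

  // Treat "/.reserved" itself and anything under "/.reserved/" as reserved,
  // but not sibling paths such as "/.reservedpath".
  public static boolean isReservedName(String src) {
    return src.equals(DOT_RESERVED_PATH_PREFIX)
        || src.startsWith(DOT_RESERVED_PATH_PREFIX + "/");
  }

  public static void main(String[] args) {
    System.out.println(isReservedName("/.reserved/raw/foo")); // true
    System.out.println(isReservedName("/.reservedpath"));     // false
  }
}
{code}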

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; it should be 
 {{/.reserved/}}. For example, if some other directory is prefixed with 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7638:


 Summary: Small fix and few refinements for FSN#truncate
 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0


*1.* 
{code}
removeBlocks(collectedBlocks);
{code}
should be after {{logSync}}, as we do in other FSN places (rename, delete, 
write with overwrite), the reason is discussed in HDFS-2815 and 
https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068

*2.*
{code}
stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
FSDirectory.isReservedRawName(src), true);
{code}
We'd better use {{dir.getAuditFileInfo}}, since it is only for the audit log. If 
audit logging is not enabled, we don't need to get the file info.

*3.*
In {{truncateInternal}}, 
{code}
INodeFile file = iip.getLastINode().asFile();
{code}
is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7637:
-
Description: 
Currently the {{.reserved}} path check function is:
{code}
public static boolean isReservedName(String src) {
  return src.startsWith(DOT_RESERVED_PATH_PREFIX);
}
{code}
And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; it should be 
{{/.reserved/}}. For example, if some other directory is prefixed with 
_/.reserved_, say _/.reservedpath_, then the check is wrong.

  was:
Currently the {{.reserved}} patch check function is:
{code}
public static boolean isReservedName(String src) {
  return src.startsWith(DOT_RESERVED_PATH_PREFIX);
}
{code}
And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}, it should be 
{{/.reserved/}}, other some directory may prefix with _/.reserved_.


 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor

 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; it should be 
 {{/.reserved/}}. For example, if some other directory is prefixed with 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6727) Refresh data volumes on DataNode based on configuration changes

2015-01-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282277#comment-14282277
 ] 

Lei (Eddy) Xu commented on HDFS-6727:
-

Great question, [~jiangyu1211].

Our original design relied on the DataNode's own timeout to do the block 
report, mostly for simplicity. You can use {{hdfs dfsadmin 
-triggerBlockReport}} to trigger a block report if necessary. If there is a 
strong desire to always send a block report after swapping a drive, I'd add it 
in a follow-up JIRA.

bq. obvious you have data in it.

We were actually thinking about removing bad disks and adding larger empty 
disks as the major cases. As mentioned above, if adding a loaded disk is a very 
common task, I'd love to file another JIRA to let the DN send a block report 
right after the hot swap job finishes.
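
For reference, a usage sketch of the trigger command mentioned above (the host 
and port are placeholders; 50020 is the default DataNode IPC port):
{code}
# request a full block report from a specific DataNode after swapping the drive
hdfs dfsadmin -triggerBlockReport dn-host.example.com:50020

# or request only an incremental report
hdfs dfsadmin -triggerBlockReport -incremental dn-host.example.com:50020
{code}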

 Refresh data volumes on DataNode based on configuration changes
 ---

 Key: HDFS-6727
 URL: https://issues.apache.org/jira/browse/HDFS-6727
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.5.0, 2.4.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Fix For: 2.6.0

 Attachments: HDFS-6727.000.delta-HDFS-6775.txt, HDFS-6727.001.patch, 
 HDFS-6727.002.patch, HDFS-6727.003.patch, HDFS-6727.004.patch, 
 HDFS-6727.005.patch, HDFS-6727.006.patch, HDFS-6727.006.patch, 
 HDFS-6727.007.patch, HDFS-6727.008.patch, HDFS-6727.combo.patch, 
 patchFindBugsOutputhadoop-hdfs.txt


 HDFS-1362 requires DataNode to reload configuration file during the runtime, 
 so that DN can change the data volumes dynamically. This JIRA reuses the 
 reconfiguration framework introduced by HADOOP-7001 to enable DN to 
 reconfigure at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7641) Update archival storage user doc for list/set/get block storage policies

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7641:
-
Attachment: HDFS-7641.001.patch

Attaching the simple fix. Hi [~jingzhao], could you please take a look? Thanks.
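
For reference, the command forms after HDFS-7323 that the doc should describe 
look roughly like the following (the path and policy name are illustrative):
{code}
hdfs storagepolicies -listPolicies
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/archive
{code}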

 Update archival storage user doc for list/set/get block storage policies
 

 Key: HDFS-7641
 URL: https://issues.apache.org/jira/browse/HDFS-7641
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7641.001.patch


 After HDFS-7323, the list/set/get block storage policies commands are 
 different, we should update the corresponding user doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7641) Update archival storage user doc for list/set/get block storage policies

2015-01-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7641:
-
Status: Patch Available  (was: Open)

 Update archival storage user doc for list/set/get block storage policies
 

 Key: HDFS-7641
 URL: https://issues.apache.org/jira/browse/HDFS-7641
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7641.001.patch


 After HDFS-7323, the list/set/get block storage policies commands are 
 different, we should update the corresponding user doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7641) Update archival storage user doc for list/set/get block storage policies

2015-01-19 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7641:


 Summary: Update archival storage user doc for list/set/get block 
storage policies
 Key: HDFS-7641
 URL: https://issues.apache.org/jira/browse/HDFS-7641
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


After HDFS-7323, the list/set/get block storage policies commands are 
different, we should update the corresponding user doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7076) Persist Block Storage Policy in FSImage and Edit Log

2015-01-19 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283506#comment-14283506
 ] 

Yi Liu commented on HDFS-7076:
--

Thanks Jing for updating the patch. Sorry for the late review.

The patch itself looks pretty good to me; I have a few questions:
*1).* About compatibility: the original storage policies are pre-defined and 
hardcoded, but now we let the admin overwrite/remove them, so how do we handle 
the block storage policies for old files?
*2).* Even for new files, if the admin changes the storage policies, does the 
user then need to utilize the {{mover}} (the new data migration tool)?

My initial comments are as follows; it seems we need to rebase the patch again:
*1.* We should disallow the operations if the storage policy feature is not 
enabled.
*2.* Do we need to supply a dfsadmin command line for this feature?
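
Regarding question *2).*, a usage sketch of the {{mover}} tool (the path is 
only an example):
{code}
# migrate blocks under the given path so they match the path's storage policy
hdfs mover -p /data/archive
{code}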

 Persist Block Storage Policy in FSImage and Edit Log
 

 Key: HDFS-7076
 URL: https://issues.apache.org/jira/browse/HDFS-7076
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7076.000.patch, HDFS-7076.001.patch, 
 HDFS-7076.002.patch, HDFS-7076.003.patch, HDFS-7076.004.patch, 
 HDFS-7076.005.patch, HDFS-7076.005.patch, editsStored


 Currently block storage policies are hard coded.  This JIRA is to persist the 
 policies in FSImage and Edit Log in order to support adding new policies or 
 modifying existing policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7634) Lazy persist (memory) file should not support truncate currently

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283412#comment-14283412
 ] 

Hadoop QA commented on HDFS-7634:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693203/HDFS-7634.002.patch
  against trunk revision 5a6c084.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9273//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9273//console

This message is automatically generated.

 Lazy persist (memory) file should not support truncate currently
 

 Key: HDFS-7634
 URL: https://issues.apache.org/jira/browse/HDFS-7634
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7634.001.patch, HDFS-7634.002.patch


 Similar to {{append}}, lazy persist (memory) files should not support 
 truncate currently. Quoting the reason from the HDFS-6581 design doc:
 {quote}
 Appends to files created with the LAZY_PERSIST flag will not be allowed in the 
 initial implementation to avoid the complexity of keeping in-memory and 
 on-disk replicas in sync on a given DataNode.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-5631:
--
Hadoop Flags: Reviewed

+1 patch looks good.  The failed tests are not related.

 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Attachments: HDFS-5631-LazyPersist.patch, 
 HDFS-5631-LazyPersist.patch, HDFS-5631.patch, HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283064#comment-14283064
 ] 

Hadoop QA commented on HDFS-7638:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693037/HDFS-7638.001.patch
  against trunk revision e843a0a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9268//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9268//console

This message is automatically generated.

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite), the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it is only for the audit log. 
 If audit logging is not enabled, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-5631:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, David and Joe!

 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5631-LazyPersist.patch, 
 HDFS-5631-LazyPersist.patch, HDFS-5631.patch, HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283069#comment-14283069
 ] 

Hudson commented on HDFS-5631:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #6889 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6889/])
HDFS-5631. Change BlockMetadataHeader.readHeader(..), ChunkChecksum class and 
constructor to public; and fix FsDatasetSpi to use generic type instead of 
FsVolumeImpl.  Contributed by David Powell and Joe Pallas (szetszwo: rev 
4a4450836c8972480b9387b5e31bab57ae2b5baa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/TestExternalDataset.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalRollingLogs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ChunkChecksum.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java


 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5631-LazyPersist.patch, 
 HDFS-5631-LazyPersist.patch, HDFS-5631.patch, HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6727) Refresh data volumes on DataNode based on configuration changes

2015-01-19 Thread jiangyu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282292#comment-14282292
 ] 

jiangyu commented on HDFS-6727:
---

Hi [~eddyxu]. There is a chance to trigger a bug when you add back a 
hot-swapped disk. For example, if you remove a disk (say disk1) from DN1, the 
blocks on disk1 of DN1 should be replicated to other machines, but there is a 
chance that DN1 itself receives a replica of a block from disk1 onto another of 
its disks (say disk2). Then, when you add disk1 back via hot swap, the 
volumeMap in the datanode will add the replica from disk1, replacing the entry 
for the same block on disk2. Code as below:
{code}
    ReplicaInfo oldReplica = volumeMap.add(bpid, newReplica);
    if (oldReplica != null) {
      FsDatasetImpl.LOG.warn("Two block files with the same block id exist " +
          "on disk: " + oldReplica.getBlockFile() + " and " + blockFile);
    }
{code}
When you trigger the block report, the namenode currently has zero blocks 
recorded for disk1, so processReport goes into the processFirstBlockReport 
method in BlockManager.java. Following the code path: when 
BlockInfo.addStorage is called and the StorageInfo on the namenode side (disk2) 
does not match the storage reported by disk1, we just update the storage info 
and leave the prev and next nodes in the triplets set to null, because we don't 
call listInsert in DatanodeStorage.addBlock. So we miss the chance to move the 
blockInfo to the head.
It is okay as long as you don't restart the datanode. But if you restart the 
datanode, then after the delimiter is inserted we keep moving nodes to the 
head, and when we encounter a block whose next and prev nodes were never set, 
an NPE is thrown. After that the triplets on BlockInfo are in a mess, and the 
datanode can't be used anymore unless you restart the namenode.
Hope I have made it clear.

 Refresh data volumes on DataNode based on configuration changes
 ---

 Key: HDFS-6727
 URL: https://issues.apache.org/jira/browse/HDFS-6727
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.5.0, 2.4.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Fix For: 2.6.0

 Attachments: HDFS-6727.000.delta-HDFS-6775.txt, HDFS-6727.001.patch, 
 HDFS-6727.002.patch, HDFS-6727.003.patch, HDFS-6727.004.patch, 
 HDFS-6727.005.patch, HDFS-6727.006.patch, HDFS-6727.006.patch, 
 HDFS-6727.007.patch, HDFS-6727.008.patch, HDFS-6727.combo.patch, 
 patchFindBugsOutputhadoop-hdfs.txt


 HDFS-1362 requires DataNode to reload configuration file during the runtime, 
 so that DN can change the data volumes dynamically. This JIRA reuses the 
 reconfiguration framework introduced by HADOOP-7001 to enable DN to 
 reconfigure at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6727) Refresh data volumes on DataNode based on configuration changes

2015-01-19 Thread jiangyu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282291#comment-14282291
 ] 

jiangyu commented on HDFS-6727:
---

Hi [~eddyxu]. There is a chance to trigger a bug when you add back a 
hot-swapped disk. For example, if you remove a disk (say disk1) from DN1, the 
blocks on disk1 of DN1 should be replicated to other machines, but there is a 
chance that DN1 itself receives a replica of a block from disk1 onto another of 
its disks (say disk2). Then, when you add disk1 back via hot swap, the 
volumeMap in the datanode will add the replica from disk1, replacing the entry 
for the same block on disk2. Code as below:
{code}
    ReplicaInfo oldReplica = volumeMap.add(bpid, newReplica);
    if (oldReplica != null) {
      FsDatasetImpl.LOG.warn("Two block files with the same block id exist " +
          "on disk: " + oldReplica.getBlockFile() + " and " + blockFile);
    }
{code}
When you trigger the block report, the namenode currently has zero blocks 
recorded for disk1, so processReport goes into the processFirstBlockReport 
method in BlockManager.java. Following the code path: when 
BlockInfo.addStorage is called and the StorageInfo on the namenode side (disk2) 
does not match the storage reported by disk1, we just update the storage info 
and leave the prev and next nodes in the triplets set to null, because we don't 
call listInsert in DatanodeStorage.addBlock. So we miss the chance to move the 
blockInfo to the head.
It is okay as long as you don't restart the datanode. But if you restart the 
datanode, then after the delimiter is inserted we keep moving nodes to the 
head, and when we encounter a block whose next and prev nodes were never set, 
an NPE is thrown. After that the triplets on BlockInfo are in a mess, and the 
datanode can't be used anymore unless you restart the namenode.
Hope I have made it clear.

 Refresh data volumes on DataNode based on configuration changes
 ---

 Key: HDFS-6727
 URL: https://issues.apache.org/jira/browse/HDFS-6727
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.5.0, 2.4.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Fix For: 2.6.0

 Attachments: HDFS-6727.000.delta-HDFS-6775.txt, HDFS-6727.001.patch, 
 HDFS-6727.002.patch, HDFS-6727.003.patch, HDFS-6727.004.patch, 
 HDFS-6727.005.patch, HDFS-6727.006.patch, HDFS-6727.006.patch, 
 HDFS-6727.007.patch, HDFS-6727.008.patch, HDFS-6727.combo.patch, 
 patchFindBugsOutputhadoop-hdfs.txt


 HDFS-1362 requires DataNode to reload configuration file during the runtime, 
 so that DN can change the data volumes dynamically. This JIRA reuses the 
 reconfiguration framework introduced by HADOOP-7001 to enable DN to 
 reconfigure at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282550#comment-14282550
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693066/HDFS-3443-005.patch
  against trunk revision 19cbce3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9266//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9266//console

This message is automatically generated.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised start ZKFC and allow the 
 activeservices start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 void startActiveServices() throws IOException {
 LOG.info("Starting services required for active state");
 writeLock();
 try {
   FSEditLog editLog = dir.fsImage.getEditLog();
   
   if (!editLog.isOpenForWrite()) {
 // During startup, we're already open for write during initialization.
 editLog.initJournalsForWrite();
 // May need to recover
 editLog.recoverUnclosedStreams();
 
 LOG.info("Catching up to latest edits from old active before " +
 "taking over writer role in edits logs.");
 editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at 

[jira] [Commented] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282445#comment-14282445
 ] 

Hadoop QA commented on HDFS-7638:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693037/HDFS-7638.001.patch
  against trunk revision 19cbce3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1244 javac 
compiler warnings (more than the trunk's current 1221 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9265//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9265//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9265//console

This message is automatically generated.

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be after {{logSync}}, as we do in other FSN places (rename, delete, 
 write with overwrite), the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since it is only for the audit log. 
 If audit logging is not enabled, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282444#comment-14282444
 ] 

Hadoop QA commented on HDFS-7353:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693038/HDFS-7353-v1.patch
  against trunk revision 19cbce3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1242 javac 
compiler warnings (more than the trunk's current 1221 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.util.TestMD5FileUt... (name truncated)
org.apache.hadoop.hdfs.server.datanode.TestDataNodeRolling... (name truncated)
(the remaining timed-out test names are garbled in the original report)

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9264//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9264//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9264//console

This message is automatically generated.

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define a raw erasure coder API across different code 
 algorithms like RS, XOR, etc. Such an API can be implemented by utilizing 
 various library support, such as the Intel ISA library and the Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7338) Reed-Solomon codes

2015-01-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282290#comment-14282290
 ] 

Kai Zheng commented on HDFS-7338:
-

Thanks for clarifying. We would leverage existing native libraries that 
implement the code. I have checked the ISA-L library, which supports encoding 
and decoding using either a Cauchy matrix or a classic Vandermonde matrix. For 
decoding, since most of the time only one parity block is to be recovered, one 
optimization would be to simply use XOR.
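
To illustrate the XOR shortcut with a self-contained toy example (this is not 
the ISA-L API; it only shows that a single erased block can be rebuilt from an 
XOR parity, which in many RS constructions is the first parity row):
{code}
public class XorRecoveryDemo {
  // compute the XOR parity over all data blocks
  static byte[] xorParity(byte[][] data) {
    byte[] parity = new byte[data[0].length];
    for (byte[] block : data) {
      for (int i = 0; i < parity.length; i++) {
        parity[i] ^= block[i];
      }
    }
    return parity;
  }

  public static void main(String[] args) {
    byte[][] data = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    byte[] parity = xorParity(data);

    // pretend data[1] was lost; XOR the parity with the surviving blocks
    byte[] recovered = parity.clone();
    for (int b = 0; b < data.length; b++) {
      if (b == 1) continue;                       // skip the erased block
      for (int i = 0; i < recovered.length; i++) {
        recovered[i] ^= data[b][i];
      }
    }
    System.out.println(java.util.Arrays.equals(recovered, data[1])); // true
  }
}
{code}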

 Reed-Solomon codes
 --

 Key: HDFS-7338
 URL: https://issues.apache.org/jira/browse/HDFS-7338
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-EC
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide RS codec implementation for encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282423#comment-14282423
 ] 

Hadoop QA commented on HDFS-7637:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693029/HDFS-7637.001.patch
  against trunk revision 19cbce3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9263//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9263//console

This message is automatically generated.

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; it should be 
 {{/.reserved/}}. For example, if some other directory is prefixed with 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282453#comment-14282453
 ] 

Charles Lamb commented on HDFS-7637:


LGTM [~hitliuyi].

Charles


 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; it should be 
 {{/.reserved/}}. For example, if some other directory is prefixed with 
 _/.reserved_, say _/.reservedpath_, then the check is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6727) Refresh data volumes on DataNode based on configuration changes

2015-01-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282309#comment-14282309
 ] 

Lei (Eddy) Xu commented on HDFS-6727:
-

Hi [~jiangyu1211], thank you so much for spotting this! I will dig into it this 
week and hopefully get back to you ASAP.


 Refresh data volumes on DataNode based on configuration changes
 ---

 Key: HDFS-6727
 URL: https://issues.apache.org/jira/browse/HDFS-6727
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.5.0, 2.4.1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Fix For: 2.6.0

 Attachments: HDFS-6727.000.delta-HDFS-6775.txt, HDFS-6727.001.patch, 
 HDFS-6727.002.patch, HDFS-6727.003.patch, HDFS-6727.004.patch, 
 HDFS-6727.005.patch, HDFS-6727.006.patch, HDFS-6727.006.patch, 
 HDFS-6727.007.patch, HDFS-6727.008.patch, HDFS-6727.combo.patch, 
 patchFindBugsOutputhadoop-hdfs.txt


 HDFS-1362 requires DataNode to reload configuration file during the runtime, 
 so that DN can change the data volumes dynamically. This JIRA reuses the 
 reconfiguration framework introduced by HADOOP-7001 to enable DN to 
 reconfigure at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-3443:

Attachment: HDFS-3443-005.patch

Fixed more tests.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised start ZKFC and allow the 
 activeservices start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 void startActiveServices() throws IOException {
 LOG.info("Starting services required for active state");
 writeLock();
 try {
   FSEditLog editLog = dir.fsImage.getEditLog();
   
   if (!editLog.isOpenForWrite()) {
 // During startup, we're already open for write during initialization.
 editLog.initJournalsForWrite();
 // May need to recover
 editLog.recoverUnclosedStreams();
 
 LOG.info("Catching up to latest edits from old active before " +
 "taking over writer role in edits logs.");
 editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-19 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai updated HDFS-7057:
--
Status: Patch Available  (was: In Progress)

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057-5.patch, HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-19 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai updated HDFS-7057:
--
Attachment: HDFS-7057-5.patch

Fixed above nits.
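
For context, a usage sketch of the API being added (the path and length are 
illustrative, and the shell form is expected to look like
{{hadoop fs -truncate [-w] <length> <path>}}):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // truncate returns true when the file is immediately usable at the new
    // length, false when recovery of the last block is still in progress
    boolean done = fs.truncate(new Path("/user/foo/data.log"), 1024L);
    System.out.println("truncated without recovery: " + done);
  }
}
{code}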

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057-5.patch, HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7634) Lazy persist (memory) file should not support truncate currently

2015-01-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282927#comment-14282927
 ] 

Konstantin Shvachko commented on HDFS-7634:
---

Looks good. The only thing is that you don't need to assign file twice: you 
first assign it using asFile(), and then again using INodeFile.valueOf().

 Lazy persist (memory) file should not support truncate currently
 

 Key: HDFS-7634
 URL: https://issues.apache.org/jira/browse/HDFS-7634
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7634.001.patch


 Similar to {{append}}, lazy persist (memory) files should not support 
 truncate currently. Quoting the reason from the HDFS-6581 design doc:
 {quote}
 Appends to files created with the LAZY_PERSIST flag will not be allowed in the 
 initial implementation to avoid the complexity of keeping in-memory and 
 on-disk replicas in sync on a given DataNode.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-19 Thread Milan Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Desai updated HDFS-7057:
--
Status: In Progress  (was: Patch Available)

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282794#comment-14282794
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

 All these will be processed once all the services (common and state specific) 
 are started, because after this patch everything starts under same lock.

Hi Vinay, I do not oppose the idea of using a lock.  But it seems not easy to 
get it right, as some unit tests are still failing.  Also, it will make the 
code harder to change later on.  Why not add a boolean indicating that the 
namenode is starting up?  It looks like a straightforward solution to me.

 Unable to catch up edits during standby to active switch due to NPE
 ---

 Key: HDFS-3443
 URL: https://issues.apache.org/jira/browse/HDFS-3443
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Reporter: suja s
Assignee: Vinayakumar B
 Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
 HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch


 Start NN
 Let NN standby services be started.
 Before the editLogTailer is initialised start ZKFC and allow the 
 activeservices start to proceed further.
 Here editLogTailer.catchupDuringFailover() will throw NPE.
 void startActiveServices() throws IOException {
 LOG.info("Starting services required for active state");
 writeLock();
 try {
   FSEditLog editLog = dir.fsImage.getEditLog();
   
   if (!editLog.isOpenForWrite()) {
 // During startup, we're already open for write during initialization.
 editLog.initJournalsForWrite();
 // May need to recover
 editLog.recoverUnclosedStreams();
 
 LOG.info("Catching up to latest edits from old active before " +
 "taking over writer role in edits logs.");
 editLogTailer.catchupDuringFailover();
 {noformat}
 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
 Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
 XX.XX.XX.55:58003: output error
 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
 from XX.XX.XX.55:58004: error: java.lang.NullPointerException
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
   at 
 org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
   at 
 org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
 9 on 8020 caught an exception
 java.nio.channels.ClosedChannelException
   at 
 sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
   at 
 org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282775#comment-14282775
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3107:
---

It would be great to merge this to branch-2.  How about finishing the tests 
(HDFS-7058) first?  I am also going to test it this week.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), the reverse operation of 
 append. This forces upper-layer applications to use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-19 Thread Joe Pallas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Pallas updated HDFS-5631:
-
Attachment: HDFS-5631-LazyPersist.patch

Updated patch to work after HDFS-7056.

 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Attachments: HDFS-5631-LazyPersist.patch, 
 HDFS-5631-LazyPersist.patch, HDFS-5631.patch, HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7638) Small fix and few refinements for FSN#truncate

2015-01-19 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282841#comment-14282841
 ] 

Jing Zhao commented on HDFS-7638:
-

Thanks for working on this, [~hitliuyi]! The patch looks good to me. +1.

The javac warning looks unrelated. I just re-triggered the Jenkins to confirm.

 Small fix and few refinements for FSN#truncate
 --

 Key: HDFS-7638
 URL: https://issues.apache.org/jira/browse/HDFS-7638
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7638.001.patch


 *1.* 
 {code}
 removeBlocks(collectedBlocks);
 {code}
 should be called after {{logSync}}, as we do in other FSN places (rename, 
 delete, write with overwrite); the reason is discussed in HDFS-2815 and 
 https://issues.apache.org/jira/browse/HDFS-6871?focusedCommentId=14110068page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14110068
 (a small ordering sketch follows item 3 below).
 *2.*
 {code}
 stat = FSDirStatAndListingOp.getFileInfo(dir, src, false,
 FSDirectory.isReservedRawName(src), true);
 {code}
 We'd better use {{dir.getAuditFileInfo}}, since the result is only used for 
 the audit log. If the audit log is not on, we don't need to get the file info.
 *3.*
 In {{truncateInternal}}, 
 {code}
 INodeFile file = iip.getLastINode().asFile();
 {code}
 is not necessary. 
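
A minimal sketch of the ordering change in item 1; all names are stand-ins for 
the FSNamesystem calls quoted above, not the actual patch:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Illustrative sketch only: log the edit, sync it, then remove blocks outside the lock. */
class TruncateOrderingSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  void truncate(String src, long newLength) {
    List<String> collectedBlocks = new ArrayList<>();
    fsLock.writeLock().lock();
    try {
      // ... perform the truncate in the namespace, collect the blocks to be
      // deleted, and log the edit while holding the write lock ...
      logTruncateEdit(src, newLength);
    } finally {
      fsLock.writeLock().unlock();
    }
    logSync();                      // make the edit durable first, outside the lock
    removeBlocks(collectedBlocks);  // only then physically remove the blocks
  }

  private void logTruncateEdit(String src, long newLength) { }
  private void logSync() { }
  private void removeBlocks(List<String> blocks) { }
}
{code}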



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7338) Reed-Solomon codes using Intel ISA-L library

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7338:
--
Description: This is to provide RS codec implementation using Intel ISA-L 
library for encoding and decoding.  (was: This is to provide RS codec 
implementation for encoding and decoding.)
Summary: Reed-Solomon codes using Intel ISA-L library  (was: 
Reed-Solomon codes)

 Reed-Solomon codes using Intel ISA-L library
 

 Key: HDFS-7338
 URL: https://issues.apache.org/jira/browse/HDFS-7338
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-EC
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide RS codec implementation using Intel ISA-L library for 
 encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7058) Tests for truncate

2015-01-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282809#comment-14282809
 ] 

Konstantin Shvachko commented on HDFS-7058:
---

All tests except TestCLI are already included in the unit tests under HDFS-3107 
and HDFS-7056.
So this is only about TestCLI now unless something additional comes up.

 Tests for truncate
 --

 Key: HDFS-7058
 URL: https://issues.apache.org/jira/browse/HDFS-7058
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko

 Comprehensive test coverage for truncate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282816#comment-14282816
 ] 

Jing Zhao commented on HDFS-7637:
-

+1. I will commit it shortly.

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; the check should require 
 {{/.reserved/}} (or an exact match). For example, if some other directory name 
 merely starts with _/.reserved_, say _/.reservedpath_, the current check 
 wrongly treats it as reserved.
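
A minimal sketch of the corrected check (same method name as the snippet above; 
the committed patch may differ in detail):
{code}
private static final String DOT_RESERVED_PATH_PREFIX = "/.reserved";

/** Reserved means the path is "/.reserved" itself or lives under "/.reserved/". */
public static boolean isReservedName(String src) {
  return src.equals(DOT_RESERVED_PATH_PREFIX)
      || src.startsWith(DOT_RESERVED_PATH_PREFIX + "/");
}
{code}
With this check, /.reservedpath is no longer treated as reserved, while 
/.reserved/raw/foo still is.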



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7588) Improve the HDFS Web UI browser to allow chowning / chmoding, creating dirs and uploading files

2015-01-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reassigned HDFS-7588:
--

Assignee: Ravi Prakash

 Improve the HDFS Web UI browser to allow chowning / chmoding, creating dirs 
 and uploading files
 ---

 Key: HDFS-7588
 URL: https://issues.apache.org/jira/browse/HDFS-7588
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7588.01.patch, HDFS-7588.02.patch


 The new HTML5 web UI file browser is neat; however, it lacks a few features 
 that might make it more useful:
 1. chown
 2. chmod
 3. Uploading files
 4. mkdir



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282828#comment-14282828
 ] 

Hudson commented on HDFS-7637:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6887 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6887/])
HDFS-7637. Fix the check condition for reserved path. Contributed by Yi Liu. 
(jing9: rev e843a0a8cee5c704a5d28cf14b5a4050094d341b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; the check should require 
 {{/.reserved/}} (or an exact match). For example, if some other directory name 
 merely starts with _/.reserved_, say _/.reservedpath_, the current check 
 wrongly treats it as reserved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7338) Reed-Solomon codes

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282804#comment-14282804
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7338:
---

I see.  You are going to use the Intel ISA-L library rather than implement the 
codes yourselves.  Let's revise the summary and description accordingly.

 Reed-Solomon codes
 --

 Key: HDFS-7338
 URL: https://issues.apache.org/jira/browse/HDFS-7338
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-EC
Reporter: Zhe Zhang
Assignee: Kai Zheng

 This is to provide RS codec implementation for encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7637) Fix the check condition for reserved path

2015-01-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7637:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks for the contribution, 
[~hitliuyi]! And thanks for the review, [~clamb]!

 Fix the check condition for reserved path
 -

 Key: HDFS-7637
 URL: https://issues.apache.org/jira/browse/HDFS-7637
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7637.001.patch


 Currently the {{.reserved}} path check function is:
 {code}
 public static boolean isReservedName(String src) {
   return src.startsWith(DOT_RESERVED_PATH_PREFIX);
 }
 {code}
 And {{DOT_RESERVED_PATH_PREFIX}} is {{/.reserved}}; the check should require 
 {{/.reserved/}} (or an exact match). For example, if some other directory name 
 merely starts with _/.reserved_, say _/.reservedpath_, the current check 
 wrongly treats it as reserved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282995#comment-14282995
 ] 

Konstantin Shvachko commented on HDFS-3107:
---

This is tracked under HDFS-7611. Working on it.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), the reverse operation of 
 append. This forces upper-layer applications to use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7639) Remove the limitation imposed by dfs.balancer.moverThreads

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7639:
-

 Summary: Remove the limitation imposed by dfs.balancer.moverThreads
 Key: HDFS-7639
 URL: https://issues.apache.org/jira/browse/HDFS-7639
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: Tsz Wo Nicholas Sze


In Balancer/Mover, the number of dispatcher threads (dfs.balancer.moverThreads) 
limits the number of concurrent moves.  Each dispatcher thread sends a request 
to a datanode and then blocks waiting for the response.  We should remove this 
limitation.
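
A small self-contained sketch of the pattern being described, with illustrative 
names (the real Dispatcher code is different); it shows how a fixed pool of 
blocking workers caps the number of in-flight moves:
{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Illustrative only: each worker blocks until the "datanode" replies. */
public class BlockingDispatchSketch {
  public static void main(String[] args) throws InterruptedException {
    int moverThreads = 2;  // stand-in for dfs.balancer.moverThreads
    ExecutorService movers = Executors.newFixedThreadPool(moverThreads);
    List<String> moves = Arrays.asList("move-1", "move-2", "move-3", "move-4");
    for (String move : moves) {
      movers.submit(() -> {
        try {
          // A real dispatcher thread sends a replaceBlock request and then blocks
          // on the socket until the datanode responds; sleep() simulates that wait.
          TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
        System.out.println("finished " + move);
      });
    }
    movers.shutdown();
    movers.awaitTermination(1, TimeUnit.MINUTES);
    // With 2 threads, only 2 moves are ever in flight at a time -- the cap this
    // issue proposes to remove.
  }
}
{code}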



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5631) Expose interfaces required by FsDatasetSpi implementations

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14283043#comment-14283043
 ] 

Hadoop QA commented on HDFS-5631:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12693124/HDFS-5631-LazyPersist.patch
  against trunk revision 4a5c3a4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9267//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9267//console

This message is automatically generated.

 Expose interfaces required by FsDatasetSpi implementations
 --

 Key: HDFS-5631
 URL: https://issues.apache.org/jira/browse/HDFS-5631
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: David Powell
Assignee: Joe Pallas
Priority: Minor
 Attachments: HDFS-5631-LazyPersist.patch, 
 HDFS-5631-LazyPersist.patch, HDFS-5631.patch, HDFS-5631.patch


 This sub-task addresses section 4.1 of the document attached to HDFS-5194,
 the exposure of interfaces needed by a FsDatasetSpi implementation.
 Specifically it makes ChunkChecksum public and BlockMetadataHeader's
 readHeader() and writeHeader() methods public.
 The changes to BlockReaderUtil (and related classes) discussed by section
 4.1 are only needed if supporting short-circuit, and should be addressed
 as part of an effort to provide such support rather than this JIRA.
 To help ensure these changes are complete and are not regressed in the
 future, tests that gauge the accessibility (though *not* behavior)
 of interfaces needed by a FsDatasetSpi subclass are also included.
 These take the form of a dummy FsDatasetSpi subclass -- a successful
 compilation is effectively a pass.  Trivial unit tests are included so
 that there is something tangible to track.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3519) Checkpoint upload may interfere with a concurrent saveNamespace

2015-01-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282985#comment-14282985
 ] 

Chris Nauroth commented on HDFS-3519:
-

[~mingma], thank you for working on this.

I think the test failures shown in the last Jenkins run are unrelated.  I did 
multiple test runs locally, and they always passed.

A few comments on the patch:
* {{FSImage#saveNamespace}}: It's possible to leave this method after adding 
the transaction ID to the checkpointing set, but without removing it.  This 
would leave the transaction ID in the set permanently, and I believe it would 
then be impossible to checkpoint at that transaction ID again.  Even though 
{{removeFromCheckpointing}} is called in a {{finally}} block, it is preceded by 
a call to {{FSEditLog#startLogSegmentAndWriteHeaderTxn}}, which can throw 
{{IOException}}.  I think we'll need to wrap the whole logic in a second layer 
of try-finally to guarantee the transaction ID gets removed from the set (a 
small sketch follows below).
* {{FSImage#saveFSImageInAllDirs}}: Now that this is wrapped in try-finally, 
there is an existing line of code that needs to be indented.
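
A self-contained sketch of one way to structure the first point so the removal 
is guaranteed even when startLogSegmentAndWriteHeaderTxn throws; all names are 
stand-ins, not the actual FSImage code:
{code}
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch only. */
public class CheckpointingSetSketch {
  private final Set<Long> currentlyCheckpointing = ConcurrentHashMap.newKeySet();

  // Stand-ins for the real FSImage/FSEditLog calls named in the comment above.
  private void saveFSImageInAllDirs(long txid) throws IOException { }
  private void startLogSegmentAndWriteHeaderTxn(long txid) throws IOException { }

  void saveNamespace(long txid) throws IOException {
    if (!currentlyCheckpointing.add(txid)) {
      throw new IOException("Already checkpointing txid " + txid);
    }
    try {
      saveFSImageInAllDirs(txid);
      // This call can throw, so it must be covered by the finally below;
      // otherwise a failure here would leave txid in the set forever.
      startLogSegmentAndWriteHeaderTxn(txid + 1);
    } finally {
      currentlyCheckpointing.remove(txid);
    }
  }
}
{code}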

 Checkpoint upload may interfere with a concurrent saveNamespace
 ---

 Key: HDFS-3519
 URL: https://issues.apache.org/jira/browse/HDFS-3519
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Todd Lipcon
Assignee: Ming Ma
Priority: Critical
 Attachments: HDFS-3519-2.patch, HDFS-3519.patch, test-output.txt


 TestStandbyCheckpoints failed in [precommit build 
 2620|https://builds.apache.org/job/PreCommit-HDFS-Build/2620//testReport/] 
 due to the following issue:
 - both nodes were in Standby state, and configured to checkpoint as fast as 
 possible
 - NN1 starts to save its own namespace
 - NN2 starts to upload a checkpoint for the same txid. So, both threads are 
 writing to the same file fsimage.ckpt_12, but the actual file contents 
 correspond to the uploading thread's data.
 - NN1 finished its saveNamespace operation while NN2 was still uploading. So, 
 it renamed the ckpt file. However, the contents of the file are still empty 
 since NN2 hasn't sent any bytes
 - NN2 finishes the upload, and the rename() call fails, which causes the 
 directory to be marked failed, etc.
 The result is that there is a file fsimage_12 which appears to be a finalized 
 image but in fact is incompletely transferred. When the transfer completes, 
 the problem heals itself so there wouldn't be persistent corruption unless 
 the machine crashes at the same time. And even then, we'd still have the 
 earlier checkpoint to restore from.
 This same race could occur in a non-HA setup if a user puts the NN in safe 
 mode and issues saveNamespace operations concurrent with a 2NN checkpointing, 
 I believe.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282946#comment-14282946
 ] 

Konstantin Shvachko commented on HDFS-7057:
---

+1.
Will convert this into HADOOP jira as most changes are in common and commit as 
such.

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057-5.patch, HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.
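
A short usage sketch, assuming the API lands roughly in the shape discussed on 
this JIRA (a FileSystem#truncate(Path, long) method plus a -truncate shell 
command); the final signatures are defined by the committed patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/data.log");
    // Truncate the file to 1024 bytes; a false return would mean block
    // recovery is still in progress and the last block is not yet finalized.
    boolean done = fs.truncate(file, 1024L);
    System.out.println("truncate complete: " + done);
  }
}
{code}
The shell equivalent would be along the lines of {{hdfs dfs -truncate -w 1024 
/tmp/data.log}}, where -w waits for any block recovery to finish.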



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7057) Expose truncate API via FileSystem and shell command

2015-01-19 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-7057:
--
Issue Type: New Feature  (was: Sub-task)
Parent: (was: HDFS-3107)

 Expose truncate API via FileSystem and shell command
 

 Key: HDFS-7057
 URL: https://issues.apache.org/jira/browse/HDFS-7057
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Milan Desai
 Attachments: HDFS-7057-2.patch, HDFS-7057-3.patch, HDFS-7057-4.patch, 
 HDFS-7057-5.patch, HDFS-7057.patch


 Add truncate operation to FileSystem and expose it to users via shell command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282983#comment-14282983
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3107:
---

BTW, the truncate unit test is failing occasionally; see [build 
#9264|https://builds.apache.org/job/PreCommit-HDFS-Build/9264//testReport/], 
[build 
#9257|https://builds.apache.org/job/PreCommit-HDFS-Build/9257//testReport/], 
[build 
#9255|https://builds.apache.org/job/PreCommit-HDFS-Build/9255//testReport/] and 
[build 
#9244|https://builds.apache.org/job/PreCommit-HDFS-Build/9244/testReport/].

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 3.0.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, 
 HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate_semantics_Mar15.pdf, 
 HDFS_truncate_semantics_Mar21.pdf, editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), the reverse operation of 
 append. This forces upper-layer applications to use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7353) Raw Erasure Coder API for concrete encoding and decoding

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14282989#comment-14282989
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7353:
---

- There are quite a few classes added by the patch.  Please add javadoc for all 
the new classes.
- For XorRawEncoder/XorRawDecoder, the javadoc should describe what the class 
does rather than only say "Ported from HDFS-RAID" (an example follows below).
{code}
+/**
+ * Ported from HDFS-RAID
+ */
+public class XorRawEncoder extends AbstractRawErasureEncoder{
{code}
- The word Raw could be dropped from the class names.  For example, 
AbstractRawErasureCoder would be better named AbstractErasureCoder.
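
For instance, a class-level javadoc along these lines (wording is only a 
suggestion) would say more than the port note alone:
{code}
/**
 * A raw erasure encoder that XORs all data units together to produce a single
 * parity unit, so any one missing unit can be recovered from the remaining ones.
 * Ported from HDFS-RAID.
 */
public class XorRawEncoder extends AbstractRawErasureEncoder {
  // ...
}
{code}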

 Raw Erasure Coder API for concrete encoding and decoding
 

 Key: HDFS-7353
 URL: https://issues.apache.org/jira/browse/HDFS-7353
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-EC

 Attachments: HDFS-7353-v1.patch


 This is to abstract and define raw erasure coder API across different codes 
 algorithms like RS, XOR and etc. Such API can be implemented by utilizing 
 various library support, such as Intel ISA library and Jerasure library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)