[jira] [Commented] (HDFS-7698) Fix locking on HDFS read statistics and add a method for clearing them.

2015-02-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14306803#comment-14306803
 ] 

Yi Liu commented on HDFS-7698:
--

Thanks Colin for the patch.
It looks good to me, just a few comments. +1 after addressing.
# In DFSInputStream#tryReadZeroCopy, we should also hold the {{infoLock}} for 
{{readStatistics}} (see the sketch below).
# For HdfsDataInputStream#getAllBlocks, we could remove the {{synchronized}} 
too.
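
A minimal sketch of point 1, assuming {{infoLock}} guards {{readStatistics}} the 
same way it does for the other accessors (names follow the patch; the surrounding 
zero-copy logic is elided):
{code}
// Hypothetical sketch: any update of readStatistics inside tryReadZeroCopy
// should happen under infoLock, like the other readStatistics accessors.
class ZeroCopyReadSketch {
  private final Object infoLock = new Object();
  private long totalZeroCopyBytesRead; // stands in for ReadStatistics

  void recordZeroCopyRead(int bytesRead) {
    synchronized (infoLock) {
      totalZeroCopyBytesRead += bytesRead;
    }
  }
}
{code}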

 Fix locking on HDFS read statistics and add a method for clearing them.
 ---

 Key: HDFS-7698
 URL: https://issues.apache.org/jira/browse/HDFS-7698
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7698.002.patch


 Fix locking on HDFS read statistics and add a method for clearing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7709) Fix Findbug Warnings

2015-02-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7709:
---
Attachment: HDFS-7709.patch

 Fix Findbug Warnings
 

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html
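 For example, a typical DM_DEFAULT_ENCODING fix makes the charset explicit 
 instead of relying on the platform default (a generic sketch of the pattern, 
 not the exact patch):
{code}
// Hypothetical sketch of the DM_DEFAULT_ENCODING fix pattern.
import java.nio.charset.StandardCharsets;

class EncodingFixSketch {
  byte[] encode(String json) {
    // was: json.getBytes()  -- flagged because it uses the platform default
    return json.getBytes(StandardCharsets.UTF_8);
  }
}
{code}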



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7709) Fix Findbug Warnings in httpfs

2015-02-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7709:
---
Summary: Fix Findbug Warnings in httpfs  (was: Fix Findbug Warnings)

 Fix Findbug Warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14306822#comment-14306822
 ] 

Konstantin Shvachko commented on HDFS-7738:
---

Hey Nicholas, agreed, more test cases are a good idea.
A few comments on the patch:
# I would wrap the {{op}} parameter in {{recoverLeaseInternal()}} in an {{enum}} 
rather than passing an arbitrary string (see the sketch after these comments).
# The if-else statement in {{testBasicTruncate()}} can be replaced with a 
single assert
{code}
assertEquals("File is expected to be closed only for truncates to the block "
    + "boundary",
    isReady, (toTruncate == 0 || newLength % BLOCK_SIZE == 0));
{code}
I think messages in asserts are important.
# Why the extra bracket blocks in {{testTruncateFailure()}}? I don't think freeing 
local variables is worth it.
# In {{testTruncateFailure()}} you should probably handle 
{{InterruptedException}} rather than passing it through the test case.
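
A hypothetical sketch of point 1, reusing the {{RecoverLeaseOp}} name; the exact 
constants and message format are assumptions:
{code}
// Hypothetical sketch: an enum for the recoverLeaseInternal() operation,
// instead of passing an arbitrary string.
enum RecoverLeaseOp {
  CREATE_FILE, APPEND_FILE, TRUNCATE_FILE, RECOVER_LEASE;

  String getExceptionMessage(String src, String holder, String client) {
    return "Failed to " + name() + " " + src + " for " + holder
        + " on client " + client;
  }
}
{code}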

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7686) Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430

2015-02-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308565#comment-14308565
 ] 

Colin Patrick McCabe commented on HDFS-7686:


Thanks, guys.  I have a patch for this one but I want to add a few unit tests 
to it.  Will post tomorrow.

 Corrupt block reporting to namenode soon feature is overwritten by 
 HDFS-7430
 ---

 Key: HDFS-7686
 URL: https://issues.apache.org/jira/browse/HDFS-7686
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Rushabh S Shah
Assignee: Colin Patrick McCabe
Priority: Blocker

 The feature implemented in HDFS-7548 is removed by HDFS-7430.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2015-02-05 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---
Attachment: (was: HDFS-6133-7.patch)

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133-1.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, 
 HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133.patch


 Currently, running the Balancer will destroy the RegionServer's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 RegionServer's data locality.
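 A hypothetical sketch of the idea (method and parameter names are illustrative, 
 not the actual getBlocks implementation):
{code}
// Hypothetical sketch: skip blocks of files under an excluded path prefix
// when handing blocks to the Balancer, so /hbase data keeps its locality.
boolean isExcluded(String filePath, java.util.List<String> excludedPrefixes) {
  for (String prefix : excludedPrefixes) {
    if (filePath.startsWith(prefix)) {
      return true; // leave this file's blocks where they are
    }
  }
  return false;
}
{code}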



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308737#comment-14308737
 ] 

Yi Liu commented on HDFS-7740:
--

I will update the patch to cover these test scenarios, thanks guys.

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2015-02-05 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---
Attachment: HDFS-6133-7.patch

Thanks Nicholas.
Updated the patch to fix the test failures.
By the way, I used dev-support/test-patch.sh to test my patch yesterday, and 
it didn't catch the test failure; it seems my local test environment has some 
problems.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133-1.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, 
 HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, 
 HDFS-6133.patch


 Currently, running the Balancer will destroy the RegionServer's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7741) Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308632#comment-14308632
 ] 

Hadoop QA commented on HDFS-7741:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696930/HDFS-7741.001.patch
  against trunk revision 6583ad1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9450//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9450//console

This message is automatically generated.

 Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream
 

 Key: HDFS-7741
 URL: https://issues.apache.org/jira/browse/HDFS-7741
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Attachments: HDFS-7741.001.patch


 The {{synchronized}} for {{HdfsDataInputStream#getAllBlocks}} and 
 {{FSDataInputStream#seek}} are unnecessary.
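 A minimal before/after sketch, assuming the wrapper only delegates to the 
 underlying {{DFSInputStream}}, which does its own locking (not the exact patch):
{code}
// Hypothetical sketch: the outer synchronized adds nothing because the
// delegate already synchronizes internally.
// before: public synchronized List<LocatedBlock> getAllBlocks() ...
public List<LocatedBlock> getAllBlocks() throws IOException {
  return getDFSInputStream().getAllBlocks();
}
{code}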



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support unbuffer

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308630#comment-14308630
 ] 

Hadoop QA commented on HDFS-7694:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696959/HDFS-7694.002.patch
  against trunk revision 9d91069.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9452//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9452//console

This message is automatically generated.

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2319) Add test cases for FSshell -stat

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308553#comment-14308553
 ] 

Hadoop QA commented on HDFS-2319:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12493293/HDFS-2319.patch
  against trunk revision 4641196.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  org.apache.hadoop.cli.TestHDFSCLI

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9447//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9447//console

This message is automatically generated.

 Add test cases for FSshell -stat
 

 Key: HDFS-2319
 URL: https://issues.apache.org/jira/browse/HDFS-2319
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.24.0
Reporter: XieXianshan
Priority: Trivial
 Attachments: HDFS-2319.patch


 Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support unbuffer

2015-02-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308729#comment-14308729
 ] 

stack commented on HDFS-7694:
-

CanUnbuffer ain't too pretty. Unbufferable is about as ugly. It's fine as is, I 
suppose.

In DFSIS#unbuffer, should we be resetting data members back to zero, etc.?

In testOpenManyFilesViaTcp, we assert we can read, but is there a reason we 
would otherwise not be able to, that unbuffer enables? (pardon if dumb question)
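
For reference, a hypothetical sketch of what {{DFSInputStream#unbuffer}} might 
release, to make the question concrete (helper names are assumptions):
{code}
// Hypothetical sketch: unbuffer drops the block reader (socket and
// short-circuit buffers) but leaves position-related members intact, so the
// next read resumes where it left off -- the question above is whether more
// state should be reset.
public synchronized void unbuffer() {
  closeCurrentBlockReader(); // assumed helper; frees socket and buffers
}
{code}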



 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7741) Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream

2015-02-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308768#comment-14308768
 ] 

Yi Liu commented on HDFS-7741:
--

Thanks Colin for review.

 Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream
 

 Key: HDFS-7741
 URL: https://issues.apache.org/jira/browse/HDFS-7741
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7741.001.patch


 The {{synchronized}} for {{HdfsDataInputStream#getAllBlocks}} and 
 {{FSDataInputStream#seek}} are unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7741) Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream

2015-02-05 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7741:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

 Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream
 

 Key: HDFS-7741
 URL: https://issues.apache.org/jira/browse/HDFS-7741
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7741.001.patch


 The {{synchronized}} for {{HdfsDataInputStream#getAllBlocks}} and 
 {{FSDataInputStream#seek}} are unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7741) Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308774#comment-14308774
 ] 

Hudson commented on HDFS-7741:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7036 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7036/])
HDFS-7741. Remove unnecessary synchronized in FSDataInputStream and 
HdfsDataInputStream. (yliu) (yliu: rev 7b10ef0c3bfec9cdf20d6e2385b6d218809a37b9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java


 Remove unnecessary synchronized in FSDataInputStream and HdfsDataInputStream
 

 Key: HDFS-7741
 URL: https://issues.apache.org/jira/browse/HDFS-7741
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7741.001.patch


 The {{synchronized}} for {{HdfsDataInputStream#getAllBlocks}} and 
 {{FSDataInputStream#seek}} are unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7720) Quota by Storage Type API, tools and ClientNameNode Protocol changes

2015-02-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7720:
-
Attachment: HDFS-7720.3.patch

Thanks [~arpitagarwal] again for the review. I've updated the patch to throw 
UnsupportedActionException in NamenodeRpcServer#setQuota when the storage type is 
not null, as this is not fully supported by the namenode until HDFS-7723 is 
completed.
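
A minimal sketch of the guard described above (signature simplified; the exact 
parameters are assumptions):
{code}
// Hypothetical sketch of the NamenodeRpcServer#setQuota guard.
public void setQuota(String path, long nsQuota, long ssQuota, StorageType type)
    throws IOException {
  if (type != null) {
    throw new UnsupportedActionException(
        "Quota by storage type is not fully supported by the NameNode"
        + " until HDFS-7723 is completed.");
  }
  // ... existing namespace/diskspace quota handling, elided
}
{code}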

 Quota by Storage Type API, tools and ClientNameNode Protocol changes
 

 Key: HDFS-7720
 URL: https://issues.apache.org/jira/browse/HDFS-7720
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7720.0.patch, HDFS-7720.1.patch, HDFS-7720.2.patch, 
 HDFS-7720.3.patch


 Split the patch into small ones based on the feedback. This one covers the 
 HDFS API changes, tool changes and ClientNameNode protocol changes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7699) Erasure Codec API covering the essential aspects for an erasure code

2015-02-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7699:

Summary: Erasure Codec API covering the essential aspects for an erasure 
code  (was: Erasure Codec API to possibly consider all the essential aspects 
for an erasure code)

 Erasure Codec API covering the essential aspects for an erasure code
 

 Key: HDFS-7699
 URL: https://issues.apache.org/jira/browse/HDFS-7699
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng

 This is to define the even higher-level API *ErasureCodec* to possibly 
 consider all the essential aspects for an erasure code, as discussed in 
 detail in HDFS-7337. Generally, it will cover the necessary configuration 
 of which *RawErasureCoder* to use for the code scheme, how to form and 
 lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
 will be used in both the client and the DataNode, in all the supported modes 
 related to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7736:
---
Priority: Minor  (was: Trivial)

 [HDFS]Few Command print incorrect command usage
 ---

 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7736-branch-2-001.patch


 Scenario --
 Try the following hdfs commands --
 1. 
 # ./hdfs dfsadmin -getStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
 2.
 # ./hdfs dfsadmin -setStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
 3.
 # ./hdfs fsck
 Usage:*{color:red} DFSck <path> {color}*[-list-corruptfileblocks | [-move | 
 -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 Expected- 
 Usage:*{color:green} hdfs fsck <path> {color}*[-list-corruptfileblocks | 
 [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 4.
 # ./hdfs snapshotDiff
 Usage:
 *{color:red}SnapshotDiff{color}* <snapshotDir> <from> <to>:
 Expected- 
 Usage:
 *{color:green}snapshotDiff{color}* <snapshotDir> <from> <to>:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7736:
---
Attachment: HDFS-7736-branch-2-001.patch

 [HDFS]Few Command print incorrect command usage
 ---

 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Trivial
 Attachments: HDFS-7736-branch-2-001.patch


 Scenario --
 Try the following hdfs commands --
 1. 
 # ./hdfs dfsadmin -getStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
 2.
 # ./hdfs dfsadmin -setStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
 3.
 # ./hdfs fsck
 Usage:*{color:red} DFSck <path> {color}*[-list-corruptfileblocks | [-move | 
 -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 Expected- 
 Usage:*{color:green} hdfs fsck <path> {color}*[-list-corruptfileblocks | 
 [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 4.
 # ./hdfs snapshotDiff
 Usage:
 *{color:red}SnapshotDiff{color}* <snapshotDir> <from> <to>:
 Expected- 
 Usage:
 *{color:green}snapshotDiff{color}* <snapshotDir> <from> <to>:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308779#comment-14308779
 ] 

Konstantin Shvachko commented on HDFS-7738:
---

# {{RecoverLeaseOp}} should be static
# Unused import of Assert in {{TestSafeMode}}
# It seems that all the test cases of {{testMultipleTruncate()}} are already 
covered in {{testBasicTruncate()}}, and in a deterministic way. I would remove 
it, unless random truncates increase your confidence.
# The {{TestHAAppend}} changes look like a complete refactoring of the test. It is 
not necessary, but it would have been fine with me if it were not failing. I ran it 
several times, and it failed every time. It would be OK to move it to another jira if 
you wish. I did not expect so many changes.

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch, h7738_20150205.patch, 
 h7738_20150205b.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308616#comment-14308616
 ] 

Hadoop QA commented on HDFS-7738:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696961/h7738_20150205b.patch
  against trunk revision 9d91069.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9453//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9453//console

This message is automatically generated.

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch, h7738_20150205.patch, 
 h7738_20150205b.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308728#comment-14308728
 ] 

Hadoop QA commented on HDFS-7736:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12696953/HDFS-7736-branch-2-001.patch
  against trunk revision af3aadf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9451//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9451//console

This message is automatically generated.

 [HDFS]Few Command print incorrect command usage
 ---

 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7736-branch-2-001.patch


 Scenario --
 Try the following hdfs commands --
 1. 
 # ./hdfs dfsadmin -getStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
 2.
 # ./hdfs dfsadmin -setStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
 3.
 # ./hdfs fsck
 Usage:*{color:red} DFSck <path> {color}*[-list-corruptfileblocks | [-move | 
 -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 Expected- 
 Usage:*{color:green} hdfs fsck <path> {color}*[-list-corruptfileblocks | 
 [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 4.
 # ./hdfs snapshotDiff
 Usage:
 *{color:red}SnapshotDiff{color}* <snapshotDir> <from> <to>:
 Expected- 
 Usage:
 *{color:green}snapshotDiff{color}* <snapshotDir> <from> <to>:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308567#comment-14308567
 ] 

Hadoop QA commented on HDFS-6054:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696920/HDFS-6054.002.patch
  against trunk revision 4641196.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9449//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9449//console

This message is automatically generated.

 MiniQJMHACluster should not use static port to avoid binding failure in unit 
 test
 -

 Key: HDFS-6054
 URL: https://issues.apache.org/jira/browse/HDFS-6054
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Brandon Li
Assignee: Yongjun Zhang
 Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch, 
 HDFS-6054.002.patch


 One example of the test failures: TestFailureToReadEdits
 {noformat}
 Error Message
 Port in use: localhost:10003
 Stacktrace
 java.net.BindException: Port in use: localhost:10003
   at sun.nio.ch.Net.bind(Native Method)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
   at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
   at 
 org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:650)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:635)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697)
   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:374)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)
 {noformat}
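 A minimal sketch of the usual remedy, assuming the test can ask the OS for a 
 free port instead of hardcoding one (not the exact patch):
{code}
// Hypothetical sketch: probe for a free ephemeral port. A small race window
// remains between close() and the real bind, so cluster startup should still
// retry on BindException.
import java.io.IOException;
import java.net.ServerSocket;

class FreePortSketch {
  static int pickFreePort() throws IOException {
    try (ServerSocket probe = new ServerSocket(0)) { // port 0 = any free port
      return probe.getLocalPort();
    }
  }
}
{code}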



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7694) FSDataInputStream should support unbuffer

2015-02-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308568#comment-14308568
 ] 

Colin Patrick McCabe commented on HDFS-7694:


bq. One question: in what cases does the user need to unbuffer instead of closing the 
stream?

Good question.  The main answer is that re-opening a stream will cause a 
getBlockLocations RPC to the NameNode.  Some applications cache a lot of open 
streams in order to avoid generating a lot of NameNode traffic.  HBase is one, 
Impala is another.  This change is a really easy way to let those applications 
save memory without generating a lot of RPC load on the NN.
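
A hypothetical usage sketch of the proposed API (the helper names are 
illustrative):
{code}
// Hypothetical usage: keep the stream cached to avoid a new getBlockLocations
// RPC on reopen, but drop its buffers and sockets while it sits idle.
FSDataInputStream in = fs.open(path);  // one getBlockLocations RPC
readSomeData(in);                      // illustrative helper
in.unbuffer();                         // free readahead buffers and sockets
// a later read on 'in' reacquires resources without another NameNode RPC
{code}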

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308579#comment-14308579
 ] 

Hadoop QA commented on HDFS-7655:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696918/HDFS-7655.004.patch
  against trunk revision 4641196.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9448//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9448//console

This message is automatically generated.

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch, HDFS-7655.004.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7736:
---
Status: Patch Available  (was: Open)

Attached the branch-2 patch.

 [HDFS]Few Command print incorrect command usage
 ---

 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7736-branch-2-001.patch


 Scenario --
 Try the following hdfs commands --
 1. 
 # ./hdfs dfsadmin -getStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
 2.
 # ./hdfs dfsadmin -setStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
 3.
 # ./hdfs fsck
 Usage:*{color:red} DFSck <path> {color}*[-list-corruptfileblocks | [-move | 
 -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 Expected- 
 Usage:*{color:green} hdfs fsck <path> {color}*[-list-corruptfileblocks | 
 [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 4.
 # ./hdfs snapshotDiff
 Usage:
 *{color:red}SnapshotDiff{color}* <snapshotDir> <from> <to>:
 Expected- 
 Usage:
 *{color:green}snapshotDiff{color}* <snapshotDir> <from> <to>:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7736) [HDFS]Few Command print incorrect command usage

2015-02-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14308550#comment-14308550
 ] 

Brahma Reddy Battula commented on HDFS-7736:


Hi [~archanat], thanks for reporting this...

AFAIK, getStoragePolicy and setStoragePolicy are currently not merged to trunk and 
branch-2 (need to confirm and raise a separate issue). Hence, for now, I am 
correcting the dfsadmin and fsck command usage and javadoc.





 [HDFS]Few Command print incorrect command usage
 ---

 Key: HDFS-7736
 URL: https://issues.apache.org/jira/browse/HDFS-7736
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.6.0
Reporter: Archana T
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: HDFS-7736-branch-2-001.patch


 Scenario --
 Try the following hdfs commands --
 1. 
 # ./hdfs dfsadmin -getStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
 2.
 # ./hdfs dfsadmin -setStoragePolicy
 Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
 Expected- 
 Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
 3.
 # ./hdfs fsck
 Usage:*{color:red} DFSck <path> {color}*[-list-corruptfileblocks | [-move | 
 -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 Expected- 
 Usage:*{color:green} hdfs fsck <path> {color}*[-list-corruptfileblocks | 
 [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]]
 4.
 # ./hdfs snapshotDiff
 Usage:
 *{color:red}SnapshotDiff{color}* <snapshotDir> <from> <to>:
 Expected- 
 Usage:
 *{color:green}snapshotDiff{color}* <snapshotDir> <from> <to>:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7694) FSDataInputStream should support unbuffer

2015-02-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7694:
---
Attachment: HDFS-7694.002.patch

rebased patch

 FSDataInputStream should support unbuffer
 ---

 Key: HDFS-7694
 URL: https://issues.apache.org/jira/browse/HDFS-7694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7694.001.patch, HDFS-7694.002.patch


 For applications that have many open HDFS (or other Hadoop filesystem) files, 
 it would be useful to have an API to clear readahead buffers and sockets.  
 This could be added to the existing APIs as an optional interface, in much 
 the same way as we added setReadahead / setDropBehind / etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7738:
--
Attachment: h7738_20150205b.patch

h7738_20150205b.patch
- fixes TestFileCreation,
- changes TestHAAppend to use multiple threads,
- adds more tests: testMultipleTruncate and testTruncateWithOtherOperations.

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch, h7738_20150205.patch, 
 h7738_20150205b.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7708) Balancer should delete its pid file when it completes rebalance

2015-02-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7708:
---
Attachment: HDFS-7708.patch

 Balancer should delete its pid file when it completes rebalance
 ---

 Key: HDFS-7708
 URL: https://issues.apache.org/jira/browse/HDFS-7708
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
 Attachments: HDFS-7708.patch


 When the balancer completes rebalancing and exits, it does not delete its pid file. 
 When the balancer is started again, the startup script runs `kill -0 pid` to 
 confirm the balancer process is not running.
 The problem is: if another process is running with the same pid as `cat 
 pidfile`, the balancer fails to start with the following message:
 {code}
   balancer is running as process 3443. Stop it first.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7655:
-
Attachment: (was: HDFS-7655.003.patch)

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7655:
-
Attachment: HDFS-7655.003.patch

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7708) Balancer should delete its pid file when it completes rebalance

2015-02-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-7708:
--

Assignee: Rakesh R

 Balancer should delete its pid file when it completes rebalance
 ---

 Key: HDFS-7708
 URL: https://issues.apache.org/jira/browse/HDFS-7708
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Assignee: Rakesh R
 Attachments: HDFS-7708.patch


 When the balancer completes rebalancing and exits, it does not delete its pid file. 
 When the balancer is started again, the startup script runs `kill -0 pid` to 
 confirm the balancer process is not running.
 The problem is: if another process is running with the same pid as `cat 
 pidfile`, the balancer fails to start with the following message:
 {code}
   balancer is running as process 3443. Stop it first.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7708) Balancer should delete its pid file when it completes rebalance

2015-02-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307205#comment-14307205
 ] 

Rakesh R commented on HDFS-7708:


Thanks [~cmccabe] for the hint. Attached simple fix. Kindly review it.

While removing the pid file, it does not do any special checks for the 'balancer' 
command, considering that the same problem applies to all commands. Please feel 
free to correct me if I am missing anything.

 Balancer should delete its pid file when it completes rebalance
 ---

 Key: HDFS-7708
 URL: https://issues.apache.org/jira/browse/HDFS-7708
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
 Attachments: HDFS-7708.patch


 When the balancer completes rebalancing and exits, it does not delete its pid file. 
 When the balancer is started again, the startup script runs `kill -0 pid` to 
 confirm the balancer process is not running.
 The problem is: if another process is running with the same pid as `cat 
 pidfile`, the balancer fails to start with the following message:
 {code}
   balancer is running as process 3443. Stop it first.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7655:
-
Attachment: HDFS-7655.003.patch

Updated the patch.

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7719) BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state associated with volumes

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307313#comment-14307313
 ] 

Hudson commented on HDFS-7719:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #92 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/92/])
HDFS-7719. BlockPoolSliceStorage#removeVolumes fails to remove some in-memory 
state associated with volumes. (Lei (Eddy) Xu via Colin P. McCabe) (cmccabe: 
rev 40a415799b1ff3602fbb461765f8b36f1133bda2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java


 BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state 
 associated with volumes
 

 Key: HDFS-7719
 URL: https://issues.apache.org/jira/browse/HDFS-7719
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7719.000.patch, HDFS-7719.001.patch, 
 HDFS-7719.002.patch, HDFS-7719.003.patch


 The parameter of {{BlockPoolSliceStorage#removeVolumes()}} is a set of 
 volume-level directories, so {{BlockPoolSliceStorage}} cannot directly compare 
 its own {{StorageDirs}} with these volume-level directories. As a result, 
 {{BlockPoolSliceStorage}} does not actually remove the targeted 
 {{StorageDirectory}}. 
 This causes a failure when removing a volume and then immediately adding a 
 volume back with the same mount point.
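 A hypothetical sketch of the mismatch (paths and names are illustrative):
{code}
// Hypothetical sketch: callers pass volume-level roots (e.g. /data/1), while
// BlockPoolSliceStorage tracks block-pool-level directories
// (e.g. /data/1/current/BP-xxx), so a direct equals() never matches.
// Matching must resolve the block-pool directory under each volume first.
boolean belongsToVolume(java.io.File bpDir, java.io.File volumeRoot) {
  return bpDir.getAbsolutePath().startsWith(
      volumeRoot.getAbsolutePath() + java.io.File.separator);
}
{code}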



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307305#comment-14307305
 ] 

Hudson commented on HDFS-7709:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #92 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/92/])
HDFS-7709. Fix findbug warnings in httpfs. Contributed by Rakesh R. (ozawa: rev 
20660b7a67b7f2815b1e27b98dce2b2682399505)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONMapProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java


 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307308#comment-14307308
 ] 

Hudson commented on HDFS-7733:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #92 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/92/])
HDFS-7733. NFS: readdir/readdirplus return null directory attribute on failure. 
(Contributed by Arpit Agarwal) (arp: rev 
c6f20007ebda509b39a7e4098b99e9b43d73d5b2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java


 NFS: readdir/readdirplus return null directory attribute on failure
 ---

 Key: HDFS-7733
 URL: https://issues.apache.org/jira/browse/HDFS-7733
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.1

 Attachments: HDFS-7733.01.patch


 NFS readdir and readdirplus operations return a null directory attribute on 
 some failure paths. This causes clients to get a 'Stale file handle' error 
 which can only be fixed by unmounting and remounting the share.
 The issue can be reproduced by running 'ls' against a large directory which 
 is being actively modified, triggering the 'cookie mismatch' failure path.
 {code}
 } else {
   LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
       + " dir cookieverf: " + dirStatus.getModificationTime());
   return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
 }
 {code}
 Thanks to [~brandonli] for catching the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7734) Class cast exception in NameNode#main

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14307310#comment-14307310
 ] 

Hudson commented on HDFS-7734:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #92 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/92/])
HDFS-7734. Class cast exception in NameNode#main. Contributed by Yi Liu. (wang: 
rev 9175105eeaecf0a1d60b57989b73ce45cee4689b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java


 Class cast exception in NameNode#main
 -

 Key: HDFS-7734
 URL: https://issues.apache.org/jira/browse/HDFS-7734
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7734.001.patch, HDFS-7734.002.patch


 NameNode hits the following exception immediately on startup.
 {code}
 15/02/03 15:50:25 ERROR namenode.NameNode: Failed to start namenode.
 java.lang.ClassCastException: org.apache.log4j.Logger cannot be cast to 
 org.apache.commons.logging.Log
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1557)
 15/02/03 15:50:25 INFO util.ExitUtil: Exiting with status 1
 {code}
 Location of the exception in NameNode.java:
 {code}
   public static void main(String argv[]) throws Exception {
 if (DFSUtil.parseHelpArgument(argv, NameNode.USAGE, System.out, true)) {
   System.exit(0);
 }
 try {
   StringUtils.startupShutdownMessage(NameNode.class, argv,
   (org.apache.commons.logging.Log) 
 LogManager.getLogger(LOG.getName())); // <-- Failed here.
 {code}
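 The changed file list in this commit (LogAdapter, StringUtils, SignalLogger) 
 suggests the shape of the fix: route through an adapter instead of casting the 
 raw log4j Logger. A minimal sketch under that assumption; {{LogAdapter.create}} 
 is assumed here and not verified against the committed patch:
 {code}
 // Sketch only (assumed factory method): an overload per logger type avoids the cast.
 public static void startupShutdownMessage(Class<?> clazz, String[] args,
     final org.apache.commons.logging.Log log) {
   startupShutdownMessage(clazz, args, LogAdapter.create(log));  // assumption
 }
 {code}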



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307970#comment-14307970
 ] 

Suresh Srinivas commented on HDFS-7270:
---

[~daryn], let me understand the concerns. 

This feature has a configuration to turn it on or off. By default it is turned 
off.

There are two scenarios to consider:
# Rolling upgrades: Turning on this feature during rolling upgrades will result 
in some datanodes running old code and some running new. The incompatibility 
introduced in the feature will result in old clients not being able to talk to 
the upgraded node. However, the standard procedure for rolling upgrades is not 
to turn on new features while the rolling upgrade is in progress. Hence the 
incompatibility should not affect rolling upgrades.
# Multi-cluster environment running old and new releases: In this case, the 
feature should not be turned on. That is a simple choice for cluster admins.

[~wheat9], is there a way to support both old and new clients?

 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and renders the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7682) {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content

2015-02-05 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308012#comment-14308012
 ] 

Jing Zhao commented on HDFS-7682:
-

Thanks Charles. My main concern is that the patch is just a partial fix, since 
it cannot cover the case where the file is snapshotted but still being written. 

bq. In other words, the behavior for non-snapshotted files that are still open 
(and possibly being appended to) is not changed by this patch, only that of 
snapshotted files, for which isLastBlockComplete() is a valid check.

The behavior for snapshotted files that are still open has not been 
changed either.

Actually, for a snapshotted file, {{blockLocations.getFileLength}} should equal 
the file length explicitly recorded in the snapshot diff. If no such length is 
recorded, {{blockLocations.getFileLength}} should be the current file length, 
including the last under-construction block's length (please read the current 
code to confirm). In that case, the check condition should be whether the src is 
a snapshot path, and we should use {{blockLocations.getFileLength}} as the limit.
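
A minimal sketch of the suggested condition, with hypothetical names (this is 
not the attached patch):
{code}
// Hypothetical sketch: choose the checksum length limit based on the path type.
boolean isSnapshotPath = src.contains("/.snapshot/");  // assumption: simple path test
long checksumLimit = isSnapshotPath
    ? blockLocations.getFileLength()  // length frozen at snapshot time
    : Long.MAX_VALUE;                 // non-snapshot paths keep the current behavior
{code}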

 {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes 
 non-snapshotted content
 

 Key: HDFS-7682
 URL: https://issues.apache.org/jira/browse/HDFS-7682
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-7682.000.patch, HDFS-7682.001.patch, 
 HDFS-7682.002.patch


 DistributedFileSystem#getFileChecksum of a snapshotted file includes 
 non-snapshotted content.
 The reason why this happens is because DistributedFileSystem#getFileChecksum 
 simply calculates the checksum of all of the CRCs from the blocks in the 
 file. But, in the case of a snapshotted file, we don't want to include data 
 in the checksum that was appended to the last block in the file after the 
 snapshot was taken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-05 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-7740:
-

 Summary: Test truncate with DataNodes restarting
 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko


Add a test case that ensures replica consistency when DNs are failing and 
restarting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7698) Fix locking on HDFS read statistics and add a method for clearing them.

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308053#comment-14308053
 ] 

Hadoop QA commented on HDFS-7698:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696804/HDFS-7698.003.patch
  against trunk revision b6466de.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9442//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9442//console

This message is automatically generated.

 Fix locking on HDFS read statistics and add a method for clearing them.
 ---

 Key: HDFS-7698
 URL: https://issues.apache.org/jira/browse/HDFS-7698
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-7698.002.patch, HDFS-7698.003.patch


 Fix locking on HDFS read statistics and add a method for clearing them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308056#comment-14308056
 ] 

Konstantin Shvachko commented on HDFS-7740:
---

Scenario for the test:
- Create a file with 3 DNs up. Kill DN(0). Truncate the file. Restart DN(0) and 
make sure the old replica is disregarded and replaced with the truncated one (a 
rough sketch of this scenario follows below).
- Kill DN(1). Truncate within the same last block with copy-on-truncate. 
Restart DN(1), verify replica consistency.
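
A rough sketch of the first scenario with MiniDFSCluster (illustrative only; 
conf, BLOCK_SIZE and the file layout are assumptions, not an actual test from 
this jira):
{code}
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
DistributedFileSystem fs = cluster.getFileSystem();
Path p = new Path("/fileToTruncate");
DFSTestUtil.createFile(fs, p, 4 * BLOCK_SIZE, (short) 3, 0L);

MiniDFSCluster.DataNodeProperties dn0 = cluster.stopDataNode(0);  // kill DN(0)
fs.truncate(p, 2 * BLOCK_SIZE);          // truncate while DN(0) is down
cluster.restartDataNode(dn0, true);      // restart DN(0)
cluster.waitActive();
// The stale (longer) replica on DN(0) must be disregarded and replaced with
// the truncated one; replica consistency can then be verified via fsck.
{code}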

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
 Fix For: 2.7.0


 Add a test case that ensures replica consistency when DNs are failing and 
 restarting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7666) Datanode blockId layout upgrade threads should be daemon thread

2015-02-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-7666:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

 Datanode blockId layout upgrade threads should be daemon thread
 ---

 Key: HDFS-7666
 URL: https://issues.apache.org/jira/browse/HDFS-7666
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7666-v1.patch


 This jira is to mark the layout upgrade thread as daemon thread.
 {code}
  int numLinkWorkers = datanode.getConf().getInt(
  DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS_KEY,
  DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS);
 ExecutorService linkWorkers = 
 Executors.newFixedThreadPool(numLinkWorkers);
 {code}
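 For reference, a minimal sketch of how the pool could be built with daemon 
 threads (illustrative only; the jira was resolved as Won't Fix):
 {code}
 // Assumes java.util.concurrent.* imports; same pool, but with daemon threads.
 ExecutorService linkWorkers = Executors.newFixedThreadPool(numLinkWorkers,
     new ThreadFactory() {
       @Override
       public Thread newThread(Runnable r) {
         Thread t = new Thread(r);
         t.setDaemon(true);  // daemon: does not block JVM shutdown
         return t;
       }
     });
 {code}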



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7662) Erasure Coder API for encoding and decoding of block group

2015-02-05 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7662:

Attachment: HDFS-7662-v2.patch

Updated the patch with all the necessary code from other pending patches, plus 
workable examples.

 Erasure Coder API for encoding and decoding of block group
 --

 Key: HDFS-7662
 URL: https://issues.apache.org/jira/browse/HDFS-7662
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch


 This is to define ErasureCoder API for encoding and decoding of BlockGroup. 
 Given a BlockGroup, ErasureCoder extracts data chunks from the blocks and 
 leverages RawErasureCoder defined in HDFS-7353 to perform concrete encoding 
 or decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-6420:
---
Status: Open  (was: Patch Available)

 DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
 --

 Key: HDFS-6420
 URL: https://issues.apache.org/jira/browse/HDFS-6420
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-6420.000.patch


 Currently in HA setup (with logical URI), the DFSAdmin#refreshNodes command 
 is sent to the NameNode first specified in the configuration by default. 
 Users can use the -fs option to specify which NN to connect to, but in this 
 case, they usually need to send two separate commands. We should let 
 refreshNodes be sent to both NameNodes by default.
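
 A hypothetical sketch of the intended default behavior; 
 {{DFSUtil.getNNServiceRpcAddresses}} is an existing helper, while {{refreshOn}} 
 stands in for whatever proxy plumbing the patch uses:
 {code}
 // Sketch: iterate over every NN service RPC address and refresh each one.
 Map<String, Map<String, InetSocketAddress>> nnAddrs =
     DFSUtil.getNNServiceRpcAddresses(getConf());
 for (Map<String, InetSocketAddress> ns : nnAddrs.values()) {
   for (InetSocketAddress addr : ns.values()) {
     refreshOn(addr);  // hypothetical helper: send the refreshNodes RPC to this NN
   }
 }
 {code}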



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-6420:
---
Status: Patch Available  (was: Open)

 DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
 --

 Key: HDFS-6420
 URL: https://issues.apache.org/jira/browse/HDFS-6420
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-6420.000.patch


 Currently in HA setup (with logical URI), the DFSAdmin#refreshNodes command 
 is sent to the NameNode first specified in the configuration by default. 
 Users can use the -fs option to specify which NN to connect to, but in this 
 case, they usually need to send two separate commands. We should let 
 refreshNodes be sent to both NameNodes by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4167:
---
Status: Patch Available  (was: Open)

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4167:
---
Status: Open  (was: Patch Available)

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307790#comment-14307790
 ] 

Hadoop QA commented on HDFS-4167:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645278/HDFS-4167.004.patch
  against trunk revision c4980a2.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9444//console

This message is automatically generated.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307789#comment-14307789
 ] 

Hadoop QA commented on HDFS-6420:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645290/HDFS-6420.000.patch
  against trunk revision c4980a2.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9443//console

This message is automatically generated.

 DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
 --

 Key: HDFS-6420
 URL: https://issues.apache.org/jira/browse/HDFS-6420
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-6420.000.patch


 Currently in HA setup (with logical URI), the DFSAdmin#refreshNodes command 
 is sent to the NameNode first specified in the configuration by default. 
 Users can use the -fs option to specify which NN to connect to, but in this 
 case, they usually need to send two separate commands. We should let 
 refreshNodes be sent to both NameNodes by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307795#comment-14307795
 ] 

Hudson commented on HDFS-7270:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7024 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7024/])
HDFS-7270. Add congestion signaling capability to DataNode write protocol. 
Contributed by Haohui Mai. (wheat9: rev 
c4980a2f343778544ca20ebea1338651793ea0d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java


 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and renders the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7270:
-
  Resolution: Fixed
Hadoop Flags: Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks everybody for the review.

 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and renders the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-6420:
---
Status: Open  (was: Patch Available)

Cancelling patch since it no longer applies.

 DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
 --

 Key: HDFS-6420
 URL: https://issues.apache.org/jira/browse/HDFS-6420
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-6420.000.patch


 Currently in HA setup (with logical URI), the DFSAdmin#refreshNodes command 
 is sent to the NameNode first specified in the configuration by default. 
 Users can use the -fs option to specify which NN to connect to, but in this 
 case, they usually need to send two separate commands. We should let 
 refreshNodes be sent to both NameNodes by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307808#comment-14307808
 ] 

Haohui Mai commented on HDFS-7270:
--

bq.  Could you please elaborate on how you intend to implement its use in a 
followup jira? I'd like to evaluate if your approach will improve or exacerbate 
current issues in our environment. How will a DN signal congestion? When will 
it signal congestion? I.e., in a premature ack, since a prior ack easily becomes 
stale? What will the client do?

To signal congestion, the DN will toggle the ECN flag in the pipeline ack. The 
client will back off if it sees the ECN flag.

One scenario we have tested is that (1) the DN signals congestion when the system 
load is greater than a pre-defined threshold (e.g., 2 * the number of processors), 
and (2) the client backs off for a fixed amount of time (e.g., 5s). We found that 
with these changes HDFS can survive heavy loads over long periods (e.g., loading 
several hundred TBs of data into a 7-node cluster in 24h). We're evaluating 
using the length of the I/O queues to signal congestion and implementing 
exponential back-off in the client.
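
A minimal sketch of the two pieces described above (illustrative names only, 
not the committed code):
{code}
// DN side: signal congestion when system load crosses a threshold.
// Assumes java.lang.management imports; the 2x factor matches the example above.
boolean congested() {
  OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
  return os.getSystemLoadAverage() > 2.0 * os.getAvailableProcessors();
}

// Client side: fixed back-off when the pipeline ack carries the ECN flag.
static final long BACKOFF_MS = 5000;  // e.g. 5s, as in the test above

void onAck(boolean ecnFlagSet) throws InterruptedException {
  if (ecnFlagSet) {
    Thread.sleep(BACKOFF_MS);  // simple fixed back-off before sending more packets
  }
}
{code}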

 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and renders the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-02-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-4167:
---
Status: Open  (was: Patch Available)

Cancelling patch as it no longer applies.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
 HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7666) Datanode blockId layout upgrade threads should be daemon thread

2015-02-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306829#comment-14306829
 ] 

Rakesh R commented on HDFS-7666:


Yeah, you are correct. I agree with you :)

 Datanode blockId layout upgrade threads should be daemon thread
 ---

 Key: HDFS-7666
 URL: https://issues.apache.org/jira/browse/HDFS-7666
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7666-v1.patch


 This jira is to mark the layout upgrade thread as daemon thread.
 {code}
  int numLinkWorkers = datanode.getConf().getInt(
  DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS_KEY,
  DFSConfigKeys.DFS_DATANODE_BLOCK_ID_LAYOUT_UPGRADE_THREADS);
 ExecutorService linkWorkers = 
 Executors.newFixedThreadPool(numLinkWorkers);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306851#comment-14306851
 ] 

Konstantin Shvachko commented on HDFS-7738:
---

For truncate tests with HA it should be easy to add the case into 
TestHAAppend. Just create a second file there, {{fileToTruncate}}, and truncate 
it 5 times. The rest should be checked by fsck as in the existing test.
Would you like to incorporate it in your patch or should we open another jira?

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch


 The following are negative test cases for truncate (one case is sketched below).
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode
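
 For illustration, one of these cases could look like the following sketch (not 
 the attached patch):
 {code}
 // Negative case: truncating a directory must fail.
 try {
   fs.truncate(new Path("/some/dir"), 0);
   Assert.fail("truncate on a directory should throw");
 } catch (IOException expected) {
   // expected: directories cannot be truncated
 }
 {code}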



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7662) Erasure Coder API for encoding and decoding of block group

2015-02-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306867#comment-14306867
 ] 

Kai Zheng commented on HDFS-7662:
-

Change summary:
1. Refactored the API, getting rid of the questionable callback, for easier 
understanding and use.
2. Provided workable examples per [~zhz]'s request, also for a better 
understanding of the API.
3. It only considers the simplest and most common case, where only *one step* is 
involved in encoding/decoding a block group. The API is subject to change to 
support complex cases with multiple coding steps. Will have a follow-up JIRA for 
this.

 Erasure Coder API for encoding and decoding of block group
 --

 Key: HDFS-7662
 URL: https://issues.apache.org/jira/browse/HDFS-7662
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch


 This is to define ErasureCoder API for encoding and decoding of BlockGroup. 
 Given a BlockGroup, ErasureCoder extracts data chunks from the blocks and 
 leverages RawErasureCoder defined in HDFS-7353 to perform concrete encoding 
 or decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix Findbug Warnings in httpfs

2015-02-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306873#comment-14306873
 ] 

Tsuyoshi OZAWA commented on HDFS-7709:
--

Looks good to me. Pending Jenkins.

 Fix Findbug Warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7662) Erasure Coder API for encoding and decoding of block group

2015-02-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306872#comment-14306872
 ] 

Kai Zheng commented on HDFS-7662:
-

HADOOP-11550 is opened to enhance the API considering complex cases.

 Erasure Coder API for encoding and decoding of block group
 --

 Key: HDFS-7662
 URL: https://issues.apache.org/jira/browse/HDFS-7662
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch


 This is to define ErasureCoder API for encoding and decoding of BlockGroup. 
 Given a BlockGroup, ErasureCoder extracts data chunks from the blocks and 
 leverages RawErasureCoder defined in HDFS-7353 to perform concrete encoding 
 or decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307245#comment-14307245
 ] 

Yi Liu edited comment on HDFS-7655 at 2/5/15 2:20 PM:
--

Update the patch. [~shv] please take a look at whether it addresses your comments, 
thanks.


was (Author: hitliuyi):
Update the patch

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7734) Class cast exception in NameNode#main

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307279#comment-14307279
 ] 

Hudson commented on HDFS-7734:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2027/])
HDFS-7734. Class cast exception in NameNode#main. Contributed by Yi Liu. (wang: 
rev 9175105eeaecf0a1d60b57989b73ce45cee4689b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


 Class cast exception in NameNode#main
 -

 Key: HDFS-7734
 URL: https://issues.apache.org/jira/browse/HDFS-7734
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7734.001.patch, HDFS-7734.002.patch


 NameNode hits the following exception immediately on startup.
 {code}
 15/02/03 15:50:25 ERROR namenode.NameNode: Failed to start namenode.
 java.lang.ClassCastException: org.apache.log4j.Logger cannot be cast to 
 org.apache.commons.logging.Log
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1557)
 15/02/03 15:50:25 INFO util.ExitUtil: Exiting with status 1
 {code}
 Location of the exception in NameNode.java:
 {code}
   public static void main(String argv[]) throws Exception {
 if (DFSUtil.parseHelpArgument(argv, NameNode.USAGE, System.out, true)) {
   System.exit(0);
 }
 try {
   StringUtils.startupShutdownMessage(NameNode.class, argv,
   (org.apache.commons.logging.Log) 
 LogManager.getLogger(LOG.getName())); // <-- Failed here.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307277#comment-14307277
 ] 

Hudson commented on HDFS-7733:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2027/])
HDFS-7733. NFS: readdir/readdirplus return null directory attribute on failure. 
(Contributed by Arpit Agarwal) (arp: rev 
c6f20007ebda509b39a7e4098b99e9b43d73d5b2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java


 NFS: readdir/readdirplus return null directory attribute on failure
 ---

 Key: HDFS-7733
 URL: https://issues.apache.org/jira/browse/HDFS-7733
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.1

 Attachments: HDFS-7733.01.patch


 NFS readdir and readdirplus operations return a null directory attribute on 
 some failure paths. This causes clients to get a 'Stale file handle' error 
 which can only be fixed by unmounting and remounting the share.
 The issue can be reproduced by running 'ls' against a large directory which 
 is being actively modified, triggering the 'cookie mismatch' failure path.
 {code}
 } else {
   LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
       + " dir cookieverf: " + dirStatus.getModificationTime());
   return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
 }
 {code}
 Thanks to [~brandonli] for catching the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7719) BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state associated with volumes

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307282#comment-14307282
 ] 

Hudson commented on HDFS-7719:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2027/])
HDFS-7719. BlockPoolSliceStorage#removeVolumes fails to remove some in-memory 
state associated with volumes. (Lei (Eddy) Xu via Colin P. McCabe) (cmccabe: 
rev 40a415799b1ff3602fbb461765f8b36f1133bda2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state 
 associated with volumes
 

 Key: HDFS-7719
 URL: https://issues.apache.org/jira/browse/HDFS-7719
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7719.000.patch, HDFS-7719.001.patch, 
 HDFS-7719.002.patch, HDFS-7719.003.patch


 The parameter of {{BlockPoolSliceStorage#removeVolumes()}} is a set of 
 volume-level directories, so {{BlockPoolSliceStorage}} cannot directly compare 
 its own {{StorageDirs}} with these volume-level directories. As a result, 
 {{BlockPoolSliceStorage}} did not actually remove the targeted 
 {{StorageDirectory}}. 
 This causes a failure when a volume is removed and a volume is then immediately 
 added back with the same mount point.
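
 One illustrative direction for the fix (a sketch, not necessarily the attached 
 patch): match each block-pool {{StorageDirectory}} to a removed volume by 
 directory ancestry rather than by direct equality.
 {code}
 // Sketch: a block-pool storage dir belongs to a volume if it lives under it.
 boolean isUnderVolume(File bpStorageDir, File volumeRoot) throws IOException {
   String bp = bpStorageDir.getCanonicalPath();
   String vol = volumeRoot.getCanonicalPath();
   return bp.startsWith(vol + File.separator);
 }
 {code}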



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307274#comment-14307274
 ] 

Hudson commented on HDFS-7709:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2027/])
HDFS-7709. Fix findbug warnings in httpfs. Contributed by Rakesh R. (ozawa: rev 
20660b7a67b7f2815b1e27b98dce2b2682399505)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONMapProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java


 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7739) ZKFC - transitionToActive is indefinitely waiting to complete fenceOldActive

2015-02-05 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-7739:
--

 Summary: ZKFC - transitionToActive is indefinitely waiting to 
complete fenceOldActive
 Key: HDFS-7739
 URL: https://issues.apache.org/jira/browse/HDFS-7739
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical


 *Scenario:* 

One of the cluster disks got full while ZKFC was making transitionToActive. To 
fence the old active node it needs to execute the fencing command and wait for 
the result; since the disk was full, the streampumper thread waits indefinitely 
(even after the disk is freed, it does not come out).

Please check the attached thread dump for ZKFC.

Better to maintain a timeout for the stream-pumper thread.

{code}
protected void pump() throws IOException {
  InputStreamReader inputStreamReader = new InputStreamReader(stream);
  BufferedReader br = new BufferedReader(inputStreamReader);
  String line = null;
  while ((line = br.readLine()) != null) {
    if (type == StreamType.STDOUT) {
      log.info(logPrefix + ": " + line);
    } else {
      log.warn(logPrefix + ": " + line);
    }
{code}
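
A hypothetical way to bound the wait (sketch only; the 30s timeout is an 
assumption):
{code}
// Run the pumper on a worker thread and cap how long fencing waits on it.
ExecutorService es = Executors.newSingleThreadExecutor();
Future<?> pumpTask = es.submit(new Runnable() {
  @Override
  public void run() {
    try {
      pump();
    } catch (IOException e) {
      log.warn(logPrefix + ": pump failed", e);
    }
  }
});
try {
  pumpTask.get(30, TimeUnit.SECONDS);
} catch (TimeoutException te) {
  pumpTask.cancel(true);  // interrupt the reader stuck on the full disk
} catch (InterruptedException ie) {
  Thread.currentThread().interrupt();
} catch (ExecutionException ee) {
  log.warn(logPrefix + ": pump failed", ee.getCause());
}
{code}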




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7739) ZKFC - transitionToActive is indefinitely waiting to complete fenceOldActive

2015-02-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7739:
---
Attachment: zkfctd.out

 ZKFC - transitionToActive is indefinitely waiting to complete fenceOldActive
 

 Key: HDFS-7739
 URL: https://issues.apache.org/jira/browse/HDFS-7739
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical
 Attachments: zkfctd.out


  *Scenario:* 
 One of the cluster disks got full while ZKFC was making transitionToActive. To 
 fence the old active node it needs to execute the fencing command and wait for 
 the result; since the disk was full, the streampumper thread waits indefinitely 
 (even after the disk is freed, it does not come out).
 Please check the attached thread dump for ZKFC.
 Better to maintain a timeout for the stream-pumper thread.
 {code}
 protected void pump() throws IOException {
   InputStreamReader inputStreamReader = new InputStreamReader(stream);
   BufferedReader br = new BufferedReader(inputStreamReader);
   String line = null;
   while ((line = br.readLine()) != null) {
     if (type == StreamType.STDOUT) {
       log.info(logPrefix + ": " + line);
     } else {
       log.warn(logPrefix + ": " + line);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307351#comment-14307351
 ] 

Hudson commented on HDFS-7709:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #96 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/96/])
HDFS-7709. Fix findbug warnings in httpfs. Contributed by Rakesh R. (ozawa: rev 
20660b7a67b7f2815b1e27b98dce2b2682399505)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONMapProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONProvider.java


 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7719) BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state associated with volumes

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307360#comment-14307360
 ] 

Hudson commented on HDFS-7719:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #96 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/96/])
HDFS-7719. BlockPoolSliceStorage#removeVolumes fails to remove some in-memory 
state associated with volumes. (Lei (Eddy) Xu via Colin P. McCabe) (cmccabe: 
rev 40a415799b1ff3602fbb461765f8b36f1133bda2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state 
 associated with volumes
 

 Key: HDFS-7719
 URL: https://issues.apache.org/jira/browse/HDFS-7719
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7719.000.patch, HDFS-7719.001.patch, 
 HDFS-7719.002.patch, HDFS-7719.003.patch


 The parameter of {{BlockPoolSliceStorage#removeVolumes()}} is a set of 
 volume-level directories, so {{BlockPoolSliceStorage}} cannot directly compare 
 its own {{StorageDirs}} with these volume-level directories. As a result, 
 {{BlockPoolSliceStorage}} did not actually remove the targeted 
 {{StorageDirectory}}. 
 This causes a failure when a volume is removed and a volume is then immediately 
 added back with the same mount point.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307355#comment-14307355
 ] 

Hudson commented on HDFS-7733:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #96 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/96/])
HDFS-7733. NFS: readdir/readdirplus return null directory attribute on failure. 
(Contributed by Arpit Agarwal) (arp: rev 
c6f20007ebda509b39a7e4098b99e9b43d73d5b2)
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NFS: readdir/readdirplus return null directory attribute on failure
 ---

 Key: HDFS-7733
 URL: https://issues.apache.org/jira/browse/HDFS-7733
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.1

 Attachments: HDFS-7733.01.patch


 NFS readdir and readdirplus operations return a null directory attribute on 
 some failure paths. This causes clients to get a 'Stale file handle' error 
 which can only be fixed by unmounting and remounting the share.
 The issue can be reproduced by running 'ls' against a large directory which 
 is being actively modified, triggering the 'cookie mismatch' failure path.
 {code}
 } else {
   LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
       + " dir cookieverf: " + dirStatus.getModificationTime());
   return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
 }
 {code}
 Thanks to [~brandonli] for catching the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7734) Class cast exception in NameNode#main

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307357#comment-14307357
 ] 

Hudson commented on HDFS-7734:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #96 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/96/])
HDFS-7734. Class cast exception in NameNode#main. Contributed by Yi Liu. (wang: 
rev 9175105eeaecf0a1d60b57989b73ce45cee4689b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java


 Class cast exception in NameNode#main
 -

 Key: HDFS-7734
 URL: https://issues.apache.org/jira/browse/HDFS-7734
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7734.001.patch, HDFS-7734.002.patch


 NameNode hits the following exception immediately on startup.
 {code}
 15/02/03 15:50:25 ERROR namenode.NameNode: Failed to start namenode.
 java.lang.ClassCastException: org.apache.log4j.Logger cannot be cast to 
 org.apache.commons.logging.Log
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1557)
 15/02/03 15:50:25 INFO util.ExitUtil: Exiting with status 1
 {code}
 Location of the exception in NameNode.java:
 {code}
   public static void main(String argv[]) throws Exception {
 if (DFSUtil.parseHelpArgument(argv, NameNode.USAGE, System.out, true)) {
   System.exit(0);
 }
 try {
   StringUtils.startupShutdownMessage(NameNode.class, argv,
   (org.apache.commons.logging.Log) 
 LogManager.getLogger(LOG.getName())); // <-- Failed here.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7709) Fix findbug warnings in httpfs

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307392#comment-14307392
 ] 

Hudson commented on HDFS-7709:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2046 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2046/])
HDFS-7709. Fix findbug warnings in httpfs. Contributed by Rakesh R. (ozawa: rev 
20660b7a67b7f2815b1e27b98dce2b2682399505)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSAuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONMapProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/JSONProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java


 Fix findbug warnings in httpfs
 --

 Key: HDFS-7709
 URL: https://issues.apache.org/jira/browse/HDFS-7709
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.7.0

 Attachments: HDFS-7709.patch, HDFS-7709.patch, HDFS-7709.patch


 There are many findbug warnings related to the warning types, 
 - DM_DEFAULT_ENCODING, 
 - RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE,
 - RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5542//artifact/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-core.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307396#comment-14307396
 ] 

Hudson commented on HDFS-7733:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2046 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2046/])
HDFS-7733. NFS: readdir/readdirplus return null directory attribute on failure. 
(Contributed by Arpit Agarwal) (arp: rev 
c6f20007ebda509b39a7e4098b99e9b43d73d5b2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java


 NFS: readdir/readdirplus return null directory attribute on failure
 ---

 Key: HDFS-7733
 URL: https://issues.apache.org/jira/browse/HDFS-7733
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.1

 Attachments: HDFS-7733.01.patch


 NFS readdir and readdirplus operations return a null directory attribute on 
 some failure paths. This causes clients to get a 'Stale file handle' error 
 which can only be fixed by unmounting and remounting the share.
 The issue can be reproduced by running 'ls' against a large directory which 
 is being actively modified, triggering the 'cookie mismatch' failure path.
 {code}
 } else {
   LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
       + " dir cookieverf: " + dirStatus.getModificationTime());
   return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
 }
 {code}
 Thanks to [~brandonli] for catching the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7734) Class cast exception in NameNode#main

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307398#comment-14307398
 ] 

Hudson commented on HDFS-7734:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2046 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2046/])
HDFS-7734. Class cast exception in NameNode#main. Contributed by Yi Liu. (wang: 
rev 9175105eeaecf0a1d60b57989b73ce45cee4689b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SignalLogger.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


 Class cast exception in NameNode#main
 -

 Key: HDFS-7734
 URL: https://issues.apache.org/jira/browse/HDFS-7734
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Yi Liu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7734.001.patch, HDFS-7734.002.patch


 NameNode hits the following exception immediately on startup.
 {code}
 15/02/03 15:50:25 ERROR namenode.NameNode: Failed to start namenode.
 java.lang.ClassCastException: org.apache.log4j.Logger cannot be cast to 
 org.apache.commons.logging.Log
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1557)
 15/02/03 15:50:25 INFO util.ExitUtil: Exiting with status 1
 {code}
 Location of the exception in NameNode.java:
 {code}
   public static void main(String argv[]) throws Exception {
     if (DFSUtil.parseHelpArgument(argv, NameNode.USAGE, System.out, true)) {
       System.exit(0);
     }
     try {
       StringUtils.startupShutdownMessage(NameNode.class, argv,
           (org.apache.commons.logging.Log)
               LogManager.getLogger(LOG.getName()));  // <-- Failed here.
 {code}
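The cast can never succeed because the two logging APIs share no type relationship. 
A standalone reproduction (the class names are the real log4j and commons-logging 
ones; the wrapper class is illustrative):

{code}
import org.apache.commons.logging.Log;
import org.apache.log4j.LogManager;

public class CastDemo {
  public static void main(String[] args) {
    // org.apache.log4j.LogManager.getLogger returns org.apache.log4j.Logger,
    // which does not implement org.apache.commons.logging.Log, so the cast
    // below always throws ClassCastException at runtime.
    Object logger = LogManager.getLogger("demo");
    Log log = (Log) logger;  // java.lang.ClassCastException
  }
}
{code}

Consistent with the file list above, the committed change appears to route the 
logger through an adapter (LogAdapter/StringUtils) rather than forcing a 
commons-logging type via a cast.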



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7719) BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state associated with volumes

2015-02-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307401#comment-14307401
 ] 

Hudson commented on HDFS-7719:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2046 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2046/])
HDFS-7719. BlockPoolSliceStorage#removeVolumes fails to remove some in-memory 
state associated with volumes. (Lei (Eddy) Xu via Colin P. McCabe) (cmccabe: 
rev 40a415799b1ff3602fbb461765f8b36f1133bda2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 BlockPoolSliceStorage#removeVolumes fails to remove some in-memory state 
 associated with volumes
 

 Key: HDFS-7719
 URL: https://issues.apache.org/jira/browse/HDFS-7719
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Fix For: 2.7.0

 Attachments: HDFS-7719.000.patch, HDFS-7719.001.patch, 
 HDFS-7719.002.patch, HDFS-7719.003.patch


 The parameter of {{BlockPoolSliceStorage#removeVolumes()}} is a set of volume-level 
 directories, so {{BlockPoolSliceStorage}} cannot directly compare its own 
 {{StorageDirs}} against them. As a result, {{BlockPoolSliceStorage}} does not 
 actually remove the targeted {{StorageDirectory}}. 
 This causes a failure when a volume is removed and then immediately added back 
 with the same mount point.
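To make the directory mismatch concrete (paths purely hypothetical): a block-pool 
storage directory sits several levels below the volume root, so an equality check 
against the volume-level path never matches; an ancestor test is needed instead.

{code}
import java.io.File;

public class VolumeMatchDemo {
  // Walk the parent chain instead of testing equality.
  static boolean isDescendant(File dir, File ancestor) {
    for (File f = dir; f != null; f = f.getParentFile()) {
      if (f.equals(ancestor)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // Hypothetical volume root vs. the nested dir BlockPoolSliceStorage tracks.
    File volumeRoot = new File("/data1/dfs");
    File bpDir = new File("/data1/dfs/current/BP-1-127.0.0.1-42/current");
    System.out.println(bpDir.equals(volumeRoot));        // false
    System.out.println(isDescendant(bpDir, volumeRoot)); // true
  }
}
{code}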



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7739) ZKFC - transitionToActive is indefinitely waiting to complete fenceOldActive

2015-02-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-7739:
---
Description: 
 *Scenario:* 

One of the cluster disks became full while ZKFC was performing transitionToActive. 
To fence the old active node, ZKFC needs to execute the fencing command and wait 
for the result; because the disk was full, the StreamPumper thread waits 
indefinitely (even after the disk is freed, it does not come out)...

 *{color:blue}Please check the attached thread dump of ZKFC{color}* ..

 *{color:green}Better to maintain a timeout for the stream-pumper thread{color}* .

{code}
protected void pump() throws IOException {
  InputStreamReader inputStreamReader = new InputStreamReader(stream);
  BufferedReader br = new BufferedReader(inputStreamReader);
  String line = null;
  while ((line = br.readLine()) != null) {
    if (type == StreamType.STDOUT) {
      log.info(logPrefix + ": " + line);
    } else {
      log.warn(logPrefix + ": " + line);
    }
  }
}
{code}
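A minimal sketch of the suggested timeout, bounding the blocking reads with a 
worker thread and a Future (names and the timeout value are illustrative, not 
the actual StreamPumper change):

{code}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.concurrent.*;

public class TimedPumpDemo {
  // Pump the stream on a worker thread and give up after the timeout, so a
  // wedged fencing command cannot block ZKFC indefinitely.
  static void pumpWithTimeout(final InputStream stream, long timeoutMs)
      throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> pump = executor.submit(new Runnable() {
      @Override
      public void run() {
        try (BufferedReader br =
                 new BufferedReader(new InputStreamReader(stream))) {
          String line;
          while ((line = br.readLine()) != null) {
            System.out.println("fencing: " + line);
          }
        } catch (Exception e) {
          // reading failed or was cancelled; just stop pumping
        }
      }
    });
    try {
      pump.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      // Note: interrupting a thread blocked on a stream read may not unblock
      // it; closing the underlying stream may also be required.
      pump.cancel(true);
    } finally {
      executor.shutdownNow();
    }
  }
}
{code}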


  was:
 *Scenario:* 

One of the cluster disks became full while ZKFC was performing transitionToActive. 
To fence the old active node, ZKFC needs to execute the fencing command and wait 
for the result; because the disk was full, the StreamPumper thread waits 
indefinitely (even after the disk is freed, it does not come out)...

Please check the attached thread dump for ZKFC..

Better to maintain a timeout for the stream-pumper thread..

{code}
protected void pump() throws IOException {
  InputStreamReader inputStreamReader = new InputStreamReader(stream);
  BufferedReader br = new BufferedReader(inputStreamReader);
  String line = null;
  while ((line = br.readLine()) != null) {
    if (type == StreamType.STDOUT) {
      log.info(logPrefix + ": " + line);
    } else {
      log.warn(logPrefix + ": " + line);
    }
  }
}
{code}



 ZKFC - transitionToActive is indefinitely waiting to complete fenceOldActive
 

 Key: HDFS-7739
 URL: https://issues.apache.org/jira/browse/HDFS-7739
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Critical
 Attachments: zkfctd.out


  *Scenario:* 
 One of the cluster disks became full while ZKFC was performing 
 transitionToActive. To fence the old active node, ZKFC needs to execute the 
 fencing command and wait for the result; because the disk was full, the 
 StreamPumper thread waits indefinitely (even after the disk is freed, it does 
 not come out)...
  *{color:blue}Please check the attached thread dump of ZKFC{color}* ..
  *{color:green}Better to maintain a timeout for the stream-pumper 
 thread{color}* .
 {code}
 protected void pump() throws IOException {
   InputStreamReader inputStreamReader = new InputStreamReader(stream);
   BufferedReader br = new BufferedReader(inputStreamReader);
   String line = null;
   while ((line = br.readLine()) != null) {
     if (type == StreamType.STDOUT) {
       log.info(logPrefix + ": " + line);
     } else {
       log.warn(logPrefix + ": " + line);
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2015-02-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14307428#comment-14307428
 ] 

Hadoop QA commented on HDFS-6133:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12696748/HDFS-6133-7.patch
  against trunk revision 20660b7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDecommission
  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.TestReadWhileWriting
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.server.mover.TestStorageMover
  org.apache.hadoop.hdfs.TestGetFileChecksum
  org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
  org.apache.hadoop.hdfs.server.namenode.TestAddBlock
  org.apache.hadoop.hdfs.TestFileAppendRestart
  org.apache.hadoop.hdfs.TestReplication
  org.apache.hadoop.hdfs.TestFileCreation
  
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork
  org.apache.hadoop.hdfs.TestModTime
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
  
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.fs.permission.TestStickyBit
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingReplication
  org.apache.hadoop.hdfs.TestCrcCorruption
  org.apache.hadoop.hdfs.TestGetBlocks
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
  org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
  org.apache.hadoop.hdfs.TestSetrepIncreasing
  org.apache.hadoop.hdfs.TestFileAppend3
  
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
  org.apache.hadoop.hdfs.TestPipelines
  org.apache.hadoop.hdfs.TestSetrepDecreasing
  org.apache.hadoop.hdfs.TestFileAppend4
  org.apache.hadoop.hdfs.TestBlockStoragePolicy
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
  org.apache.hadoop.hdfs.server.datanode.TestHSync
  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
  
org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9439//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9439//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9439//console

This message is automatically generated.

 Make Balancer support exclude specified path

[jira] [Updated] (HDFS-7702) Move metadata across namenode - Effort to a real distributed namenode

2015-02-05 Thread Ray (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray updated HDFS-7702:
--
Attachment: 
DATABFDT-MetadataMovingToolDesignProposal-Efforttoarealdistributednamenode-050215-1415-202.pdf

Stopped updating the Confluence page on cwiki.apache.org; will attach design docs here

 Move metadata across namenode - Effort to a real distributed namenode
 -

 Key: HDFS-7702
 URL: https://issues.apache.org/jira/browse/HDFS-7702
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ray
Assignee: Ray
 Attachments: 
 DATABFDT-MetadataMovingToolDesignProposal-Efforttoarealdistributednamenode-050215-1415-202.pdf


 Implement a tool that can show the in-memory namespace tree structure with 
 weights (sizes), and an API that can move metadata across different namenodes. 
 The purpose is to move data efficiently and quickly, without moving blocks on 
 the datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-02-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308115#comment-14308115
 ] 

Andrew Wang commented on HDFS-7411:
---

Nicholas, HDFS-7712 and HDFS-7734 also do not relate to this JIRA. HDFS-7706 is 
what I split out from this JIRA. The other two are a separate effort to convert 
more of HDFS to slf4j.

 Refactor and improve decommissioning logic into DecommissionManager
 ---

 Key: HDFS-7411
 URL: https://issues.apache.org/jira/browse/HDFS-7411
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
 hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
 hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, 
 hdfs-7411.009.patch, hdfs-7411.010.patch


 Would be nice to split out decommission logic from DatanodeManager to 
 DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7720) Quota by Storage Type API, tools and ClientNameNode Protocol changes

2015-02-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7720:
-
Attachment: HDFS-7720.2.patch

Update patch to match the trunk changes.

 Quota by Storage Type API, tools and ClientNameNode Protocol changes
 

 Key: HDFS-7720
 URL: https://issues.apache.org/jira/browse/HDFS-7720
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7720.0.patch, HDFS-7720.1.patch, HDFS-7720.2.patch


 Split the patch into smaller ones based on the feedback. This one covers the 
 HDFS API changes, tool changes, and ClientNameNode protocol changes. 
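For a sense of the client-side surface being added (a hedged sketch: the method 
below matches what eventually shipped as 
{{DistributedFileSystem#setQuotaByStorageType}} and may differ from this patch 
revision; the {{StorageType}} package also moved between releases):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;  // lived under org.apache.hadoop.hdfs in older releases
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class QuotaByTypeDemo {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Limit /ssd-data (illustrative path) to 10 GB of SSD storage.
    dfs.setQuotaByStorageType(new Path("/ssd-data"), StorageType.SSD,
        10L * 1024 * 1024 * 1024);
  }
}
{code}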



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2015-02-05 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308071#comment-14308071
 ] 

Konstantin Shvachko commented on HDFS-3107:
---

Nicholas, you are right, we don't have those tests.
- For truncate with HA, as I mentioned in HDFS-7738 we can incorporate truncate 
into TestHAAppend. LMK if you want to add it to your patch; otherwise I'll add it 
in HDFS-7740.
- Created HDFS-7740 for adding truncate tests with DN restarts and described two 
scenarios there. Feel free to add other scenarios you have in mind.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Fix For: 2.7.0

 Attachments: HDFS-3107-13.patch, HDFS-3107-14.patch, 
 HDFS-3107-15.patch, HDFS-3107-HDFS-7056-combined.patch, HDFS-3107.008.patch, 
 HDFS-3107.15_branch2.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, HDFS_truncate.pdf, 
 HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
 editsStored, editsStored.xml

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard Posix operation) which is a reverse operation of 
 append, which makes upper layer applications use ugly workarounds (such as 
 keeping track of the discarded byte range per file in a separate metadata 
 store, and periodically running a vacuum process to rewrite compacted files) 
 to overcome this limitation of HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager

2015-02-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308105#comment-14308105
 ] 

Andrew Wang commented on HDFS-7411:
---

I had an offline request to summarize some of the above.

Nicholas's compatibility concern regards the rate limiting of decommissioning. 
Currently, this is expressed as a number of nodes to process per decom manager 
wakeup. There are a number of flaws with this scheme:

* Since the decom manager iterates over the whole datanode list, both live and 
decommissioning nodes count towards the limit. Thus, the actual number of 
decomming nodes processed varies between 0 and the limit.
* Since datanodes have different numbers of blocks, the actual amount of work 
per node varies as well.

This means:
* This config parameter only very loosely corresponds to decom rate and decom 
pause times, which are the two things that admins care about.
* Trying to tune decom behavior with this parameter is thus somewhat futile.
* In the grand scope of HDFS, this is also not a common parameter to be tweaked.

Because of this, we felt it was okay to change the interpretation of this config 
option. I view the old behavior more as a bug than something that users are 
depending upon.

Translating this number-of-nodes limit into a number-of-blocks limit (as done in 
the current patch) makes the config far more predictable and thus usable. Since 
the new code also supports incremental scans (which is what makes it faster), 
specifying the limit as a number of nodes doesn't make much sense.

The only potential surprise I see for cluster operators is if the translation 
of the limit from {{# nodes}} to {{# blocks}} is too liberal. This would result 
in longer maximum pause times than before. We thought 100k blocks per node was 
a conservative estimate, but this could be further reduced.

One avenue I do not want to pursue is keeping the old code around, as Nicholas 
has proposed. This increases our maintenance burden, and means many people will 
keep running into the same issues surrounding decom.

If Nicholas still does not agree with the above rationale, I see the following 
potential options for improvement:

* Be even more conservative with the translation factor, e.g. assume only 50k 
blocks per node
* Factor the number of nodes and/or average blocks per node into the 
translation. This will better approximate the old average pause times.
* Make the new decom manager also support a {{# nodes}} limit. This isn't great 
since scans are incremental now, but it means we'll be doing strictly less work 
per pause than before.
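To make the translation concrete (all numbers illustrative):

{code}
public class DecomLimitDemo {
  public static void main(String[] args) {
    int oldNodesPerTick = 5;          // old config: nodes per decom wakeup
    long blocksPerNode = 100_000L;    // assumed translation factor
    long newBlocksPerTick = oldNodesPerTick * blocksPerNode;
    System.out.println(newBlocksPerTick);  // 500000 blocks per wakeup
    // Halving the factor (50k) halves the worst-case per-wakeup work and
    // pause time, at the cost of slower decommissioning progress.
  }
}
{code}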

 Refactor and improve decommissioning logic into DecommissionManager
 ---

 Key: HDFS-7411
 URL: https://issues.apache.org/jira/browse/HDFS-7411
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.5.1
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
 hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
 hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, 
 hdfs-7411.009.patch, hdfs-7411.010.patch


 Would be nice to split out decommission logic from DatanodeManager to 
 DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7723) Quota By Storage Type namenode implementation

2015-02-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7723:
-
Attachment: HDFS-7723.1.patch

Updated the patch and rebased to match other namenode changes in trunk. The 
build succeeded after applying HDFS-7720.1.patch and HDFS-7723.1.patch, with all 
unit tests passing locally.

 Quota By Storage Type namenode implementation
 

 Key: HDFS-7723
 URL: https://issues.apache.org/jira/browse/HDFS-7723
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch


 This includes: 1) a new editlog op to persist the quota-by-storage-type 
 setting; 2) corresponding fsimage load/save of the new op; 3) a QuotaCount 
 refactor to track usage per storage type for quota enforcement; 4) snapshot 
 support; 5) unit test updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7723) Quota By Storage Type namenode implementation

2015-02-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7723:
-
Attachment: HDFS-7723.2.patch

Update to match HDFS-7720.2.patch.

 Quota By Storage Type namenode implementation
 

 Key: HDFS-7723
 URL: https://issues.apache.org/jira/browse/HDFS-7723
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch


 This includes: 1) a new editlog op to persist the quota-by-storage-type 
 setting; 2) corresponding fsimage load/save of the new op; 3) a QuotaCount 
 refactor to track usage per storage type for quota enforcement; 4) snapshot 
 support; 5) unit test updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7587) Edit log corruption can happen if append fails with a quota violation

2015-02-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308229#comment-14308229
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7587:
---

Hi [~daryn], are you still working on this?

 Edit log corruption can happen if append fails with a quota violation
 -

 Key: HDFS-7587
 URL: https://issues.apache.org/jira/browse/HDFS-7587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HDFS-7587.patch


 We have seen a standby namenode crashing due to edit log corruption. It was 
 complaining that {{OP_CLOSE}} cannot be applied because the file is not 
 under-construction.
 When a client was trying to append to the file, the remaining space quota was 
 very small. This caused a failure in {{prepareFileForWrite()}}, but only after 
 the inode was already converted for writing and a lease added. Since these were 
 not undone when the quota violation was detected, the file was left 
 under construction with an active lease, without {{OP_ADD}} being edit-logged.
 A subsequent {{append()}} eventually caused a lease recovery after the soft 
 limit period. This resulted in {{commitBlockSynchronization()}}, which closed 
 the file with {{OP_CLOSE}} being logged.  Since there was no corresponding 
 {{OP_ADD}}, edit replaying could not apply this.
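A self-contained toy of the failure mode (hypothetical names; not the attached 
patch): the quota check fires after the state mutation, and nothing rolls the 
mutation back.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class UndoOnFailureDemo {
  static List<String> leases = new ArrayList<String>();

  // Buggy order: mutate first, then check -- a quota-style failure leaves
  // the "lease" registered, mirroring the un-logged UC file.
  static void appendBuggy(String file, boolean quotaOk) throws IOException {
    leases.add(file);                          // state mutated first...
    if (!quotaOk) {
      throw new IOException("quota exceeded"); // ...then never undone
    }
  }

  // Fixed order: validate before mutating any state.
  static void appendFixed(String file, boolean quotaOk) throws IOException {
    if (!quotaOk) {
      throw new IOException("quota exceeded");
    }
    leases.add(file);
  }

  public static void main(String[] args) {
    try { appendBuggy("/f1", false); } catch (IOException ignored) { }
    System.out.println(leases);  // [/f1] -- stale "lease" left behind

    leases.clear();
    try { appendFixed("/f2", false); } catch (IOException ignored) { }
    System.out.println(leases);  // [] -- no state leaked
  }
}
{code}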



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308262#comment-14308262
 ] 

Daryn Sharp commented on HDFS-7270:
---

I've looked at the patch again.  The problem is changing the existing protobuf 
Status enum record tag from an enum to a uint32.  That pretty much violates the 
compatibility promise that protobufs are supposed to provide.

The reason the tag was changed was to allow masking the ECN bit atop the status 
value.  If it's off, 0 is masked onto the status, which is why it works.

The compatible solution is to do what we always do: add a new optional protobuf 
tag for ECN.  It's a dangerous precedent to allow breaking compatibility just 
to save nominally 3 bytes across the wire.
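In schema terms, the distinction looks roughly like this (an illustrative 
fragment with made-up message name and field numbers, not the actual 
DataTransfer.proto):

{code}
// Incompatible: retyping the existing field, e.g.
//   required uint32 status = 1;   // was: required Status status = 1;
// Compatible: leave the enum field alone and add a new optional field.
message PipelineAckSketch {
  required Status status = 1;  // unchanged enum field
  optional uint32 ecn = 2;     // new optional field; absent for old senders
}
{code}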

 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and leaves the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test

2015-02-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308277#comment-14308277
 ] 

Kihwal Lee commented on HDFS-6054:
--

The precommit successfully ran 3223 unit tests, but the stdout/stderr redirect 
file was deleted by another process before this build job had a chance to cat 
it.

 MiniQJMHACluster should not use static port to avoid binding failure in unit 
 test
 -

 Key: HDFS-6054
 URL: https://issues.apache.org/jira/browse/HDFS-6054
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Brandon Li
Assignee: Yongjun Zhang
 Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch


 One example of the test failures: TestFailureToReadEdits
 {noformat}
 Error Message
 Port in use: localhost:10003
 Stacktrace
 java.net.BindException: Port in use: localhost:10003
   at sun.nio.ch.Net.bind(Native Method)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
   at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
   at 
 org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:650)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:635)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697)
   at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:374)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)
 {noformat}
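A common way to avoid the fixed port is to bind to port 0 and let the OS hand 
out a free ephemeral port (a general-purpose sketch, not necessarily the 
mechanism in the attached patches):

{code}
import java.net.ServerSocket;

public class FreePortDemo {
  public static void main(String[] args) throws Exception {
    // Binding to port 0 asks the OS for any free ephemeral port; the test
    // then configures the server with that port instead of a fixed 10003.
    // (A small race remains between closing the socket and re-binding.)
    int freePort;
    try (ServerSocket s = new ServerSocket(0)) {
      freePort = s.getLocalPort();
    }
    System.out.println("picked port " + freePort);
  }
}
{code}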



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308317#comment-14308317
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7738:
---

h7738_20150205.patch:
- adds RecoverLeaseOp;
- uses assertEquals as suggested;
- changes DFSTestUtil.getFileSystemAs to not throw InterruptedException;
- moves the safemode test to TestSafeMode;
- adds truncate tests with HA.

Konstantin, thanks for the review.  I incorporated all your comments except #3, 
since I like to reuse the variable names.  Logically, they are separate test 
cases.

For the HA test, it cannot truncate 5 times unless it truncates at block 
boundaries, since the file is not ready (it is under recovery).
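For reference, one of the negative cases has roughly this shape (a hedged 
sketch assuming a MiniDFSCluster-backed {{fs}}; the real tests are in the 
attached patch):

{code}
@Test
public void testTruncateDirectoryFails() throws Exception {
  Path dir = new Path("/truncate-neg-test");  // illustrative path
  fs.mkdirs(dir);
  try {
    fs.truncate(dir, 0);
    Assert.fail("truncate should fail on a directory");
  } catch (IOException expected) {
    // truncate is defined only for regular files
  }
}
{code}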

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch, h7738_20150205.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7722) DataNode#checkDiskError should also remove Storage when error is found.

2015-02-05 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-7722:

Attachment: HDFS-7722.000.patch

[~usrikanth] Thanks much for re-assigning this JIRA to me.

This patch makes:

* {{FsDatasetImpl#checkDataDirs}} follow similar logic to 
{{FsDatasetImpl#removeVolumes}}
* {{checkDataDirs()}} return a set of failed data dirs, so that {{DataNode}} 
can call {{DataStorage#removeVolumes}} afterwards to remove the metadata 
associated with each failed {{StorageDirectory}}. 
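In other words, the intended call flow is roughly the following (a condensed 
sketch of the description above; signatures illustrative, not actual patch 
code):

{code}
Set<File> failedDirs = fsDataset.checkDataDirs();   // now reports failures
if (!failedDirs.isEmpty()) {
  // Also drop the per-volume DataStorage/BlockPoolSliceStorage metadata so a
  // later reconfig can hot-swap the same mount point back in.
  dataStorage.removeVolumes(failedDirs);
}
{code}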

 DataNode#checkDiskError should also remove Storage when error is found.
 ---

 Key: HDFS-7722
 URL: https://issues.apache.org/jira/browse/HDFS-7722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-7722.000.patch


 When {{DataNode#checkDiskError}} finds disk errors, it removes all block 
 metadata from {{FsDatasetImpl}}. However, it does not remove the 
 corresponding {{DataStorage}} and {{BlockPoolSliceStorage}}. 
 The result is that we cannot directly run {{reconfig}} to hot swap the 
 failed disks without changing the configuration file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2319) Add test cases for FSshell -stat

2015-02-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-2319:

Status: Open  (was: Patch Available)

 Add test cases for FSshell -stat
 

 Key: HDFS-2319
 URL: https://issues.apache.org/jira/browse/HDFS-2319
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.24.0
Reporter: XieXianshan
Priority: Trivial
 Attachments: HDFS-2319.patch


 Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2319) Add test cases for FSshell -stat

2015-02-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-2319:

Status: Patch Available  (was: Open)

Resubmitting.

 Add test cases for FSshell -stat
 

 Key: HDFS-2319
 URL: https://issues.apache.org/jira/browse/HDFS-2319
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.24.0
Reporter: XieXianshan
Priority: Trivial
 Attachments: HDFS-2319.patch


 Add test cases for HADOOP-7574.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7655) Expose truncate API for Web HDFS

2015-02-05 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7655:
-
Attachment: HDFS-7655.004.patch

Updated the patch to break the long line. The test failures are unrelated; I 
will commit the patch shortly.

Thanks [~shv] and [~umamaheswararao] for the review.
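Once exposed, truncate over WebHDFS should look like any other FileSystem call, 
just via the webhdfs scheme (host, port, path, and length illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsTruncateDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://namenode-host:50070"), new Configuration());
    // Returns true if the file is already at the new length; false if block
    // recovery is still in progress at the truncation boundary.
    boolean done = fs.truncate(new Path("/logs/app.log"), 1024L);
    System.out.println("truncate complete: " + done);
  }
}
{code}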

 Expose truncate API for Web HDFS
 

 Key: HDFS-7655
 URL: https://issues.apache.org/jira/browse/HDFS-7655
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-7655.001.patch, HDFS-7655.002.patch, 
 HDFS-7655.003.patch, HDFS-7655.004.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7270) Add congestion signaling capability to DataNode write protocol

2015-02-05 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308323#comment-14308323
 ] 

Haohui Mai commented on HDFS-7270:
--

bq. The problem is changing the existing protobuf Status enum record tag from 
an enum to a uint32. That pretty much violates the compatibility promise that 
protobufs are supposed to provide.

Let me try to understand a little bit more. Both an enum and a uint32 are 
encoded as a varint32 over the wire. Can you clarify what compatibility means 
in your mind? Do your use cases fall into the categories mentioned by 
[~sureshms]?


 Add congestion signaling capability to DataNode write protocol
 --

 Key: HDFS-7270
 URL: https://issues.apache.org/jira/browse/HDFS-7270
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7270.000.patch, HDFS-7270.001.patch, 
 HDFS-7270.002.patch, HDFS-7270.003.patch, HDFS-7270.004.patch


 When a client writes to HDFS faster than the disk bandwidth of the DNs, it 
 saturates the disk bandwidth and leaves the DNs unresponsive. The client only 
 backs off by aborting / recovering the pipeline, which leads to failed writes 
 and unnecessary pipeline recovery.
 This jira proposes to add explicit congestion control mechanisms in the 
 writing pipeline. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test

2015-02-05 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6054:

Attachment: HDFS-6054.002.patch

 MiniQJMHACluster should not use static port to avoid binding failure in unit 
 test
 -

 Key: HDFS-6054
 URL: https://issues.apache.org/jira/browse/HDFS-6054
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Brandon Li
Assignee: Yongjun Zhang
 Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch, 
 HDFS-6054.002.patch


 One example of the test failures: TestFailureToReadEdits
 {noformat}
 Error Message
 Port in use: localhost:10003
 Stacktrace
 java.net.BindException: Port in use: localhost:10003
   at sun.nio.ch.Net.bind(Native Method)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
   at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
   at 
 org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:650)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:635)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697)
   at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:374)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-2444) A bug in unit test: TestDFSShell.testText()

2015-02-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308349#comment-14308349
 ] 

Akira AJISAKA commented on HDFS-2444:
-

Thanks for the report and the patch. Looks like this issue is fixed by 
HADOOP-8449. Closing this.

 A bug in unit test: TestDFSShell.testText()
 ---

 Key: HDFS-2444
 URL: https://issues.apache.org/jira/browse/HDFS-2444
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0, 0.23.0
Reporter: Hou Song
 Attachments: HDFS-2444.patch


 testText() writes the same random String into a ByteArray in memory and a 
 GZIPOutputStream in DFS. After closing the GZIPOutputStream, it reads the data 
 back using the shell command -text and compares it against the ByteArray. 
 However, before comparing, the shell output and the ByteArray are both reset, 
 making the comparison useless. 
 What's more, after closing the GZIPOutputStream, DFS is unable to find the 
 file, and the -text command fails to read it, giving no output. 
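A self-contained toy of the reset-before-compare bug being described (not the 
actual test code):

{code}
import java.io.ByteArrayOutputStream;

// Resetting both buffers before comparing reduces the assertion to comparing
// two empty strings, which always passes.
public class ResetBeforeCompareDemo {
  public static void main(String[] args) throws Exception {
    ByteArrayOutputStream expected = new ByteArrayOutputStream();
    ByteArrayOutputStream actual = new ByteArrayOutputStream();
    expected.write("hello".getBytes("UTF-8"));
    actual.write("goodbye".getBytes("UTF-8"));

    expected.reset();   // bug: discards the captured bytes...
    actual.reset();     // ...on both sides
    System.out.println(expected.toString("UTF-8")
        .equals(actual.toString("UTF-8")));  // true -- comparison is useless
  }
}
{code}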



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-2444) A bug in unit test: TestDFSShell.testText()

2015-02-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-2444:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 A bug in unit test: TestDFSShell.testText()
 ---

 Key: HDFS-2444
 URL: https://issues.apache.org/jira/browse/HDFS-2444
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0, 0.23.0
Reporter: Hou Song
 Attachments: HDFS-2444.patch


 testText() writes the same random String into a ByteArray in memory and a 
 GZIPOutputStream in DFS. After closing the GZIPOutputStream, it reads the data 
 back using the shell command -text and compares it against the ByteArray. 
 However, before comparing, the shell output and the ByteArray are both reset, 
 making the comparison useless. 
 What's more, after closing the GZIPOutputStream, DFS is unable to find the 
 file, and the -text command fails to read it, giving no output. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6054) MiniQJMHACluster should not use static port to avoid binding failure in unit test

2015-02-05 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14308350#comment-14308350
 ] 

Yongjun Zhang commented on HDFS-6054:
-

Thanks Kihwal! I uploaded the same patch again to trigger another run.


 MiniQJMHACluster should not use static port to avoid binding failure in unit 
 test
 -

 Key: HDFS-6054
 URL: https://issues.apache.org/jira/browse/HDFS-6054
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Brandon Li
Assignee: Yongjun Zhang
 Attachments: HDFS-6054.001.patch, HDFS-6054.002.patch, 
 HDFS-6054.002.patch


 One example of the test failures: TestFailureToReadEdits
 {noformat}
 Error Message
 Port in use: localhost:10003
 Stacktrace
 java.net.BindException: Port in use: localhost:10003
   at sun.nio.ch.Net.bind(Native Method)
   at 
 sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
   at 
 org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
   at 
 org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:845)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:786)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:593)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:492)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:650)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:635)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1283)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:966)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:851)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:697)
   at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:374)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:355)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7738) Add more negative tests for truncate

2015-02-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7738:
--
Attachment: h7738_20150205.patch

 Add more negative tests for truncate
 

 Key: HDFS-7738
 URL: https://issues.apache.org/jira/browse/HDFS-7738
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 2.7.0

 Attachments: h7738_20150204.patch, h7738_20150205.patch


 The following are negative test cases for truncate.
 - new length > old length
 - truncating a directory
 - truncating a non-existing file
 - truncating a file without write permission
 - truncating a file opened for append
 - truncating a file in safemode



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

