[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957042#comment-14957042
 ] 

Josh Elser commented on HDFS-9226:
--

bq. I agree with this. Adding mockito as dependency would be better. Every 
other changes in hdfs tests need not verify for transitive dependencies.

I still disagree with you both -- claiming ignorance of the dependencies your 
code actually uses is a smell to begin with, and keeping the dependencies as 
minimal as possible is always desirable. The maven-dependency-plugin's [analyze 
mojo|https://maven.apache.org/plugins/maven-dependency-plugin/analyze-mojo.html]
 would be a way to automate this check.

But, since two of you are leaning this way, let me try that and see what else 
breaks :)
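
For reference, a minimal sketch of the downstream usage that exposes the leak 
(assuming the standard MiniDFSCluster builder API; this is not taken from the Accumulo test):
{code}
// Building and starting a MiniDFSCluster reaches MiniDFSCluster#shouldWait,
// which references org.mockito.stubbing.Answer, so Mockito must be on the
// test classpath even though the downstream test never uses it directly.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();        // NoClassDefFoundError: org/mockito/stubbing/Answer without Mockito
try {
  cluster.waitActive();
} finally {
  cluster.shutdown();
}
{code}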

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> 

[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Patch Available  (was: Open)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Attachment: HDFS-9234-004.patch

Fixed the failed test
{code}
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
{code}

Attached the updated patch.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and 
> space quota, but it does not give the storage-type quotas.
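
A minimal sketch of the intended behavior (cluster address and path are placeholders, 
and it assumes ContentSummary's existing per-storage-type accessor getTypeQuota):
{code}
// Over webhdfs:// the same storage-type quotas should be reported that the
// RPC-based hdfs:// client already exposes.
Configuration conf = new Configuration();
FileSystem webhdfs = FileSystem.get(URI.create("webhdfs://namenode:50070"), conf);
ContentSummary cs = webhdfs.getContentSummary(new Path("/dir"));
System.out.println("name quota:  " + cs.getQuota());
System.out.println("space quota: " + cs.getSpaceQuota());
System.out.println("SSD quota:   " + cs.getTypeQuota(StorageType.SSD)); // missing over WebHDFS before this change
{code}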



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.002.patch

Patch 002 addresses the test failure. The findbugs warning seems odd.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957068#comment-14957068
 ] 

Tony Wu commented on HDFS-9238:
---

The failed tests are not related to this patch as the patch itself only touches 
a particular test which did not fail.

> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6200) Create a separate jar for hdfs-client

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957045#comment-14957045
 ] 

Hadoop QA commented on HDFS-6200:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12651333/HDFS-6200.007.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d6c8bad |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12980/console |


This message was automatically generated.

> Create a separate jar for hdfs-client
> -
>
> Key: HDFS-6200
> URL: https://issues.apache.org/jira/browse/HDFS-6200
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-6200.000.patch, HDFS-6200.001.patch, 
> HDFS-6200.002.patch, HDFS-6200.003.patch, HDFS-6200.004.patch, 
> HDFS-6200.005.patch, HDFS-6200.006.patch, HDFS-6200.007.patch
>
>
> Currently the hadoop-hdfs jar contains both the hdfs server and the hdfs 
> client. As discussed in the hdfs-dev mailing list 
> (http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201404.mbox/browser),
>  downstream projects are forced to bring in additional dependencies in order 
> to access hdfs. The additional dependencies can sometimes be difficult to 
> manage for projects like Apache Falcon and Apache Oozie.
> This jira proposes to create a new project, hadoop-hdfs-client, which 
> contains the client side of the hdfs code. Downstream projects can use this 
> jar instead of hadoop-hdfs to avoid unnecessary dependencies.
> Note that it does not break the compatibility of downstream projects. This is 
> because old downstream projects implicitly depend on hadoop-hdfs-client 
> through the hadoop-hdfs jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Open  (was: Patch Available)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9157) [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option is specified as the only option

2015-10-14 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957152#comment-14957152
 ] 

nijel commented on HDFS-9157:
-

bq.-1   hdfs tests

The test failures are unrelated to the patch.

Please review.

> [OEV and OIV] : Unnecessary parsing for mandatory arguements if "-h" option 
> is specified as the only option
> ---
>
> Key: HDFS-9157
> URL: https://issues.apache.org/jira/browse/HDFS-9157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9157_1.patch, HDFS-9157_2.patch, HDFS-9157_3.patch, 
> HDFS-9157_4.patch, HDFS-9157_5.patch, HDFS-9157_6.patch
>
>
> In both tools, if "-h" is specified as the only option, an error is thrown 
> because the input and output are not specified.
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs oev -h
> Error parsing command-line options: Missing required options: o, i
> Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
> {noformat}
> In the code, the parsing happens before the "-h" option is checked.
> We can add code to return after an initial check, for example:
> {code}
> if (argv.length == 1 && "-h".equals(argv[0])) {
>   printHelp();
>   return 0;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8050) Separate the client conf key from DFSConfigKeys

2015-10-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957015#comment-14957015
 ] 

Steve Loughran commented on HDFS-8050:
--

What's the status of this? Can we have {{HdfsClientConfigKeys}} public? Maybe 
even moved out of Impl?

> Separate the client conf key from DFSConfigKeys
> ---
>
> Key: HDFS-8050
> URL: https://issues.apache.org/jira/browse/HDFS-8050
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> Currently, all the conf keys are in DFSConfigKeys.  We should separate the 
> public client conf keys out of DFSConfigKeys into a new class in 
> org.apache.hadoop.hdfs.client, as described by [~wheat9] in HDFS-6566.
> The private conf keys may be moved to a new class in 
> org.apache.hadoop.hdfs.client.impl.
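
A minimal sketch of the proposed layout (the key shown is illustrative only, not the 
final contents of the class):
{code}
package org.apache.hadoop.hdfs.client;

/** Public, client-facing configuration keys split out of DFSConfigKeys. */
public interface HdfsClientConfigKeys {
  // one example key; the real class would carry the full public client set
  String DFS_CLIENT_USE_DN_HOSTNAME = "dfs.client.use.datanode.hostname";
}
{code}
Private client keys would move to a parallel class under org.apache.hadoop.hdfs.client.impl.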



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957034#comment-14957034
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 21s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 25s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 27s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  62m 53s | Tests failed in hadoop-hdfs. |
| | | 109m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestBlockReaderLocal |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
|   | org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.server.namenode.TestAuditLogs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766442/HDFS-9220.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d6c8bad |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12978/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12978/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12978/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12978/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a 
> file and the last block has <= 512 bytes.
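
A rough repro sketch based on the description (paths and sizes are placeholders; the 
actual reproducer is in the attached test2.java):
{code}
// Write fewer than 512 bytes, reopen the file for append, then read it back
// with checksum verification enabled.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path p = new Path("/tmp/small");
try (FSDataOutputStream out = fs.create(p)) {
  out.write(new byte[100]);                       // the only block has < 512 bytes
}
FSDataOutputStream appender = fs.append(p);       // file is now open for append
appender.write(new byte[100]);                    // still well under one 512-byte chunk
appender.hflush();
try (FSDataInputStream in = fs.open(p)) {
  IOUtils.readFully(in, new byte[200], 0, 200);   // fails with a checksum error on 2.7.1
} finally {
  appender.close();
}
{code}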



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Created] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9241:


 Summary: HDFS clients can't construct HdfsConfiguration instances
 Key: HDFS-9241
 URL: https://issues.apache.org/jira/browse/HDFS-9241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Steve Loughran


the changes for the hdfs client classpath make instantiating 
{{HdfsConfiguration}} from the client impossible; it only lives server side. 
This breaks any app which creates one.

I know people will look at the {{@Private}} tag and say "don't do that then", 
but it's worth considering precisely why I, at least, do this: it's the only 
way to guarantee that the hdfs-default and hdfs-site resources get on the 
classpath, including all the security settings. It's precisely the use case 
which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.

What am I meant to do now? 
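
A minimal sketch of the client-side pattern this breaks (assuming only the standard 
Hadoop client classpath):
{code}
// Constructing HdfsConfiguration registers hdfs-default.xml and hdfs-site.xml
// as default resources, so later lookups see the HDFS (and security) settings.
Configuration conf = new HdfsConfiguration();
FileSystem fs = FileSystem.get(conf);   // relies on hdfs-site.xml being loaded
{code}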



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-9241:
-
Affects Version/s: 3.0.0
  Component/s: hdfs-client

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-10-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957150#comment-14957150
 ] 

Chris Nauroth commented on HDFS-9239:
-

I briefly considered UDP for that very reason.  However, there isn't a strong 
precedent for UDP in Hadoop right now, aside from a few optional components 
like the HDFS NFS gateway and the Ganglia and StatsD metrics sinks.  I'm 
reluctant to add UDP delivery troubleshooting to the mix of operational 
concerns that administrators need to know to support core functionality.

> DataNode Lifeline Protocol: an alternative protocol for reporting DataNode 
> liveness
> ---
>
> Key: HDFS-9239
> URL: https://issues.apache.org/jira/browse/HDFS-9239
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: DataNode-Lifeline-Protocol.pdf
>
>
> This issue proposes introduction of a new feature: the DataNode Lifeline 
> Protocol.  This is an RPC protocol that is responsible for reporting liveness 
> and basic health information about a DataNode to a NameNode.  Compared to the 
> existing heartbeat messages, it is lightweight and not prone to resource 
> contention problems that can harm accurate tracking of DataNode liveness 
> currently.  The attached design document contains more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957369#comment-14957369
 ] 

Hadoop QA commented on HDFS-9231:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 11s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  4 
new checkstyle issues (total was 370, now 372). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 52s | Tests failed in hadoop-hdfs. |
| | |  96m  2s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766570/HDFS-9231.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0d77e85 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12981/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12981/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12981/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12981/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12981/console |


This message was automatically generated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9243:
--
Description: 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes time out.
This is happening on trunk as can be observed in several recent jenkins job. 
(e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, this test takes about 20 seconds, otherwise it takes more 
than 60 seconds and then time out.

  was:
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes time out.
This is happening on trunk as can be observed in several recent jenkins job.

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, this test takes about 20 seconds, otherwise it takes more 
than 60 seconds and then time out.


> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent jenkins jobs 
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/).
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, the test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9243:
--
Description: 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes times out.
This is happening on trunk, as can be observed in several recent jenkins jobs 
(e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/).

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.

  was:
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes times out.
This is happening on trunk, as can be observed in several recent jenkins jobs 
(e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/).

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.


> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent jenkins jobs 
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/).
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, the test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957247#comment-14957247
 ] 

Hadoop QA commented on HDFS-9234:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 22s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m 10s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  62m 12s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 32s | Tests passed in 
hadoop-hdfs-client. |
| | | 117m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.util.TestByteArrayManager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766559/HDFS-9234-004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d6c8bad |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12979/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12979/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12979/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12979/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12979/console |


This message was automatically generated.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch, HDFS-9234-004.patch
>
>
> Currently the webhdfs API for ContentSummary gives only the name quota and 
> space quota, but it does not give the storage-type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957311#comment-14957311
 ] 

Vinayakumar B commented on HDFS-8647:
-

bq. I was trying to point out that cherry-pick likely won't work. It is just to 
make sure after I commit it to trunk, there is patch for reference in case I 
need to manually resolve it as part of cherry-pick.
Yes, correct. 
What I meant exactly is that currently there will be a lot of differences, since 
EC is not merged to branch-2; after that, it would be easy to merge. Anyway, I am 
okay with going with a different patch for branch-2.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.
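
A hedged illustration of the refactoring idea (method names follow the discussion above; 
the exact signatures in the patch may differ):
{code}
// Instead of BlockManager hard-coding the "enough racks" rule, it delegates
// the check to the configured policy. placementPolicy is the active
// BlockPlacementPolicy instance.
boolean hasEnoughRacks(DatanodeInfo[] locs, int numberOfReplicas) {
  BlockPlacementStatus status =
      placementPolicy.verifyBlockPlacement(locs, numberOfReplicas);
  return status.isPlacementPolicySatisfied();
}
{code}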



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8647:
---
Attachment: HDFS-8647-006.patch

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9223:

Attachment: HDFS-9223.003.patch

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-9242:
--

Assignee: Brahma Reddy Battula

> Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache 
> ---
>
> Key: HDFS-9242
> URL: https://issues.apache.org/jira/browse/HDFS-9242
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Brahma Reddy Battula
>
> This was introduced by HDFS-8855 and pre-patch warning can be found at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK
> {code}
> Code  Warning
> DC  Possible doublecheck on 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
>  in new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> Bug type DC_DOUBLECHECK (click for details) 
> In class 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
> In method new 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
>  Configuration)
> On field 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
> At DataNodeUGIProvider.java:[lines 49-51]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8766) Implement a libhdfs(3) compatible API

2015-10-14 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen reassigned HDFS-8766:


Assignee: Haohui Mai  (was: James Clampffer)

Haohui - can you refactor and land this as soon as HDFS-9207 is landed?

> Implement a libhdfs(3) compatible API
> -
>
> Key: HDFS-8766
> URL: https://issues.apache.org/jira/browse/HDFS-8766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Haohui Mai
> Attachments: HDFS-8766.HDFS-8707.000.patch, 
> HDFS-8766.HDFS-8707.001.patch, HDFS-8766.HDFS-8707.002.patch, 
> HDFS-8766.HDFS-8707.003.patch, HDFS-8766.HDFS-8707.004.patch, 
> HDFS-8766.HDFS-8707.005.patch
>
>
> Add a synchronous API that is compatible with the hdfs.h header used in 
> libhdfs and libhdfs3.  This will make it possible for projects using 
> libhdfs/libhdfs3 to relink against libhdfspp with minimal changes.
> This also provides a pure C interface that can be linked into projects 
> that aren't built in C++11 mode for various reasons but use the same 
> compiler.  It also allows many other programming languages to access 
> libhdfspp through built-in FFI interfaces.
> The libhdfs API is very similar to the posix file API which makes it easier 
> for programs built using posix filesystem calls to be modified to access HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8647:
---
Attachment: (was: HDFS-8647-006.patch)

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957503#comment-14957503
 ] 

Josh Elser commented on HDFS-9226:
--

Well, I thought this would be simple, but it seems like I might be running into 
some fallout from HDFS-9166:

{noformat}
testCleanupIndexOpWithDfsDir(org.apache.accumulo.server.util.FileUtilTest)  
Time elapsed: 0.003 sec  <<< ERROR!
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider 
org.apache.hadoop.hdfs.web.HftpFileSystem not found
at java.util.ServiceLoader.fail(ServiceLoader.java:231)
at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:365)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2661)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2672)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2693)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:95)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2732)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2714)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:375)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:346)
at 
org.apache.accumulo.server.fs.VolumeManagerImpl.getLocal(VolumeManagerImpl.java:94)
at 
org.apache.accumulo.server.util.FileUtilTest.testCleanupIndexOpWithDfsDir(FileUtilTest.java:98)
{noformat}

Looks like the META-INF service definition isn't getting picked up correctly. I'll 
have to dig into this more tonight and figure out what the heck is going on. 
Pulling in Mockito, at least, doesn't appear to be causing more immediate 
problems. Let me attach a patch for that in the meantime.
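
For reference, a minimal sketch of the mechanism involved (this is how FileSystem 
implementations are discovered; the loop below is illustrative, not Hadoop's exact code):
{code}
// FileSystem#loadFileSystems uses java.util.ServiceLoader: every jar's
// META-INF/services/org.apache.hadoop.fs.FileSystem lists provider class names.
// If a listed class (here org.apache.hadoop.hdfs.web.HftpFileSystem) is no
// longer on the classpath, iterating the loader throws the
// ServiceConfigurationError shown above.
ServiceLoader<FileSystem> loader = ServiceLoader.load(FileSystem.class);
for (FileSystem fs : loader) {
  System.out.println(fs.getScheme() + " -> " + fs.getClass().getName());
}
{code}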

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: 

[jira] [Updated] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8647:
---
Attachment: HDFS-8647-006.patch

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9243:
--
Description: 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes times out.
This is happening on trunk, as can be observed in several recent jenkins jobs.

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.

  was:
This is happening on trunk in 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.
 
On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.


> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent jenkins jobs.
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, the test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957248#comment-14957248
 ] 

Ming Ma commented on HDFS-8647:
---

Thanks [~vinayrpet]!

bq. how about waiting for commit to branch-2. After that simple cherry-pick 
might work.
I was trying to point out that cherry-pick likely won't work. It is just to 
make sure after I commit it to trunk, there is patch for reference in case I 
need to manually resolve it as part of cherry-pick.

bq. "# of racks >= # of data blocks" requirement is to ensure rackwise failure 
doesn't create any dataloss in case of EC'ed file.
Is that to ensure no data loss, or more for read/reconstruction time 
optimization? Take the example of RS(6,3) and 5 racks: the blocks will be 
written successfully. If rack_0 fails, a read needs to reconstruct data_0 and 
data_5, but there is no data loss. This is just for my own education, to 
clarify why the write policy differs from the verification policy in the EC 
scenario. Again, this is not related to this jira.

||rack_0||rack_1||rack_2||rack_3||rack_4||
|data_0|data_1|data_2|data_3|data_4|
|data_5|prty_0|prty_1|prty_2| |

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957361#comment-14957361
 ] 

Brahma Reddy Battula commented on HDFS-8647:


[~mingma] and [~vinayrpet]

Uploaded a patch to address the following comments:

{quote}The existing blockHasEnoughRacksStriped compares getRealDataBlockNum (# 
of data blocks) with the number racks. But after the refactoring, it compares 
getRealTotalBlockNum (# of total blocks) with the number racks,{quote}
{quote}
It might be easier to not to pass isStriped to verifyBlockPlacement. Instead, 
have BlockPlacementPolicyRackFaultTolerant implement verifyBlockPlacement which 
will use numberOfReplicas as the minRacks. This also makes the patch more 
applicable to branch-2 which doesn't have striped EC
{quote}


> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have the NameNode use an alternative block placement 
> policy, such as upgrade domains in HDFS-7541.
> BlockManager has a built-in assumption about rack policy in functions such as 
> useDelHint and blockHasEnoughRacks. That means that when we have a new block 
> placement policy, we need to modify BlockManager to account for it. Ideally, 
> BlockManager should ask the BlockPlacementPolicy object instead. That would 
> allow us to provide a new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9220:

Attachment: HDFS-9220.002.patch

Looks like most of the test failures were caused by the NULL checksum type. When 
the checksum is disabled, {{offset}} and {{end}} are both 0, but the assertion 
{{crcBytes != 0}} fails. Updated the patch to fix this part.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, 
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a 
> file and the last block has <= 512 bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9242:


 Summary: Fix Findbugs warning from 
webhdfs.DataNodeUGIProvider.ugiCache 
 Key: HDFS-9242
 URL: https://issues.apache.org/jira/browse/HDFS-9242
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao


This was introduced by HDFS-8855 and the pre-patch warning can be found at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK

{code}
CodeWarning
DC  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache 
in new 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
 Configuration)
Bug type DC_DOUBLECHECK (click for details) 
In class org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
In method new 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
 Configuration)
On field 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
At DataNodeUGIProvider.java:[lines 49-51]
{code}
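
For reference, a standalone sketch of the usual remedy for a DC_DOUBLECHECK warning 
(not the actual DataNodeUGIProvider code; the class and field names are hypothetical): 
make the lazily initialized field volatile and re-check it inside a single 
synchronized block, or initialize it eagerly instead.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LazyCacheHolder {
  private static volatile Map<String, Object> cache;

  static Map<String, Object> getCache() {
    Map<String, Object> local = cache;
    if (local == null) {
      synchronized (LazyCacheHolder.class) {
        local = cache;
        if (local == null) {
          // The volatile field plus the second check inside the lock makes this
          // a safe double-checked initialization.
          local = new ConcurrentHashMap<>();
          cache = local;
        }
      }
    }
    return local;
  }
}
{code}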



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-14 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957389#comment-14957389
 ] 

Bob Hansen commented on HDFS-9207:
--

If it will get things going, let's land this, refactor HDFS-8766, and get it 
landed too.

I'll +1 this, and ask [~wheat9] to get HDFS-8766 landed.

> Move the implementation to the hdfs-native-client module
> 
>
> Key: HDFS-9207
> URL: https://issues.apache.org/jira/browse/HDFS-9207
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9207.000.patch
>
>
> The implementation of libhdfspp should be moved to the new hdfs-native-client 
> module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957422#comment-14957422
 ] 

Arpit Agarwal commented on HDFS-4015:
-

Hi [~anu], the v004 patch looks great. We should handle the case of 
{{blocksInFuture}} not being present in 
{{PBHelperClient#convert(GetFsStatsResponseProto res)}} (newer client + older 
NameNode). e.g.
{code}
result[ClientProtocol.GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX] =
res.hasBlocksInFuture() ? res.getBlocksInFuture() : 0;
{code}

+1 otherwise.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9243:
--
Description: 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes times out.
This is happening on trunk, as can be observed in several recent Jenkins jobs 
(e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/).

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.

I suspect it's a deadlock issue, as a deadlock had occurred in this test case in 
HDFS-5527 before.

  was:
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes time out.
This is happening on trunk as can be observed in several recent jenkins job. 
(e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/  
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, this test takes about 20 seconds, otherwise it takes more 
than 60 seconds and then time out.


> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
> sometimes times out.
> This is happening on trunk, as can be observed in several recent Jenkins jobs 
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/ 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/).
> On my local Linux machine, this test case times out 6 out of 10 times. When 
> it does not time out, the test takes about 20 seconds; otherwise it takes 
> more than 60 seconds and then times out.
> I suspect it's a deadlock issue, as a deadlock had occurred in this test case 
> in HDFS-5527 before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9243:
-

 Summary: 
TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
 Key: HDFS-9243
 URL: https://issues.apache.org/jira/browse/HDFS-9243
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


This is happening on trunk: 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
sometimes times out.

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957471#comment-14957471
 ] 

Hadoop QA commented on HDFS-9223:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 32s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  1s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 42s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 23s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 47s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m 28s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs compilation is broken. |
| {color:green}+1{color} | findbugs |   0m 28s | The patch does not introduce 
any new Findbugs (version ) warnings. |
| {color:red}-1{color} | native |   0m 26s | Failed to build the native portion 
 of hadoop-common prior to running the unit tests in   
hadoop-hdfs-project/hadoop-hdfs |
| | |  39m 37s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766595/HDFS-9223.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0d77e85 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12982/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12982/artifact/patchprocess/whitespace.txt
 |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12982/console |


This message was automatically generated.

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DataDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-4015) Safemode should count and report orphaned blocks

2015-10-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957422#comment-14957422
 ] 

Arpit Agarwal edited comment on HDFS-4015 at 10/14/15 6:42 PM:
---

Hi [~anu], the v004 patch looks great. We should handle the case of 
{{blocksInFuture}} not being present in 
{{PBHelperClient#convert(GetFsStatsResponseProto res)}} (newer client + older 
NameNode). e.g.
{code}
result[ClientProtocol.GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX] =
res.hasBlocksInFuture() ? res.getBlocksInFuture() : 0;
{code}

Edit: I need to take a closer look at the timing of setting 
{{shouldPostponeBlocksFromFuture}}.


was (Author: arpitagarwal):
Hi [~anu], the v004 patch looks great. We should handle the case of 
{{blocksInFuture}} not being present in 
{{PBHelperClient#convert(GetFsStatsResponseProto res)}} (newer client + older 
NameNode). e.g.
{code}
result[ClientProtocol.GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX] =
res.hasBlocksInFuture() ? res.getBlocksInFuture() : 0;
{code}

+1 otherwise.

> Safemode should count and report orphaned blocks
> 
>
> Key: HDFS-4015
> URL: https://issues.apache.org/jira/browse/HDFS-4015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Anu Engineer
> Attachments: HDFS-4015.001.patch, HDFS-4015.002.patch, 
> HDFS-4015.003.patch, HDFS-4015.004.patch
>
>
> The safemode status currently reports the number of unique reported blocks 
> compared to the total number of blocks referenced by the namespace. However, 
> it does not report the inverse: blocks which are reported by datanodes but 
> not referenced by the namespace.
> In the case that an admin accidentally starts up from an old image, this can 
> be confusing: safemode and fsck will show "corrupt files", which are the 
> files which actually have been deleted but got resurrected by restarting from 
> the old image. This will convince them that they can safely force leave 
> safemode and remove these files -- after all, they know that those files 
> should really have been deleted. However, they're not aware that leaving 
> safemode will also unrecoverably delete a bunch of other block files which 
> have been orphaned due to the namespace rollback.
> I'd like to consider reporting something like: "90 of expected 100 
> blocks have been reported. Additionally, 1 blocks have been reported 
> which do not correspond to any file in the namespace. Forcing exit of 
> safemode will unrecoverably remove those data blocks"
> Whether this statistic is also used for some kind of "inverse safe mode" is 
> the logical next step, but just reporting it as a warning seems easy enough 
> to accomplish and worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956368#comment-14956368
 ] 

Hudson commented on HDFS-1172:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #526 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/526/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956329#comment-14956329
 ] 

Xiao Chen commented on HDFS-9231:
-

Attached patch 001. Description below:
* fsck from the command line with -includeSnapshots will also show the full dir 
of snapshots
* fsck from the command line without -includeSnapshots: behavior unchanged
* The NameNode WebUI always shows the full dir of snapshots
* Some refactoring to reuse {{getSnapshottableDirs}} and 
{{ListCorruptFileBlocksWithSnapshot}}
* Getting all possible snapshots is not very efficient, but considering that fsck 
should not be performed frequently and nothing is added into 
{{listCorruptFileBlocks}} where the fslock is used, the impact should be minimal.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956328#comment-14956328
 ] 

Jing Zhao commented on HDFS-9220:
-

The failed tests look related. Will dig further.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-1172:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 3.0.0, 2.1.0-beta, 1.3.0)
  Status: Resolved  (was: Patch Available)

I've committed this into trunk and branch-2. Thanks [~iwasakims] for continuing 
and finishing the work!

> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956351#comment-14956351
 ] 

Masatake Iwasaki commented on HDFS-1172:


Thanks for the reviews, [~jingzhao]!

> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956389#comment-14956389
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9223:
---

Still need to change getCapacityUsedPercent.

+1 after it is addressed.

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DataDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8915) TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins

2015-10-14 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-8915:
---
Attachment: HDFS-8915.002.patch

Thanks for checking.

HDFS-9139 added a fix for the same purpose.
{code}
Thread.sleep(10); // Lets all threads get BLOCKED
{code}

Though the difference is just whether to retry or not, I have attached a rebased patch.
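
For illustration, a standalone sketch of the retry variant (plain Java, not the 
actual test code; Hadoop's own test utilities could be used instead): poll the 
thread states until the expected number are BLOCKED or a timeout expires, rather 
than relying on a single fixed sleep.

{code}
import java.util.concurrent.TimeoutException;

public class WaitForBlockedThreads {
  static void waitForBlocked(Thread[] threads, int expected, long timeoutMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      int blocked = 0;
      for (Thread t : threads) {
        if (t.getState() == Thread.State.BLOCKED) {
          blocked++;
        }
      }
      if (blocked >= expected) {
        return; // the expected number of threads are now waiting on the monitor
      }
      Thread.sleep(10); // retry instead of relying on one fixed sleep
    }
    throw new TimeoutException("saw fewer than " + expected + " BLOCKED threads");
  }

  public static void main(String[] args) throws Exception {
    final Object lock = new Object();
    Thread[] waiters = new Thread[3];
    synchronized (lock) {                       // hold the monitor so the others block
      for (int i = 0; i < waiters.length; i++) {
        waiters[i] = new Thread(() -> { synchronized (lock) { } });
        waiters[i].start();
      }
      waitForBlocked(waiters, waiters.length, 5000);
      System.out.println("all " + waiters.length + " threads are BLOCKED");
    }
  }
}
{code}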


> TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins
> -
>
> Key: HDFS-8915
> URL: https://issues.apache.org/jira/browse/HDFS-8915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Masatake Iwasaki
> Attachments: HDFS-8915.001.patch, HDFS-8915.002.patch
>
>
> This test was added as part of HDFS-8883, There is a race condition in the 
> test and it has failed *once* in the Apache Jenkins run.
> Here is the stack
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount
> Error Message:
> Expected number of blocked thread not found expected:<3> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Expected number of blocked thread not found 
> expected:<3> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount(TestFSNamesystem.java:261)
> From cursory code reading , even though we call into readlock.lock() there is 
> no guarantee that our thread is put in the wait queue. A proposed fix could 
> be to check for any thread in the lock queue instead of all 3, or disable the 
> test.
> It could also indicate an issue with the test infra-structure but any test 
> open to variations in result due to infra-structure issues creates noise in 
> tests so we are better off fixing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956348#comment-14956348
 ] 

Hudson commented on HDFS-1172:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #8628 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8628/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9240:
-
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-8512)

> Use Builder pattern for BlockLocation constructors
> --
>
> Key: HDFS-9240
> URL: https://issues.apache.org/jira/browse/HDFS-9240
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> This JIRA is opened to refactor the 8 telescoping constructors of 
> BlockLocation class with Builder pattern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Attachment: HDFS-9234-002.patch

Attached an updated patch that fixes the findbugs warning.
Please review.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9240:


 Summary: Use Builder pattern for BlockLocation constructors
 Key: HDFS-9240
 URL: https://issues.apache.org/jira/browse/HDFS-9240
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor


This JIRA is opened to refactor the 8 telescoping constructors of BlockLocation 
class with Builder pattern.
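
For illustration, a standalone sketch of the Builder refactoring (the field names 
are illustrative and do not cover the full BlockLocation API): callers set only 
the fields they need instead of choosing among eight telescoping constructors.

{code}
public class BlockLocationExample {
  private final String[] hosts;
  private final String[] names;
  private final long offset;
  private final long length;
  private final boolean corrupt;

  private BlockLocationExample(Builder b) {
    this.hosts = b.hosts;
    this.names = b.names;
    this.offset = b.offset;
    this.length = b.length;
    this.corrupt = b.corrupt;
  }

  public static class Builder {
    private String[] hosts = new String[0];
    private String[] names = new String[0];
    private long offset;
    private long length;
    private boolean corrupt;

    public Builder setHosts(String[] hosts) { this.hosts = hosts; return this; }
    public Builder setNames(String[] names) { this.names = names; return this; }
    public Builder setOffset(long offset) { this.offset = offset; return this; }
    public Builder setLength(long length) { this.length = length; return this; }
    public Builder setCorrupt(boolean corrupt) { this.corrupt = corrupt; return this; }
    public BlockLocationExample build() { return new BlockLocationExample(this); }
  }

  public static void main(String[] args) {
    // Only the fields of interest are set; everything else keeps its default.
    BlockLocationExample loc = new Builder()
        .setHosts(new String[]{"dn1"})
        .setOffset(0L)
        .setLength(134217728L)
        .build();
    System.out.println("length = " + loc.length);
  }
}
{code}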



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8915) TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins

2015-10-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956472#comment-14956472
 ] 

Vinayakumar B commented on HDFS-8915:
-

{code}Thread.sleep(10); // Lets all threads get BLOCKED{code}
This line itself was added in HDFS-9139. 
Did anyone observe this test failing after HDFS-9139?

> TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins
> -
>
> Key: HDFS-8915
> URL: https://issues.apache.org/jira/browse/HDFS-8915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Masatake Iwasaki
> Attachments: HDFS-8915.001.patch, HDFS-8915.002.patch
>
>
> This test was added as part of HDFS-8883, There is a race condition in the 
> test and it has failed *once* in the Apache Jenkins run.
> Here is the stack
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount
> Error Message:
> Expected number of blocked thread not found expected:<3> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Expected number of blocked thread not found 
> expected:<3> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount(TestFSNamesystem.java:261)
> From cursory code reading , even though we call into readlock.lock() there is 
> no guarantee that our thread is put in the wait queue. A proposed fix could 
> be to check for any thread in the lock queue instead of all 3, or disable the 
> test.
> It could also indicate an issue with the test infra-structure but any test 
> open to variations in result due to infra-structure issues creates noise in 
> tests so we are better off fixing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-10-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956523#comment-14956523
 ] 

Steve Loughran commented on HDFS-9239:
--

Does this actually need RPC? Or could some UDP packet be submitted, with the 
recipient doing an initial auth check of the sender and then queuing it to update 
the internal state? After all, this is intended to be one-way announcements.
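
For illustration, a standalone sketch of the UDP alternative (hypothetical payload 
format and port; the actual proposal uses an RPC protocol): the DataNode side fires 
a small one-way datagram, and the NameNode side would authenticate the sender 
before updating its liveness state.

{code}
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpLifelineSender {
  public static void main(String[] args) throws Exception {
    // Hypothetical payload and port; in practice this would carry the DataNode's
    // identity plus minimal health info, and the receiver would verify the sender.
    byte[] payload = "datanode-uuid-1234:ALIVE".getBytes(StandardCharsets.UTF_8);
    InetAddress nameNode = InetAddress.getByName("localhost"); // stand-in for the NameNode host
    try (DatagramSocket socket = new DatagramSocket()) {
      DatagramPacket packet =
          new DatagramPacket(payload, payload.length, nameNode, 50200);
      socket.send(packet); // fire-and-forget: no response is expected
    }
  }
}
{code}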

> DataNode Lifeline Protocol: an alternative protocol for reporting DataNode 
> liveness
> ---
>
> Key: HDFS-9239
> URL: https://issues.apache.org/jira/browse/HDFS-9239
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: DataNode-Lifeline-Protocol.pdf
>
>
> This issue proposes introduction of a new feature: the DataNode Lifeline 
> Protocol.  This is an RPC protocol that is responsible for reporting liveness 
> and basic health information about a DataNode to a NameNode.  Compared to the 
> existing heartbeat messages, it is lightweight and not prone to resource 
> contention problems that can harm accurate tracking of DataNode liveness 
> currently.  The attached design document contains more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-9234:
-
Attachment: HDFS-9234-003.patch

Thanks [~xyao] for the review.

bq. Is there a particular reason for that?
No, that was a mistake.

Attached an updated patch; please review.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch, 
> HDFS-9234-003.patch
>
>
> Currently webhdfs API for ContentSummary give only namequota and spacequota 
> but it will not give storage types quota.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956606#comment-14956606
 ] 

Hudson commented on HDFS-1172:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #493 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/493/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8915) TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins

2015-10-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956469#comment-14956469
 ] 

Vinayakumar B commented on HDFS-8915:
-

I think this was fixed as part of HDFS-9139. I was hitting this quite often during 
parallel runs; after introducing a little sleep, it didn't occur again.

> TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins
> -
>
> Key: HDFS-8915
> URL: https://issues.apache.org/jira/browse/HDFS-8915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Masatake Iwasaki
> Attachments: HDFS-8915.001.patch, HDFS-8915.002.patch
>
>
> This test was added as part of HDFS-8883, There is a race condition in the 
> test and it has failed *once* in the Apache Jenkins run.
> Here is the stack
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount
> Error Message:
> Expected number of blocked thread not found expected:<3> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Expected number of blocked thread not found 
> expected:<3> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount(TestFSNamesystem.java:261)
> From cursory code reading , even though we call into readlock.lock() there is 
> no guarantee that our thread is put in the wait queue. A proposed fix could 
> be to check for any thread in the lock queue instead of all 3, or disable the 
> test.
> It could also indicate an issue with the test infra-structure but any test 
> open to variations in result due to infra-structure issues creates noise in 
> tests so we are better off fixing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8512) storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8512:
-
Attachment: HDFS-8512.00.patch

> storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS
> --
>
> Key: HDFS-8512
> URL: https://issues.apache.org/jira/browse/HDFS-8512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Sumana Sathish
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8512.00.patch
>
>
> Storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS
> {code}
> $ curl -i 
> "http://127.0.0.1:50070/webhdfs/v1/HOT/FILE1?user.name=xyao=GETFILESTATUS;
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Wed, 27 May 2015 18:04:13 GMT
> Date: Wed, 27 May 2015 18:04:13 GMT
> Pragma: no-cache
> Expires: Wed, 27 May 2015 18:04:13 GMT
> Date: Wed, 27 May 2015 18:04:13 GMT
> Pragma: no-cache
> Content-Type: application/json
> Set-Cookie: 
> hadoop.auth="u=xyao=xyao=simple=1432785853423=W4O5kKiYHmzzey4h7I9J9eL9EMY=";
>  Path=/; Expires=Thu, 28-May-2015 04:04:13 GMT; HttpOnly
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
>  
> {"FileStatus":{"accessTime":1432683737985,"blockSize":134217728,"childrenNum":0,"fileId":16405,"group":"hadoop","length":150318178,"modificationTime":1432683738427,"owner":"xyao","pathSuffix":"","permission":"644","replication":1,"storagePolicy":7,"type":"FILE"}}
>  $ curl -i 
> "http://127.0.0.1:50070/webhdfs/v1/HOT/FILE1?user.name=xyao=GET_BLOCK_LOCATIONS=0=150318178;
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Wed, 27 May 2015 18:04:55 GMT
> Date: Wed, 27 May 2015 18:04:55 GMT
> Pragma: no-cache
> Expires: Wed, 27 May 2015 18:04:55 GMT
> Date: Wed, 27 May 2015 18:04:55 GMT
> Pragma: no-cache
> Content-Type: application/json
> Set-Cookie: 
> hadoop.auth="u=xyao=xyao=simple=1432785895031=TUiaNsCrARAPKz6xrddoQ1eHOXA=";
>  Path=/; Expires=Thu, 28-May-2015 04:04:55 GMT; HttpOnly
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
>  
> {"LocatedBlocks":{"fileLength":150318178,"isLastBlockComplete":true,"isUnderConstruction":false,"lastLocatedBlock":{"block":{"blockId":1073741847,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1023,"numBytes":16100450},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":134217728},"locatedBlocks":[{"block":{"blockId":1073741846,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1022,"numBytes":134217728},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":0},{"block":{"blockId":1073741847,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1023,"numBytes":16100450},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":134217728}]}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8512) storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8512:
-
Status: Patch Available  (was: Open)

> storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS
> --
>
> Key: HDFS-8512
> URL: https://issues.apache.org/jira/browse/HDFS-8512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Sumana Sathish
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8512.00.patch
>
>
> Storage type inside LocatedBlock object is not fully exposed for GETFILESTATUS
> {code}
> $ curl -i 
> "http://127.0.0.1:50070/webhdfs/v1/HOT/FILE1?user.name=xyao=GETFILESTATUS;
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Wed, 27 May 2015 18:04:13 GMT
> Date: Wed, 27 May 2015 18:04:13 GMT
> Pragma: no-cache
> Expires: Wed, 27 May 2015 18:04:13 GMT
> Date: Wed, 27 May 2015 18:04:13 GMT
> Pragma: no-cache
> Content-Type: application/json
> Set-Cookie: 
> hadoop.auth="u=xyao=xyao=simple=1432785853423=W4O5kKiYHmzzey4h7I9J9eL9EMY=";
>  Path=/; Expires=Thu, 28-May-2015 04:04:13 GMT; HttpOnly
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
>  
> {"FileStatus":{"accessTime":1432683737985,"blockSize":134217728,"childrenNum":0,"fileId":16405,"group":"hadoop","length":150318178,"modificationTime":1432683738427,"owner":"xyao","pathSuffix":"","permission":"644","replication":1,"storagePolicy":7,"type":"FILE"}}
>  $ curl -i 
> "http://127.0.0.1:50070/webhdfs/v1/HOT/FILE1?user.name=xyao=GET_BLOCK_LOCATIONS=0=150318178;
> HTTP/1.1 200 OK
> Cache-Control: no-cache
> Expires: Wed, 27 May 2015 18:04:55 GMT
> Date: Wed, 27 May 2015 18:04:55 GMT
> Pragma: no-cache
> Expires: Wed, 27 May 2015 18:04:55 GMT
> Date: Wed, 27 May 2015 18:04:55 GMT
> Pragma: no-cache
> Content-Type: application/json
> Set-Cookie: 
> hadoop.auth="u=xyao=xyao=simple=1432785895031=TUiaNsCrARAPKz6xrddoQ1eHOXA=";
>  Path=/; Expires=Thu, 28-May-2015 04:04:55 GMT; HttpOnly
> Transfer-Encoding: chunked
> Server: Jetty(6.1.26)
>  
> {"LocatedBlocks":{"fileLength":150318178,"isLastBlockComplete":true,"isUnderConstruction":false,"lastLocatedBlock":{"block":{"blockId":1073741847,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1023,"numBytes":16100450},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":134217728},"locatedBlocks":[{"block":{"blockId":1073741846,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1022,"numBytes":134217728},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":0},{"block":{"blockId":1073741847,"blockPoolId":"BP-474445704-192.168.70.1-1432674221011","generationStamp":1023,"numBytes":16100450},"blockToken":{"urlString":"AA"},"cachedLocations":[],"isCorrupt":false,"locations":[{"adminState":"NORMAL","blockPoolUsed":300670976,"cacheCapacity":0,"cacheUsed":0,"capacity":1996329943040,"dfsUsed":300670976,"hostName":"192.168.70.1","infoPort":50075,"infoSecurePort":0,"ipAddr":"192.168.70.1","ipcPort":50020,"lastUpdate":1432749892058,"lastUpdateMonotonic":1432749892058,"name":"192.168.70.1:50010","networkLocation":"/default-rack","remaining":782138327040,"storageID":"49a30d0f-99f8-4b87-b986-502fe926271a","xceiverCount":1,"xferPort":50010}],"startOffset":134217728}]}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956494#comment-14956494
 ] 

Hudson commented on HDFS-1172:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1262 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1262/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956417#comment-14956417
 ] 

Hudson commented on HDFS-1172:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2474 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2474/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8915) TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956458#comment-14956458
 ] 

Hadoop QA commented on HDFS-8915:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   8m 54s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 32s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  54m 44s | Tests failed in hadoop-hdfs. |
| | |  80m 32s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766477/HDFS-8915.002.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 2a98724 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12972/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12972/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12972/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12972/console |


This message was automatically generated.

> TestFSNamesystem.testFSLockGetWaiterCount fails intermittently in jenkins
> -
>
> Key: HDFS-8915
> URL: https://issues.apache.org/jira/browse/HDFS-8915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Masatake Iwasaki
> Attachments: HDFS-8915.001.patch, HDFS-8915.002.patch
>
>
> This test was added as part of HDFS-8883, There is a race condition in the 
> test and it has failed *once* in the Apache Jenkins run.
> Here is the stack
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount
> Error Message:
> Expected number of blocked thread not found expected:<3> but was:<1>
> Stack Trace:
> java.lang.AssertionError: Expected number of blocked thread not found 
> expected:<3> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockGetWaiterCount(TestFSNamesystem.java:261)
> From a cursory code reading, even though we call into readlock.lock() there is 
> no guarantee that our thread is put in the wait queue. A proposed fix could 
> be to check for any thread in the lock queue instead of all 3, or to disable 
> the test.
> It could also indicate an issue with the test infrastructure, but any test 
> open to variations in result due to infrastructure issues creates noise in 
> tests, so we are better off fixing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956482#comment-14956482
 ] 

Xiaoyu Yao commented on HDFS-9234:
--

Thanks [~surendrasingh] for working on this. The patch looks good to me 
overall. Just one question:

I don't think it is right to cast the type quota and consumed values from long 
to Integer below. Is there a particular reason for that?
{code}
contentSummaryBuilder = contentSummaryBuilder.typeQuota(t,
    (Integer) type.get("quota")).typeConsumed(t,
    (Integer) type.get("consumed"));
{code}
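
If the intent is to preserve the full long range, something like the following 
would avoid the truncation (a sketch only, assuming the parsed JSON values 
arrive as java.lang.Number; not necessarily how the final patch should look):
{code}
// Sketch, not the actual patch: widen the parsed JSON numbers through Number
// instead of casting to Integer, so values above Integer.MAX_VALUE survive.
long quota = ((Number) type.get("quota")).longValue();
long consumed = ((Number) type.get("consumed")).longValue();
contentSummaryBuilder = contentSummaryBuilder.typeQuota(t, quota)
    .typeConsumed(t, consumed);
{code}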

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch
>
>
> Currently the webhdfs API for ContentSummary gives only namequota and 
> spacequota, but it does not give the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-1172) Blocks in newly completed files are considered under-replicated too quickly

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956424#comment-14956424
 ] 

Hudson commented on HDFS-1172:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #538 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/538/])
HDFS-1172. Blocks in newly completed files are considered (jing9: rev 
2a987243423eb5c7e191de2ba969b7591a441c70)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Blocks in newly completed files are considered under-replicated too quickly
> ---
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.21.0
>Reporter: Todd Lipcon
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch, 
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch, 
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch, 
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt, 
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9232) Shouldn't start block recovery if block has no enough replicas

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956493#comment-14956493
 ] 

Hadoop QA commented on HDFS-9232:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 41s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |  10m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 19s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 32s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 49s | The applied patch generated  2 
new checkstyle issues (total was 253, now 254). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 47s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 44s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 23s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 40s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  61m 32s | Tests failed in hadoop-hdfs. |
| | | 120m 29s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766255/HDFS-9232.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2a98724 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12973/console |


This message was automatically generated.

> Shouldn't start block recovery if block has no enough replicas
> --
>
> Key: HDFS-9232
> URL: https://issues.apache.org/jira/browse/HDFS-9232
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-9232.01.patch
>
>
> from HDFS-8406:
> {quote}
> Before primary DN calls commitBlockSynchronization, it synchronized 2 RBW 
> replicas, and make them finalized. Then primary DN calls 
> commitBlockSynchronization, to complete the lastBlock and close the file. The 
> question is, your dfs.namenode.replication.min is 3, the last block can't be 
> completed. NameNode shouldn't issue blockRecovery in the first place because 
> lastBlock can't be completed anyway.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956537#comment-14956537
 ] 

Vinayakumar B commented on HDFS-8647:
-

Thanks [~brahmareddy] for the patch.
Thanks [~mingma] for the review; those are good points.
bq. The existing blockHasEnoughRacksStriped compares getRealDataBlockNum (# of 
data blocks) with the number racks. But after the refactoring, it compares 
getRealTotalBlockNum (# of total blocks) with the number racks,
Yes, you are right. It should use getRealDataBlockNum as minracks. I found this 
is discussed in HDFS-7613.
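
In other words, something along these lines (a sketch only; it reuses the 
helper names discussed above and is not the final patch):
{code}
// Sketch only: for a striped block the rack requirement should be driven by
// the number of *data* blocks, not the total (data + parity) block count.
// blockInfoStriped and numRacksForBlock are placeholder names here.
int minRacks = blockInfoStriped.getRealDataBlockNum();  // not getRealTotalBlockNum()
boolean hasEnoughRacks = numRacksForBlock >= minRacks;
{code}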

bq. The current patch doesn't apply to branch-2. If you agree with the above 
changes, could you try out if it applies to branch-2? If it doesn't apply, you 
will need to provide a separate patch for branch-2 later.
Yes, of course it will not apply. Instead of providing another branch-2 patch, 
which might create more conflicts when merging EC to branch-2, how about 
waiting for the commit to branch-2? After that, a simple cherry-pick might work.

bq. A general question about striped EC. It uses "# of racks >= # of data 
blocks" to check if a given block has enough racks. But what if "# of racks for 
the whole cluster < # of data blocks"? Say we use RS(6,3) and the cluster has 5 
racks. The write operation will spread the 9 blocks to 5 racks and succeed. But 
it will fail the "enough racks" check later in BM? But that has nothing to with 
the refactoring work here. I just want to bring it up in case others can chime 
in.
You are right. The check will fail, which means the block will not be removed 
from the neededReplications map.
The "# of racks >= # of data blocks" requirement is there to ensure that a 
rack-wide failure doesn't cause any data loss for an EC'ed file.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956456#comment-14956456
 ] 

Hadoop QA commented on HDFS-9231:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 36s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 56s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 27s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  50m 18s | Tests failed in hadoop-hdfs. |
| | |  97m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766475/HDFS-9231.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2a98724 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12971/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12971/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12971/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12971/console |


This message was automatically generated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9234) WebHdfs : getContentSummary() should give quota for storage types

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956531#comment-14956531
 ] 

Hadoop QA commented on HDFS-9234:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 49s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   3m  1s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 36s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  56m 27s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 36s | Tests passed in 
hadoop-hdfs-client. |
| | | 111m 58s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766488/HDFS-9234-002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2a98724 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12974/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12974/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12974/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12974/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12974/console |


This message was automatically generated.

> WebHdfs : getContentSummary() should give quota for storage types
> -
>
> Key: HDFS-9234
> URL: https://issues.apache.org/jira/browse/HDFS-9234
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9234-001.patch, HDFS-9234-002.patch
>
>
> Currently the webhdfs API for ContentSummary gives only namequota and 
> spacequota, but it does not give the storage type quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8831) Support "Soft Delete" for files under HDFS encryption zone

2015-10-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957533#comment-14957533
 ] 

Xiaoyu Yao commented on HDFS-8831:
--

I propose to add a .Trash directory under the root of the encryption zone, 
without a dependency on HDFS-8830. This per-encryption-zone .Trash directory 
lives within the zone and can be created at the time of encryption zone 
creation. Files deleted within the encryption zone will be renamed/checkpointed 
to the .Trash directory inside the zone. For example, if user foo deletes a 
file /ez1/bar from an encryption zone rooted at /ez1, the file will be moved to 
trash at /ez1/.Trash/foo/Current/ez1/bar. This way, we remove the dependency on 
HDFS-8830 and avoid the management overhead, because we don't need to 
explicitly add trash locations outside the encryption zone root for each user's 
current and checkpoint directories. When the entire encryption zone is deleted, 
it should work as it does today, i.e., the root of the encryption zone is moved 
to the trash folder of the user who executed the deletion.
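
For illustration, the proposed mapping would look roughly like this (a sketch 
only; it just demonstrates the path layout above using org.apache.hadoop.fs.Path 
and is not patch code):
{code}
import org.apache.hadoop.fs.Path;

// Sketch of the proposed per-zone trash location (names assumed, not a patch):
// deleting /ez1/bar as user "foo" moves it to /ez1/.Trash/foo/Current/ez1/bar.
Path ezRoot   = new Path("/ez1");
Path deleted  = new Path("/ez1/bar");
String user   = "foo";
Path trashDst = new Path(ezRoot,
    ".Trash/" + user + "/Current" + deleted.toUri().getPath());
// trashDst -> /ez1/.Trash/foo/Current/ez1/bar
{code}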

I will upload a proposal shortly that compares this with other alternatives for 
encryption zone trash support and summarizes why the one proposed above is 
preferred. Any feedback and thoughts?

> Support "Soft Delete" for files under HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted appropriately to the same encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957566#comment-14957566
 ] 

Hadoop QA commented on HDFS-8647:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m  1s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 26s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 11s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m  8s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs compilation is broken. |
| {color:green}+1{color} | findbugs |   4m  8s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   0m 23s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |   0m 23s | Tests failed in hadoop-hdfs. |
| | |  48m 18s | |
\\
\\
|| Reason || Tests ||
| Failed build | hadoop-common |
|   | hadoop-hdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766598/HDFS-8647-006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 56dc777 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12985/console |


This message was automatically generated.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9188:

Attachment: HDFS-9188.005.patch

Updated the patch to address test failures.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored on the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with works like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957646#comment-14957646
 ] 

Lei (Eddy) Xu commented on HDFS-9238:
-

+1. Thanks for working on this, [~twu]. 

Will commit soon.

> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-9241:
---

Assignee: Mingliang Liu

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957711#comment-14957711
 ] 

Mingliang Liu commented on HDFS-9241:
-

Thanks for reporting this [~steve_l]. I'll update the patch accordingly.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Open  (was: Patch Available)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8647) Abstract BlockManager's rack policy into BlockPlacementPolicy

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957678#comment-14957678
 ] 

Hadoop QA commented on HDFS-8647:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  22m 14s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 26s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m  4s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |   6m 42s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  74m 39s | Tests failed in hadoop-hdfs. |
| | | 131m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestLargeBlock |
|   | hadoop.fs.TestGlobPaths |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
| Timed out tests | org.apache.hadoop.hdfs.TestReplication |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766598/HDFS-8647-006.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0d77e85 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12983/console |


This message was automatically generated.

> Abstract BlockManager's rack policy into BlockPlacementPolicy
> -
>
> Key: HDFS-8647
> URL: https://issues.apache.org/jira/browse/HDFS-8647
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8647-001.patch, HDFS-8647-002.patch, 
> HDFS-8647-003.patch, HDFS-8647-004.patch, HDFS-8647-004.patch, 
> HDFS-8647-005.patch, HDFS-8647-006.patch
>
>
> Sometimes we want to have namenode use alternative block placement policy 
> such as upgrade domains in HDFS-7541.
> BlockManager has built-in assumption about rack policy in functions such as 
> useDelHint, blockHasEnoughRacks. That means when we have new block placement 
> policy, we need to modify BlockManager to account for the new policy. Ideally 
> BlockManager should ask BlockPlacementPolicy object instead. That will allow 
> us to provide new BlockPlacementPolicy without changing BlockManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.003.patch

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Attachment: HDFS-9241.000.patch

As discussed on common-dev@, we may fix this by making {{hadoop-client}} depend 
on {{hadoop-hdfs}} instead of {{hadoop-hdfs-client}}. Downstream users who want 
the thin dependency are free to exclude the server code by depending on 
{{hadoop-hdfs-client}} directly.
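
In the meantime, the effect downstream apps are after can be approximated with 
plain {{Configuration}} (a stop-gap sketch only, assuming hdfs-default.xml and 
hdfs-site.xml are on the classpath; this is roughly what the hdfs-side loader 
does internally, not a supported API):
{code}
import org.apache.hadoop.conf.Configuration;

public final class HdfsConfLoader {
  // Stop-gap sketch: register the HDFS resources as defaults so every
  // Configuration created afterwards picks them up.
  public static Configuration load() {
    Configuration.addDefaultResource("hdfs-default.xml");
    Configuration.addDefaultResource("hdfs-site.xml");
    return new Configuration();
  }
}
{code}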

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9241:

Affects Version/s: (was: 3.0.0)
   Status: Patch Available  (was: Open)

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Status: Patch Available  (was: Open)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch, 
> HDFS-9231.003.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8947) NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service

2015-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-8947:
-
Status: Patch Available  (was: Open)

> NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service
> --
>
> Key: HDFS-8947
> URL: https://issues.apache.org/jira/browse/HDFS-8947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-HDFS-8947.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>
> As JvmPauseMonitor is made an AbstractService, subsequent method changes 
> are needed in all places that use the monitor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8947) NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service

2015-10-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-8947:
-
Status: Open  (was: Patch Available)

> NameNode, DataNode and NFS gateway to support JvmPauseMonitor as a service
> --
>
> Key: HDFS-8947
> URL: https://issues.apache.org/jira/browse/HDFS-8947
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode, nfs
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-HDFS-8947.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>
> As JvmPauseMonitor is made an AbstractService, subsequent method changes 
> are needed in all places that use the monitor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HDFS-9226:
-
Attachment: HDFS-9226.005.patch

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> 

[jira] [Updated] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9238:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: (was: HDFS-9231.003.patch)

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9231) fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot

2015-10-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9231:

Attachment: HDFS-9231.003.patch

Patch 003 addresses the one related checkstyle warning.
The other warnings/errors are unrelated.

> fsck doesn't explicitly list when Bad Replicas/Blocks are in a snapshot
> ---
>
> Key: HDFS-9231
> URL: https://issues.apache.org/jira/browse/HDFS-9231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-9231.001.patch, HDFS-9231.002.patch
>
>
> For snapshot files, fsck shows corrupt blocks with the original file dir 
> instead of the snapshot dir.
> This can be confusing since even when the original file is deleted, a new 
> fsck run will still show that file as corrupted although what's actually 
> corrupted is the snapshot. 
> This is true even when given the -includeSnapshots option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957682#comment-14957682
 ] 

Hadoop QA commented on HDFS-9220:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  24m 57s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 16s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 50s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 52s | The applied patch generated  1 
new checkstyle issues (total was 60, now 60). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:red}-1{color} | eclipse:eclipse |   0m 35s | The patch failed to build 
with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 32s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  53m 15s | Tests failed in hadoop-hdfs. |
| | | 110m 17s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766603/HDFS-9220.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 56dc777 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12984/console |


This message was automatically generated.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> 
>
> Key: HDFS-9220
> URL: https://issues.apache.org/jira/browse/HDFS-9220
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Bogdan Raducanu
>Assignee: Jing Zhao
>Priority: Blocker
> Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, 
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a 
> file and the last block has <= 512 bytes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957705#comment-14957705
 ] 

Steve Loughran commented on HDFS-9241:
--

Let's see what happens in the discussion before voting on this.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> the changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it only lives server side. 
> This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get on the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957754#comment-14957754
 ] 

Colin Patrick McCabe commented on HDFS-9188:


Thanks, [~eddyxu].

{code}
for (MaterializedReplica replica : replicas) {
  replica.corruptData(sb.toString().getBytes());
}
{code}
You need to specify UTF-8 as the encoding explicitly.
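
For example (a minimal sketch of that change):
{code}
import java.nio.charset.StandardCharsets;

// Pass the charset explicitly instead of relying on the platform default:
replica.corruptData(sb.toString().getBytes(StandardCharsets.UTF_8));
{code}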

+1 once that's resolved and pending jenkins

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored in the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with work like Ozone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

The test failures are unrelated. Thanks [~andrew.wang] for the review! 
Committed to trunk and branch-2 based on his +1. 


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: reviewed
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not 
> well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
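For context, {{%n}} is only expanded by the {{java.util.Formatter}} family 
({{String.format}}, {{printf}}); when it is appended as a plain string it shows 
up literally, which is what the report above demonstrates. A minimal illustrative 
sketch (not the actual VolumeScanner code):

{code}
String basePath = "/hadoop/hdfs/data";
StringBuilder sb = new StringBuilder();

// Wrong: nothing interprets the format specifier here, so "%n" is emitted
// literally into the report.
sb.append("with base path ").append(basePath).append("%n");

// Right: String.format expands %n into the platform line separator.
sb.append(String.format("with base path %s%n", basePath));
{code}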



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9244) Support nested encryption zones

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9244:


 Summary: Support nested encryption zones
 Key: HDFS-9244
 URL: https://issues.apache.org/jira/browse/HDFS-9244
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xiaoyu Yao


This JIRA is opened to track adding support for nested encryption zones, based on 
[~andrew.wang]'s 
[comment|https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141]
 for certain use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reopened HDFS-9210:
--

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not 
> well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957912#comment-14957912
 ] 

Hadoop QA commented on HDFS-9241:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m 15s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 29s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   2m  0s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 45s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | client tests |   0m 17s | Tests passed in 
hadoop-client. |
| | |  47m 43s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766630/HDFS-9241.000.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 3d50855 |
| hadoop-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/artifact/patchprocess/testrun_hadoop-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12991/console |


This message was automatically generated.

> HDFS clients can't construct HdfsConfiguration instances
> 
>
> Key: HDFS-9241
> URL: https://issues.apache.org/jira/browse/HDFS-9241
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Steve Loughran
>Assignee: Mingliang Liu
> Attachments: HDFS-9241.000.patch
>
>
> The changes for the hdfs client classpath make instantiating 
> {{HdfsConfiguration}} from the client impossible; it now lives only on the 
> server side. This breaks any app which creates one.
> I know people will look at the {{@Private}} tag and say "don't do that then", 
> but it's worth considering precisely why I, at least, do this: it's the only 
> way to guarantee that the hdfs-default and hdfs-site resources get onto the 
> classpath, including all the security settings. It's precisely the use case 
> which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
> What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9129) Move the safemode block count into BlockManager

2015-10-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9129:

Attachment: HDFS-9129.003.patch

In the v3 patch, the {{Namesystem}} no longer moves itself from the *STARTUP* to 
the *OFF* state. Instead, it asks the {{BlockManager}} every time its 
{{isInStartupSafeMode}} is called. This way we can simplify the {{Namesystem}}'s 
state machine by delegating all *STARTUP* safe mode logic, including leaving safe 
mode, to {{BlockManager}} and {{BlockManagerSafeMode}}.

The v3 patch is just for Jenkins; please hold off on reviewing it.
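A rough sketch of the delegation described above (class and method names beyond 
those mentioned in this comment are placeholders; the real patch is the authority):

{code}
// Sketch: Namesystem no longer tracks a STARTUP -> OFF transition itself; it
// asks the BlockManager on every query, so leaving startup safe mode is
// decided in one place (BlockManagerSafeMode).
class Namesystem {
  private final BlockManager blockManager;

  Namesystem(BlockManager blockManager) {
    this.blockManager = blockManager;
  }

  boolean isInStartupSafeMode() {
    // Delegate instead of caching a local safe mode state.
    return blockManager.isInStartupSafeMode();
  }
}

class BlockManager {
  private final BlockManagerSafeMode safeMode = new BlockManagerSafeMode();

  boolean isInStartupSafeMode() {
    return safeMode.isInStartupSafeMode();
  }
}

class BlockManagerSafeMode {
  private volatile boolean startupSafeMode = true;

  boolean isInStartupSafeMode() {
    return startupSafeMode;
  }

  void leaveStartupSafeMode() {
    startupSafeMode = false;
  }
}
{code}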

> Move the safemode block count into BlockManager
> ---
>
> Key: HDFS-9129
> URL: https://issues.apache.org/jira/browse/HDFS-9129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9129.000.patch, HDFS-9129.001.patch, 
> HDFS-9129.002.patch, HDFS-9129.003.patch
>
>
> The {{SafeMode}} needs to track whether there are enough blocks so that the 
> NN can get out of the safemode. These fields can moved to the 
> {{BlockManager}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957735#comment-14957735
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8636 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8636/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957872#comment-14957872
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8637 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8637/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not 
> well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9188:

Attachment: HDFS-9188.006.patch

Thanks a lot for the reviews, [~cmccabe]. 

bq. You need to set UTF-8 as the encoding.
Fixed.

The test failures are unrelated. 


> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch, HDFS-9188.006.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored in the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with work like Ozone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957733#comment-14957733
 ] 

Hadoop QA commented on HDFS-9226:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 25s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 32s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 51s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 44s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | minicluster tests |   0m 19s | Tests passed in 
hadoop-minicluster. |
| | |  48m 53s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766619/HDFS-9226.005.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / dfa7848 |
| hadoop-minicluster test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/artifact/patchprocess/testrun_hadoop-minicluster.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12986/console |


This message was automatically generated.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS, test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> 

[jira] [Commented] (HDFS-6101) TestReplaceDatanodeOnFailure fails occasionally

2015-10-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957759#comment-14957759
 ] 

Arpit Agarwal commented on HDFS-6101:
-

Thanks [~jojochuang]. I can't reproduce the failure anymore, but your fix looks 
valid.

Can you fix the coding style issues? E.g. add a space before {{\{}} in 
{{synchronized (this)\{}}, remove the extra spaces in {{synchronized ( writer )}}, 
etc.
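To make the requested style concrete (purely illustrative):

{code}
// Hadoop style: one space before '{', no padding inside the parentheses.
synchronized (writer) {      // not: synchronized ( writer ){
  // ... guarded work ...
}

synchronized (this) {        // not: synchronized (this){
  // ... guarded work ...
}
{code}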

> TestReplaceDatanodeOnFailure fails occasionally
> ---
>
> Key: HDFS-6101
> URL: https://issues.apache.org/jira/browse/HDFS-6101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-6101.001.patch, HDFS-6101.002.patch, 
> TestReplaceDatanodeOnFailure.log
>
>
> Exception details in a comment below.
> The failure repros on both OS X and Linux if I run the test ~10 times in a 
> loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Labels:   (was: reviewed)

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not 
> well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9223) Code cleanup for DatanodeDescriptor and HeartbeatManager

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957955#comment-14957955
 ] 

Hadoop QA commented on HDFS-9223:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  19m 30s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  4s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 29s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 24s | The applied patch generated  4 
new checkstyle issues (total was 328, now 318). |
| {color:red}-1{color} | whitespace |   0m  3s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  56m 24s | Tests failed in hadoop-hdfs. |
| | | 104m  9s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766595/HDFS-9223.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ba3c197 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12988/console |


This message was automatically generated.

> Code cleanup for DatanodeDescriptor and HeartbeatManager
> 
>
> Key: HDFS-9223
> URL: https://issues.apache.org/jira/browse/HDFS-9223
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-9223.000.patch, HDFS-9223.001.patch, 
> HDFS-9223.002.patch, HDFS-9223.003.patch
>
>
> Some code cleanup for {{DatanodeDescriptor}} and {{HeartbeatManager}}. The 
> changes include:
> # Change {{DatanodeDescriptor#isAlive}} and {{DatanodeDescriptor#needKeyUpdate}} 
> from public to private
> # Use EnumMap for {{HeartbeatManager#storageTypeStatesMap}}
> # Move the {{isInStartupSafeMode}} out of the namesystem lock in 
> {{heartbeatCheck}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8831) Trash Support for files in HDFS encryption zone

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8831:
-
Summary: Trash Support for files in HDFS encryption zone  (was: Support 
"Soft Delete" for files under HDFS encryption zone)

> Trash Support for files in HDFS encryption zone
> ---
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> With HDFS-8830, we can support "Soft Delete" by adding the .Trash folder of 
> the file being deleted appropriately to the same encryption zone. 
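For reference, the same behaviour can be seen (and worked around for now) from 
the public API along the lines below. This is only a sketch with a made-up path, 
assuming {{Trash.moveToAppropriateTrash}} as the client entry point:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashInEncryptionZoneSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/z1_1/startnn.sh");  // a file inside an encryption zone

    try {
      // Today this fails for files inside an encryption zone, because the move
      // into the user's /user/<name>/.Trash would cross the zone boundary.
      Trash.moveToAppropriateTrash(fs, file, conf);
    } catch (IOException e) {
      // Until this JIRA lands, the fallback is a permanent delete.
      fs.delete(file, false);
    }
  }
}
{code}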



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957798#comment-14957798
 ] 

Hadoop QA commented on HDFS-9188:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   7m 53s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 15 new or modified test files. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 30s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  49m 17s | Tests failed in hadoop-hdfs. |
| | |  72m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766622/HDFS-9188.005.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / dfa7848 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12987/console |


This message was automatically generated.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch, HDFS-9188.004.patch, 
> HDFS-9188.005.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored in the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with work like Ozone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9238) Update TestFileCreation#testLeaseExpireHardLimit() to avoid using DataNodeTestUtils#getFile()

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957920#comment-14957920
 ] 

Hudson commented on HDFS-9238:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/544/])
HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid (lei: 
rev ba3c19787849a9cb9f805e2b6ef0f8485aa68f06)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Update TestFileCreation#testLeaseExpireHardLimit() to avoid using 
> DataNodeTestUtils#getFile()
> -
>
> Key: HDFS-9238
> URL: https://issues.apache.org/jira/browse/HDFS-9238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
>Priority: Trivial
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9238.001.patch
>
>
> TestFileCreation#testLeaseExpireHardLimit uses DataNodeTestUtils#getFile() to 
> open, read and verify blocks written on the DN. It’s better to use 
> getBlockInputStream() which does exactly the same thing but hides the detail 
> of getting the block file on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957921#comment-14957921
 ] 

Hudson commented on HDFS-9210:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #544 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/544/])
HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: rev 
3d5085595286c0231f66543d1509247ad4bb5739)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java
Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats. (xyao: 
rev a8070259f8384021bd6196e7343f1cc23de89b1c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/VolumeScanner.java


> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch, HDFS-9210.01.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not 
> well formatted, as shown below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

