[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds for codec calculation

2015-05-28 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562434#comment-14562434
 ] 

Walter Su commented on HDFS-8481:
-

*A*
decodeAndFillBuffer(..) does several jobs at once, which makes it hard to read:
1. copy {{alignedStripe}} to {{decodeInputs}}
2. decode {{decodeInputs}} to {{outputs}}
3. copy {{outputs}} to {{buf}}

To me, {{decodeInputs}} should be prepared before {{decodeAndFillBuffer}} is 
called. How about letting the caller do the 1st job, or making the 1st job an 
independent function the caller can invoke?
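A rough sketch of that split, with illustrative names and simplified types (not 
the HDFS-7285 code, just the shape of the refactoring):
{code}
// Hypothetical sketch: step 1 becomes a function the caller owns, while
// decodeAndFillBuffer keeps only steps 2 and 3.
final class StripedDecodeSketch {

  /** Step 1: lay the fetched stripe chunks out as decoder inputs. */
  static byte[][] prepareDecodeInputs(byte[][] fetchedChunks, int cellSize) {
    byte[][] decodeInputs = new byte[fetchedChunks.length][];
    for (int i = 0; i < fetchedChunks.length; i++) {
      // missing chunks stay null so the decoder knows which indices are erased
      decodeInputs[i] = fetchedChunks[i] == null
          ? null : java.util.Arrays.copyOf(fetchedChunks[i], cellSize);
    }
    return decodeInputs;
  }

  /** Steps 2 and 3: decode, then copy the recovered bytes into the user buffer. */
  static void decodeAndFillBuffer(byte[][] decodeInputs, int[] erasedIndices,
      byte[] buf, int offset, SimpleDecoder decoder) {
    byte[][] outputs = decoder.decode(decodeInputs, erasedIndices); // step 2
    for (byte[] out : outputs) {                                    // step 3
      System.arraycopy(out, 0, buf, offset, out.length);
      offset += out.length;
    }
  }

  /** Stand-in for the raw erasure decoder used by the real code. */
  interface SimpleDecoder {
    byte[][] decode(byte[][] inputs, int[] erasedIndices);
  }
}
{code}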
*B*
buf[] is given by the user, and the user will reuse it.
We should likewise reuse decodeInputs[][] internally. Memory may be cheap on the 
client, but it is precious on the DataNode. We can improve this in another jira.


> Erasure coding: remove workarounds for codec calculation
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-28 Thread tongshiquan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tongshiquan updated HDFS-8476:
--
Attachment: screenshot-1.png

> quota can't limit the file which put before setting the storage policy
> --
>
> Key: HDFS-8476
> URL: https://issues.apache.org/jira/browse/HDFS-8476
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: tongshiquan
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: QBST
> Attachments: screenshot-1.png
>
>
> test steps:
> 1. hdfs dfs -mkdir /HOT
> 2. hdfs dfs -put 1G.txt /HOT/file1
> 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
> 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
> 5. hdfs dfs -put 1G.txt /HOT/file2
> 6. hdfs dfs -put 1G.txt /HOT/file3
> 7. hdfs dfs -count -q -h -v -t DISK /HOT
> In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
> reached the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 replicas), 
> but here it succeeds, and in step 7 the count shows the remaining quota as -3GB.
> FYI, if the order of step 3 and step 4 is swapped, everything works as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8476) quota can't limit the file which put before setting the storage policy

2015-05-28 Thread tongshiquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562441#comment-14562441
 ] 

tongshiquan commented on HDFS-8476:
---

Xiaoyu Yao, my cluster has 3 nodes (2 NN and 3 DN) in HA mode. Each file has 3 
replicas; maybe that's one of the reasons.

I have added a screenshot.

> quota can't limit the file which put before setting the storage policy
> --
>
> Key: HDFS-8476
> URL: https://issues.apache.org/jira/browse/HDFS-8476
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: tongshiquan
>Assignee: kanaka kumar avvaru
>Priority: Minor
>  Labels: QBST
> Attachments: screenshot-1.png
>
>
> test steps:
> 1. hdfs dfs -mkdir /HOT
> 2. hdfs dfs -put 1G.txt /HOT/file1
> 3. hdfs dfsadmin -setSpaceQuota 6442450944 -storageType DISK /HOT
> 4. hdfs storagepolicies -setStoragePolicy -path /HOT -policy HOT
> 5. hdfs dfs -put 1G.txt /HOT/file2
> 6. hdfs dfs -put 1G.txt /HOT/file3
> 7. hdfs dfs -count -q -h -v -t DISK /HOT
> In step 6 the put should fail, because /HOT/file1 and /HOT/file2 have already 
> reached the directory /HOT space quota of 6GB (1G*3 replicas + 1G*3 replicas), 
> but here it succeeds, and in step 7 the count shows the remaining quota as -3GB.
> FYI, if the order of step 3 and step 4 is swapped, everything works as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7582) Enforce maximum number of ACL entries separately per access and default.

2015-05-28 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562468#comment-14562468
 ] 

Vinayakumar B commented on HDFS-7582:
-

Failures are not related.

> Enforce maximum number of ACL entries separately per access and default.
> 
>
> Key: HDFS-7582
> URL: https://issues.apache.org/jira/browse/HDFS-7582
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-7582-001.patch, HDFS-7582-01.patch
>
>
> Current ACL limits apply only to the total number of entries.
> But there can be a situation where the number of default entries on a directory 
> is more than half of the maximum, i.e. > 16.
> In that case, only files can be created under this parent directory, since they 
> just inherit ACLs from the parent's default entries.
> But when sub-directories are created, the total number of entries exceeds the 
> maximum allowed, because a sub-directory copies both the inherited access ACLs 
> and the default entries.
> Since there is currently no check while copying ACLs from the default entries, 
> directory creation succeeds, but any later modification of the same ACL (even 
> changing the permission on a single entry) will fail.
> It would be better to enforce the maximum of 32 entries separately per access 
> and default.  This would be consistent with our observations testing ACLs on 
> other file systems, such as XFS and ext3.
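To illustrate the per-scope check proposed above, a minimal sketch using the 
public {{AclEntry}}/{{AclEntryScope}} types (the helper name and the way the 
limit constant is held are assumptions for this sketch, not the actual patch):
{code}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.hdfs.protocol.AclException;

class AclLimitSketch {
  // Assumed constant for this sketch; the real limit lives in the ACL code.
  private static final int MAX_ENTRIES_PER_SCOPE = 32;

  /** Enforce the 32-entry maximum separately for ACCESS and DEFAULT entries. */
  static void checkMaxEntries(List<AclEntry> entries) throws AclException {
    int access = 0;
    int dflt = 0;
    for (AclEntry e : entries) {
      if (e.getScope() == AclEntryScope.ACCESS) {
        access++;
      } else {
        dflt++;
      }
    }
    if (access > MAX_ENTRIES_PER_SCOPE || dflt > MAX_ENTRIES_PER_SCOPE) {
      throw new AclException("Invalid ACL: " + access + " access and " + dflt
          + " default entries exceed the maximum of " + MAX_ENTRIES_PER_SCOPE
          + " entries per scope.");
    }
  }
}
{code}
With such a check, a sub-directory that copies, say, 17 inherited access entries 
plus 17 default entries stays valid, whereas a combined-count check would reject 
any later modification of that ACL.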



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-28 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562487#comment-14562487
 ] 

Walter Su commented on HDFS-7621:
-

My changes to {{Dispatcher}} are small. The patch looks big because I rename 
{{PendingMove.block}} to {{reportedBlock}} and update javadoc.
{{GlobalBlockMap.putIfAbsent(..)}} is almost the same as the original 
{{get(..)}}; the logic doesn't change.
{{getBlockList()}} changes only because it needs to calculate 
{{bytesReceived}} correctly; the logic doesn't change.
{{DBlockStriped}} is incremental.

{{PendingMove}} handles the reported block just as it did before the EC branch 
came out; most changes are renaming and javadoc.
The only logic change is here:
{code}
@@ -224,7 +226,11 @@ private boolean markMovedIfGoodBlock(DBlock block, StorageType targetStorageType
     synchronized (block) {
       synchronized (movedBlocks) {
         if (isGoodBlockCandidate(source, target, targetStorageType, block)) {
-          this.block = block;
+          if (block instanceof DBlockStriped) {
+            reportedBlock = ((DBlockStriped) block).getInnerBlock(source);
+          } else {
+            reportedBlock = block;
+          }
           if (chooseProxySource()) {
             movedBlocks.put(block);
             if (LOG.isDebugEnabled()) {
{code}
I'm sure it's really small. 
Changes like {{getBlockList()}} and {{markMovedIfGoodBlock}} need to be done 
anyway if we use {{GlobalBlockGroupMap}}, so I stand by the 006 patch.

> Erasure Coding: update the Balancer/Mover data migration logic
> --
>
> Key: HDFS-7621
> URL: https://issues.apache.org/jira/browse/HDFS-7621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Walter Su
>  Labels: HDFS-7285
> Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch, 
> HDFS-7621.003.patch, HDFS-7621.004.patch, HDFS-7621.005.patch, 
> HDFS-7621.006.patch
>
>
> Currently the Balancer/Mover only considers the distribution of replicas of 
> the same block during data migration: the migration cannot decrease the 
> number of racks. With EC the Balancer and Mover should also take into account 
> the distribution of blocks belonging to the same block group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7609) startup used too much time to load edits

2015-05-28 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562543#comment-14562543
 ] 

Jing Zhao commented on HDFS-7609:
-

Thanks for sharing the thoughts, Ming! I totally agree with your analysis. But 
for now I still feel that moving the standby check before the retry cache lookup 
may be a cleaner way to go: this way we do not need to expose the mapping 
between operations and StandbyException in the NameNodeRpcServer code. The 
two-standby-NameNode scenario can in most cases still be handled by client-side 
retry/failover. A simplified sketch of the suggested ordering is below.
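A self-contained sketch of that ordering (class and method names here are 
illustrative; the real code lives in FSNamesystem/NameNodeRpcServer):
{code}
import java.io.IOException;

// Hypothetical model: check for standby state before touching the retry cache.
class WriteOpOrderingSketch {
  interface RetryCacheLookup { boolean isDuplicateSuccess(); }

  private final boolean isStandby;
  private final RetryCacheLookup retryCache;

  WriteOpOrderingSketch(boolean isStandby, RetryCacheLookup retryCache) {
    this.isStandby = isStandby;
    this.retryCache = retryCache;
  }

  void writeOp() throws IOException {
    // 1. Standby check first: a standby NameNode rejects the call immediately,
    //    so the client fails over to the active NN without this node ever
    //    recording the operation in its retry cache.
    if (isStandby) {
      throw new IOException(
          "Operation category WRITE is not supported in state standby");
    }
    // 2. Only an active NameNode consults the retry cache.
    if (retryCache.isDuplicateSuccess()) {
      return; // duplicate retry of an already-applied operation
    }
    // ... perform the actual namespace update ...
  }
}
{code}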

> startup used too much time to load edits
> 
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Carrey Zhan
>Assignee: Ming Ma
>  Labels: BB2015-05-RFC
> Attachments: HDFS-7609-2.patch, 
> HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, 
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same 
> time under very high load, leaving behind about 100 million transactions in 
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be 
> needed to finish, and it was loading fsedits most of the time. I also tried to 
> restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness is caused by the 
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart 
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery 
> process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds for codec calculation

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562551#comment-14562551
 ] 

Hadoop QA commented on HDFS-8481:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 49s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 12s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m  9s | Tests failed in hadoop-hdfs. |
| | | 214m 52s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time.  
Unsynchronized access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735795/HDFS-8481-HDFS-7285.01.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11147/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11147/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11147/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11147/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11147/console |


This message was automatically generated.

> Erasure coding: remove workarounds for codec calculation
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8270) create() always retried with hardcoded timeout when file already exists

2015-05-28 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8270:
-
Attachment: HDFS-8270.1.patch

Attached an initial patch. 
Please review and provide feedback.

> create() always retried with hardcoded timeout when file already exists
> ---
>
> Key: HDFS-8270
> URL: https://issues.apache.org/jira/browse/HDFS-8270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Andrey Stepachev
>Assignee: J.Andreina
> Attachments: HDFS-8270.1.patch
>
>
> In HBase we stumbled on unexpected behaviour which could break things.
> HDFS-6478 fixed the wrong exception translation, but that apparently led to 
> unexpected behaviour: clients trying to create a file without overwrite=true 
> are forced to retry for a hardcoded amount of time (60 seconds).
> That can break or slow down systems that use the filesystem for locks (as 
> hbase fsck did, and we got it broken, HBASE-13574).
> We should make this behaviour configurable: does the client really need to 
> wait for the lease timeout to be sure that the file doesn't exist, or should 
> it be enough to fail fast?
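For context, a minimal sketch of the lock-file pattern this affects (hypothetical 
path and owner string; not HBase's actual code). With a configurable behaviour 
the caller could take the fail-fast branch instead of the hardcoded 60-second retry:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LockFileSketch {
  /** Try to take a lock by creating a file with overwrite=false. */
  public static boolean tryLock(FileSystem fs, Path lock) throws java.io.IOException {
    try (FSDataOutputStream out = fs.create(lock, false /* overwrite */)) {
      out.writeUTF("owner");   // placeholder lock-owner payload
      return true;             // we created the lock file
    } catch (FileAlreadyExistsException e) {
      return false;            // desired fail-fast path: someone else holds the lock
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    System.out.println(tryLock(fs, new Path("/locks/my-job.lock")));
  }
}
{code}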



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8270) create() always retried with hardcoded timeout when file already exists

2015-05-28 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-8270:
-
Status: Patch Available  (was: Open)

> create() always retried with hardcoded timeout when file already exists
> ---
>
> Key: HDFS-8270
> URL: https://issues.apache.org/jira/browse/HDFS-8270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Andrey Stepachev
>Assignee: J.Andreina
> Attachments: HDFS-8270.1.patch
>
>
> In HBase we stumbled on unexpected behaviour which could break things.
> HDFS-6478 fixed the wrong exception translation, but that apparently led to 
> unexpected behaviour: clients trying to create a file without overwrite=true 
> are forced to retry for a hardcoded amount of time (60 seconds).
> That can break or slow down systems that use the filesystem for locks (as 
> hbase fsck did, and we got it broken, HBASE-13574).
> We should make this behaviour configurable: does the client really need to 
> wait for the lease timeout to be sure that the file doesn't exist, or should 
> it be enough to fail fast?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8471) Implement read block over HTTP/2

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562561#comment-14562561
 ] 

Hadoop QA commented on HDFS-8471:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 55s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  4s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 48s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 54s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 138m 57s | Tests failed in hadoop-hdfs. |
| | | 183m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
| Timed out tests | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735803/HDFS-8471.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 50eeea1 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11148/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11148/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11148/console |


This message was automatically generated.

> Implement read block over HTTP/2
> 
>
> Key: HDFS-8471
> URL: https://issues.apache.org/jira/browse/HDFS-8471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HDFS-8471.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8336) Expose some administrative erasure coding operations to HdfsAdmin

2015-05-28 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-8336:
--
Attachment: HDFS-8336-001.patch

Attached a simple patch to expose the createErasureCodingZone and 
getErasureCodingZone APIs in HdfsAdmin for administration purposes. A sketch of 
the delegation pattern follows below.
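A minimal sketch of the delegation pattern (method names come from the comment 
above; parameter and return types are placeholders, not the actual HDFS-7285 
signatures; in the real patch HdfsAdmin forwards to its wrapped 
DistributedFileSystem):
{code}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

// Placeholder interface standing in for the EC calls on DistributedFileSystem.
interface EcAdminOps {
  void createErasureCodingZone(Path path, String schemaName) throws IOException;
  String getErasureCodingZone(Path path) throws IOException;
}

// HdfsAdmin-style wrapper: administrative callers get a thin pass-through.
final class HdfsAdminEcSketch implements EcAdminOps {
  private final EcAdminOps dfs;   // stands in for DistributedFileSystem

  HdfsAdminEcSketch(EcAdminOps dfs) {
    this.dfs = dfs;
  }

  @Override
  public void createErasureCodingZone(Path path, String schemaName)
      throws IOException {
    dfs.createErasureCodingZone(path, schemaName);
  }

  @Override
  public String getErasureCodingZone(Path path) throws IOException {
    return dfs.getErasureCodingZone(path);
  }
}
{code}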

> Expose some administrative erasure coding operations to HdfsAdmin
> -
>
> Key: HDFS-8336
> URL: https://issues.apache.org/jira/browse/HDFS-8336
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Minor
> Attachments: HDFS-8336-001.patch
>
>
> We have HdfsAdmin.java for exposing administrative functions, so it would be 
> good if we could expose the EC-related administrative functions there as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562669#comment-14562669
 ] 

Hudson commented on HDFS-8431:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/211/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562670#comment-14562670
 ] 

Hudson commented on HDFS-8482:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/211/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.ja

[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562671#comment-14562671
 ] 

Hudson commented on HDFS-8135:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/211/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There is 
> no uses of this class in the current code base and it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562673#comment-14562673
 ] 

Hudson commented on HDFS-5033:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #211 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/211/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3716) Purger should remove stale fsimage ckpt files

2015-05-28 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-3716:
-
Attachment: HDFS-3716.1.patch

Attached an initial patch to remove stale fsimage ckpt files.
Please review.

> Purger should remove stale fsimage ckpt files
> -
>
> Key: HDFS-3716
> URL: https://issues.apache.org/jira/browse/HDFS-3716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: suja s
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-3716.1.patch
>
>
> The NN got killed while checkpointing was in progress, before the ckpt file 
> was renamed to the actual fsimage file.
> Since the checkpointing process did not complete, on the next NN startup it 
> will load the previous fsimage and apply the rest of the edits.
> Functionally there's no harm, but this ckpt file is retained as is.
> The purger will not remove the ckpt file, though other old fsimage files are 
> taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-3716) Purger should remove stale fsimage ckpt files

2015-05-28 Thread J.Andreina (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.Andreina updated HDFS-3716:
-
Status: Patch Available  (was: Open)

> Purger should remove stale fsimage ckpt files
> -
>
> Key: HDFS-3716
> URL: https://issues.apache.org/jira/browse/HDFS-3716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: suja s
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-3716.1.patch
>
>
> The NN got killed while checkpointing was in progress, before the ckpt file 
> was renamed to the actual fsimage file.
> Since the checkpointing process did not complete, on the next NN startup it 
> will load the previous fsimage and apply the rest of the edits.
> Functionally there's no harm, but this ckpt file is retained as is.
> The purger will not remove the ckpt file, though other old fsimage files are 
> taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562753#comment-14562753
 ] 

Hudson commented on HDFS-8135:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #941 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/941/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There is 
> no uses of this class in the current code base and it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562752#comment-14562752
 ] 

Hudson commented on HDFS-8482:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #941 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/941/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/

[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562756#comment-14562756
 ] 

Hudson commented on HDFS-5033:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #941 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/941/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562751#comment-14562751
 ] 

Hudson commented on HDFS-8431:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #941 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/941/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8270) create() always retried with hardcoded timeout when file already exists

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562863#comment-14562863
 ] 

Hadoop QA commented on HDFS-8270:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  4 
new checkstyle issues (total was 145, now 146). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 38s | Tests failed in hadoop-hdfs. |
| | | 203m 29s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestIsMethodSupported |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735834/HDFS-8270.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 50eeea1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11150/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11150/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11150/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11150/console |


This message was automatically generated.

> create() always retried with hardcoded timeout when file already exists
> ---
>
> Key: HDFS-8270
> URL: https://issues.apache.org/jira/browse/HDFS-8270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Andrey Stepachev
>Assignee: J.Andreina
> Attachments: HDFS-8270.1.patch
>
>
> In HBase we stumbled on unexpected behaviour which could break things.
> HDFS-6478 fixed the wrong exception translation, but that apparently led to 
> unexpected behaviour: clients trying to create a file without overwrite=true 
> are forced to retry for a hardcoded amount of time (60 seconds).
> That can break or slow down systems that use the filesystem for locks (as 
> hbase fsck did, and we got it broken, HBASE-13574).
> We should make this behaviour configurable: does the client really need to 
> wait for the lease timeout to be sure that the file doesn't exist, or should 
> it be enough to fail fast?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562957#comment-14562957
 ] 

Hudson commented on HDFS-8431:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2139 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562960#comment-14562960
 ] 

Hudson commented on HDFS-8482:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2139 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-h

[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562962#comment-14562962
 ] 

Hudson commented on HDFS-8135:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2139 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There is 
> no uses of this class in the current code base and it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562969#comment-14562969
 ] 

Hudson commented on HDFS-5033:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2139 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2139/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562974#comment-14562974
 ] 

Hudson commented on HDFS-8431:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #199 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562977#comment-14562977
 ] 

Hudson commented on HDFS-8482:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #199 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/j

[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562980#comment-14562980
 ] 

Hudson commented on HDFS-8135:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #199 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There are 
> no uses of this class in the current code base, so it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562992#comment-14562992
 ] 

Hudson commented on HDFS-5033:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #199 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/199/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3716) Purger should remove stale fsimage ckpt files

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563043#comment-14563043
 ] 

Hadoop QA commented on HDFS-3716:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 17s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m 33s | Tests passed in hadoop-hdfs. 
|
| | | 205m 21s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735851/HDFS-3716.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7e509f5 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11151/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11151/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11151/console |


This message was automatically generated.

> Purger should remove stale fsimage ckpt files
> -
>
> Key: HDFS-3716
> URL: https://issues.apache.org/jira/browse/HDFS-3716
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: suja s
>Assignee: J.Andreina
>Priority: Minor
> Attachments: HDFS-3716.1.patch
>
>
> NN got killed while checkpointing was in progress, before renaming the ckpt file 
> to the actual file.
> Since the checkpointing process was not completed, on the next NN startup it will 
> load the previous fsimage and apply the rest of the edits.
> Functionally there's no harm, but this ckpt file will be retained as is.
> Purger will not remove the ckpt file, though other old fsimage files will be 
> taken care of.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563077#comment-14563077
 ] 

Hudson commented on HDFS-8482:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/209/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTruncateQuotaUpdate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/

[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563080#comment-14563080
 ] 

Hudson commented on HDFS-5033:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/209/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563076#comment-14563076
 ] 

Hudson commented on HDFS-8431:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/209/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563078#comment-14563078
 ] 

Hudson commented on HDFS-8135:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/209/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There are 
> no uses of this class in the current code base, so it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8328) Follow-on to update decode for DataNode striped blocks reconstruction

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563090#comment-14563090
 ] 

Kai Zheng commented on HDFS-8328:
-

Thanks Yi for working on this. The patch logic looks good; just some 
comments for better readability:
1. {{minRequiredSources}} looks a little confusing: from a coder's point of 
view, with a 6+3 schema one would expect minRequiredSources to be 6, i.e. 
dataBlkNum, at first glance. Better to rename it or add a comment for it.
{code}
+  final int cellsNum = (int)((blockGroup.getNumBytes() - 1) / cellSize + 
1);
+  minRequiredSources = Math.min(cellsNum, dataBlkNum);
{code}
2. How about renaming {{nullInputBuffers}} to nullCellBuffers, zeroCellBuffers 
or paddingCellBuffers? As you know, in the {{inputs}} array for the decode 
call, null entries indicate cells that are erased or not to be read, so the 
current name could cause misunderstanding. Better to add a comment for it as well.
{code}
+  if (minRequiredSources < dataBlkNum) {
+nullInputBuffers = 
+new ByteBuffer[dataBlkNum - minRequiredSources];
+nullInputIndices = new short[dataBlkNum - minRequiredSources];
+  }
{code}
3. Any better name for {{success}}? nsuccess => numSuccess or ...
{code}
+int[] success = new int[minRequiredSources];
 int nsuccess = 0;
{code}
4. I guess the following utilities can be moved elsewhere and shared with the 
client side. {{targetsStatus}} could have a better name.
{code}
+private int[] getErasedIndices(boolean[] targetsStatus) {
+  int[] result = new int[targets.length];
+  int m = 0;
+  for (int i = 0; i < targets.length; i++) {
+if (targetsStatus[i]) {
+  result[m++] = covertIndex4Decode(targetIndices[i]);
+}
+  }
+  return Arrays.copyOf(result, m);
+}
+
+private int covertIndex4Decode(int index) {
+  return index < dataBlkNum ? index + parityBlkNum : index - dataBlkNum;
+}
+
{code}
5. I'm wondering if the following code can be better organized, e.g. split 
into two functions: newStripedReader and newBlockReader.
{code}
+private StripedReader addStripedReader(int i, long offset) {
+  StripedReader reader = new StripedReader(liveIndices[i]);
+  stripedReaders.add(reader);
+
+  BlockReader blockReader = newBlockReader(
+  getBlock(blockGroup, liveIndices[i]), offset, sources[i]);
+  if (blockReader != null) {
+initChecksumAndBufferSizeIfNeeded(blockReader);
+reader.blockReader = blockReader;
+  }
+  reader.buffer = ByteBuffer.allocate(bufferSize);
+  return reader;
+}
+
{code}
6. Is it easy to centralize all the input/output buffer allocation in one 
function? That would make it easier to enhance later, given that Java coders 
prefer on-heap buffers while native coders prefer direct buffers.
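To make point 6 concrete, here is a minimal sketch of the kind of helper I have 
in mind (the class and parameter names are only illustrative, not from the patch):
{code}
/**
 * Illustrative only: a single place that decides between on-heap and direct
 * buffers, so Java coders and native coders can share one call site.
 */
final class DecodeBufferAllocator {
  private final boolean preferDirect;

  DecodeBufferAllocator(boolean preferDirect) {
    this.preferDirect = preferDirect;
  }

  /** Allocate all input/output buffers of the same size in one call. */
  java.nio.ByteBuffer[] allocate(int count, int bufferSize) {
    java.nio.ByteBuffer[] buffers = new java.nio.ByteBuffer[count];
    for (int i = 0; i < count; i++) {
      buffers[i] = preferDirect
          ? java.nio.ByteBuffer.allocateDirect(bufferSize)
          : java.nio.ByteBuffer.allocate(bufferSize);
    }
    return buffers;
  }
}
{code}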

> Follow-on to update decode for DataNode striped blocks reconstruction
> -
>
> Key: HDFS-8328
> URL: https://issues.apache.org/jira/browse/HDFS-8328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-8328-HDFS-7285.001.patch
>
>
> Current the decode for DataNode striped blocks reconstruction is a 
> workaround, we need to update it after the decode fix in HADOOP-11847.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8431) hdfs crypto class not found in Windows

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563143#comment-14563143
 ] 

Hudson commented on HDFS-8431:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2157 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2157/])
HDFS-8431. hdfs crypto class not found in Windows. Contributed by Anu Engineer. 
(cnauroth: rev 50eeea13000f0c82e0567410f0f8b611248f8c1b)
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> hdfs crypto class not found in Windows
> --
>
> Key: HDFS-8431
> URL: https://issues.apache.org/jira/browse/HDFS-8431
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.6.0
> Environment: Windows only
>Reporter: Sumana Sathish
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: encryption, scripts, windows
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2015-05-18 at 6.27.11 PM.png, 
> hdfs-8431.001.patch, hdfs-8431.002.patch
>
>
> Attached screenshot shows that hdfs could not find class 'crypto' for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8482) Rename BlockInfoContiguous to BlockInfo

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563144#comment-14563144
 ] 

Hudson commented on HDFS-8482:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2157 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2157/])
HDFS-8482. Rename BlockInfoContiguous to BlockInfo. Contributed by Zhe Zhang. 
(wang: rev 4928f5473394981829e5ffd4b16ea0801baf5c45)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CacheReplicationMonitor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguous.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCommitBlockSynchronization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlock.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoo

[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563148#comment-14563148
 ] 

Hudson commented on HDFS-5033:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2157 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2157/])
HDFS-5033. Bad error message for fs -put/copyFromLocal if user doesn't have 
permissions to read the source (Darrell Taylor via aw) (aw: rev 
bf500d979858b084f0fe5c34a85c271a728e416b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Karthik Kambatla
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HDFS-5033.001.patch
>
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8135) Remove the deprecated FSConstants class

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563145#comment-14563145
 ] 

Hudson commented on HDFS-8135:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2157 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2157/])
Update CHANGES.txt for HDFS-8135. (wheat9: rev 
c46d4bafe1e34b77be3f218b4901f66db4db97f4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Remove the deprecated FSConstants class
> ---
>
> Key: HDFS-8135
> URL: https://issues.apache.org/jira/browse/HDFS-8135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-8135-041315.patch
>
>
> The {{FSConstants}} class has been marked as deprecated since 0.23. There are 
> no uses of this class in the current code base, so it can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8450) Erasure Coding: Consolidate erasure coding zone related implementation into a single class

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563164#comment-14563164
 ] 

Kai Zheng commented on HDFS-8450:
-

Nice work Rakesh! The patch looks good. Some comments and suggestions:
1. In {{createErasureCodingZone}} and related helper functions, would you check 
to ensure only the minimum required calls are made to checkSuperuserPrivilege, 
checkOperation, writeLock, etc.?
2. Maybe {{ErasureCodingZoneManager#createErasureCodingZone}} could return a 
list directly? (See the rough sketch after these comments.)
{code}
+  private static List createErasureCodingZone(final ECSchema schema,
+  int cellSize, String src, FSDirectory fsd) throws IOException {
+fsd.writeLock();
+List xAttrs = Lists.newArrayListWithCapacity(1);
+try {
+  final XAttr ecXAttr = fsd.ecZoneManager.createErasureCodingZone(src,
+  schema, cellSize);
+  xAttrs.add(ecXAttr);
+} finally {
+  fsd.writeUnlock();
+}
+return xAttrs;
+  }
{code}
3. Gets the ECZone info for path. Gets => Get
4. I guess this would be a good chance to clean up the inconsistent function 
names here, following either {{createErasureCodingZone}} or {{getECZoneInfo}} 
as the pattern.
5. In places like the one below, maybe we could avoid the {{isInECZone}} call, 
because if {{getECSchema}} returns non-null the path is in an EC zone, and 
otherwise it is not.
{code}
+  FSDirErasureCodingZoneOp.getECSchema(fsDir, iip),
+  FSDirErasureCodingZoneOp.isInECZone(fsDir, iip));
{code}
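To make 2 and 5 concrete, a rough sketch of what I mean, reusing the names from 
the quoted snippets. It is not meant to compile standalone, and it assumes the 
zone manager method is changed to return the list itself:
{code}
// Sketch for comment 2: if ErasureCodingZoneManager#createErasureCodingZone
// returned the xattr list directly, the wrapper would shrink to roughly this.
private static List<XAttr> createErasureCodingZone(final ECSchema schema,
    int cellSize, String src, FSDirectory fsd) throws IOException {
  fsd.writeLock();
  try {
    return fsd.ecZoneManager.createErasureCodingZone(src, schema, cellSize);
  } finally {
    fsd.writeUnlock();
  }
}

// Sketch for comment 5: derive the in-zone flag from the schema instead of a
// separate isInECZone() lookup.
ECSchema schema = FSDirErasureCodingZoneOp.getECSchema(fsDir, iip);
boolean inECZone = (schema != null);
{code}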

> Erasure Coding: Consolidate erasure coding zone related implementation into a 
> single class
> --
>
> Key: HDFS-8450
> URL: https://issues.apache.org/jira/browse/HDFS-8450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8450-HDFS-7285-00.patch, 
> HDFS-8450-HDFS-7285-01.patch, HDFS-8450-HDFS-7285-02.patch
>
>
> The idea is to follow the same pattern suggested by HDFS-7416. It is good  to 
> consolidate all the erasure coding zone related implementations of 
> {{FSNamesystem}}. Here, proposing {{FSDirErasureCodingZoneOp}} class to have 
> functions to perform related erasure coding zone operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8474:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

At this point, consensus seems to be that this is an Impala problem.

Closing this as won't fix.

> Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
> -
>
> Key: HDFS-8474
> URL: https://issues.apache.org/jira/browse/HDFS-8474
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, libhdfs
>Affects Versions: 2.7.0
> Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: HDFS-8474.01.patch
>
>
> Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
> This is because getJNIEnv is not visible in the so file.
> Compilation fails with below error message :
> ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
> `impala::HBaseTableScanner::Init()':
> /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
> undefined reference to `getJNIEnv'
> ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
>  more undefined references to `getJNIEnv' follow
> collect2: ld returned 1 exit status
> make[3]: *** [be/build/release/service/impalad] Error 1
> make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
> make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
> make: *** [impalad] Error 2
> Compiler Impala Failed, exit
> libhdfs.so.0.0.0 returns nothing when the following command is run:
> "nm -D libhdfs.so.0.0.0  | grep getJNIEnv"
> The change in HDFS-7879 breaks the backward compatibility of libhdfs, although 
> it can be argued that Impala shouldn't be using the above API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563179#comment-14563179
 ] 

Kai Zheng commented on HDFS-8062:
-

Sorry Kai, I was not able to read the patch fully but will do so soon. I 
think the chunkSize/cellSize related change is a major concern. Would you 
look at the change in HDFS-8375 and see how much related work needs to be done 
here to follow on? If it is non-trivial, maybe do it separately. A rebase of 
the patch would make that clear. Thanks.

> Remove hard-coded values in favor of EC schema
> --
>
> Key: HDFS-8062
> URL: https://issues.apache.org/jira/browse/HDFS-8062
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Sasaki
> Attachments: HDFS-8062-HDFS-7285-07.patch, 
> HDFS-8062-HDFS-7285-08.patch, HDFS-8062.1.patch, HDFS-8062.2.patch, 
> HDFS-8062.3.patch, HDFS-8062.4.patch, HDFS-8062.5.patch, HDFS-8062.6.patch
>
>
> Related issues about EC schema in NameNode side:
> HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
> HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
> persisted ones in image and predefined ones in XML.
> This is to revisit all the places in NameNode that uses hard-coded values in 
> favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8481:

Summary: Erasure coding: remove workarounds in client side stripped blocks 
recovering  (was: Erasure coding: remove workarounds for codec calculation)

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8491) DN shutdown race conditions with open xceivers

2015-05-28 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-8491:
-

 Summary: DN shutdown race conditions with open xceivers
 Key: HDFS-8491
 URL: https://issues.apache.org/jira/browse/HDFS-8491
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Daryn Sharp


DN shutdowns, at least for restarts, have many race conditions.  Shutdown is very 
noisy with exceptions.  The DN notifies writers of the restart, waits 1s, and 
then interrupts the xceiver threads but does not join them.  The ipc server is 
stopped and then the bpos services are stopped.

Xceivers then encounter NPEs in closeBlock because the block no longer exists 
in the volume map when transient storage is checked.  Just before that, the DN 
notifies the NN that the block was received.  This does not appear to always be 
true; rather, the thread was interrupted.  The xceivers race with the bpos 
shutdown to send the block-received notification, and luckily appear to lose.
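A minimal, generic sketch of the interrupt-then-join ordering argued for above 
(plain Java, not the actual DataXceiverServer code):
{code}
// Illustrative only: stop the worker threads before tearing down the services
// they still depend on, by joining after the interrupt instead of racing the
// rest of shutdown.
void stopXceivers(java.util.List<Thread> xceivers) throws InterruptedException {
  for (Thread t : xceivers) {
    t.interrupt();        // ask each xceiver to stop
  }
  for (Thread t : xceivers) {
    t.join(5000);         // wait (bounded) for it to actually exit
  }
  // Only now stop the ipc server and the bpos services the xceivers use.
}
{code}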



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563194#comment-14563194
 ] 

Kai Zheng commented on HDFS-8481:
-

Good to have this for the client side and HDFS-8328 for the datanode side, as 
both are non-trivial. For the common code, I guess whichever one is committed 
later can be rebased and refactored accordingly.

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8486) DN startup may cause severe data loss

2015-05-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563196#comment-14563196
 ] 

Daryn Sharp commented on HDFS-8486:
---

What you'll notice is a spike in corrupt blocks that tapers down.  What's going 
on is the DN's block report included all the blocks it deleted.  Over the next 
6 hours, the slice scanner slowly detects missing blocks and reports them as 
corrupt.  After 6 hours, the directory scanner detects and mass removes all the 
missing blocks.

In the 6 hour window, the NN does not know the block is under-replicated and it 
continues to send clients to the DN.  Will file a separate bug for the DN not 
informing the NN when it's missing a block it thought it had.

> DN startup may cause severe data loss
> -
>
> Key: HDFS-8486
> URL: https://issues.apache.org/jira/browse/HDFS-8486
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 0.23.1, 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>
> A race condition between block pool initialization and the directory scanner 
> may cause a mass deletion of blocks in multiple storages.
> If block pool initialization finds a block on disk that is already in the 
> replica map, it deletes one of the blocks based on size, GS, etc.  
> Unfortunately it _always_ deletes one of the blocks even if identical, thus 
> the replica map _must_ be empty when the pool is initialized.
> The directory scanner starts at a random time within its periodic interval 
> (default 6h).  If the scanner starts very early it races to populate the 
> replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8492) DN should notify NN when client requests a missing block

2015-05-28 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-8492:
-

 Summary: DN should notify NN when client requests a missing block
 Key: HDFS-8492
 URL: https://issues.apache.org/jira/browse/HDFS-8492
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Daryn Sharp


If the DN has a block in its volume map but not on-disk, it tells clients it's an 
invalid block id.  The NN is not informed of the missing block until either the 
bp slice scanner or the directory scanner detects the missing block.  The DN should 
remove the replica from the volume map and inform the NN.
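A minimal, self-contained sketch of the proposed behaviour (the types and method 
names below are illustrative stand-ins, not the real FsDataset API):
{code}
// Illustrative only: on a read request, if the replica is in the map but the
// block file is gone, drop it from the map and tell the NN instead of
// answering "invalid block id".
java.io.InputStream openBlock(long blockId,
    java.util.Map<Long, java.io.File> volumeMap,
    java.util.function.LongConsumer reportMissingToNN) throws java.io.IOException {
  java.io.File blockFile = volumeMap.get(blockId);
  if (blockFile == null) {
    throw new java.io.IOException("Invalid block id " + blockId);
  }
  if (!blockFile.exists()) {
    volumeMap.remove(blockId);         // keep the in-memory view consistent
    reportMissingToNN.accept(blockId); // let the NN re-replicate right away
    throw new java.io.IOException("Replica lost on disk: " + blockId);
  }
  return new java.io.FileInputStream(blockFile);
}
{code}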



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2015-05-28 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HDFS-6440:
--
Attachment: hdfs-6440-trunk-v6.patch

New version, hopefully fixing the findbugs/checkstyle issues and increasing the 
TestPipelinesFailover timeout to get it to pass.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, 
> hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8489) Subclass BlockInfo to represent contiguous blocks

2015-05-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8489 started by Zhe Zhang.
---
> Subclass BlockInfo to represent contiguous blocks
> -
>
> Key: HDFS-8489
> URL: https://issues.apache.org/jira/browse/HDFS-8489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> As the second step of the cleanup, we should make {{BlockInfo}} an abstract class 
> and merge the subclass {{BlockInfoContiguous}} from HDFS-7285 into trunk. The 
> patch should clearly separate where to use the abstract class versus the 
> subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8486) DN startup may cause severe data loss

2015-05-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-8486:
--
Attachment: HDFS-8486.patch

After multiple iterations, this is the simplest low-risk patch.  The crucial part 
is that the BlockPoolSlice realizes it has discovered an on-disk block that has 
the same path as the in-memory replica, in which case it updates the replica map 
with the one just found (a rough sketch of the idea appears at the end of this 
comment).

The other part is avoiding the race altogether.  The directory scan should not 
occur until after the block pools are initialized.  Although both should be 
able to "work" simultaneously, until the first initialization completes, the 
directory scanner warns that there is no block scanner for every new block it finds.

Note I found writing a unit test to be extremely difficult.  The BlockPoolSlice 
ctor has numerous side-effects.  I instead split out part of duplicate 
resolution into a static method (sigh, makes future mocking impossible).
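A rough, self-contained sketch of that duplicate-resolution idea (the real patch 
works on BlockPoolSlice and the replica map; the names below are simplified 
stand-ins):
{code}
// Illustrative only: when the "duplicate" found on disk is literally the same
// file the in-memory replica already points at, refresh the map entry rather
// than deleting one of the two copies.
static java.io.File resolveDuplicate(java.util.Map<Long, java.io.File> replicaMap,
    long blockId, java.io.File onDiskReplica) {
  java.io.File inMemory = replicaMap.get(blockId);
  if (inMemory != null && inMemory.getPath().equals(onDiskReplica.getPath())) {
    replicaMap.put(blockId, onDiskReplica);  // same block file, nothing to delete
    return onDiskReplica;
  }
  // Different files: fall back to the usual size/genstamp based resolution
  // (omitted here).
  return inMemory != null ? inMemory : onDiskReplica;
}
{code}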

> DN startup may cause severe data loss
> -
>
> Key: HDFS-8486
> URL: https://issues.apache.org/jira/browse/HDFS-8486
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 0.23.1, 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-8486.patch
>
>
> A race condition between block pool initialization and the directory scanner 
> may cause a mass deletion of blocks in multiple storages.
> If block pool initialization finds a block on disk that is already in the 
> replica map, it deletes one of the blocks based on size, GS, etc.  
> Unfortunately it _always_ deletes one of the blocks even if identical, thus 
> the replica map _must_ be empty when the pool is initialized.
> The directory scanner starts at a random time within its periodic interval 
> (default 6h).  If the scanner starts very early it races to populate the 
> replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8401) Memfs - a layered file system for in-memory storage in HDFS

2015-05-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563399#comment-14563399
 ] 

Colin Patrick McCabe commented on HDFS-8401:


bq. Allow using memory features without calling HDFS-specific APIs. This also 
isolates applications from evolving APIs. Applications currently use shims and 
reflection tricks to work with different versions of HDFS.

HDFS-4949 didn't require applications to call any HDFS-specific APIs.  The 
administrator simply set a list of files and directories to be cached.  When 
applications read those files or directories, they were retrieved from the 
cache.

We could do something similar here by specifying that we wanted opportunistic 
caching on a certain directory subtree.  For example we could set a 2Q eviction 
policy on a certain directory subtree and have the NameNode manage that.  
[~andrew.wang] and I discussed doing that for HDFS-4949, but we simply didn't 
have time.

bq. Once applications start using memfs someone could write a memfs layer over 
another HCFS e.g. Amazon S3.

That does raise the question of why this belongs in HDFS, though.  If we just 
want a generic FS caching layer in Hadoop, we could do that in hadoop-common.

> Memfs - a layered file system for in-memory storage in HDFS
> ---
>
> Key: HDFS-8401
> URL: https://issues.apache.org/jira/browse/HDFS-8401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose creating a layered filesystem that can provide in-memory storage 
> using existing features within HDFS. memfs will use lazy persist writes 
> introduced by HDFS-6581. For reads, memfs can use the Centralized Cache 
> Management feature introduced in HDFS-4949 to load hot data to memory.
> Paths in memfs and hdfs will correspond 1:1 so memfs will require no 
> additional metadata and it can be implemented entirely as a client-side 
> library.
> The advantage of a layered file system is that it requires little or no 
> changes to existing applications. e.g. Applications can use something like 
> {{memfs://}} instead of {{hdfs://}} for files targeted to memory storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8486) DN startup may cause severe data loss

2015-05-28 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-8486:
--
Status: Patch Available  (was: Open)

> DN startup may cause severe data loss
> -
>
> Key: HDFS-8486
> URL: https://issues.apache.org/jira/browse/HDFS-8486
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.0.0-alpha, 0.23.1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-8486.patch
>
>
> A race condition between block pool initialization and the directory scanner 
> may cause a mass deletion of blocks in multiple storages.
> If block pool initialization finds a block on disk that is already in the 
> replica map, it deletes one of the blocks based on size, GS, etc.  
> Unfortunately it _always_ deletes one of the blocks even if identical, thus 
> the replica map _must_ be empty when the pool is initialized.
> The directory scanner starts at a random time within its periodic interval 
> (default 6h).  If the scanner starts very early it races to populate the 
> replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8407) hdfsListDirectory must set errno to 0 on success

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8407:
---
Summary: hdfsListDirectory must set errno to 0 on success  (was: libhdfs 
hdfsListDirectory() API has different behavior than documentation)

> hdfsListDirectory must set errno to 0 on success
> 
>
> Key: HDFS-8407
> URL: https://issues.apache.org/jira/browse/HDFS-8407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Juan Yu
>Assignee: Masatake Iwasaki
> Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
> HDFS-8407.003.patch
>
>
> The documentation says it returns NULL on error, but it could also return 
> NULL when the directory is empty.
> /** 
>  * hdfsListDirectory - Get list of files/directories for a given
>  * directory-path. hdfsFreeFileInfo should be called to deallocate 
> memory. 
>  * @param fs The configured filesystem handle.
>  * @param path The path of the directory. 
>  * @param numEntries Set to the number of files/directories in path.
>  * @return Returns a dynamically-allocated array of hdfsFileInfo
>  * objects; NULL on error.
>  */
> {code}
> hdfsFileInfo *pathList = NULL; 
> ...
> //Figure out the number of entries in that directory
> jPathListSize = (*env)->GetArrayLength(env, jPathList);
> if (jPathListSize == 0) {
> ret = 0;
> goto done;
> }
> ...
> if (ret) {
> hdfsFreeFileInfo(pathList, jPathListSize);
> errno = ret;
> return NULL;
> }
> *numEntries = jPathListSize;
> return pathList;
> {code}
> Either change the implementation to match the doc, or fix the doc to match 
> the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-05-28 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8493:


 Summary: Consolidate truncate() related implementation in a single 
class
 Key: HDFS-8493
 URL: https://issues.apache.org/jira/browse/HDFS-8493
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai


This jira proposes to consolidate truncate() related methods into a single 
class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8407) hdfsListDirectory must set errno to 0 on success

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8407:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

committed to 2.8.  thanks, Masatake.

> hdfsListDirectory must set errno to 0 on success
> 
>
> Key: HDFS-8407
> URL: https://issues.apache.org/jira/browse/HDFS-8407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Juan Yu
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
> HDFS-8407.003.patch
>
>
> The documentation says it returns NULL on error, but it could also return 
> NULL when the directory is empty.
> /** 
>  * hdfsListDirectory - Get list of files/directories for a given
>  * directory-path. hdfsFreeFileInfo should be called to deallocate 
> memory. 
>  * @param fs The configured filesystem handle.
>  * @param path The path of the directory. 
>  * @param numEntries Set to the number of files/directories in path.
>  * @return Returns a dynamically-allocated array of hdfsFileInfo
>  * objects; NULL on error.
>  */
> {code}
> hdfsFileInfo *pathList = NULL; 
> ...
> //Figure out the number of entries in that directory
> jPathListSize = (*env)->GetArrayLength(env, jPathList);
> if (jPathListSize == 0) {
> ret = 0;
> goto done;
> }
> ...
> if (ret) {
> hdfsFreeFileInfo(pathList, jPathListSize);
> errno = ret;
> return NULL;
> }
> *numEntries = jPathListSize;
> return pathList;
> {code}
> Either change the implementation to match the doc, or fix the doc to match 
> the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-05-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563456#comment-14563456
 ] 

Colin Patrick McCabe commented on HDFS-7923:


I posted a new patch with some changes from the previous approach.  Since 
Datanodes can go away at any time after the NN gives them the green light, this 
patch adds the concept of leases for block reports.  Leases have a fixed time 
length... if the DN can't send its block report within that time, it loses the 
lease.  I also added a new fault injection framework to monitor what is going 
on in the BlockManager.  There was some milliseconds / seconds confusion in the 
existing initial block report delay code that I fixed (might want to split this 
off into a separate JIRA...)
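A minimal sketch of the fixed-length lease check, just to illustrate the idea 
(the class and field names are illustrative, not the patch's actual ones):
{code}
// Illustrative only: NN-side record of a block report lease with a fixed
// lifetime; the DN loses the lease if it does not report before expiry.
final class BlockReportLeaseSketch {
  private final long leaseId;
  private final long expiryMs;

  BlockReportLeaseSketch(long leaseId, long issuedMs, long lifetimeMs) {
    this.leaseId = leaseId;
    this.expiryMs = issuedMs + lifetimeMs;
  }

  /** The full block report is accepted only while the lease is still valid. */
  boolean isValid(long presentedLeaseId, long nowMs) {
    return presentedLeaseId == leaseId && nowMs < expiryMs;
  }
}
{code}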

> The DataNodes should rate-limit their full block reports by asking the NN on 
> heartbeat messages
> ---
>
> Key: HDFS-7923
> URL: https://issues.apache.org/jira/browse/HDFS-7923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
> HDFS-7923.002.patch, HDFS-7923.003.patch
>
>
> The DataNodes should rate-limit their full block reports.  They can do this 
> by first sending a heartbeat message to the NN with an optional boolean set 
> which requests permission to send a full block report.  If the NN responds 
> with another optional boolean set, the DN will send an FBR... if not, it will 
> wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8429) Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8429:
---
Summary: Avoid stuck threads if there is an error in DomainSocketWatcher 
that stops the thread  (was: Avoid stuck threads if there is a fatal error in 
DomainSocketWatcher)

> Avoid stuck threads if there is an error in DomainSocketWatcher that stops 
> the thread
> -
>
> Key: HDFS-8429
> URL: https://issues.apache.org/jira/browse/HDFS-8429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
> HDFS-8429-003.patch
>
>
> In our cluster, an application hung while doing a short-circuit read of a 
> local HDFS block. By looking into the log, we found that the DataNode's 
> DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
> Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The line 463 is following code snippet:
> {code}
>  try {
> for (int fd : fdSet.getAndClearReadableFds()) {
>   sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
> fd);
> }
> {code}
> getAndClearReadableFds is a native method which will malloc an int array. 
> Since our memory is very tight, it looks like the malloc failed and a NULL 
> pointer is returned.
> The bad thing is that other threads then blocked with stacks like this:
> {code}
> "DataXceiver for client 
> unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
> operation #1]" daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
> condition [0x7f09b9856000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007b0174808> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that users know that something went wrong 
> and can fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8429) Avoid stuck threads if there is a fatal error in DomainSocketWatcher

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8429:
---
Summary: Avoid stuck threads if there is a fatal error in 
DomainSocketWatcher  (was: The DomainSocketWatcher thread should not block 
other threads if it dies)

> Avoid stuck threads if there is a fatal error in DomainSocketWatcher
> 
>
> Key: HDFS-8429
> URL: https://issues.apache.org/jira/browse/HDFS-8429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
> HDFS-8429-003.patch
>
>
> In our cluster, an application hung while doing a short-circuit read of a 
> local HDFS block. By looking into the log, we found that the DataNode's 
> DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
> Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The line 463 is following code snippet:
> {code}
>  try {
> for (int fd : fdSet.getAndClearReadableFds()) {
>   sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
> fd);
> }
> {code}
> getAndClearReadableFds is a native method which will malloc an int array. 
> Since our memory is very tight, it looks like the malloc failed and a NULL 
> pointer is returned.
> The bad thing is that other threads then blocked with stacks like this:
> {code}
> "DataXceiver for client 
> unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
> operation #1]" daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
> condition [0x7f09b9856000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007b0174808> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that users know that something went wrong 
> and can fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8429) Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread

2015-05-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8429:
---
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

+1.  Committed to 2.8.  Thanks, zhouyingchao.

> Avoid stuck threads if there is an error in DomainSocketWatcher that stops 
> the thread
> -
>
> Key: HDFS-8429
> URL: https://issues.apache.org/jira/browse/HDFS-8429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Fix For: 2.8.0
>
> Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
> HDFS-8429-003.patch
>
>
> In our cluster, an application hung while doing a short-circuit read of a 
> local HDFS block. By looking into the log, we found that the DataNode's 
> DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
> Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The line 463 is following code snippet:
> {code}
>  try {
> for (int fd : fdSet.getAndClearReadableFds()) {
>   sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
> fd);
> }
> {code}
> getAndClearReadableFds is a native method which will malloc an int array. 
> Since our memory is very tight, it looks like the malloc failed and a NULL 
> pointer is returned.
> The bad thing is that other threads then blocked with stacks like this:
> {code}
> "DataXceiver for client 
> unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
> operation #1]" daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
> condition [0x7f09b9856000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007b0174808> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that users know that something went wrong 
> and can fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8401) Memfs - a layered file system for in-memory storage in HDFS

2015-05-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563531#comment-14563531
 ] 

Arpit Agarwal commented on HDFS-8401:
-

bq. The administrator simply set a list of files and directories to be cached. 
When applications read those files or directories, they were retrieved from the 
cache.
It's impractical to involve the administrator every time a new file is to be 
cached. We've heard this requirement makes caching difficult to use. There are 
a couple of other things that can help with usability, e.g. de-duplication of 
cache directives and predictability of cache locality.

bq. If we just want a generic FS caching layer in Hadoop, we could do that in 
hadoop-common.
That was my intention. I'll move the jira to common.

> Memfs - a layered file system for in-memory storage in HDFS
> ---
>
> Key: HDFS-8401
> URL: https://issues.apache.org/jira/browse/HDFS-8401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose creating a layered filesystem that can provide in-memory storage 
> using existing features within HDFS. memfs will use lazy persist writes 
> introduced by HDFS-6581. For reads, memfs can use the Centralized Cache 
> Management feature introduced in HDFS-4949 to load hot data to memory.
> Paths in memfs and hdfs will correspond 1:1 so memfs will require no 
> additional metadata and it can be implemented entirely as a client-side 
> library.
> The advantage of a layered file system is that it requires little or no 
> changes to existing applications. e.g. Applications can use something like 
> {{memfs://}} instead of {{hdfs://}} for files targeted to memory storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8407) hdfsListDirectory must set errno to 0 on success

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563533#comment-14563533
 ] 

Hudson commented on HDFS-8407:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7919 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7919/])
HDFS-8407. libhdfs hdfsListDirectory must set errno to 0 on success (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
d2d95bfe886a7fdf9d58fd5c47ec7c0158393afb)
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c


> hdfsListDirectory must set errno to 0 on success
> 
>
> Key: HDFS-8407
> URL: https://issues.apache.org/jira/browse/HDFS-8407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Juan Yu
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
> HDFS-8407.003.patch
>
>
> The documentation says it returns NULL on error, but it could also return 
> NULL when the directory is empty.
> /** 
>  * hdfsListDirectory - Get list of files/directories for a given
>  * directory-path. hdfsFreeFileInfo should be called to deallocate 
> memory. 
>  * @param fs The configured filesystem handle.
>  * @param path The path of the directory. 
>  * @param numEntries Set to the number of files/directories in path.
>  * @return Returns a dynamically-allocated array of hdfsFileInfo
>  * objects; NULL on error.
>  */
> {code}
> hdfsFileInfo *pathList = NULL; 
> ...
> //Figure out the number of entries in that directory
> jPathListSize = (*env)->GetArrayLength(env, jPathList);
> if (jPathListSize == 0) {
> ret = 0;
> goto done;
> }
> ...
> if (ret) {
> hdfsFreeFileInfo(pathList, jPathListSize);
> errno = ret;
> return NULL;
> }
> *numEntries = jPathListSize;
> return pathList;
> {code}
> Either change the implementation to match the doc, or fix the doc to match 
> the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8429) Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563535#comment-14563535
 ] 

Hudson commented on HDFS-8429:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7919 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7919/])
HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that 
stops the thread.  (zhouyingchao via cmccabe) (cmccabe: rev 
246cefa089156a50bf086b8b1e4d4324d66dc58c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/unix/TestDomainSocketWatcher.java


> Avoid stuck threads if there is an error in DomainSocketWatcher that stops 
> the thread
> -
>
> Key: HDFS-8429
> URL: https://issues.apache.org/jira/browse/HDFS-8429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Fix For: 2.8.0
>
> Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
> HDFS-8429-003.patch
>
>
> In our cluster, an application hung while doing a short-circuit read of a 
> local HDFS block. By looking into the log, we found that the DataNode's 
> DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
> Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The line 463 is following code snippet:
> {code}
>  try {
> for (int fd : fdSet.getAndClearReadableFds()) {
>   sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
> fd);
> }
> {code}
> getAndClearReadableFds is a native method which will malloc an int array. 
> Since our memory is very tight, it looks like the malloc failed and a NULL 
> pointer is returned.
> The bad thing is that other threads then blocked with stacks like this:
> {code}
> "DataXceiver for client 
> unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
> operation #1]" daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
> condition [0x7f09b9856000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007b0174808> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that users know that something went wrong 
> and can fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8489) Subclass BlockInfo to represent contiguous blocks

2015-05-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8489:


Initial patch with the following changes:
# Converts {{BlockInfo}} to an abstract class
# Merges {{BlockInfoContiguous}} from the HDFS-7285 branch
# Changes {{BlockInfoContiguousUnderConstruction}} to subclass 
{{BlockInfoContiguous}} instead of {{BlockInfo}}
# Changes a few places which instantiate a {{BlockInfo}} to instantiate a 
{{BlockInfoContiguous}} instead. The most important one is 
{{FSDirWriteFileOp#addBlock}}. Others include fsimage, edit log operations, and 
tests.
# It is a little tricky to handle the copy constructor {{protected 
BlockInfoContiguous(BlockInfoContiguous from)}}. The patch simply uses type 
casting, and we can brainstorm a better idea; a rough sketch of the resulting 
hierarchy is below.
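
The sketch below only illustrates the class relationship described above; the 
fields, constructors, and cast site are made up, and the real classes carry 
much more state (triplets, the owning block collection, etc.):

{code}
abstract class BlockInfo {
  final long blockId;
  final long numBytes;

  BlockInfo(long blockId, long numBytes) {
    this.blockId = blockId;
    this.numBytes = numBytes;
  }
}

class BlockInfoContiguous extends BlockInfo {
  final short replication;

  BlockInfoContiguous(long blockId, long numBytes, short replication) {
    super(blockId, numBytes);
    this.replication = replication;
  }

  // The copy constructor from item 5.
  protected BlockInfoContiguous(BlockInfoContiguous from) {
    this(from.blockId, from.numBytes, from.replication);
  }
}

class BlockInfoContiguousUnderConstruction extends BlockInfoContiguous {
  // Callers that only hold a BlockInfo reference need a downcast before using
  // the copy constructor -- the type-casting workaround mentioned above.
  BlockInfoContiguousUnderConstruction(BlockInfo from) {
    super((BlockInfoContiguous) from);
  }
}
{code}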

> Subclass BlockInfo to represent contiguous blocks
> -
>
> Key: HDFS-8489
> URL: https://issues.apache.org/jira/browse/HDFS-8489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> As second step of the cleanup, we should make {{BlockInfo}} an abstract class 
> and merge the subclass {{BlockInfoContiguous}} from HDFS-7285 into trunk. The 
> patch should clearly separate where to use the abstract class versus the 
> subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8489) Subclass BlockInfo to represent contiguous blocks

2015-05-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8489:

Attachment: HDFS-8489.00.patch

> Subclass BlockInfo to represent contiguous blocks
> -
>
> Key: HDFS-8489
> URL: https://issues.apache.org/jira/browse/HDFS-8489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8489.00.patch
>
>
> As second step of the cleanup, we should make {{BlockInfo}} an abstract class 
> and merge the subclass {{BlockInfoContiguous}} from HDFS-7285 into trunk. The 
> patch should clearly separate where to use the abstract class versus the 
> subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8401) Memfs - a layered file system for in-memory storage in HDFS

2015-05-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563580#comment-14563580
 ] 

Andrew Wang commented on HDFS-8401:
---

bq. It's impractical to involve the administrator every time a new file is to 
be cached.

Read caching can be done by normal users, not just admins. We also have 
directory-level cache directives, which kick in automatically without any 
explicit user involvement.

If some of the other enhancements you mention could be built into HDFS, that'd 
also be preferable (de-dupe, predictability (?)).

Anecdotal, but I've heard a lot of users say that changing the scheme is not an 
option for them. If your concern is ease of use, focusing on improvements to 
what we already have in HDFS might be more bang for the buck. We have the 
LAZY_PERSIST storage policy and directory-level cache directives which seem 
like a start. Colin also mentioned opportunistic cache directives, which would 
be a really nice enhancement.
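
As an illustration of a user-level, directory-scoped cache directive (this 
assumes the {{CacheDirectiveInfo}} builder API from HDFS-4949, that the default 
filesystem is an HDFS cluster with caching enabled, and that the path and pool 
name are placeholders):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

public class DirectoryCacheDirectiveExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
        .setPath(new Path("/hot/reports"))  // directory-level: covers files under it
        .setPool("analytics-pool")          // an existing cache pool
        .build());
    System.out.println("Added cache directive " + id);
  }
}
{code}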

> Memfs - a layered file system for in-memory storage in HDFS
> ---
>
> Key: HDFS-8401
> URL: https://issues.apache.org/jira/browse/HDFS-8401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose creating a layered filesystem that can provide in-memory storage 
> using existing features within HDFS. memfs will use lazy persist writes 
> introduced by HDFS-6581. For reads, memfs can use the Centralized Cache 
> Management feature introduced in HDFS-4949 to load hot data to memory.
> Paths in memfs and hdfs will correspond 1:1 so memfs will require no 
> additional metadata and it can be implemented entirely as a client-side 
> library.
> The advantage of a layered file system is that it requires little or no 
> changes to existing applications. e.g. Applications can use something like 
> {{memfs://}} instead of {{hdfs://}} for files targeted to memory storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8322:

Attachment: HDFS-8322.003.patch

[~aw] Thanks a lot for this great suggestion. I updated the patch to display 
such warnings for all commands except copy commands.

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8322:

Attachment: HDFS-8322.003.patch

[~aw] Thanks a lot for this great suggestion. I updated the patch to display 
such warnings for all commands except copy commands.

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Ambud Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563665#comment-14563665
 ] 

Ambud Sharma commented on HDFS-8485:


I have tried 2.7 and the error still exists

16:46:31,542 ERROR [stderr] (pool-17-thread-1) Error: java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1392)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1494)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1479)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:451)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
java.security.AccessController.doPrivileged(Native Method)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
javax.security.auth.Subject.doAs(Subject.java:422)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
16:46:31,544 ERROR [stderr] (pool-17-thread-1) Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at

[jira] [Updated] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8322:

Attachment: HDFS-8322.003.patch

[~aw] Thanks a lot for this great suggestion. I updated the patch to display 
such warnings for all commands except copy commands.

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Ambud Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563670#comment-14563670
 ] 

Ambud Sharma commented on HDFS-8485:


I have tried 2.7 and the error still exists

16:46:31,542 ERROR [stderr] (pool-17-thread-1) Error: java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
16:46:31,542 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1392)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1494)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1479)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:451)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
16:46:31,543 ERROR [stderr] (pool-17-thread-1)  at 
com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
java.security.AccessController.doPrivileged(Native Method)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
javax.security.auth.Subject.doAs(Subject.java:422)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
16:46:31,544 ERROR [stderr] (pool-17-thread-1) Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
16:46:31,544 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
16:46:31,545 ERROR [stderr] (pool-17-thread-1)  at

[jira] [Updated] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8322:

Attachment: (was: HDFS-8322.003.patch)

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8481:

Attachment: HDFS-8481-HDFS-7285.02.patch

Thanks Walter for reviewing! It's a great idea to separate out the logic of 
finalizing the decode input buffers. The new patch does that.

The second review comment isn't entirely clear to me. If you think it's 
logically separate from this JIRA, let's do a follow-on (probably under 
HDFS-8031).

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8293) Erasure Coding: test the retry logic of DFSStripedInputStream

2015-05-28 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8293:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: test the retry logic of DFSStripedInputStream
> -
>
> Key: HDFS-8293
> URL: https://issues.apache.org/jira/browse/HDFS-8293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>
> In DFSStripedInputStream/DFSInputStream we retry the reading sometimes to 
> refetch token or encryption key. This jira plans to add more tests for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563748#comment-14563748
 ] 

Hadoop QA commented on HDFS-6440:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m  2s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 24 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 32s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 23s | The applied patch generated  1 
new checkstyle issues (total was 34, now 35). |
| {color:red}-1{color} | whitespace |   3m 38s | The patch has 15  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 24s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 164m 13s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   3m 54s | Tests passed in bkjournal. |
| | | 243m 44s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735911/hdfs-6440-trunk-v6.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5504a26 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| bkjournal test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/artifact/patchprocess/testrun_bkjournal.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11152/console |


This message was automatically generated.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, 
> hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563791#comment-14563791
 ] 

Andrew Wang commented on HDFS-8322:
---

Hi Eddy, taking a first look now. At a high level, I think the issue we're 
trying to solve is users not having a defaultFS set up and then picking up the 
default value of {{file:///}}. I think we could do that check instead: if the 
user hasn't set a defaultFS, print a warning. You can get a configuration 
without the defaults loaded and then do a get() to check this.
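
A minimal sketch of that check, assuming the user-supplied configuration is the 
core-site.xml on the classpath (class name and warning text are made up):

{code}
import org.apache.hadoop.conf.Configuration;

public class DefaultFsWarningCheck {
  public static void main(String[] args) {
    // Load only the user's core-site.xml, not the built-in defaults, so
    // fs.defaultFS is present only if the user actually set it.
    Configuration userConf = new Configuration(false);
    userConf.addResource("core-site.xml");
    if (userConf.get("fs.defaultFS") == null) {
      System.err.println("WARNING: fs.defaultFS is not set; "
          + "commands will run against the local filesystem (file:///).");
    }
  }
}
{code}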

A few nitty review comments, which might not matter after fixing the above:

* Need to document this config parameter in core-default.xml, and turn the - 
into a .
* typo: "Dose" -> "Does"
* the default value should end in _DEFAULT per convention

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Ambud Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563817#comment-14563817
 ] 

Ambud Sharma commented on HDFS-8485:


I was missing the key provider property. Tested and working after upgrading to 2.7.0.
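
For readers hitting the same issue: the comment above does not name the 
property, but on the 2.7 line the KMS key provider is usually wired up via 
{{dfs.encryption.key.provider.uri}} in hdfs-site.xml. A small Java illustration 
(the KMS host and port are placeholders):

{code}
import org.apache.hadoop.conf.Configuration;

public class KeyProviderConfigCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Placeholder KMS address; in practice this is set in hdfs-site.xml.
    conf.set("dfs.encryption.key.provider.uri", "kms://http@kms-host:16000/kms");
    System.out.println("key provider = "
        + conf.get("dfs.encryption.key.provider.uri"));
  }
}
{code}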

> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an 
> encryption zone throws exception
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(Ker

[jira] [Resolved] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Ambud Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ambud Sharma resolved HDFS-8485.

   Resolution: Fixed
Fix Version/s: 2.7.0

> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Fix For: 2.7.0
>
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an 
> encryption zone throws exception
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [st

[jira] [Updated] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8485:
---
Fix Version/s: (was: 2.7.0)

> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an 
> encryption zone throws exception
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> o

[jira] [Reopened] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-8485:


> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an 
> encryption zone throws exception
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authenti

[jira] [Resolved] (HDFS-8485) Transparent Encryption Fails to work with Yarn/MapReduce

2015-05-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-8485.

Resolution: Not A Problem

> Transparent Encryption Fails to work with Yarn/MapReduce
> 
>
> Key: HDFS-8485
> URL: https://issues.apache.org/jira/browse/HDFS-8485
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: RHEL-7, Kerberos 5
>Reporter: Ambud Sharma
>Priority: Critical
> Attachments: core-site.xml, hdfs-site.xml, kms-site.xml, 
> mapred-site.xml, yarn-site.xml
>
>
> Running a simple MapReduce job that writes to a path configured as an 
> encryption zone throws exception
> 11:26:26,343 INFO  [org.apache.hadoop.mapreduce.Job] (pool-14-thread-1) Task 
> Id : attempt_1432740034176_0001_m_00_2, Status : FAILED
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1) Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:424)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:710)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 11:26:26,346 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:112)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> com.s3.ingestion.S3ImportMR$S3ImportMapper.map(S3ImportMR.java:43)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> java.security.AccessController.doPrivileged(Native Method)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 11:26:26,347 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1) Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:306)
> 11:26:26,348 ERROR [stderr] (pool-14-thread-1)at 
> org.a

[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563825#comment-14563825
 ] 

Kai Zheng commented on HDFS-8481:
-

Thanks Zhe for the patch. A comment:
A decoder instance is created per decode call in {{decodeAndFillBuffer}}. 
Please avoid doing this, because preparing a decoder is expensive. By the way, 
I'm not comfortable having this logic in the util, because it is one of the 
core parts; a util is more like a helper.
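
A minimal sketch of the reuse being asked for (the {{Decoder}} interface below 
is a stand-in for the raw erasure decoder, not the real API in the branch): 
create the decoder once, keep it in a field, and reuse it on every decode call.
{code}
class StripeDecoderHolder {
  /** Hypothetical stand-in for the raw erasure decoder interface. */
  interface Decoder {
    void decode(byte[][] inputs, int[] erasedIndexes, byte[][] outputs);
  }

  // Created once, e.g. together with the input stream, instead of per decode call.
  private final Decoder decoder;

  StripeDecoderHolder(Decoder decoder) {
    this.decoder = decoder;
  }

  void decodeAndFillBuffer(byte[][] decodeInputs, int[] erasedIndexes,
      byte[][] outputs) {
    // Reuse the cached decoder; preparing a new one per stripe is the expensive part.
    decoder.decode(decodeInputs, erasedIndexes, outputs);
    // ... copying outputs into the user buffer is unchanged ...
  }
}
{code}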

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8486) DN startup may cause severe data loss

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563827#comment-14563827
 ] 

Hadoop QA commented on HDFS-8486:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 41s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 50s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 13s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 20s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 18s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 161m 56s | Tests passed in hadoop-hdfs. 
|
| | | 209m 11s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735927/HDFS-8486.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7ebe80e |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11153/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11153/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11153/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11153/console |


This message was automatically generated.

> DN startup may cause severe data loss
> -
>
> Key: HDFS-8486
> URL: https://issues.apache.org/jira/browse/HDFS-8486
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 0.23.1, 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-8486.patch
>
>
> A race condition between block pool initialization and the directory scanner 
> may cause a mass deletion of blocks in multiple storages.
> If block pool initialization finds a block on disk that is already in the 
> replica map, it deletes one of the blocks based on size, GS, etc.  
> Unfortunately it _always_ deletes one of the blocks even if identical, thus 
> the replica map _must_ be empty when the pool is initialized.
> The directory scanner starts at a random time within its periodic interval 
> (default 6h).  If the scanner starts very early it races to populate the 
> replica map, causing the block pool init to erroneously delete blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563839#comment-14563839
 ] 

Zhe Zhang commented on HDFS-8481:
-

Thanks Kai for the review. In the next rev I will create the decoder only once 
(maybe it can be created with the {{DFSStripedInputStream}}).

bq. I'm not comfortable having this logic in the util, because it is one of the 
core parts
Do you mean {{decodeAndFillBuffer}} shouldn't be part of the 
{{StripedBlockUtil}} class? Basically I put it there to be used by both client 
and DN.

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563861#comment-14563861
 ] 

Kai Zheng commented on HDFS-8481:
-

bq. maybe it can be created with the DFSStripedInputStream
Great, it will stay there.
bq. Do you mean decodeAndFillBuffer shouldn't be part of the StripedBlockUtil 
class?
Yes, I meant that. It looks like sharing this code between the client and the 
datanode will need more abstraction than the util. The consideration is good; 
let's see how it evolves in the future. Thanks.

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8322) Display warning if hadoop fs -ls is showing the local filesystem

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563890#comment-14563890
 ] 

Hadoop QA commented on HDFS-8322:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 24s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 27s | Tests failed in 
hadoop-common. |
| | |  61m 25s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.shell.TestLs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735964/HDFS-8322.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9acd24f |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11156/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11156/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11156/console |


This message was automatically generated.

> Display warning if hadoop fs -ls is showing the local filesystem
> 
>
> Key: HDFS-8322
> URL: https://issues.apache.org/jira/browse/HDFS-8322
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-05-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563895#comment-14563895
 ] 

Andrew Wang commented on HDFS-7923:
---

Neato patch Colin, this is a high-ish level review, I probably need to do 
another pass.

Small stuff:
* Missing config key documentation in hdfs-default.xml
* requestBlockReportLeaseId: empty catch for unregistered node, we could add 
some more informative logging rather than relying on the warn below

BlockReportLeaseManager
* I discussed the NodeData structure with Colin offline, wondering why we 
didn't use a standard Collection. Colin brought up the reason of reducing 
garbage, which seems valid. I think we should consider implementing 
IntrusiveCollection though rather than writing another.
* I also asked about putting NodeData into DatanodeDescriptor. Not sure what 
the conclusion was on this, it might reduce garbage since we don't need a 
separate NodeData object.
* I prefer Precondition checks for invalid configuration values at startup, so 
there aren't any surprises for the user (a small sketch follows this list). Not 
everyone reads the messages on startup.
* requestLease has a check for isTraceEnabled, then logs at debug level

BPServiceActor:
* In offerService, we ignore the new leaseID if we already have one. On the NN 
though, a new request wipes out the old leaseID, and processReport checks based 
on leaseID rather than node. This kind of bug makes me wonder why we really 
need the leaseID at all, why not just attach a boolean to the node? Or if it's 
in the deferred vs. pending list?
* Can we fix the javadoc for scheduleBlockReport to mention randomness, and not 
"send...at the next heartbeat?" Incorrect right now.
* Have you thought about moving the BR scheduler to the NN side? We still rely 
on the DNs to jitter themselves and do the initial delay, but we could have the 
NN handle all this. This would also let the NN trigger FBRs whenever it wants. 
We could also do better than random scheduling, i.e. stride it rather than 
jitter. Incompatible, so we probably won't, but fun to think about :)
* scheduleBlockReport(long) do we want to add a checkArgument that delayMs is 
geq 0? You nixed the else case.

DatanodeManager:
* Could we do the BRLManager register/unregister in addDatanode and 
removeDatanode? I think this is safe, since on a DN restart it'll provide a 
lease ID of 0 and FBR, even without a reg/unreg.
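
A minimal sketch of the startup-time validation suggested above (the config key 
name and class here are hypothetical, purely for illustration):
{code}
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;

class BlockReportLeaseConfig {
  // Hypothetical key/default, used only to illustrate the fail-fast pattern.
  static final String LEASE_EXPIRY_MS_KEY =
      "dfs.namenode.full.block.report.lease.length.ms";
  static final long LEASE_EXPIRY_MS_DEFAULT = 5L * 60L * 1000L;

  final long leaseExpiryMs;

  BlockReportLeaseConfig(Configuration conf) {
    leaseExpiryMs = conf.getLong(LEASE_EXPIRY_MS_KEY, LEASE_EXPIRY_MS_DEFAULT);
    // Fail fast at startup instead of only logging a warning the user may miss.
    Preconditions.checkArgument(leaseExpiryMs > 0,
        "%s must be positive, but was %s", LEASE_EXPIRY_MS_KEY, leaseExpiryMs);
  }
}
{code}
The same {{checkArgument}} style would also cover the {{scheduleBlockReport(long)}} 
case of requiring {{delayMs}} to be non-negative.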

> The DataNodes should rate-limit their full block reports by asking the NN on 
> heartbeat messages
> ---
>
> Key: HDFS-7923
> URL: https://issues.apache.org/jira/browse/HDFS-7923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-7923.000.patch, HDFS-7923.001.patch, 
> HDFS-7923.002.patch, HDFS-7923.003.patch
>
>
> The DataNodes should rate-limit their full block reports.  They can do this 
> by first sending a heartbeat message to the NN with an optional boolean set 
> which requests permission to send a full block report.  If the NN responds 
> with another optional boolean set, the DN will send an FBR... if not, it will 
> wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8407) hdfsListDirectory must set errno to 0 on success

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563905#comment-14563905
 ] 

Hudson commented on HDFS-8407:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #200 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/200/])
HDFS-8407. libhdfs hdfsListDirectory must set errno to 0 on success (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
d2d95bfe886a7fdf9d58fd5c47ec7c0158393afb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_ops.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> hdfsListDirectory must set errno to 0 on success
> 
>
> Key: HDFS-8407
> URL: https://issues.apache.org/jira/browse/HDFS-8407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Juan Yu
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-8407.001.patch, HDFS-8407.002.patch, 
> HDFS-8407.003.patch
>
>
> The documentation says it returns NULL on error, but it could also return 
> NULL when the directory is empty.
> /** 
>  * hdfsListDirectory - Get list of files/directories for a given
>  * directory-path. hdfsFreeFileInfo should be called to deallocate 
> memory. 
>  * @param fs The configured filesystem handle.
>  * @param path The path of the directory. 
>  * @param numEntries Set to the number of files/directories in path.
>  * @return Returns a dynamically-allocated array of hdfsFileInfo
>  * objects; NULL on error.
>  */
> {code}
> hdfsFileInfo *pathList = NULL; 
> ...
> //Figure out the number of entries in that directory
> jPathListSize = (*env)->GetArrayLength(env, jPathList);
> if (jPathListSize == 0) {
> ret = 0;
> goto done;
> }
> ...
> if (ret) {
> hdfsFreeFileInfo(pathList, jPathListSize);
> errno = ret;
> return NULL;
> }
> *numEntries = jPathListSize;
> return pathList;
> {code}
> Either change the implementation to match the doc, or fix the doc to match 
> the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8429) Avoid stuck threads if there is an error in DomainSocketWatcher that stops the thread

2015-05-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563907#comment-14563907
 ] 

Hudson commented on HDFS-8429:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #200 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/200/])
HDFS-8429. Avoid stuck threads if there is an error in DomainSocketWatcher that 
stops the thread.  (zhouyingchao via cmccabe) (cmccabe: rev 
246cefa089156a50bf086b8b1e4d4324d66dc58c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/unix/DomainSocketWatcher.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/unix/TestDomainSocketWatcher.java


> Avoid stuck threads if there is an error in DomainSocketWatcher that stops 
> the thread
> -
>
> Key: HDFS-8429
> URL: https://issues.apache.org/jira/browse/HDFS-8429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: zhouyingchao
>Assignee: zhouyingchao
> Fix For: 2.8.0
>
> Attachments: HDFS-8429-001.patch, HDFS-8429-002.patch, 
> HDFS-8429-003.patch
>
>
> In our cluster, an application hung when doing a short-circuit read of a 
> local HDFS block. By looking into the log, we found the DataNode's 
> DomainSocketWatcher.watcherThread had exited with the following log:
> {code}
> ERROR org.apache.hadoop.net.unix.DomainSocketWatcher: 
> Thread[Thread-25,5,main] terminating on unexpected exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:463)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> The line 463 is following code snippet:
> {code}
>  try {
> for (int fd : fdSet.getAndClearReadableFds()) {
>   sendCallbackAndRemove("getAndClearReadableFds", entries, fdSet,
> fd);
> }
> {code}
> getAndClearReadableFds is a native method which mallocs an int array. 
> Since our memory is very tight, it looks like the malloc failed and a NULL 
> pointer was returned.
> The bad thing is that other threads then blocked with a stack like this:
> {code}
> "DataXceiver for client 
> unix:/home/work/app/hdfs/c3prc-micloud/datanode/dn_socket [Waiting for 
> operation #1]" daemon prio=10 tid=0x7f0c9c086d90 nid=0x8fc3 waiting on 
> condition [0x7f09b9856000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007b0174808> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:323)
> at 
> org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.createNewMemorySegment(ShortCircuitRegistry.java:322)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.requestShortCircuitShm(DataXceiver.java:403)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opRequestShortCircuitShm(Receiver.java:214)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:95)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
> at java.lang.Thread.run(Thread.java:662)
> {code}
> IMO, we should exit the DN so that the users can know that something went 
> wrong and fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8254) In StripedDataStreamer, it is hard to tolerate datanode failure in the leading streamer

2015-05-28 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563982#comment-14563982
 ] 

Zhe Zhang commented on HDFS-8254:
-

Thanks Nicholas for the patch! Our initial design put the 
{{locateFollowingBlock}} logic in the leading streamer for simplicity. I think 
it's a great idea to remove that single point of failure.

# The new {{ConcurrentPoll}} class looks good overall. Let me know if this 
understanding is correct: now the fastest streamer takes care of allocating the 
block group from the NN and distributing it to the other streamers. Can we add 
some Javadocs for the class and methods, and ideally, a design description on 
the JIRA?
# The concurrent logic is a little complex and some parts could be fragile. For 
example, the {{populate}} method in {{locateFollowingBlock}} directly changes 
the {{block}} field of the class. It's true that {{locateFollowingBlock}} is 
only used by {{nextBlockOutputStream}}, which will reassign a correct value to 
{{block}}, but this dependency makes {{locateFollowingBlock}} less 
self-contained. It also looks like we could run into a race condition if 2 
streamers enter {{locateFollowingBlock}} around the same time (a toy 
illustration of this check-then-act pattern follows this list): they could both 
pass {{isReady2Populate}} before either one has started taking from 
{{endBlocks}}. Since {{DataStreamer#locateFollowingBlock}} is not complex, can 
we do some refactoring and move it to the input stream level? This way the 
{{coordinator}} can take care of the main logic and the fastest streamer just 
has to trigger it. I haven't thought through {{updateBlockForPipeline}} and 
{{updatePipeline}} yet, but I guess the story should be similar.

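A toy illustration of the check-then-act race mentioned in #2 (names borrowed 
from the discussion; this is not the patch's code):
{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class CoordinatorSketch {
  private final Queue<Long> endBlocks = new ConcurrentLinkedQueue<>();

  // Stand-in for the isReady2Populate() check.
  private boolean isReady2Populate() {
    return endBlocks.isEmpty();
  }

  // Two streamers can both observe an empty queue (the "check") before either of
  // them has allocated and enqueued a block group (the "act"), so the block group
  // could end up being requested from the NameNode twice.
  void populateIfNeeded() {
    if (isReady2Populate()) {
      endBlocks.add(allocateBlockGroupFromNameNode());
    }
  }

  private long allocateBlockGroupFromNameNode() {
    return System.nanoTime();   // placeholder for the real RPC
  }
}
{code}
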
Nits:
# {{DFSStripedOutputStream}} already has the schema but the streamers are still 
using constants. We should either use the schema or at least add some TODOs.
# New methods in {{DataStreamer}} could use some Javadoc.
# {{class Coordinator}} could be renamed to something like 
{{StreamersCoordinator}}, just to be more specific.
# The Javadoc of {{StripedBlockUtil#checkBlocks}} should say that it checks 
that the two blocks are in the same block group

> In StripedDataStreamer, it is hard to tolerate datanode failure in the 
> leading streamer
> ---
>
> Key: HDFS-8254
> URL: https://issues.apache.org/jira/browse/HDFS-8254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8254_20150526.patch, h8254_20150526b.patch
>
>
> StripedDataStreamer javadoc is shown below.
> {code}
>  * The StripedDataStreamer class is used by {@link DFSStripedOutputStream}.
>  * There are two kinds of StripedDataStreamer, leading streamer and ordinary
>  * stream. Leading streamer requests a block group from NameNode, unwraps
>  * it to located blocks and transfers each located block to its corresponding
>  * ordinary streamer via a blocking queue.
> {code}
> Leading streamer is the streamer with index 0.  When the datanode of the 
> leading streamer fails, the other streamers cannot continue since no one will 
> request a block group from NameNode anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8489) Subclass BlockInfo to represent contiguous blocks

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564009#comment-14564009
 ] 

Hadoop QA commented on HDFS-8489:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 54s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 10 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 31s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  4 
new checkstyle issues (total was 686, now 686). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 13s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 46s | Tests failed in hadoop-hdfs. |
| | | 209m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735953/HDFS-8489.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ae14543 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11154/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11154/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11154/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11154/console |


This message was automatically generated.

> Subclass BlockInfo to represent contiguous blocks
> -
>
> Key: HDFS-8489
> URL: https://issues.apache.org/jira/browse/HDFS-8489
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8489.00.patch
>
>
> As second step of the cleanup, we should make {{BlockInfo}} an abstract class 
> and merge the subclass {{BlockInfoContiguous}} from HDFS-7285 into trunk. The 
> patch should clearly separate where to use the abstract class versus the 
> subclass.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8494) Remove hard-coded chunk size in favor of ECZone

2015-05-28 Thread Kai Sasaki (JIRA)
Kai Sasaki created HDFS-8494:


 Summary: Remove hard-coded chunk size in favor of ECZone
 Key: HDFS-8494
 URL: https://issues.apache.org/jira/browse/HDFS-8494
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Kai Sasaki
Assignee: Kai Sasaki
 Fix For: HDFS-7285


It is necessary to remove the hard-coded values inside the NameNode that are 
currently defined in {{HdfsConstants}}. In this JIRA, we can remove 
{{chunkSize}} gracefully in favor of HDFS-8375.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-05-28 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564036#comment-14564036
 ] 

Rakesh R commented on HDFS-8493:


After talking with [~wheat9], I will work on this issue. Thanks!

> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8493) Consolidate truncate() related implementation in a single class

2015-05-28 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R reassigned HDFS-8493:
--

Assignee: Rakesh R

> Consolidate truncate() related implementation in a single class
> ---
>
> Key: HDFS-8493
> URL: https://issues.apache.org/jira/browse/HDFS-8493
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Rakesh R
>
> This jira proposes to consolidate truncate() related methods into a single 
> class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8495) Consolidate append() related implementation into a single class

2015-05-28 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8495:
--

 Summary: Consolidate append() related implementation into a single 
class
 Key: HDFS-8495
 URL: https://issues.apache.org/jira/browse/HDFS-8495
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Rakesh R
Assignee: Rakesh R


This jira proposes to consolidate {{FSNamesystem#append()}} related methods 
into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8401) Memfs - a layered file system for in-memory storage in HDFS

2015-05-28 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564043#comment-14564043
 ] 

Sanjay Radia commented on HDFS-8401:


Consider the following use case: one wants to run a few jobs and cache the 
input and the intermediate output just for the duration of those jobs. Today 
the user has to pin such data by changing the dir/file attributes, and when the 
jobs are finished he has to reset the attributes. It is easier to say "jobxxx 
input = memfs://.../input tmp=memfs://.../tmpdir  output=". Here setting the 
scheme is not inconvenient, since it is part of the parameters to the program. 
Further, this works with any existing application - Hive, Pig etc. - since the 
hint to cache is in the scheme of the pathname. Our existing policies and 
dir-level settings work when things are semi-permanent (i.e. this dir has 
dimension tables, please cache them - all jobs will benefit). In addition we 
could add, or already have, programmatic APIs to indicate that a file being 
read or written needs to be cached, but that requires changes to the 
application code. Once we get fully automated memory caching working we will 
not need our existing storage policies nor layers like memfs, since the system 
will just take care of it all - but it will take us some time to get there.

I think both approaches have their own strengths and are complementary. Note  
spark-tachyon uses a layered file system and the approach is viewed as a simple 
way to control which files get cached on a per-job basis.

Further, one can also cache specific Hive tables in the Hive metastore by 
giving a path name that has the memfs scheme. Here the memfs pathname and 
setting the dir's attribute are roughly equal from an ease-of-use perspective.

An additional point about memfs for non-HDFS systems: the memfs *abstraction* 
allows caching S3 data in a very similar fashion. Of course one will have to 
build a full caching implementation of memfs for S3, because the memfs proposed 
in this JIRA is a very thin layer over HDFS where ALL the caching mechanism is 
already in place. So I expect several implementations of the memfs interface 
for HCFS file systems.

> Memfs - a layered file system for in-memory storage in HDFS
> ---
>
> Key: HDFS-8401
> URL: https://issues.apache.org/jira/browse/HDFS-8401
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose creating a layered filesystem that can provide in-memory storage 
> using existing features within HDFS. memfs will use lazy persist writes 
> introduced by HDFS-6581. For reads, memfs can use the Centralized Cache 
> Management feature introduced in HDFS-4949 to load hot data to memory.
> Paths in memfs and hdfs will correspond 1:1 so memfs will require no 
> additional metadata and it can be implemented entirely as a client-side 
> library.
> The advantage of a layered file system is that it requires little or no 
> changes to existing applications. e.g. Applications can use something like 
> {{memfs://}} instead of {{hdfs://}} for files targeted to memory storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564052#comment-14564052
 ] 

Hadoop QA commented on HDFS-8481:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  14m 59s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 24s | The patch appears to introduce 2 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 172m 49s | Tests failed in hadoop-hdfs. |
| | | 214m 35s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735966/HDFS-8481-HDFS-7285.02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 1299357 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11155/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11155/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11155/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11155/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11155/console |


This message was automatically generated.

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8481) Erasure coding: remove workarounds in client side stripped blocks recovering

2015-05-28 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564053#comment-14564053
 ] 

Walter Su commented on HDFS-8481:
-

This is the user's logic for calling pread. The {{buf}} is reused until the 
entire file has been read.
{code}
byte[] buf = new byte[4096];
int readLen;
while ((readLen = in.read(buf)) > 0) {
  // ... consume buf[0..readLen) ...
}
{code}

Assume we have a 768MB file (128MB * 6) which contains exactly 1 block group. 
We lost one block, so we have to decode until all 768MB of data has been read.
{code}
byte[][] decodeInputs = new byte[dataBlkNum + parityBlkNum]
    [(int) alignedStripe.getSpanInBlock()];
{code}
For every {{alignedStripe}} being read we need a new {{decodeInputs}}. Every 
time the user calls pread, we create multiple new {{alignedStripe}}s; every 
time the user calls stateful read, we create 1~3 new {{alignedStripe}}s.
Which means that by the time the entire 768MB has been read, we have allocated 
roughly 128MB*9 of byte[][] {{decodeInputs}} garbage waiting for GC.
We cannot depend on {{DFSStripedInputStream}} to keep the {{decodeInputs}} 
object and reuse it, because every {{SpanInBlock}} is different.
I'm not sure if I've made it clear. If so, it's an issue, right? (Not related 
to this jira.)
bq. we need more abstraction than the util.
I'm +1 for this idea. I think we can resolve the {{decodeInputs}} issue in that 
abstraction (a rough sketch of one possible approach follows).
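
A rough sketch (not from the patch) of how such an abstraction could cut down 
the garbage: keep one decode buffer per reader and re-allocate only when a 
stripe needs a larger span.
{code}
class DecodeBufferHolder {
  private byte[][] decodeInputs;   // reused across aligned stripes
  private int allocatedSpan = -1;

  // numBlks would be dataBlkNum + parityBlkNum; spanInBlock comes from the stripe.
  byte[][] get(int numBlks, int spanInBlock) {
    if (decodeInputs == null || decodeInputs.length != numBlks
        || spanInBlock > allocatedSpan) {
      decodeInputs = new byte[numBlks][spanInBlock];
      allocatedSpan = spanInBlock;
    }
    // Callers must only touch the first spanInBlock bytes of each row when the
    // cached buffer is larger than the current stripe span.
    return decodeInputs;
  }
}
{code}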

> Erasure coding: remove workarounds in client side stripped blocks recovering
> 
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch, 
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate 
> decoded contents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8496) Calling stopWriter() with FSDatasetImpl lock held may block other threads

2015-05-28 Thread zhouyingchao (JIRA)
zhouyingchao created HDFS-8496:
--

 Summary: Calling stopWriter() with FSDatasetImpl lock held may  
block other threads
 Key: HDFS-8496
 URL: https://issues.apache.org/jira/browse/HDFS-8496
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: zhouyingchao
Assignee: zhouyingchao


On a DN of an HDFS 2.6 cluster, we noticed that some DataXceiver threads and 
heartbeat threads were blocked for quite a while on the FSDatasetImpl lock. By 
looking at the stacks, we found that calling stopWriter() with the 
FSDatasetImpl lock held blocked everything else.

The following heartbeat stack, as an example, shows how threads are blocked by 
the FSDatasetImpl lock:
{code}
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getDfsUsed(FsVolumeImpl.java:152)
- waiting to lock <0x0007701badc0> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getAvailable(FsVolumeImpl.java:191)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getStorageReports(FsDatasetImpl.java:144)
- locked <0x000770465dc0> (a java.lang.Object)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:575)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
at java.lang.Thread.run(Thread.java:662)
{code}

The thread which holds the FSDatasetImpl lock is just sleeping in stopWriter(), 
waiting for another thread to exit. Its stack is:
{code}
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1194)
- locked <0x0007636953b8> (a org.apache.hadoop.util.Daemon)
at 
org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:183)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverCheck(FsDatasetImpl.java:982)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.recoverClose(FsDatasetImpl.java:1026)
- locked <0x0007701badc0> (a 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:624)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
at java.lang.Thread.run(Thread.java:662)
{code}

In this case we had deployed quite a lot of other workloads on the DN, so the 
local file system and disks were quite busy. We suspect this is why 
stopWriter() took such a long time.
In any case, it is not reasonable to call stopWriter() with the FSDatasetImpl 
lock held. In HDFS-7999, createTemporary() was changed to call stopWriter() 
without the FSDatasetImpl lock. We should probably do the same in the other 
three methods: recoverClose()/recoverAppend()/recoverRbw().

I'll try to finish a patch for this today. 
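
A minimal, self-contained sketch (not FsDatasetImpl itself, and not the actual 
patch) of the locking pattern being proposed: join the writer thread before 
taking the shared lock, so heartbeats and other DataXceivers are not blocked 
behind a potentially long wait.
{code}
class WriterRegistry {
  private final Object datasetLock = new Object();   // stands in for the FsDatasetImpl monitor
  private Thread writer;                              // the current writer thread, if any

  void setWriter(Thread t) {
    synchronized (datasetLock) { writer = t; }
  }

  // Anti-pattern: joining the writer while holding datasetLock blocks every other
  // thread that needs the lock for the whole duration of the join.
  void recoverCloseBad() throws InterruptedException {
    synchronized (datasetLock) {
      if (writer != null) {
        writer.join();          // long wait with the lock held
      }
      // ... recover replica state under the lock ...
    }
  }

  // Proposed pattern: stop/join the writer first, then take the lock only for the
  // short in-memory state transition.
  void recoverCloseGood() throws InterruptedException {
    Thread w;
    synchronized (datasetLock) { w = writer; }
    if (w != null) {
      w.join();                 // wait without holding the lock
    }
    synchronized (datasetLock) {
      // ... recover replica state under the lock ...
    }
  }
}
{code}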



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8336) Expose some administrative erasure coding operations to HdfsAdmin

2015-05-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564087#comment-14564087
 ] 

Kai Zheng commented on HDFS-8336:
-

This idea looks good. How about adding EC schema related operations as well? 
Thanks.

> Expose some administrative erasure coding operations to HdfsAdmin
> -
>
> Key: HDFS-8336
> URL: https://issues.apache.org/jira/browse/HDFS-8336
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Minor
> Attachments: HDFS-8336-001.patch
>
>
> We have HdfsAdmin.java for exposing administrative functions. So, it would be 
> good, if we could expose EC related administrative functions as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

