[jira] [Updated] (HDFS-10839) Add new BlockReader#readFully(ByteBuffer buf) API

2016-09-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-10839:
-
Description: Add a ByteBuffer version of the readFully API to the BlockReader 
interface, to facilitate reading data from a BlockReader directly into a 
ByteBuffer  (was: Add a ByteBuffer version of the readAll API to the 
BlockReader interface, to facilitate reading data from a BlockReader directly 
into a ByteBuffer)
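
For illustration, a minimal sketch of what the proposed method could look like, 
assuming a helper that loops over a simple {{read(ByteBuffer)}} call until the 
buffer is full (the interface below is a simplified stand-in, not the real 
{{BlockReader}}):

{code}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Sketch of the proposed addition: fill the buffer completely,
 * failing if the stream ends first.
 */
interface BlockReaderSketch {
  /** Existing-style call: reads up to buf.remaining() bytes, -1 on EOF. */
  int read(ByteBuffer buf) throws IOException;

  /** Proposed: keep reading until buf has no remaining space. */
  default void readFully(ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      if (read(buf) < 0) {
        throw new EOFException("Stream ended before the buffer was filled");
      }
    }
  }
}
{code}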

> Add new BlockReader#readFully(ByteBuffer buf) API
> -
>
> Key: HDFS-10839
> URL: https://issues.apache.org/jira/browse/HDFS-10839
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-10839-v1.patch
>
>
> Add a ByteBuffer version of the readFully API to the BlockReader interface, 
> to facilitate reading data from a BlockReader directly into a ByteBuffer






[jira] [Updated] (HDFS-10839) Add new BlockReader#readFully(ByteBuffer buf) API

2016-09-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-10839:
-
Summary: Add new BlockReader#readFully(ByteBuffer buf) API  (was: Add new 
BlockReader#readAll(ByteBuffer buf, int len) API)

> Add new BlockReader#readFully(ByteBuffer buf) API
> -
>
> Key: HDFS-10839
> URL: https://issues.apache.org/jira/browse/HDFS-10839
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-10839-v1.patch
>
>
> Add a ByteBuffer version of the readAll API to the BlockReader interface, to 
> facilitate reading data from a BlockReader directly into a ByteBuffer






[jira] [Commented] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469619#comment-15469619
 ] 

Surendra Singh Lilhore commented on HDFS-10608:
---

bq.  just a question about why you wanted blk_ instead of the id itself for 
this event? I was just wondering the reasoning behind your desire to have a 
string instead of a long? 
If you look at the FSCK report or the NN/DN logs, we get the block name 
(blk_id) rather than the id itself. So I am thinking the HDFS client should 
only know about the block name.

[~ajisakaa], [~vinayrpet], [~brahmareddy], please share your suggestions on 
this. 
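
For context, the string form is just the numeric id with a fixed prefix; a 
small sketch of the two representations under discussion (the prefix constant 
mirrors how HDFS names block files, but the class below is a stand-in):

{code}
// Stand-in illustrating the string block name vs. the numeric id.
public class BlockNameSketch {
  static final String BLOCK_FILE_PREFIX = "blk_";

  static String blockName(long blockId) {
    // What FSCK and NN/DN logs print, e.g. "blk_1073741825"
    return BLOCK_FILE_PREFIX + blockId;
  }

  public static void main(String[] args) {
    long id = 1073741825L;              // the raw id (the "long" option)
    System.out.println(blockName(id));  // the named form (the "String" option)
  }
}
{code}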


> Include event for AddBlock in Inotify Event Stream
> --
>
> Key: HDFS-10608
> URL: https://issues.apache.org/jira/browse/HDFS-10608
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: churro morales
>Priority: Minor
> Attachments: HDFS-10608.patch, HDFS-10608.v1.patch, 
> HDFS-10608.v2.patch, HDFS-10608.v3.patch, HDFS-10608.v4.patch
>
>
> It would be nice to have an AddBlockEvent in the INotify pipeline.  Based on 
> discussions on the mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3C1467743792.4040080.657624289.7BE240AD%40webmail.messagingengine.com%3E






[jira] [Commented] (HDFS-10838) Last full block report received time for each DN should be easily discoverable

2016-09-06 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469597#comment-15469597
 ] 

Surendra Singh Lilhore commented on HDFS-10838:
---

Thanks [~vinayrpet] and [~arpitagarwal] for review...

bq.  Should the NN web UI show relative time, just like the last heartbeat 
time? I'm fine with either though.
Yes, I think the UI should show relative time. 

[~vinayrpet], please give your suggestion on this.

> Last full block report received time for each DN should be easily discoverable
> --
>
> Key: HDFS-10838
> URL: https://issues.apache.org/jira/browse/HDFS-10838
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: DFSAdmin-Report.png, HDFS-10838-001.patch, NN_UI.png
>
>
> It should be easy for administrators to discover the time of last full block 
> report from each DataNode.
> We can show it in the NameNode web UI or in the output of {{hdfs dfsadmin 
> -report}}, or both.






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469284#comment-15469284
 ] 

Konstantin Shvachko commented on HDFS-10843:


[~xkrogen] thanks for digging into this. 

??When the block is committed, the cached size is updated to the final size of 
the block. However, the calculation of the computed size uses the full block 
size until the block is completed??

Looks like {{computeQuotaUsage()}} does not use the preferred block size to 
compute the file size, and {{computeFileSize()}} checks for the COMPLETED 
state.
So we should correct the cached value and update it to the final length only 
when the block is completed.
I mean, whatever is cached should correspond to how the quotas are checked.
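
To make the disagreement concrete, a rough sketch under simplified assumptions 
(the real logic lives in {{INodeFile}} and {{DirectoryWithQuotaFeature}}; the 
names below are illustrative, with the sizes matching the log message in the 
description):

{code}
// Simplified illustration of the cached-vs-computed disagreement.
enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

class QuotaSketch {
  long preferredBlockSize = 8192;
  long finalBlockSize = 512;

  /** Computed side: uses the preferred size until the block is COMPLETE. */
  long computedLastBlockSize(BlockState state) {
    return state == BlockState.COMPLETE ? finalBlockSize : preferredBlockSize;
  }

  /** Cached side: currently switches to the final size already at commit. */
  long cachedLastBlockSize(BlockState state) {
    return state == BlockState.UNDER_CONSTRUCTION
        ? preferredBlockSize : finalBlockSize;
  }
  // For a COMMITTED block: cached = 512 != computed = 8192, as in the log.
}
{code}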

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469269#comment-15469269
 ] 

Vinayakumar B commented on HDFS-10714:
--

The current patch contains some parts of the HDFS-6937 patch.

> Issue in handling checksum errors in write pipeline when fault DN is 
> LAST_IN_PIPELINE
> -
>
> Key: HDFS-10714
> URL: https://issues.apache.org/jira/browse/HDFS-10714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10714-01-draft.patch
>
>
> We came across an issue where a write failed even though 7 DNs were 
> available, due to a network fault at one datanode which is the 
> LAST_IN_PIPELINE. It is similar to HDFS-6937.
> Scenario: (DN3 has a N/W fault and min repl = 2).
> Write pipeline:
> DN1->DN2->DN3  => DN3 gives an ERROR_CHECKSUM ack, and so DN2 is marked as bad
> DN1->DN4->DN3 => DN3 gives an ERROR_CHECKSUM ack, and so DN4 is marked as bad
> ….
> And so on (each time DN3 is LAST_IN_PIPELINE), continuing until no more 
> datanodes are left to construct the pipeline.






[jira] [Commented] (HDFS-9847) HDFS configuration should accept time units

2016-09-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469191#comment-15469191
 ] 

Yiqun Lin commented on HDFS-9847:
-

Thanks [~chris.douglas] for the commit, and thanks to everyone else for their 
work on this JIRA.

> HDFS configuration should accept time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9847-branch-2.001.patch, 
> HDFS-9847-branch-2.002.patch, HDFS-9847-nothrow.001.patch, 
> HDFS-9847-nothrow.002.patch, HDFS-9847-nothrow.003.patch, 
> HDFS-9847-nothrow.004.patch, HDFS-9847.001.patch, HDFS-9847.002.patch, 
> HDFS-9847.003.patch, HDFS-9847.004.patch, HDFS-9847.005.patch, 
> HDFS-9847.006.patch, HDFS-9847.007.patch, HDFS-9847.008.patch, 
> branch-2-delta.002.txt, timeduration-w-y.patch
>
>
> HDFS-9821 discusses the issue of letting existing keys use friendly units, 
> e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names contain a time 
> unit name, like {{dfs.blockreport.intervalMsec}}, so we can make the other 
> configurations whose names carry no time unit accept friendly time units. 
> The time unit {{seconds}} is frequently used in HDFS, so we can update those 
> configurations first.
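
For reference, friendly units like these are read via 
{{Configuration.getTimeDuration}}; a small usage sketch with a hypothetical 
key name:

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeDurationSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key; values like "60s", "5m", "1d" are all accepted.
    conf.set("dfs.example.interval", "5m");
    long intervalMs = conf.getTimeDuration(
        "dfs.example.interval", 30000L, TimeUnit.MILLISECONDS);
    System.out.println(intervalMs); // 300000
  }
}
{code}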






[jira] [Commented] (HDFS-10821) DiskBalancer: Report command support with multiple nodes

2016-09-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469175#comment-15469175
 ] 

Yiqun Lin commented on HDFS-10821:
--

Hi [~anu], can you take a quick look at the patch? I will post a new patch to 
fix the checkstyle warnings, and to address any further comments you may have 
on the latest patch.

> DiskBalancer: Report command support with multiple nodes
> 
>
> Key: HDFS-10821
> URL: https://issues.apache.org/jira/browse/HDFS-10821
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.9.0
>
> Attachments: HDFS-10821.001.patch
>
>
> Since HDFS-10813 has been committed to trunk, we can use the {{getNodes}} 
> method to parse the nodes string and support multiple nodes in the {{hdfs 
> diskbalancer}} subcommands (e.g. -report, -query). In this JIRA, we focus on 
> the {{-report}} subcommand.
> That means we can use the command {{hdfs diskbalancer -report -node}} to 
> print report info for one or more datanodes. A test input command (here I 
> use UUIDs to specify the datanodes):
> {code}
> hdfs diskbalancer -report -node 
> e05ade8e-fb28-42cf-9aa9-43e564c0ec99,38714337-84fb-4e35-9ea3-0bb47d6da700
> {code}






[jira] [Comment Edited] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469153#comment-15469153
 ] 

Erik Krogen edited comment on HDFS-10843 at 9/7/16 1:11 AM:


After further consideration it seems that it is probably most appropriate to 
continue to use the preferred block size as the cached size until the block is 
actually completed rather than just committed. This is more consistent with how 
the rest of the code treats the under construction size (see comments on 
{{INodeFile.storagespaceConsumed,computeFileSize}} and code in 
{{FileWithSnapshotFeature.updateQuotaAndCollectBlocks}} and  
{{FSDirAppendOp.computeQuotaDeltaForUCBlock}}); it seems that generally the 
under construction / not under construction distinction is used rather than 
committed vs. uncommitted. 


was (Author: xkrogen):
After further consideration it seems that it is probably most appropriate to 
continue to use the preferred block size as the cached size until the block is 
actually completed rather than just committed. This is more consistent with how 
the rest of the code treats the under construction size (see comments on 
{{INodeFile.storagespaceConsumed,computeFileSize}}, 
{{FileWithSnapshotFeature.updateQuotaAndCollectBlocks}}, 
{{FSDirAppendOp.computeQuotaDeltaForUCBlock}}); it seems that generally the 
under construction / not under construction distinction is used rather than 
committed vs. uncommitted. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469153#comment-15469153
 ] 

Erik Krogen commented on HDFS-10843:


After further consideration it seems that it is probably most appropriate to 
continue to use the preferred block size as the cached size until the block is 
actually completed rather than just committed. This is more consistent with how 
the rest of the code treats the under construction size (see comments on 
{{INodeFile.storagespaceConsumed,computeFileSize}}, 
{{FileWithSnapshotFeature.updateQuotaAndCollectBlocks}}, 
{{FSDirAppendOp.computeQuotaDeltaForUCBlock}}); it seems that generally the 
under construction / not under construction distinction is used rather than 
committed vs. uncommitted. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469154#comment-15469154
 ] 

Hadoop QA commented on HDFS-10742:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m 
34s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827273/HDFS-10742.014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a86b8660d560 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f23abf |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16654/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16654/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Measurement of lock held time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> 

[jira] [Commented] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469122#comment-15469122
 ] 

churro morales commented on HDFS-10608:
---

[~surendrasingh] just a question about why you wanted blk_ instead of the 
id itself for this event?  I was just wondering the reasoning behind your 
desire to have a string instead of a long?  The latest patch incorporates your 
comments and makes it a string.  

Thanks again for reviewing.  

> Include event for AddBlock in Inotify Event Stream
> --
>
> Key: HDFS-10608
> URL: https://issues.apache.org/jira/browse/HDFS-10608
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: churro morales
>Priority: Minor
> Attachments: HDFS-10608.patch, HDFS-10608.v1.patch, 
> HDFS-10608.v2.patch, HDFS-10608.v3.patch, HDFS-10608.v4.patch
>
>
> It would be nice to have an AddBlockEvent in the INotify pipeline.  Based on 
> discussions on the mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3C1467743792.4040080.657624289.7BE240AD%40webmail.messagingengine.com%3E






[jira] [Commented] (HDFS-8901) Use ByteBuffer in striping positional read

2016-09-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469075#comment-15469075
 ] 

Kai Zheng commented on HDFS-8901:
-

Thanks [~zhz]. Can we get this in now, if it has addressed all your questions? 
I'm not sure about it, though there is much follow-on work to improve things.

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Zheng
>Assignee: SammiChen
> Attachments: HDFS-8901-v10.patch, HDFS-8901-v17.patch, 
> HDFS-8901-v18.patch, HDFS-8901-v19.patch, HDFS-8901-v2.patch, 
> HDFS-8901-v20.patch, HDFS-8901-v3.patch, HDFS-8901-v4.patch, 
> HDFS-8901-v5.patch, HDFS-8901-v6.patch, HDFS-8901-v7.patch, 
> HDFS-8901-v8.patch, HDFS-8901-v9.patch, HDFS-8901.v11.patch, 
> HDFS-8901.v12.patch, HDFS-8901.v13.patch, HDFS-8901.v14.patch, 
> HDFS-8901.v15.patch, HDFS-8901.v16.patch, initial-poc.patch
>
>
> The native erasure coder prefers direct ByteBuffers for performance reasons. 
> To prepare for that, this change uses ByteBuffer throughout the code 
> implementing striping positional read. It will also avoid unnecessary data 
> copying between the striping read chunk buffers and the decode input buffers.
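
As a rough illustration of why direct buffers help here, a minimal sketch 
contrasting heap and direct allocations (the cell size is a made-up number; a 
native/JNI coder can read a direct buffer in place, while a heap buffer forces 
an extra copy):

{code}
import java.nio.ByteBuffer;

public class DirectBufferSketch {
  public static void main(String[] args) {
    int cellSize = 64 * 1024;
    // Heap buffer: a native (JNI) coder must copy its contents out first.
    ByteBuffer heap = ByteBuffer.allocate(cellSize);
    // Direct buffer: native code can access the memory in place, no copy.
    ByteBuffer direct = ByteBuffer.allocateDirect(cellSize);
    System.out.println(heap.isDirect());    // false
    System.out.println(direct.isDirect());  // true
  }
}
{code}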






[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469060#comment-15469060
 ] 

Virajith Jalaparti commented on HDFS-10636:
---

Hi [~eddyxu], 

bq. Should {{LocalReplica}} and {{ProvidedReplica}} extend {{FinalizedReplica}}?
Could you take a look at my previous [comment | 
https://issues.apache.org/jira/browse/HDFS-10636?focusedCommentId=15433727=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15433727]
 about this? I had a few questions about the motivation for this. 

bq. {{ReplicaBuilder#buildLocalReplicaInPipeline}} can be private.
This will not be sufficient, as the function is used in {{FsVolumeImpl}}. 
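
For readers following the thread, a bare sketch of the two hierarchy shapes 
being debated (the Sketch suffix marks these as stand-ins; neither is asserted 
to be the final patch):

{code}
// Shape in the current patch, per the comments above: storage-specific
// base class, with the finalized state as a subclass.
abstract class ReplicaInfoSketch {}
class LocalReplicaSketch extends ReplicaInfoSketch {}       // File-backed
class FinalizedReplicaSketch extends LocalReplicaSketch {}  // finalized state

// Shape implied by the question: state-specific base class, with
// storage-specific subclasses.
// abstract class FinalizedReplicaSketch extends ReplicaInfoSketch {}
// class LocalReplicaSketch extends FinalizedReplicaSketch {}
// class ProvidedReplicaSketch extends FinalizedReplicaSketch {}
{code}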


> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468993#comment-15468993
 ] 

Erik Krogen commented on HDFS-10843:


I have realized that this bug is more severe than I previously thought. As 
described above, it would only cause a transient state in which the cached and 
computed values briefly disagree. However, if the replication factor is changed 
while in this state, or similarly while the block is under construction but has 
not yet been committed, the cached value will be updated incorrectly, leaving 
it persistently wrong. All of these issues should be fixed by addressing the 
same root cause: ensuring that the storagespace values are always computed in a 
consistent way. Right now, different spots in the code handle the different 
states (under construction, committed, completed) in inconsistent ways. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Updated] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10742:
--
Attachment: HDFS-10742.014.patch

Somehow missed the license header in the previous patch; fixed it.

> Measurement of lock held time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> HDFS-10742.009.patch, HDFS-10742.010.patch, HDFS-10742.011.patch, 
> HDFS-10742.012.patch, HDFS-10742.013.patch, HDFS-10742.014.patch
>
>
> This JIRA proposes to measure the time the lock of {{FsDatasetImpl}} is 
> held by a thread. Doing so will allow us to collect lock statistics.
> This can be done by extending the {{AutoCloseableLock}} lock object in 
> {{FsDatasetImpl}}. In the future we can also consider replacing the lock with 
> a read-write lock.
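
A minimal sketch of the idea, assuming a wrapper that timestamps acquisition 
and reports on release when a threshold is exceeded (the class name, threshold, 
and logging below are illustrative; the patch's actual API may differ):

{code}
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative lock wrapper that measures how long the lock is held. */
public class HeldTimeLockSketch implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();
  private final long warnThresholdMs;
  private long acquiredAtMs; // only read/written while holding the lock

  public HeldTimeLockSketch(long warnThresholdMs) {
    this.warnThresholdMs = warnThresholdMs;
  }

  public HeldTimeLockSketch acquire() {
    lock.lock();
    acquiredAtMs = System.currentTimeMillis();
    return this;
  }

  @Override
  public void close() {
    long heldMs = System.currentTimeMillis() - acquiredAtMs;
    lock.unlock();
    if (heldMs > warnThresholdMs) {
      System.err.printf("Lock held for %d ms (threshold %d ms)%n",
          heldMs, warnThresholdMs);
    }
  }
}
// Usage: try (HeldTimeLockSketch l = theLock.acquire()) { /* critical work */ }
{code}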






[jira] [Commented] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468936#comment-15468936
 ] 

Hadoop QA commented on HDFS-10742:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
48s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827260/HDFS-10742.013.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6fdfe42601e7 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5f23abf |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16653/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16653/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16653/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Measurement of lock held time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, 

[jira] [Updated] (HDFS-10757) KMSClientProvider combined with KeyProviderCache can result in wrong UGI being used

2016-09-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10757:
--
Attachment: HDFS-10757.00.patch

Attaching a low-risk patch to make KMSCP stateless, for discussion. 

> KMSClientProvider combined with KeyProviderCache can result in wrong UGI 
> being used
> ---
>
> Key: HDFS-10757
> URL: https://issues.apache.org/jira/browse/HDFS-10757
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Xiaoyu Yao
>Priority: Critical
> Attachments: HDFS-10757.00.patch
>
>
> ClientContext::get gets the context from CACHE via a config setting based 
> name, then KeyProviderCache stored in ClientContext gets the key provider 
> cached by URI from the configuration, too. These would return the same 
> KeyProvider regardless of current UGI.
> KMSClientProvider caches the UGI (actualUgi) in ctor; that means in 
> particular that all the users of DFS with KMSClientProvider in a process will 
> get the KMS token (along with other credentials) of the first user, via the 
> above cache.
> Either KMSClientProvider shouldn't store the UGI, or one of the caches should 
> be UGI-aware, like the FS object cache.
> Side note: the comment in createConnection that purports to handle the 
> different UGI doesn't seem to cover what it says it covers. In our case, we 
> have two unrelated UGIs with no auth (createRemoteUser) with a bunch of tokens, 
> including a KMS token, added.






[jira] [Commented] (HDFS-9038) DFS reserved space is erroneously counted towards non-DFS used.

2016-09-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468897#comment-15468897
 ] 

Andrew Wang commented on HDFS-9038:
---

Gentle reminder, please set the appropriate 3.0.0 fix version when committing 
to trunk. Thanks!

> DFS reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: GetFree.java, HDFS-9038-002.patch, HDFS-9038-003.patch, 
> HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038-006.patch, 
> HDFS-9038-007.patch, HDFS-9038-008.patch, HDFS-9038-009.patch, 
> HDFS-9038-010.patch, HDFS-9038-011.patch, HDFS-9038.patch
>
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.
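
A toy worked example of the side effect described above, with made-up numbers 
(this is not the DataNode's actual accounting code):

{code}
// All figures in GB.
public class NonDfsUsedSketch {
  public static void main(String[] args) {
    long capacity = 100;  // raw disk capacity
    long reserved = 10;   // dfs.datanode.du.reserved
    long dfsUsed = 20;    // HDFS block data
    long nonDfs = 10;     // non-HDFS files on the same disk
    long actualFree = capacity - dfsUsed - nonDfs;        // 70

    // After HDFS-5215, the advertised available space excludes reserved:
    long available = actualFree - reserved;               // 60

    // Deriving non-DFS used as "whatever is neither DFS nor free" now
    // silently absorbs the reserved space:
    long derivedNonDfs = capacity - dfsUsed - available;  // 20 = 10 + reserved
    System.out.println(derivedNonDfs);
  }
}
{code}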






[jira] [Updated] (HDFS-9038) DFS reserved space is erroneously counted towards non-DFS used.

2016-09-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9038:
--
Fix Version/s: 3.0.0-alpha2

> DFS reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: GetFree.java, HDFS-9038-002.patch, HDFS-9038-003.patch, 
> HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038-006.patch, 
> HDFS-9038-007.patch, HDFS-9038-008.patch, HDFS-9038-009.patch, 
> HDFS-9038-010.patch, HDFS-9038-011.patch, HDFS-9038.patch
>
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.






[jira] [Commented] (HDFS-10757) KMSClientProvider combined with KeyProviderCache can result in wrong UGI being used

2016-09-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468871#comment-15468871
 ] 

Xiaoyu Yao commented on HDFS-10757:
---

Thanks [~arun197] and [~jnp] for the discussion. Adding a new 
KeyProviderExtension with UGI support could be complicated, as we already have 
KeyProviderCryptoExtension for encrypt/decrypt related operations and 
KeyProviderDelegationTokenExtension for delegation token related operations. 

A third option would be to remove the KMSCP instance variable actualUgi and 
make it a local variable of the methods (such as 
createConnection/addDelegationTokens, etc.) that need to use it for doAs. This 
way, the KMSCP cached by KeyProviderCache will be stateless. 

Let me know your thoughts, Thanks!
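
A rough sketch of that third option, resolving the UGI at call time instead of 
caching it in the constructor (the class and method body below are illustrative 
stand-ins for the corresponding code in {{KMSClientProvider}}):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class StatelessKmscpSketch {
  // Before: private final UserGroupInformation actualUgi; // captured in ctor

  // After: look up the current UGI inside each method that needs doAs.
  HttpURLConnection createConnection(final URL url) throws IOException {
    try {
      UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
      return ugi.doAs((PrivilegedExceptionAction<HttpURLConnection>)
          () -> (HttpURLConnection) url.openConnection());
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException(e);
    }
  }
}
{code}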

> KMSClientProvider combined with KeyProviderCache can result in wrong UGI 
> being used
> ---
>
> Key: HDFS-10757
> URL: https://issues.apache.org/jira/browse/HDFS-10757
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Xiaoyu Yao
>Priority: Critical
>
> ClientContext::get gets the context from CACHE via a config setting based 
> name, then KeyProviderCache stored in ClientContext gets the key provider 
> cached by URI from the configuration, too. These would return the same 
> KeyProvider regardless of current UGI.
> KMSClientProvider caches the UGI (actualUgi) in ctor; that means in 
> particular that all the users of DFS with KMSClientProvider in a process will 
> get the KMS token (along with other credentials) of the first user, via the 
> above cache.
> Either KMSClientProvider shouldn't store the UGI, or one of the caches should 
> be UGI-aware, like the FS object cache.
> Side note: the comment in createConnection that purports to handle the 
> different UGI doesn't seem to cover what it says it covers. In our case, we 
> have two unrelated UGIs with no auth (createRemoteUser) with a bunch of tokens, 
> including a KMS token, added.






[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468790#comment-15468790
 ] 

Lei (Eddy) Xu commented on HDFS-10636:
--

Hi [~virajith], thanks for the updates, and thanks for [~jpallas]'s input. 

I have a few remaining questions:
* Should {{LocalReplica}} and {{ProvidedReplica}} extend {{FinalizedReplica}}? 
It seems to be the opposite way in the current patch. Similar concerns for 
{{ReplicaBeingWritten}} and {{LocalReplicaBeingWritten}}.
* {{ReplicaBuilder#buildLocalReplicaInPipeline}} can be private.

The rest looks good to me. 


> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 






[jira] [Commented] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468779#comment-15468779
 ] 

Hadoop QA commented on HDFS-10608:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project: The patch generated 12 new 
+ 193 unchanged - 1 fixed = 205 total (was 194) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10608 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827249/HDFS-10608.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux b4d72ccc69e9 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6c0b75 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16652/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Updated] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10742:
--
Attachment: HDFS-10742.013.patch

fix checkstyle and findbug warnings

> Measurement of lock held time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> HDFS-10742.009.patch, HDFS-10742.010.patch, HDFS-10742.011.patch, 
> HDFS-10742.012.patch, HDFS-10742.013.patch
>
>
> This JIRA proposes to measure the time the lock of {{FsDatasetImpl}} is 
> held by a thread. Doing so will allow us to collect lock statistics.
> This can be done by extending the {{AutoCloseableLock}} lock object in 
> {{FsDatasetImpl}}. In the future we can also consider replacing the lock with 
> a read-write lock.






[jira] [Commented] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468745#comment-15468745
 ] 

Hadoop QA commented on HDFS-10742:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 109 unchanged - 0 fixed = 110 total (was 109) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Format string should use %n rather than \n in 
org.apache.hadoop.hdfs.InstrumentedLock.check(long, long)  At 
InstrumentedLock.java:[line 118] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827246/HDFS-10742.012.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cf8d9e71b675 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6c0b75 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16651/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16651/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16651/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-06 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468710#comment-15468710
 ] 

Manoj Govindassamy commented on HDFS-9781:
--

[~arpitagarwal], I agree. The patch you shared solves the 
IllegalMonitorStateException issue. But I believe we have more than one issue 
here, since the test is still failing even after your latest fix; or maybe the 
test itself has an issue. I am trying to root-cause it.

After applying the latest fix patch I no longer see any 
IllegalMonitorStateException, but await() wakes up continuously because of an 
interrupt exception, and the while loop around it in waitVolumesRemoved() 
behaves like an infinite loop, causing the caller removeVolume() to hang. Are 
you seeing this same new problem? It looks like an old issue that is only being 
unearthed by the latest fix and test. Your thoughts, please?

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch, HDFS-9781.branch-2.001.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468631#comment-15468631
 ] 

Erik Krogen commented on HDFS-10843:


I will attach a patch containing a unit test demonstrating the behavior as well 
as a fix. [~shv], can you comment on the desired behavior? 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-06 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-10843:
--

 Summary: Quota Feature Cached Size != Computed Size When Block 
Committed But Not Completed
 Key: HDFS-10843
 URL: https://issues.apache.org/jira/browse/HDFS-10843
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Affects Versions: 2.6.0
Reporter: Erik Krogen
Assignee: Erik Krogen


Currently when a block has been committed but has not yet been completed, the 
cached size (used for the quota feature) of the directory containing that block 
differs from the computed size. This results in log messages of the following 
form:

bq. ERROR namenode.NameNode 
(DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192

When a block is initially started under construction, the used space is 
conservatively set to a full block. When the block is committed, the cached 
size is updated to the final size of the block. However, the calculation of the 
computed size uses the full block size until the block is completed, so in the 
period where the block is committed but not completed they disagree. To fix 
this we need to decide which is correct and fix the other to match. It seems to 
me that the cached size is correct since once the block is committed its size 
will not change. 

This can be reproduced using the following steps:
- Create a directory with a quota
- Start writing to a file within this directory
- Prevent all datanodes to which the file is written from communicating the 
corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
simulate a transient network partition/delay)
- During this time, call DistributedFileSystem.getContentSummary() on the 
directory with the quota
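
For concreteness, a hedged sketch of the steps above (paths, sizes, and the 
quota value are illustrative; delaying the BlockReceivedAndDeleted reports 
requires test-only fault injection and is only marked by a comment):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class QuotaCachedVsComputedRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    Path dir = new Path("/TestQuotaUpdate");              // illustrative path
    dfs.mkdirs(dir);
    dfs.setQuota(dir, HdfsConstants.QUOTA_DONT_SET, 8L * 1024 * 1024);

    FSDataOutputStream out = dfs.create(new Path(dir, "file"));
    out.write(new byte[512]);
    out.hflush();  // the last block is under construction at this point
    // ... delay the DNs' BlockReceivedAndDeleted reports here (test-only) ...
    ContentSummary cs = dfs.getContentSummary(dir);  // triggers the comparison
    System.out.println("space consumed = " + cs.getSpaceConsumed());
    out.close();
  }
}
{code}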



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468599#comment-15468599
 ] 

Arpit Agarwal commented on HDFS-9781:
-

Hi [~manojg], the wait issue in removeVolumes (HDFS-10830) causes the 
IllegalMonitorStateException. If the test case still hangs after fixing that, I 
am not sure how that issue could be at fault.

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch, HDFS-9781.branch-2.001.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10636:
--
Status: Open  (was: Patch Available)

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 
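
For illustration only, one possible shape of such a refactor; the class and 
method names below are assumptions, not the signatures in the attached patches:

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;

// Sketch: File-returning accessors are replaced by storage-agnostic ones.
abstract class StorageAgnosticReplica {
  abstract URI getBlockURI();                     // instead of getBlockFile()
  abstract InputStream getDataInputStream(long offset) throws IOException;
}

// A local-filesystem subclass preserves today's behavior ...
class LocalReplica extends StorageAgnosticReplica {
  private final File blockFile;

  LocalReplica(File blockFile) {
    this.blockFile = blockFile;
  }

  @Override
  URI getBlockURI() {
    return blockFile.toURI();
  }

  @Override
  InputStream getDataInputStream(long offset) throws IOException {
    FileInputStream in = new FileInputStream(blockFile);
    if (in.skip(offset) < offset) {               // naive seek for the sketch
      in.close();
      throw new IOException("premature EOF seeking to " + offset);
    }
    return in;
  }
}
// ... while an external-storage subclass could back the same API with a
// remote store, as targeted by HDFS-9806.
{code}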



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9038) DFS reserved space is erroneously counted towards non-DFS used.

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468552#comment-15468552
 ] 

Hudson commented on HDFS-9038:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10401/])
HDFS-9038. DFS reserved space is erroneously counted towards non-DFS (arp: rev 
5f23abfa30ea29a5474513c463b4d462c0e824ee)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsVolumeList.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/protocol/StorageReport.java


> DFS reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: GetFree.java, HDFS-9038-002.patch, HDFS-9038-003.patch, 
> HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038-006.patch, 
> HDFS-9038-007.patch, HDFS-9038-008.patch, HDFS-9038-009.patch, 
> HDFS-9038-010.patch, HDFS-9038-011.patch, HDFS-9038.patch
>
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.
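
As a rough, invented-numbers illustration of the accounting at issue (not the 
exact formula from the patch):

{code}
public class NonDfsUsedExample {
  public static void main(String[] args) {
    long G = 1024L * 1024 * 1024;
    long capacity  = 100 * G;  // raw disk capacity
    long dfsUsed   = 40 * G;   // space used by HDFS block files
    long remaining = 50 * G;   // free space after honoring du.reserved
    long reserved  = 10 * G;   // dfs.datanode.du.reserved

    // Before the fix: reserved space leaks into non-DFS used (10 GB here).
    long nonDfsUsedBefore = capacity - dfsUsed - remaining;
    // After the fix: reserved space is excluded from non-DFS used (0 here).
    long nonDfsUsedAfter =
        Math.max(0, capacity - reserved - dfsUsed - remaining);

    System.out.println(nonDfsUsedBefore / G + " GB vs "
        + nonDfsUsedAfter / G + " GB");
  }
}
{code}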



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468265#comment-15468265
 ] 

Arpit Agarwal edited comment on HDFS-10713 at 9/6/16 8:55 PM:
--

Thanks for the updated patch [~hanishakoneru]. The read lock is not exclusive 
so the following sequence of operations in readUnlock won't see a consistent 
state. This block of code probably needs to synchronize on a separate mutex 
after releasing the read lock. 

{code}
if (needReport && readLockInterval >= this.readLockReportingThreshold) {
  if (readLockInterval > longestReadLockHeldInterval) {
longestReadLockHeldInterval = readLockInterval;
  }
  if (currentTime - timeStampOfLastReadLockReport > this
  .lockSuppressWarningInterval) {
logReport = true;
numSuppressedWarnings = numReadLockWarningsSuppressed;
numReadLockWarningsSuppressed = 0;
longestLockHeldInterval = longestReadLockHeldInterval;
longestReadLockHeldInterval = 0;
timeStampOfLastReadLockReport = currentTime;
  } else {
numReadLockWarningsSuppressed++;
  }
}

this.fsLock.readLock().unlock();
{code}

-I also had some questions about the _needReport_ logic I posted on 
HDFS-10817.- 


was (Author: arpitagarwal):
Thanks for the updated patch [~hanishakoneru]. The read lock is not exclusive 
so the following sequence of operations in readUnlock won't see a consistent 
state. This block of code probably needs to synchronize on a separate mutex 
after releasing the read lock. 

{code}
if (needReport && readLockInterval >= this.readLockReportingThreshold) {
  if (readLockInterval > longestReadLockHeldInterval) {
longestReadLockHeldInterval = readLockInterval;
  }
  if (currentTime - timeStampOfLastReadLockReport > this
  .lockSuppressWarningInterval) {
logReport = true;
numSuppressedWarnings = numReadLockWarningsSuppressed;
numReadLockWarningsSuppressed = 0;
longestLockHeldInterval = longestReadLockHeldInterval;
longestReadLockHeldInterval = 0;
timeStampOfLastReadLockReport = currentTime;
  } else {
numReadLockWarningsSuppressed++;
  }
}

this.fsLock.readLock().unlock();
{code}

I also had some questions about the _needReport_ logic I posted on HDFS-10817.

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> URL: https://issues.apache.org/jira/browse/HDFS-10713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging, namenode
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, 
> HDFS-10713.002.patch, HDFS-10713.003.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a 
> thread for over 1 second. These messages can be throttled to at most one 
> per x minutes to avoid potentially filling up NN logs. We can also log the 
> number of suppressed notices since the last log message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9038) DFS reserved space is erroneously counted towards non-DFS used.

2016-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9038:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.7.4)
  Status: Resolved  (was: Patch Available)

Pushed to 2.8.0. I hit numerous conflicts cherry-picking to branch-2.7 that I 
didn't look into; 2.7.4 will likely need an updated patch.

> DFS reserved space is erroneously counted towards non-DFS used.
> ---
>
> Key: HDFS-9038
> URL: https://issues.apache.org/jira/browse/HDFS-9038
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: GetFree.java, HDFS-9038-002.patch, HDFS-9038-003.patch, 
> HDFS-9038-004.patch, HDFS-9038-005.patch, HDFS-9038-006.patch, 
> HDFS-9038-007.patch, HDFS-9038-008.patch, HDFS-9038-009.patch, 
> HDFS-9038-010.patch, HDFS-9038-011.patch, HDFS-9038.patch
>
>
> HDFS-5215 changed the DataNode volume available space calculation to consider 
> the reserved space held by the {{dfs.datanode.du.reserved}} configuration 
> property.  As a side effect, reserved space is now counted towards non-DFS 
> used.  I don't believe it was intentional to change the definition of non-DFS 
> used.  This issue proposes restoring the prior behavior: do not count 
> reserved space towards non-DFS used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468517#comment-15468517
 ] 

Hadoop QA commented on HDFS-10636:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 25 new + 856 unchanged - 94 fixed = 881 total (was 950) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDNFencing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10636 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12826989/HDFS-10636.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e2ad5721046 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6c0b75 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16650/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16650/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16650/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16650/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HDFS-10608:
--
Status: Patch Available  (was: Open)

> Include event for AddBlock in Inotify Event Stream
> --
>
> Key: HDFS-10608
> URL: https://issues.apache.org/jira/browse/HDFS-10608
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: churro morales
>Priority: Minor
> Attachments: HDFS-10608.patch, HDFS-10608.v1.patch, 
> HDFS-10608.v2.patch, HDFS-10608.v3.patch, HDFS-10608.v4.patch
>
>
> It would be nice to have an AddBlockEvent in the INotify pipeline.  Based on 
> discussions from mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3C1467743792.4040080.657624289.7BE240AD%40webmail.messagingengine.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HDFS-10608:
--
Status: Open  (was: Patch Available)

> Include event for AddBlock in Inotify Event Stream
> --
>
> Key: HDFS-10608
> URL: https://issues.apache.org/jira/browse/HDFS-10608
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: churro morales
>Priority: Minor
> Attachments: HDFS-10608.patch, HDFS-10608.v1.patch, 
> HDFS-10608.v2.patch, HDFS-10608.v3.patch, HDFS-10608.v4.patch
>
>
> It would be nice to have an AddBlockEvent in the INotify pipeline.  Based on 
> discussions from mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3C1467743792.4040080.657624289.7BE240AD%40webmail.messagingengine.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10608) Include event for AddBlock in Inotify Event Stream

2016-09-06 Thread churro morales (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

churro morales updated HDFS-10608:
--
Attachment: HDFS-10608.v4.patch

[~surendrasingh] 

Removed the lastBlockLength and converted the block IDs to strings as you 
requested. Now we simply prepend the block ID with Block.BLOCK_FILE_PREFIX.

Hope this is what you wanted, and thanks for the review. Let me know if this is 
in a state to go upstream now.
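
A tiny illustration of that conversion (the ID value is invented; 
Block.BLOCK_FILE_PREFIX is the "blk_" prefix seen in fsck output and NN/DN 
logs):

{code}
import org.apache.hadoop.hdfs.protocol.Block;

public class BlockNameExample {
  public static void main(String[] args) {
    long blockId = 1073741825L;                           // invented value
    String blockName = Block.BLOCK_FILE_PREFIX + blockId; // "blk_1073741825"
    System.out.println(blockName);
  }
}
{code}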

> Include event for AddBlock in Inotify Event Stream
> --
>
> Key: HDFS-10608
> URL: https://issues.apache.org/jira/browse/HDFS-10608
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: churro morales
>Priority: Minor
> Attachments: HDFS-10608.patch, HDFS-10608.v1.patch, 
> HDFS-10608.v2.patch, HDFS-10608.v3.patch, HDFS-10608.v4.patch
>
>
> It would be nice to have an AddBlockEvent in the INotify pipeline.  Based on 
> discussions from mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201607.mbox/%3C1467743792.4040080.657624289.7BE240AD%40webmail.messagingengine.com%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10742) Measurement of lock held time in FsDatasetImpl

2016-09-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-10742:
--
Attachment: HDFS-10742.012.patch

> Measurement of lock held time in FsDatasetImpl
> --
>
> Key: HDFS-10742
> URL: https://issues.apache.org/jira/browse/HDFS-10742
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-10742.001.patch, HDFS-10742.002.patch, 
> HDFS-10742.003.patch, HDFS-10742.004.patch, HDFS-10742.005.patch, 
> HDFS-10742.006.patch, HDFS-10742.007.patch, HDFS-10742.008.patch, 
> HDFS-10742.009.patch, HDFS-10742.010.patch, HDFS-10742.011.patch, 
> HDFS-10742.012.patch
>
>
> This JIRA proposes to measure the time the of lock of {{FsDatasetImpl}} is 
> held by a thread. Doing so will allow us to measure lock statistics.
> This can be done by extending the {{AutoCloseableLock}} lock object in 
> {{FsDatasetImpl}}. In the future we can also consider replacing the lock with 
> a read-write lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10817) Add Logging for Long-held NN Read Locks

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468414#comment-15468414
 ] 

Erik Krogen commented on HDFS-10817:


Right, the {{getReadHoldCount}} vs. {{getReadLockCount}} distinction is not 
very obvious but very important!

> Add Logging for Long-held NN Read Locks
> ---
>
> Key: HDFS-10817
> URL: https://issues.apache.org/jira/browse/HDFS-10817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10817.000.patch, HDFS-10817.001.patch, 
> HDFS-10817.002.patch, HDFS-10817.003.patch, HDFS-10817.004.patch, 
> HDFS-10817.addendum.000.patch, HDFS-10817.addendum.001.patch
>
>
> Right now the Namenode will log when a write lock is held for a long time to 
> help tracks methods which are causing expensive delays. Let's do the same for 
> read locks since these operations may also be expensive/long and cause 
> delays. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10842) Prevent IncrementalBlockReport processing before the first FullBlockReport has been received

2016-09-06 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-10842:
---
Assignee: Daryn Sharp  (was: Kuhu Shukla)

> Prevent IncrementalBlockReport processing before the first FullBlockReport 
> has been received
> 
>
> Key: HDFS-10842
> URL: https://issues.apache.org/jira/browse/HDFS-10842
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Daryn Sharp
>
> In a scenario where a Datanode does not send a fullBlockReport, incremental 
> block reports can cause the NN to make error in judgement about what blocks 
> are on a given DN. One solution is to not process the blocks unless we have 
> received the first full block report.
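
Purely as illustration of the proposed guard (all names below are assumptions, 
not actual NameNode APIs):

{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: defer incremental block reports (IBRs) until the first full block
// report (FBR) has established what blocks the DN actually has.
class IbrGuard {
  private boolean firstFbrReceived = false;
  private final Queue<String> deferredIbrs = new ArrayDeque<>();

  synchronized void onFullBlockReport() {
    firstFbrReceived = true;
    while (!deferredIbrs.isEmpty()) {   // apply anything that arrived early
      apply(deferredIbrs.poll());
    }
  }

  synchronized void onIncrementalBlockReport(String ibr) {
    if (!firstFbrReceived) {
      deferredIbrs.add(ibr);            // defer instead of processing blindly
      return;
    }
    apply(ibr);
  }

  private void apply(String ibr) {
    System.out.println("applying " + ibr); // stands in for block-map updates
  }
}
{code}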



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468307#comment-15468307
 ] 

John Zhuge commented on HDFS-6962:
--

No problem, [~cnauroth].

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> 
> dfs.umaskmode
> 027
> 
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, so everything is OK.
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so effectively no mask.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> 
> dfs.umaskmode
> 010
> 
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10817) Add Logging for Long-held NN Read Locks

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468304#comment-15468304
 ] 

Arpit Agarwal commented on HDFS-10817:
--

Got it, thank you for the clarification. I understood the ThreadLocal part but 
was confused about the reentrant part.

> Add Logging for Long-held NN Read Locks
> ---
>
> Key: HDFS-10817
> URL: https://issues.apache.org/jira/browse/HDFS-10817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10817.000.patch, HDFS-10817.001.patch, 
> HDFS-10817.002.patch, HDFS-10817.003.patch, HDFS-10817.004.patch, 
> HDFS-10817.addendum.000.patch, HDFS-10817.addendum.001.patch
>
>
> Right now the Namenode will log when a write lock is held for a long time to 
> help tracks methods which are causing expensive delays. Let's do the same for 
> read locks since these operations may also be expensive/long and cause 
> delays. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10817) Add Logging for Long-held NN Read Locks

2016-09-06 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468294#comment-15468294
 ] 

Erik Krogen commented on HDFS-10817:


[~arpitagarwal], thank you for the concern, but I do not believe that 
understanding is correct. {{getReadHoldCount}} checks how many reentrant 
locking calls the current thread has made. Each thread stores its own copy of 
{{readLockHeldTimestamp}} (it is a {{ThreadLocal}}), which is set on the 
thread's first read lock call and read on its final unlock call. Different 
threads each monitor and record their time separately.
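
A compilable sketch of that mechanism (field names and the threshold are 
illustrative, not the committed code):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadLockTimer {
  private static final long THRESHOLD_MS = 5000;  // assumed threshold
  private final ReentrantReadWriteLock coarseLock =
      new ReentrantReadWriteLock();
  // Per-thread acquisition time; each reader records its own timestamp.
  private final ThreadLocal<Long> readLockHeldTimestamp =
      ThreadLocal.withInitial(() -> Long.MAX_VALUE);

  void readLock() {
    coarseLock.readLock().lock();
    if (coarseLock.getReadHoldCount() == 1) {     // outermost, non-reentrant
      readLockHeldTimestamp.set(System.currentTimeMillis());
    }
  }

  void readUnlock() {
    final boolean outermost = coarseLock.getReadHoldCount() == 1;
    long heldMs = 0;
    if (outermost) {
      heldMs = System.currentTimeMillis() - readLockHeldTimestamp.get();
      readLockHeldTimestamp.remove();
    }
    coarseLock.readLock().unlock();
    if (outermost && heldMs >= THRESHOLD_MS) {
      System.err.println("Read lock held for " + heldMs + " ms");
    }
  }
}
{code}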

> Add Logging for Long-held NN Read Locks
> ---
>
> Key: HDFS-10817
> URL: https://issues.apache.org/jira/browse/HDFS-10817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10817.000.patch, HDFS-10817.001.patch, 
> HDFS-10817.002.patch, HDFS-10817.003.patch, HDFS-10817.004.patch, 
> HDFS-10817.addendum.000.patch, HDFS-10817.addendum.001.patch
>
>
> Right now the Namenode will log when a write lock is held for a long time to 
> help tracks methods which are causing expensive delays. Let's do the same for 
> read locks since these operations may also be expensive/long and cause 
> delays. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468295#comment-15468295
 ] 

Hadoop QA commented on HDFS-10450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
4s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m 
11s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
43s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
11s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 9s{color} | {color:green} The patch generated 0 new + 97 unchanged - 1 fixed = 
97 total (was 98) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
24s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Issue | HDFS-10450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827233/HDFS-10450.HDFS-8707.002.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  cc  mvnsite  
javac  unit  |
| uname | Linux 039ee5d441ec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 06316b7 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| shellcheck | v0.4.4 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16649/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16649/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement Cyrus SASL 

[jira] [Commented] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468292#comment-15468292
 ] 

Chris Nauroth commented on HDFS-6962:
-

Oh no!  I botched the commit message on this, so it says "Contributed by Chris 
Nauroth" instead of "Contributed by John Zhuge".  This can't really be fixed, 
because it would require a force push and cause grief for anyone who has sync'd 
the repo since the commit.

[~jzhuge], I'm really sorry about that.  The JIRA issue remains assigned to you 
for proper attribution though.

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> 
> dfs.umaskmode
> 027
> 
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, so everything is OK.
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx: the inheritance mask is rwx, so effectively no mask.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> 
> dfs.umaskmode
> 010
> 
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group, and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468285#comment-15468285
 ] 

Hudson commented on HDFS-10841:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10400 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10400/])
HDFS-10841. Remove duplicate or unused variable in appendFile(). (xiao: rev 
f6c0b7543f612de756ff0c03e9a2c6e33b496a36)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAppendOp.java


> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}
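
For readers unfamiliar with the warning, a tiny illustration of a dead local 
store (not the actual FSDirAppendOp code):

{code}
class DeadStoreExample {
  static String resolve(String path) {
    return "/resolved" + path;
  }

  static void appendFile(String path) {
    String src = path;      // dead store: this value is never read ...
    src = resolve(path);    // ... because it is immediately overwritten
    System.out.println(src);
  }
}
{code}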



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10713) Throttle FsNameSystem lock warnings

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468265#comment-15468265
 ] 

Arpit Agarwal commented on HDFS-10713:
--

Thanks for the updated patch [~hanishakoneru]. The read lock is not exclusive 
so the following sequence of operations in readUnlock won't see a consistent 
state. This block of code probably needs to synchronize on a separate mutex 
after releasing the read lock. 

{code}
if (needReport && readLockInterval >= this.readLockReportingThreshold) {
  if (readLockInterval > longestReadLockHeldInterval) {
longestReadLockHeldInterval = readLockInterval;
  }
  if (currentTime - timeStampOfLastReadLockReport > this
  .lockSuppressWarningInterval) {
logReport = true;
numSuppressedWarnings = numReadLockWarningsSuppressed;
numReadLockWarningsSuppressed = 0;
longestLockHeldInterval = longestReadLockHeldInterval;
longestReadLockHeldInterval = 0;
timeStampOfLastReadLockReport = currentTime;
  } else {
numReadLockWarningsSuppressed++;
  }
}

this.fsLock.readLock().unlock();
{code}
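
To make the suggestion concrete, here is a minimal sketch (hypothetical 
names, not the actual patch) of deferring the shared-state updates to a 
dedicated monitor after the read lock has been released:

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadLockReporting {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final Object reportingMutex = new Object(); // guards fields below
  private long longestReadLockHeldInterval = 0;
  private int numReadLockWarningsSuppressed = 0;

  void readUnlock(long readLockInterval, long reportingThreshold) {
    // Release the (non-exclusive) read lock first.
    fsLock.readLock().unlock();
    if (readLockInterval >= reportingThreshold) {
      // Serialize the read-modify-write sequence across reader threads.
      synchronized (reportingMutex) {
        if (readLockInterval > longestReadLockHeldInterval) {
          longestReadLockHeldInterval = readLockInterval;
        }
        numReadLockWarningsSuppressed++;
      }
    }
  }
}
{code}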

I also had some questions about the _needReport_ logic I posted on HDFS-10817.

> Throttle FsNameSystem lock warnings
> ---
>
> Key: HDFS-10713
> URL: https://issues.apache.org/jira/browse/HDFS-10713
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging, namenode
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
> Attachments: HDFS-10713.000.patch, HDFS-10713.001.patch, 
> HDFS-10713.002.patch, HDFS-10713.003.patch
>
>
> The NameNode logs a message if the FSNamesystem write lock is held by a 
> thread for over 1 second. These messages can be throttled to at most one 
> per x minutes to avoid potentially filling up the NN logs. We can also log 
> the number of suppressed notices since the last log message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10817) Add Logging for Long-held NN Read Locks

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468261#comment-15468261
 ] 

Arpit Agarwal commented on HDFS-10817:
--

Hi [~xkrogen],

Thanks for adding this instrumentation. IIUC this implicitly assumes that with 
multiple readers, the first thread to acquire the read lock will also be the 
last thread to release it, since only the first thread to get the read lock 
updates readLockHeldTimeStamp. If another thread is the last to release it, 
then {{readLockHeldTimeStamp.get()}} is going to return Long.MAX_VALUE, so the 
calculation will be off. 
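
For illustration, a reduced model of the pattern in question (all names below 
are invented, not the actual patch): a single shared start-time slot records 
only the first reader, so the computed interval is meaningful only when that 
same thread is also the last one to unlock.

{code}
import java.util.concurrent.atomic.AtomicLong;

class ReadLockTimer {
  private final AtomicLong readLockStartMs = new AtomicLong(Long.MAX_VALUE);

  void onReadLock(long nowMs) {
    // Only the first reader wins the CAS and records a start time.
    readLockStartMs.compareAndSet(Long.MAX_VALUE, nowMs);
  }

  long onReadUnlock(long nowMs, boolean isRecordingThread) {
    if (isRecordingThread) {
      // The recording thread resets the slot when it unlocks...
      return nowMs - readLockStartMs.getAndSet(Long.MAX_VALUE);
    }
    // ...so if another thread is the last to release, it sees
    // Long.MAX_VALUE here and the interval calculation is off.
    return nowMs - readLockStartMs.get();
  }
}
{code}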


> Add Logging for Long-held NN Read Locks
> ---
>
> Key: HDFS-10817
> URL: https://issues.apache.org/jira/browse/HDFS-10817
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: logging, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10817.000.patch, HDFS-10817.001.patch, 
> HDFS-10817.002.patch, HDFS-10817.003.patch, HDFS-10817.004.patch, 
> HDFS-10817.addendum.000.patch, HDFS-10817.addendum.001.patch
>
>
> Right now the Namenode will log when a write lock is held for a long time to 
> help track methods which are causing expensive delays. Let's do the same for 
> read locks, since these operations may also be expensive/long and cause 
> delays. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10636:

Status: Open  (was: Patch Available)

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 
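
As a hedged sketch of the direction described (all names invented here): an 
abstract replica handle can expose URIs rather than java.io.File, letting 
subclasses point at external storage.

{code}
import java.io.File;
import java.net.URI;

abstract class AbstractReplica {
  abstract URI getDataURI(); // where the block data lives
  abstract URI getMetaURI(); // where the checksum/meta file lives
}

// The local-disk case degenerates to wrapping java.io.File paths.
class LocalFileReplica extends AbstractReplica {
  private final File data;
  private final File meta;

  LocalFileReplica(File data, File meta) {
    this.data = data;
    this.meta = meta;
  }

  @Override
  URI getDataURI() {
    return data.toURI();
  }

  @Override
  URI getMetaURI() {
    return meta.toURI();
  }
}
{code}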



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10636) Modify ReplicaInfo to remove the assumption that replica metadata and data are stored in java.io.File.

2016-09-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10636:

Status: Patch Available  (was: Open)

> Modify ReplicaInfo to remove the assumption that replica metadata and data 
> are stored in java.io.File.
> --
>
> Key: HDFS-10636
> URL: https://issues.apache.org/jira/browse/HDFS-10636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10636.001.patch, HDFS-10636.002.patch, 
> HDFS-10636.003.patch, HDFS-10636.004.patch, HDFS-10636.005.patch, 
> HDFS-10636.006.patch, HDFS-10636.007.patch
>
>
> Replace java.io.File related APIs from {{ReplicaInfo}}, and enable the 
> definition of new {{ReplicaInfo}} sub-classes whose metadata and data can be 
> present on external storages (HDFS-9806). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10835) Fix typos in httpfs.sh

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468233#comment-15468233
 ] 

Hudson commented on HDFS-10835:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10399 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10399/])
HDFS-10835. Fix typos in httpfs.sh. Contributed by John Zhuge. (xiao: rev 
7c213c341d047362e6f8b66ab3ffafaf639d8d31)
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh


> Fix typos in httpfs.sh
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typos in method {{hadoop_usage}} of {{httpfs.sh}}. The occurrences of 
> {{kms}} should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10841:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8. Backporting to branch-2 had a 
trivial conflict; I compiled before pushing.

Thanks Kihwal for the quick fix!

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10835) Fix typos in httpfs.sh

2016-09-06 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468191#comment-15468191
 ] 

John Zhuge commented on HDFS-10835:
---

Thanks [~xiaochen] for the review and commit, [~aw] for the review.

> Fix typos in httpfs.sh
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typos in method {{hadoop_usage}} of {{httpfs.sh}}. The occurrences of 
> {{kms}} should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468192#comment-15468192
 ] 

Xiao Chen commented on HDFS-10841:
--

+1 on the patch, committing soon.

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10835) Fix typos in httpfs.sh

2016-09-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10835:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks [~jzhuge] for fixing the issue, and [~aw] for reviews.

> Fix typos in httpfs.sh
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typos in method {{hadoop_usage}} of {{httpfs.sh}}. The occurrences of 
> {{kms}} should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10835) Fix typos in httpfs.sh

2016-09-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468179#comment-15468179
 ] 

Xiao Chen commented on HDFS-10835:
--

+1 on patch 5, committing this.

> Fix typos in httpfs.sh
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typos in method {{hadoop_usage}} of {{httpfs.sh}}. The occurrences of 
> {{kms}} should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10835) Fix typos in httpfs.sh

2016-09-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10835:
-
Summary: Fix typos in httpfs.sh  (was: Typo in httpfs.sh hadoop_usage)

> Fix typos in httpfs.sh
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typos in method {{hadoop_usage}} of {{httpfs.sh}}. The occurrences of 
> {{kms}} should be replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9847) HDFS configuration should accept time units

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468147#comment-15468147
 ] 

Hudson commented on HDFS-9847:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10398 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10398/])
HDFS-9847. HDFS configuration should accept time units. Contributed by 
(cdouglas: rev d37dc5d1b8e022a7085118a2e7066623483c293f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointConf.java


> HDFS configuration should accept time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9847-branch-2.001.patch, 
> HDFS-9847-branch-2.002.patch, HDFS-9847-nothrow.001.patch, 
> HDFS-9847-nothrow.002.patch, HDFS-9847-nothrow.003.patch, 
> HDFS-9847-nothrow.004.patch, HDFS-9847.001.patch, HDFS-9847.002.patch, 
> HDFS-9847.003.patch, HDFS-9847.004.patch, HDFS-9847.005.patch, 
> HDFS-9847.006.patch, HDFS-9847.007.patch, HDFS-9847.008.patch, 
> branch-2-delta.002.txt, timeduration-w-y.patch
>
>
> HDFS-9821 discusses letting existing keys accept friendly units, e.g. 60s, 
> 5m, 1d, 6w, etc. Some configuration key names already contain a time unit, 
> like {{dfs.blockreport.intervalMsec}}, so we can make the other 
> configurations, whose names carry no time unit, accept friendly time units. 
> The unit {{seconds}} is frequently used in HDFS, so we can update those 
> configurations first.
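
As an illustration of the mechanism (not part of this patch), 
{{Configuration#getTimeDuration}} parses a unit suffix and converts the value 
to the unit the caller asks for; the key below is used purely as an example:

{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeUnitExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // A friendly value: 30 minutes instead of a bare number of seconds.
    conf.set("dfs.namenode.checkpoint.period", "30m");
    // The caller names the unit it wants the duration returned in.
    long seconds = conf.getTimeDuration(
        "dfs.namenode.checkpoint.period", 3600, TimeUnit.SECONDS);
    System.out.println(seconds); // prints 1800
  }
}
{code}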



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468145#comment-15468145
 ] 

Hudson commented on HDFS-6962:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10398 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10398/])
HDFS-6962. ACL inheritance conflicts with umaskmode. Contributed by (cnauroth: 
rev f0d5382ff3e31a47d13e4cb6c3a244cca82b17ce)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSymlinkOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/PermissionParam.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLIWithPosixAclInheritance.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclStorage.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/package-info.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsCreateModes.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/UnmaskedPermissionParam.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLIWithPosixAclInheritance.xml


> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m 

[jira] [Commented] (HDFS-7878) API - expose an unique file identifier

2016-09-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468143#comment-15468143
 ] 

Jing Zhao commented on HDFS-7878:
-

{{clusterID}} can be a candidate, but it can only be retrieved from either the 
VERSION file or the NamespaceInfo. It is hard for a dfsclient to map a 
clusterId to a specific NameNode. How about using the name service id 
("dfs.nameservice.id") here?

> API - expose an unique file identifier
> --
>
> Key: HDFS-7878
> URL: https://issues.apache.org/jira/browse/HDFS-7878
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, 
> HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, 
> HDFS-7878.06.patch, HDFS-7878.patch
>
>
> See HDFS-487.
> Even though that is resolved as duplicate, the ID is actually not exposed by 
> the JIRA it supposedly duplicates.
> INode ID for the file should be easy to expose; alternatively ID could be 
> derived from block IDs, to account for appends...
> This is useful e.g. as a per-file cache key, to make sure the cache stays 
> correct when a file is overwritten.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) Setreplication removing block from underconstrcution temporarily when batch IBR is enabled.

2016-09-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468133#comment-15468133
 ] 

Mingliang Liu commented on HDFS-10810:
--

Will do this week. Thanks.

>  Setreplication removing block from underconstrcution temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1
> 2) One block is written and the file is closed without waiting for the IBR
> 3) Setreplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10842) Prevent IncrementalBlockReport processing before the first FullBlockReport has been received

2016-09-06 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created HDFS-10842:
--

 Summary: Prevent IncrementalBlockReport processing before the 
first FullBlockReport has been received
 Key: HDFS-10842
 URL: https://issues.apache.org/jira/browse/HDFS-10842
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kuhu Shukla
Assignee: Kuhu Shukla


In a scenario where a Datanode has not yet sent a full block report, 
incremental block reports can cause the NN to make errors in judgement about 
which blocks are on a given DN. One solution is to not process the incremental 
reports unless we have received the first full block report.
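
A minimal sketch of the proposed guard, with hypothetical names (the real 
bookkeeping would live in the NN's per-storage state):

{code}
// Track, per DataNode storage, whether the first full block report (FBR)
// has arrived; until then, incremental block reports (IBRs) are not applied.
class StorageReportState {
  private volatile boolean firstFullReportReceived = false;

  void onFullBlockReport() {
    firstFullReportReceived = true;
  }

  boolean shouldProcessIncrementalReport() {
    // Drop (or queue) IBRs until an FBR establishes a baseline view.
    return firstFullReportReceived;
  }
}
{code}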



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468117#comment-15468117
 ] 

John Zhuge commented on HDFS-6962:
--

Thank you [~eddyxu] and [~cnauroth] for the great reviews and commit, 
[~Alexandre LINTE] for reporting the issue.

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx -- the inheritance mask is rwx, so no masking.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468106#comment-15468106
 ] 

Hadoop QA commented on HDFS-10450:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/16649/console in case of 
problems.


> libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch, 
> HDFS-10450.HDFS-8707.001.patch, HDFS-10450.HDFS-8707.002.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which 
> does not have an ASF-approved license.  It included a framework to use Cyrus 
> SASL (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6962:

Fix Version/s: 3.0.0-alpha2

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx -- the inheritance mask is rwx, so no masking.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6962:


+1 for patch revision 010.  I have committed this to trunk.  [~jzhuge], thank 
you for your hard work on this patch.  [~eddyxu], thank you for reviewing.

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx -- the inheritance mask is rwx, so no masking.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6962:

Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx -- the inheritance mask is rwx, so no masking.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6962) ACL inheritance conflicts with umaskmode

2016-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6962:

Release Note: The original implementation of HDFS ACLs applied the client's 
umask to the permissions when inheriting a default ACL defined on a parent 
directory.  This behavior is a deviation from the POSIX ACL specification, 
which states that the umask has no influence when a default ACL propagates from 
parent to child.  HDFS now offers the capability to ignore the umask in this 
case for improved compliance with POSIX.  This change is considered 
backward-incompatible, so the new behavior is off by default and must be 
explicitly configured by setting dfs.namenode.posix.acl.inheritance.enabled to 
true in hdfs-site.xml.  Please see the HDFS Permissions Guide for further 
details.
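
For reference, opting in is a single property in hdfs-site.xml (illustrative 
snippet; the property name is the one stated above):

{noformat}
<property>
  <name>dfs.namenode.posix.acl.inheritance.enabled</name>
  <value>true</value>
</property>
{noformat}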

> ACL inheritance conflicts with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.010.patch, HDFS-6962.1.patch, 
> disabled_new_client.log, disabled_old_client.log, enabled_new_client.log, 
> enabled_old_client.log, run_compat_tests, run_unit_tests, test_plan.md
>
>
> In hdfs-site.xml 
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx -- the inheritance mask is rwx, so no masking.
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/.
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9781) FsDatasetImpl#getBlockReports can occasionally throw NullPointerException

2016-09-06 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468073#comment-15468073
 ] 

Manoj Govindassamy commented on HDFS-9781:
--

[~arpitagarwal], thanks for looking at the test failures. I would like to 
understand more about these test failure signatures.

The fix in this bug, HDFS-9781, does the following:
1. fixes the NPE issue in getBlockReports()
2. updates the test case TestFsDatasetImpl#testRemoveVolumeBeingWritten() to 
verify both the HDFS-9781 (NPE) and HDFS-10830 (IllegalMonitorStateException) 
issues

So I believe that even if you back out the HDFS-9781 NPE fix, you will 
continue to see the other problem exposed by the updated test case, until the 
wait in removeVolumes() is fixed.

Can you please tell me how the test case fails when you back out all the 
other fixes? Is it failing only with IllegalMonitorStateException, or with 
other errors as well?

> FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
> -
>
> Key: HDFS-9781
> URL: https://issues.apache.org/jira/browse/HDFS-9781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9781-branch-2.001.patch, HDFS-9781.002.patch, 
> HDFS-9781.003.patch, HDFS-9781.01.patch, HDFS-9781.branch-2.001.patch
>
>
> FsDatasetImpl#getBlockReports occasionally throws NPE. The NPE caused 
> TestFsDatasetImpl#testRemoveVolumeBeingWritten to time out, because the test 
> waits for the call to FsDatasetImpl#getBlockReports to complete without 
> exceptions.
> Additionally, the test should be updated to identify an expected exception, 
> using {{GenericTestUtils.assertExceptionContains()}}
> {noformat}
> Exception in thread "Thread-20" java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getBlockReports(FsDatasetImpl.java:1709)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1BlockReportThread.run(TestFsDatasetImpl.java:587)
> 2016-02-08 15:47:30,379 [Thread-21] WARN  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:run(606)) - Exception caught. This should not affect 
> the test
> java.io.IOException: Failed to move meta file for ReplicaBeingWritten, 
> blk_0_0, RBW
>   getNumBytes() = 0
>   getBytesOnDisk()  = 0
>   getVisibleLength()= 0
>   getVolume()   = 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current
>   getBlockFile()= 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0
>   bytesAcked=0
>   bytesOnDisk=0 from 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta
>  to 
> /home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addFinalizedBlock(BlockPoolSlice.java:295)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addFinalizedBlock(FsVolumeImpl.java:819)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1620)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1601)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl$1ResponderThread.run(TestFsDatasetImpl.java:603)
> Caused by: java.io.IOException: 
> renameTo(src=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/rbw/blk_0_0.meta,
>  
> dst=/home/weichiu/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/Nmi6rYndvr/data0/current/bpid-0/current/finalized/subdir0/subdir0/blk_0_0.meta)
>  failed.
> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:873)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:855)
> ... 5 more
> 2016-02-08 15:47:34,381 [Thread-19] INFO  impl.FsDatasetImpl 
> (FsVolumeList.java:waitVolumeRemoved(287)) - Volume reference is released.
> 2016-02-08 15:47:34,384 [Thread-19] INFO  impl.TestFsDatasetImpl 
> (TestFsDatasetImpl.java:testRemoveVolumeBeingWritten(622)) - Volumes removed
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468056#comment-15468056
 ] 

Bob Hansen commented on HDFS-10450:
---

Cleaned up whitespace

> libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch, 
> HDFS-10450.HDFS-8707.001.patch, HDFS-10450.HDFS-8707.002.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which 
> does not have an ASF-approved license.  It included a framework to use Cyrus 
> SASL (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10450:
--
Attachment: HDFS-10450.HDFS-8707.002.patch

> libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch, 
> HDFS-10450.HDFS-8707.001.patch, HDFS-10450.HDFS-8707.002.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which 
> does not have an ASF-approved license.  It included a framework to use Cyrus 
> SASL (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9847) HDFS configuration should accept time units

2016-09-06 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9847:

Summary: HDFS configuration should accept time units  (was: HDFS 
configuration without time unit name should accept friendly time units)

> HDFS configuration should accept time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9847-branch-2.001.patch, 
> HDFS-9847-branch-2.002.patch, HDFS-9847-nothrow.001.patch, 
> HDFS-9847-nothrow.002.patch, HDFS-9847-nothrow.003.patch, 
> HDFS-9847-nothrow.004.patch, HDFS-9847.001.patch, HDFS-9847.002.patch, 
> HDFS-9847.003.patch, HDFS-9847.004.patch, HDFS-9847.005.patch, 
> HDFS-9847.006.patch, HDFS-9847.007.patch, HDFS-9847.008.patch, 
> branch-2-delta.002.txt, timeduration-w-y.patch
>
>
> HDFS-9821 talks about letting existing keys accept friendly time units, 
> e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names contain a time 
> unit name, like {{dfs.blockreport.intervalMsec}}, so we can make other 
> configurations whose names carry no time unit accept friendly time units. 
> The time unit {{seconds}} is frequently used in HDFS. We can update those 
> configurations first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9847) HDFS configuration without time unit name should accept friendly time units

2016-09-06 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-9847:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

+1 Thanks for updating the patch, [~linyiqun].

I committed this. Thanks!
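
For reference, the friendly-unit parsing these keys now go through looks 
roughly like this ({{Configuration.getTimeDuration}} is the real API, but the 
key name below is made up for illustration):
{code}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeUnitConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // "5m" parses as 5 minutes; a bare number like "300" would fall back
    // to the unit passed to getTimeDuration(), i.e. seconds here.
    conf.set("dfs.example.interval", "5m");
    long seconds =
        conf.getTimeDuration("dfs.example.interval", 30, TimeUnit.SECONDS);
    System.out.println(seconds);  // 300
  }
}
{code}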

> HDFS configuration without time unit name should accept friendly time units
> ---
>
> Key: HDFS-9847
> URL: https://issues.apache.org/jira/browse/HDFS-9847
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-9847-branch-2.001.patch, 
> HDFS-9847-branch-2.002.patch, HDFS-9847-nothrow.001.patch, 
> HDFS-9847-nothrow.002.patch, HDFS-9847-nothrow.003.patch, 
> HDFS-9847-nothrow.004.patch, HDFS-9847.001.patch, HDFS-9847.002.patch, 
> HDFS-9847.003.patch, HDFS-9847.004.patch, HDFS-9847.005.patch, 
> HDFS-9847.006.patch, HDFS-9847.007.patch, HDFS-9847.008.patch, 
> branch-2-delta.002.txt, timeduration-w-y.patch
>
>
> HDFS-9821 talks about letting existing keys accept friendly time units, 
> e.g. 60s, 5m, 1d, 6w, etc. But some configuration key names contain a time 
> unit name, like {{dfs.blockreport.intervalMsec}}, so we can make other 
> configurations whose names carry no time unit accept friendly time units. 
> The time unit {{seconds}} is frequently used in HDFS. We can update those 
> configurations first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468046#comment-15468046
 ] 

Hadoop QA commented on HDFS-10841:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m  
9s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827212/HDFS-10841.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c07bd5b1522a 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 39d1b1d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16648/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16648/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15468041#comment-15468041
 ] 

Xiao Chen commented on HDFS-10841:
--

Thanks Kihwal for taking care of this!

The original problem only shows up in branch-2, because {{src}} is not used at 
all. Looking at this patch, I think it makes more sense to use {{path}} for 
logging and remove {{src}}. +1 pending jenkins.

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467894#comment-15467894
 ] 

Hadoop QA commented on HDFS-10450:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
47s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
9s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
15s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 8s{color} | {color:green} The patch generated 0 new + 97 unchanged - 1 fixed = 
97 total (was 98) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Issue | HDFS-10450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827211/HDFS-10450.HDFS-8707.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  cc  mvnsite  
javac  unit  |
| uname | Linux 884f52b5848e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 06316b7 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| shellcheck | v0.4.4 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16647/artifact/patchprocess/whitespace-eol.txt
 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16647/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16647/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467875#comment-15467875
 ] 

Kihwal Lee commented on HDFS-10841:
---

From https://builds.apache.org/job/PreCommit-Admin/263621
{noformat}
HDFS-10841,12827212 has not been processed, submitting
ERROR retrieving resource
 
https://builds.apache.org/job/PreCommit-HDFS-Build/buildWithParameters?token=hadoopqa_NUM=10841_ID=12827212:
 HTTP Error 503: Service Unavailable
Archiving artifacts
Compressed 99.73 KB of artifacts by 64.2% relative to #263620
Finished: SUCCESS
{noformat}

The precommit admin failed to submit the build for this jira, but in 
subsequent runs the jira was ignored as "... has been processed, ignoring". 
Kicking it off manually.

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467787#comment-15467787
 ] 

Hadoop QA commented on HDFS-10714:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
167 unchanged - 2 fixed = 172 total (was 169) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10714 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827198/HDFS-10714-01-draft.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b999d7a6cba6 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 62a9667 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16645/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16645/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Updated] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Priority: Minor  (was: Major)

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Status: Patch Available  (was: Open)

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Attachment: HDFS-10841.patch

Even in trunk, {{src}} is only used for a logging statement in the method. We 
can get rid of it.
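
For illustration, the findbugs pattern being fixed is just this (a minimal 
made-up example, not the actual FSDirAppendOp code):
{code}
// DLS_DEAD_LOCAL_STORE in miniature: 'src' is assigned but never read.
class DeadStoreDemo {
  static String appendFileBefore(String path) {
    String src = path;            // dead store flagged by findbugs
    return "appending " + path;   // built from 'path', not 'src'
  }

  static String appendFileAfter(String path) {
    return "appending " + path;   // 'src' removed, 'path' used directly
  }

  public static void main(String[] args) {
    System.out.println(appendFileBefore("/tmp/f"));
    System.out.println(appendFileAfter("/tmp/f"));
  }
}
{code}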

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-10841.patch
>
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Remove duplicate or unused variable in appendFile()

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Summary: Remove duplicate or unused variable in appendFile()  (was: Fix 
findbugs warning in branch-2 introduced by path handling improvements)

> Remove duplicate or unused variable in appendFile()
> ---
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467710#comment-15467710
 ] 

Hadoop QA commented on HDFS-10450:
--

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HDFS-Build/16647/console in case of 
problems.


> libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch, 
> HDFS-10450.HDFS-8707.001.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which 
> does not have an ASF-approved license.  It included a framework to use Cyrus 
> SASL (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Target Version/s: 2.8.0  (was: 2.8.0, 2.9.0)

> Fix findbugs warning in branch-2 introduced by path handling improvements
> -
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10450) libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc

2016-09-06 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10450:
--
Attachment: HDFS-10450.HDFS-8707.001.patch

Added CyrusSASL to the docker image

> libhdfs++: Implement Cyrus SASL implementation in sasl_enigne.cc
> 
>
> Key: HDFS-10450
> URL: https://issues.apache.org/jira/browse/HDFS-10450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10450.HDFS-8707.000.patch, 
> HDFS-10450.HDFS-8707.001.patch
>
>
> The current sasl_engine implementation was proven out using GSASL, which 
> does not have an ASF-approved license.  It included a framework to use Cyrus 
> SASL (libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-10841:
-

Assignee: Kihwal Lee  (was: Daryn Sharp)

> Fix findbugs warning in branch-2 introduced by path handling improvements
> -
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Assignee: Daryn Sharp

> Fix findbugs warning in branch-2 introduced by path handling improvements
> -
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Description: 
A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have the 
same problem.

{noformat}
CodeWarning
DLS Dead store to src in 
org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
  .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
boolean, boolean)
Bug type DLS_DEAD_LOCAL_STORE (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
  .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
boolean, boolean)
Local variable named src
At FSDirAppendOp.java:[line 92]
{noformat}

  was:A new findbugs warning was reported (See HDFS-10745). branch-2.8 might 
have the same problem.


> Fix findbugs warning in branch-2 introduced by path handling improvements
> -
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Daryn Sharp
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.
> {noformat}
> Code  Warning
> DLS   Dead store to src in 
> org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Bug type DLS_DEAD_LOCAL_STORE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
> In method org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp
>   .appendFile(FSNamesystem, String, FSPermissionChecker, String, String, 
> boolean, boolean)
> Local variable named src
> At FSDirAppendOp.java:[line 92]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467655#comment-15467655
 ] 

Kihwal Lee commented on HDFS-10745:
---

Hmm, the branch-2 precommit didn't complain about the new findbugs warning. It 
could be from another related change that was cherry-picked. Filed HDFS-10841; 
will fix it soon.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][], can be eliminated by resolving 
> directly to an IIP.  The IIP will contain the resolved path if required.
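
For illustration, the shape of the change is roughly the following (a 
simplified sketch; the real FSDirectory/INodesInPath code is more involved):
{code}
import java.nio.charset.StandardCharsets;

// Sketch: resolve a raw path straight into its byte[][] components and keep
// them inside the IIP, instead of materializing an intermediate String that
// INodesInPath would then have to re-split.
class IipSketch {
  final byte[][] components;   // stand-in for INodesInPath
  IipSketch(byte[][] c) { components = c; }

  static IipSketch resolve(String rawPath) {
    String[] parts = rawPath.substring(1).split("/");
    byte[][] out = new byte[parts.length][];
    for (int i = 0; i < parts.length; i++) {
      out[i] = parts[i].getBytes(StandardCharsets.UTF_8);
    }
    return new IipSketch(out);   // callers can rebuild the String on demand
  }

  public static void main(String[] args) {
    System.out.println(IipSketch.resolve("/user/foo/bar").components.length); // 3
  }
}
{code}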



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10841:
--
Target Version/s: 2.8.0, 2.9.0

> Fix findbugs warning in branch-2 introduced by path handling improvements
> -
>
> Key: HDFS-10841
> URL: https://issues.apache.org/jira/browse/HDFS-10841
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have 
> the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10841) Fix findbugs warning in branch-2 introduced by path handling improvements

2016-09-06 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-10841:
-

 Summary: Fix findbugs warning in branch-2 introduced by path 
handling improvements
 Key: HDFS-10841
 URL: https://issues.apache.org/jira/browse/HDFS-10841
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


A new findbugs warning was reported (See HDFS-10745). branch-2.8 might have the 
same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467622#comment-15467622
 ] 

Hadoop QA commented on HDFS-10835:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m 
35s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed = 
74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12827201/HDFS-10835.005.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux c1fb9543dead 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 62a9667 |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16646/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16646/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
> replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-10835) Typo in httpfs.sh hadoop_usage

2016-09-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10835:
--
Attachment: HDFS-10835.005.patch

Patch 005:
* Do not use local var {{name}}

[~aw], [~xiaochen] and I are planning to convert both HttpFS and KMS from 
Tomcat to Jetty 9 pretty soon. It is for real this time :) I will probably 
create 2 sub-tasks under HADOOP-10075.

> Typo in httpfs.sh hadoop_usage
> --
>
> Key: HDFS-10835
> URL: https://issues.apache.org/jira/browse/HDFS-10835
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Affects Versions: 3.0.0-alpha1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
> Attachments: HDFS-10835.001.patch, HDFS-10835.002.patch, 
> HDFS-10835.003.patch, HDFS-10835.004.patch, HDFS-10835.005.patch
>
>
> Typo in method {{hadoop_usage}} of {{httpfs.sh}}. The {{kms}} words should be 
> replaced with {{httpfs}}:
> {noformat}
> function hadoop_usage
> {
>   hadoop_add_subcommand "run" "Start kms in the current window"
>   hadoop_add_subcommand "run -security" "Start in the current window with 
> security manager"
>   hadoop_add_subcommand "start" "Start kms in a separate window"
>   hadoop_add_subcommand "start -security" "Start in a separate window with 
> security manager"
>   hadoop_add_subcommand "status" "Return the LSB compliant status"
>   hadoop_add_subcommand "stop" "Stop kms, waiting up to 5 seconds for the 
> process to end"
>   hadoop_add_subcommand "top n" "Stop kms, waiting up to n seconds for the 
> process to end"
>   hadoop_add_subcommand "stop -force" "Stop kms, wait up to 5 seconds and 
> then use kill -KILL if still running"
>   hadoop_add_subcommand "stop n -force" "Stop kms, wait up to n seconds and 
> then use kill -KILL if still running"
>   hadoop_generate_usage "${MYNAME}" false
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10838) Last full block report received time for each DN should be easily discoverable

2016-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467580#comment-15467580
 ] 

Arpit Agarwal commented on HDFS-10838:
--

Nice, thank you for the quick patch [~surendrasingh]!

# {{DatanodeInfoProto convert(DatanodeInfo info)}} should handle missing 
lastBlockReportTime for wire compatibility. Something like this should work.
{code}
di.hasLastBlockReportTime() ? di.getLastBlockReportTime() : 0
{code}
# Also, when the value is displayed in the web UI/command-line output, it 
should be checked whether it is equal to zero; if so, something like 'never' 
should be displayed instead of the timestamp.
# Should the NN web UI show relative time, just like the last heartbeat time? 
I'm fine with either though. A sketch covering the first two points is below.
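
A self-contained sketch of points 1 and 2 (the stub class stands in for the 
real protobuf-generated type; only the hasX()/getX() defaulting idiom is the 
point):
{code}
class DatanodeInfoProtoStub {
  private final Long lastBlockReportTime;  // null = field absent on the wire
  DatanodeInfoProtoStub(Long t) { lastBlockReportTime = t; }
  boolean hasLastBlockReportTime() { return lastBlockReportTime != null; }
  long getLastBlockReportTime() { return lastBlockReportTime; }
}

class ConvertSketch {
  static long convertLastBlockReportTime(DatanodeInfoProtoStub di) {
    // An old DN never sets the new optional field; default to 0 so the UI
    // layer can render "never" instead of a bogus timestamp.
    return di.hasLastBlockReportTime() ? di.getLastBlockReportTime() : 0;
  }

  static String display(long lastBlockReportTime) {
    return lastBlockReportTime == 0 ? "never"
        : String.valueOf(lastBlockReportTime);
  }

  public static void main(String[] args) {
    System.out.println(display(convertLastBlockReportTime(
        new DatanodeInfoProtoStub(null))));            // never
    System.out.println(display(convertLastBlockReportTime(
        new DatanodeInfoProtoStub(1473170000000L))));  // raw timestamp
  }
}
{code}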


> Last full block report received time for each DN should be easily discoverable
> --
>
> Key: HDFS-10838
> URL: https://issues.apache.org/jira/browse/HDFS-10838
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: DFSAdmin-Report.png, HDFS-10838-001.patch, NN_UI.png
>
>
> It should be easy for administrators to discover the time of last full block 
> report from each DataNode.
> We can show it in the NameNode web UI or in the output of {{hdfs dfsadmin 
> -report}}, or both.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10714:
-
Attachment: HDFS-10714-01-draft.patch

> Issue in handling checksum errors in write pipeline when fault DN is 
> LAST_IN_PIPELINE
> -
>
> Key: HDFS-10714
> URL: https://issues.apache.org/jira/browse/HDFS-10714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10714-01-draft.patch
>
>
> We came across an issue where a write fails even though 7 DNs are available, 
> due to a network fault at one datanode which is LAST_IN_PIPELINE. It is 
> similar to HDFS-6937.
> Scenario: (DN3 has a N/W fault and min replication = 2).
> Write pipeline:
> DN1->DN2->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN2 is marked as bad.
> DN1->DN4->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN4 is marked as bad.
> ...
> And so on (each time DN3 is LAST_IN_PIPELINE), continuing until no more 
> datanodes are left to construct the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10714:
-
Attachment: HADOOP-10714-01-draft.patch

Here is the initial approach, based on option #2 mentioned by [~brahmareddy].

1. DN1->DN2->DN3 is the pipeline.
2. DN3 gets a ChecksumException, sends the CHECKSUM error ack upstream, and 
shuts itself down.
3. DN2 receives the ack and, before sending it upstream, verifies its local 
replica's checksum.
4. If DN2 also finds a checksum error, then DN1 could well have a corrupt 
replica too. So DN2 also marks itself CHECKSUM_ERROR, sends the reply 
upstream, and shuts itself down.

This way, every DN's replica is verified before the ack reaches the client. A 
sketch of the per-node decision follows below.

Please review and give suggestions.
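
A minimal, self-contained sketch of that decision (names and the simplified 
model are assumptions, not the actual BlockReceiver/PacketResponder code):
{code}
class PipelineAckSketch {

  /** What a middle DN does when a downstream ERROR_CHECKSUM ack arrives. */
  static String onDownstreamChecksumError(boolean localReplicaOk) {
    if (localReplicaOk) {
      // Our copy is intact, so the data left this node fine and the error
      // was introduced downstream; forward the ack and stay alive.
      return "forward ack unchanged; this DN stays healthy";
    }
    // Our copy is corrupt too, so upstream may also be affected: mark this
    // DN ERROR_CHECKSUM in the ack, send it upstream, then shut down.
    return "mark self ERROR_CHECKSUM; send ack upstream; shut down";
  }

  public static void main(String[] args) {
    System.out.println(onDownstreamChecksumError(true));
    System.out.println(onDownstreamChecksumError(false));
  }
}
{code}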

> Issue in handling checksum errors in write pipeline when fault DN is 
> LAST_IN_PIPELINE
> -
>
> Key: HDFS-10714
> URL: https://issues.apache.org/jira/browse/HDFS-10714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> We came across an issue where a write fails even though 7 DNs are available, 
> due to a network fault at one datanode which is LAST_IN_PIPELINE. It is 
> similar to HDFS-6937.
> Scenario: (DN3 has a N/W fault and min replication = 2).
> Write pipeline:
> DN1->DN2->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN2 is marked as bad.
> DN1->DN4->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN4 is marked as bad.
> ...
> And so on (each time DN3 is LAST_IN_PIPELINE), continuing until no more 
> datanodes are left to construct the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10714:
-
Status: Patch Available  (was: Open)

> Issue in handling checksum errors in write pipeline when fault DN is 
> LAST_IN_PIPELINE
> -
>
> Key: HDFS-10714
> URL: https://issues.apache.org/jira/browse/HDFS-10714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> We came across an issue where a write fails even though 7 DNs are available, 
> due to a network fault at one datanode which is LAST_IN_PIPELINE. It is 
> similar to HDFS-6937.
> Scenario: (DN3 has a N/W fault and min replication = 2).
> Write pipeline:
> DN1->DN2->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN2 is marked as bad.
> DN1->DN4->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN4 is marked as bad.
> ...
> And so on (each time DN3 is LAST_IN_PIPELINE), continuing until no more 
> datanodes are left to construct the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10714) Issue in handling checksum errors in write pipeline when fault DN is LAST_IN_PIPELINE

2016-09-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-10714:
-
Attachment: (was: HADOOP-10714-01-draft.patch)

> Issue in handling checksum errors in write pipeline when fault DN is 
> LAST_IN_PIPELINE
> -
>
> Key: HDFS-10714
> URL: https://issues.apache.org/jira/browse/HDFS-10714
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> We came across an issue where a write fails even though 7 DNs are available, 
> due to a network fault at one datanode which is LAST_IN_PIPELINE. It is 
> similar to HDFS-6937.
> Scenario: (DN3 has a N/W fault and min replication = 2).
> Write pipeline:
> DN1->DN2->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN2 is marked as bad.
> DN1->DN4->DN3  => DN3 gives an ERROR_CHECKSUM ack, so DN4 is marked as bad.
> ...
> And so on (each time DN3 is LAST_IN_PIPELINE), continuing until no more 
> datanodes are left to construct the pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10794) [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the block storage movement work

2016-09-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467421#comment-15467421
 ] 

Rakesh R edited comment on HDFS-10794 at 9/6/16 1:32 PM:
-

It looks like the checkstyle warning and test case failure are unrelated to the 
patch.
{code}
Checkstyle warning:-

./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StoragePolicySatisfyWorker.java:0::
 Missing package-info.java file.
{code}


was (Author: rakeshr):
It looks like the checkstyle warning is unrelated to the patch.
{code}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StoragePolicySatisfyWorker.java:0::
 Missing package-info.java file.
{code}

> [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the 
> block storage movement work
> 
>
> Key: HDFS-10794
> URL: https://issues.apache.org/jira/browse/HDFS-10794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10794-00.patch, HDFS-10794-HDFS-10285.00.patch, 
> HDFS-10794-HDFS-10285.01.patch, HDFS-10794-HDFS-10285.02.patch, 
> HDFS-10794-HDFS-10285.03.patch
>
>
> The idea of this jira is to implement a mechanism to move blocks to the 
> given target in order to satisfy the block storage policy. The datanode 
> receives {{blocktomove}} details via the heartbeat response from the NN. 
> More specifically, it is a datanode-side extension to handle the block 
> storage movement commands.
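>
> As a rough illustration of such a datanode-side extension, the worker could 
> look like the following hedged sketch (BlockMovementCommand and the method 
> names are invented here, not the actual StoragePolicySatisfyWorker API):
> {code}
> import java.util.concurrent.BlockingQueue;
> import java.util.concurrent.LinkedBlockingQueue;
>
> public class SpsWorkerSketch {
>
>   // A command as it might arrive from the NN in a heartbeat response.
>   record BlockMovementCommand(long blockId, String source, String target) {}
>
>   private final BlockingQueue<BlockMovementCommand> queue =
>       new LinkedBlockingQueue<>();
>
>   // Heartbeat handling would call this when the NN piggybacks
>   // "blocktomove" details on its response.
>   public void offer(BlockMovementCommand cmd) {
>     queue.offer(cmd);
>   }
>
>   // Dedicated mover thread: drain commands, copy each replica to the
>   // target storage, and (in a real implementation) report the result
>   // back to the NN so it can clean up the source replica.
>   public void start() {
>     Thread mover = new Thread(() -> {
>       try {
>         while (true) {
>           BlockMovementCommand cmd = queue.take();
>           System.out.printf("moving blk_%d from %s to %s%n",
>               cmd.blockId(), cmd.source(), cmd.target());
>         }
>       } catch (InterruptedException e) {
>         Thread.currentThread().interrupt(); // shut down quietly
>       }
>     }, "sps-block-mover");
>     mover.setDaemon(true);
>     mover.start();
>   }
> }
> {code}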



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10794) [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the block storage movement work

2016-09-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15467421#comment-15467421
 ] 

Rakesh R commented on HDFS-10794:
-

It looks like the checkstyle warning is unrelated to the patch.
{code}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StoragePolicySatisfyWorker.java:0::
 Missing package-info.java file.
{code}

> [SPS]: Provide storage policy satisfy worker at DN for co-ordinating the 
> block storage movement work
> 
>
> Key: HDFS-10794
> URL: https://issues.apache.org/jira/browse/HDFS-10794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10794-00.patch, HDFS-10794-HDFS-10285.00.patch, 
> HDFS-10794-HDFS-10285.01.patch, HDFS-10794-HDFS-10285.02.patch, 
> HDFS-10794-HDFS-10285.03.patch
>
>
> The idea of this jira is to implement a mechanism to move blocks to the 
> given target in order to satisfy the block storage policy. The datanode 
> receives {{blocktomove}} details via the heartbeat response from the NN. 
> More specifically, it is a datanode-side extension to handle the block 
> storage movement commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


