[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-01-05 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802380#comment-15802380
 ] 

Sean Mackrory commented on HDFS-11096:
--

In the SortedMapWritable case, the class is actually still there in the same 
package, but it's now parameterized slightly differently to avoid a warning. 
This did cause a source incompatibility in Hive, and the change was removed 
from branch-2. I'm not sure whether it would cause problems at runtime if an 
application built against the old version were deployed on the new version; I 
will look into it. Details are in HADOOP-13338.

FileStatus also still implements Comparable, but now specifies its own type 
parameter: compareTo() requires a FileStatus rather than an Object, although 
the old version just cast the Object to a FileStatus anyway. So the only way 
this would actually fail is if someone cast a FileStatus to an Object before 
the call to compareTo(). That could happen, but it's unlikely, and it's 
something users could fix before upgrading to Hadoop 3. That change was in 
https://issues.apache.org/jira/browse/HADOOP-12209. I'm inclined to fix this by 
documenting that we support rolling upgrade, assuming a set of things has been 
checked, including that.
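
For illustration, here is a minimal sketch of the failure mode using stand-in 
classes rather than the actual FileStatus (the shapes are assumed from the 
HADOOP-12209 change described above):

{code:java}
// Stand-in classes, not Hadoop code: OldStyle mimics the raw Comparable
// signature, NewStyle mimics the Comparable<FileStatus>-style signature.
public class ComparableCompat {
  static class OldStyle implements Comparable {
    public int compareTo(Object o) {      // accepted any Object, cast inside
      return 0;
    }
  }

  static class NewStyle implements Comparable<NewStyle> {
    public int compareTo(NewStyle o) {    // now requires the exact type
      return 0;
    }
  }

  public static void main(String[] args) {
    Comparable rawOld = new OldStyle();
    rawOld.compareTo(new Object());       // fine with the old signature

    Comparable rawNew = new NewStyle();
    try {
      rawNew.compareTo(new Object());     // still compiles via the raw type...
    } catch (ClassCastException e) {
      // ...but the compiler-generated bridge method casts and throws here
      System.out.println("breaks at runtime: " + e);
    }
  }
}
{code}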

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> do a rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to do rolling 
> upgrades to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-01-05 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799802#comment-15799802
 ] 

Sean Mackrory edited comment on HDFS-11096 at 1/5/17 8:03 PM:
--

I've been doing a lot of testing. I've posted some automation here, we may want 
to hook into a Jenkins job or something: 
https://github.com/mackrorysd/hadoop-compatibility. I've tested running a bunch 
of MapReduce jobs while doing a rolling upgrade of HDFS, and haven't had any 
failures that indicate an incompatibility. I've also tested pulling data from 
an old cluster onto a new cluster. I'll keep adding other aspects to the tests 
to improve coverage.

I haven't seen a way to whitelist stuff. Filed an issue with jacc: 
https://github.com/lvc/japi-compliance-checker/issues/36.

As for the incompatibilities, I think there's relatively little action to be 
taken, so I'll file JIRAs for those. In detail: metrics and s3a are technically 
violating the contract, but in all cases keeping the old behavior would be some 
serious baggage, and due to their nature I think it's acceptable. I think 
SortedMapWritable should be put back but deprecated (I'm sure someone's 
depending on it somewhere and it should be trivial), and FileStatus should 
still implement Comparable. Not so sure about NamenodeMXBean, the missing 
configuration keys, or the cases of reduced visibility. I'm inclined to leave 
these as-is unless we know they break something users care about. They are 
technically incompatibilities, so maybe someone else feels differently (or is 
aware of applications they are likely to break), but it would be nice to shed 
baggage and poor practices where we can. For all other issues, I feel more 
confident that they're either not actually breaking the contract or are 
extremely unlikely to break anything enough to warrant sticking with the old 
way. I'll sleep on some of these one more night and tomorrow file JIRAs to 
start addressing the issues I think are important enough.


was (Author: mackrorysd):
I've been doing a lot of testing. I've posted some automation here, we may want 
to hook into a Jenkins job or something: 
https://github.com/mackrorysd/hadoop-compatibility. I've tested running a bunch 
of MapReduce jobs while doing a rolling upgrade of HDFS, and haven't had any 
failures that indicate an incompatibility. I've also tested pulling data from 
an old cluster onto a new cluster. I'll keep adding other aspects to the tests 
to improve coverage.

I haven't seen a way to whitelist stuff. Filed an issue with jacc: 
https://github.com/lvc/japi-compliance-checker/issues/36.

As for the incompatibilities, I think there's relatively action to be taken, so 
I'll file JIRAs for those. In detail: metrics and s3a are technically violating 
the contract, but in all cases keeping the old behavior would be some serious 
baggage, and due to their nature I think it's acceptable. I think 
SortedMapWritable should be put back but deprecated (I'm sure someone's 
depending on it somewhere and it should be trivial), and FileStatus should 
still implement Comparable. Not so sure about NamenodeMXBean, the missing 
configuration keys, or the cases of reduced visibility. I'm inclined to leave 
these as-is unless we know they break something users care about. They are 
technically incompatibilities, so maybe someone else feels differently (or is 
aware of applications they are likely to break), but it would be nice to shed 
baggage and poor practices where we can. For all other issues, I feel more 
confident that they're either not actually breaking the contract or are 
extremely unlikely to break anything enough to warrant sticking with the old 
way. I'll sleep on some of these one more night and tomorrow file JIRAs to 
start addressing the issues I think are important enough.

> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> do a rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to do rolling 
> upgrades to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8519) Patch for PPC64

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8519:
-
Fix Version/s: (was: 2.7.1)

> Patch for PPC64
> ---
>
> Key: HDFS-8519
> URL: https://issues.apache.org/jira/browse/HDFS-8519
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
> Environment: RHEL 7.1 /PPC64
>Reporter: Tony Reix
>
> The attached patch enables Hadoop to work on PPC64.
> It deals with SystemPageSize and BlockSize, which are not 4096 on PPC64.
> There are changes in 3 files:
> - 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
> - 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> - 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
> where 4096 is replaced by getOperatingSystemPageSize() or by using PAGE_SIZE
> The patch has been built on branch-2.7.
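
Not the attached patch itself, but a rough sketch of the underlying idea on 
JDK 8 (the real change goes through the NativeIO file listed above): query the 
platform page size instead of hardcoding 4096.

{code:java}
import java.lang.reflect.Field;

public class PageSize {
  // Hedged sketch: read the OS page size via sun.misc.Unsafe#pageSize().
  public static int get() {
    try {
      Field f = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
      f.setAccessible(true);
      sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);
      return unsafe.pageSize();  // e.g. 4096 on x86_64, 65536 on many PPC64 kernels
    } catch (ReflectiveOperationException e) {
      return 4096;               // conservative fallback
    }
  }

  public static void main(String[] args) {
    System.out.println("OS page size: " + PageSize.get());
  }
}
{code}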



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8518) Patch for PPC64

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8518:
-
Fix Version/s: (was: 2.7.1)

> Patch for PPC64
> ---
>
> Key: HDFS-8518
> URL: https://issues.apache.org/jira/browse/HDFS-8518
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
> Environment: RHEL 7.1 /PPC64
>Reporter: Tony Reix
>
> The attached patch enables Hadoop to work on PPC64.
> It deals with SystemPageSize and BlockSize, which are not 4096 on PPC64.
> There are changes in 3 files:
> - 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
> - 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> - 
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
> where 4096 is replaced by getOperatingSystemPageSize() or by using PAGE_SIZE
> The patch has been built on branch-2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9293) FSEditLog's 'OpInstanceCache' thread-local cache can hold a dirty 'rpcId', which may make the standby NN too busy to communicate

2017-01-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9293:
-
Fix Version/s: (was: 2.7.1)

> FSEditLog's 'OpInstanceCache' thread-local cache can hold a dirty 'rpcId', 
> which may make the standby NN too busy to communicate 
> --
>
> Key: HDFS-9293
> URL: https://issues.apache.org/jira/browse/HDFS-9293
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0, 2.7.1
>Reporter: DENG FEI
>Assignee: DENG FEI
>
>   In our cluster (Hadoop 2.2.0 HA, 700+ DNs), we found the standby NN 
> tailing the edit log slowly while holding the FSNamesystem write lock, which 
> blocked the DNs' heartbeat/blockreport IPC requests. This led the active NN 
> to remove stale DNs that could not send heartbeats because they were blocked 
> on standby NN registration (fixed in 2.7.1).
>   Below is the standby NN stack:
> "Edit log tailer" prio=10 tid=0x7f28fcf35800 nid=0x1a7d runnable 
> [0x7f0dd1d76000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.PriorityQueue.remove(PriorityQueue.java:360)
>   at 
> org.apache.hadoop.util.LightWeightCache.put(LightWeightCache.java:217)
>   at org.apache.hadoop.ipc.RetryCache.addCacheEntry(RetryCache.java:270)
>   - locked <0x7f12817714b8> (a org.apache.hadoop.ipc.RetryCache)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntry(FSNamesystem.java:724)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:199)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:112)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:279)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
>
> When applying an editLogOp, if a matching entry is found in the IPC 
> retryCache, the previous entry needs to be removed from the priorityQueue, 
> which is O(N). UpdateBlocksOp doesn't need to record an rpcId in the edit log 
> except for a client-requested updatePipeline, but we found many 
> 'UpdateBlocksOp' entries with repeated rpcIds.
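
As a self-contained illustration (not Hadoop code) of why the 
{{PriorityQueue.remove}} frame in the stack above is costly: 
{{java.util.PriorityQueue#remove(Object)}} does a linear scan of the backing 
array, so evicting one retry-cache entry per replayed op is O(N) per edit.

{code:java}
import java.util.PriorityQueue;

public class PqRemoveCost {
  public static void main(String[] args) {
    PriorityQueue<Integer> pq = new PriorityQueue<>();
    for (int i = 0; i < 1_000_000; i++) {
      pq.add(i);
    }
    long t0 = System.nanoTime();
    pq.remove(999_999);  // linear scan to locate the element before removal
    long t1 = System.nanoTime();
    System.out.println("remove(Object) took " + (t1 - t0) / 1_000 + " us");
  }
}
{code}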



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11295) Check storage remaining instead of node remaining in BlockPlacementPolicyDefault.chooseReplicaToDelete()

2017-01-05 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-11295:
-

 Summary: Check storage remaining instead of node remaining in 
BlockPlacementPolicyDefault.chooseReplicaToDelete()
 Key: HDFS-11295
 URL: https://issues.apache.org/jira/browse/HDFS-11295
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.1
Reporter: Xiao Liang


Currently in BlockPlacementPolicyDefault.chooseReplicaToDelete(), the logic 
for choosing a replica to delete is to pick the node with the least free space 
(node.getRemaining()), if all heartbeats are within the tolerable heartbeat 
interval.
However, a node may have multiple storages, and node.getRemaining() is the sum 
of their remaining space. If free space is low on the storage holding the block 
to be deleted, free space on the node could still be high due to its other 
storages, so the storage chosen may not be the one with the least free space.
So using storage.getRemaining() to choose the storage with the least free space 
when picking a replica to delete may be a better way to balance storage usage.
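
A hedged sketch of the proposed selection, using illustrative stand-ins rather 
than the actual BlockPlacementPolicyDefault types: compare free space per 
candidate storage instead of summing per node.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ChooseReplicaSketch {
  // Illustrative stand-in for a DataNode storage (volume).
  static class Storage {
    final String id;
    final long remaining;  // bytes free on this storage only
    Storage(String id, long remaining) { this.id = id; this.remaining = remaining; }
  }

  // storage.getRemaining() analogue: pick the candidate storage with the
  // least free space, rather than the node with the least total free space.
  static Storage chooseReplicaToDelete(List<Storage> candidates) {
    return candidates.stream()
        .min(Comparator.comparingLong((Storage s) -> s.remaining))
        .orElseThrow(IllegalArgumentException::new);
  }

  public static void main(String[] args) {
    List<Storage> candidates = Arrays.asList(
        new Storage("dn1-disk3", 5L << 30),     // 5 GiB free
        new Storage("dn2-disk1", 800L << 30));  // 800 GiB free
    System.out.println("delete replica on " + chooseReplicaToDelete(candidates).id);
  }
}
{code}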



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802316#comment-15802316
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

Case 1: One replica is decommissioning and two replicas of the same block are 
entering maintenance

{quote} the code will still increment maintenanceOnlyReplicas when processing 
the decommissioning node, because NumberReplicas includes all replicas stats.
{quote}

* I don't think so. In the current upstream trunk code, a node can only be in 
one state, and the {{NumberReplicas}} accounting for _Decommission_ and 
_Maintenance_ is exclusive. If a replica is Decommissioning, it cannot be in 
any of the Maintenance states, and vice versa.

* In the current upstream trunk code, {{outOfServiceReplicas}} is defined as 
the sum of both decommission and maintenance replicas.

* Patch v02 in this jira just makes use of the numbers already accounted in 
{{NumberReplicas}}, in separate variables -- decommissionOnlyReplicas and 
maintenanceOnlyReplicas.

So in the above example, with patch v02 we will get the following numbers: 
-- decommissionOnlyReplicas = 1
-- maintenanceOnlyReplicas = 2
-- outOfServiceReplicas = 3 

Hence, 
-- Entering Maintenance page will only show 2 maintenance replica nodes
-- Decommissioning page will only show 1 decommissioning replica node


Case 2: All replicas are decommissioning

{quote}EnteringMaintenance page will have nothing to show to begin with given 
no nodes are entering maintenance.{quote}

* That's right. When all replicas are decommissioning, {{NumberReplicas}} will 
only have decommissionedAndDecommissioning nodes.
* Entering Maintenance page will be empty
* Decommissioning page will show all these decommissioning nodes
 

[~mingma],
* Do the above cases and results match your expectations?

* Also, let's make sure we are targeting the same goals w.r.t. the 
Decommissioning and EnteringMaintenance pages. Based on the discussions we had 
earlier (refer to comments 1 - 4), I assumed we want the following:
** The Entering Maintenance page to show only the nodes that are Entering 
Maintenance / In Maintenance.
** The Decommissioning page to show only the nodes that are Decommissioning / 
Decommissioned.

* Please correct me if my understanding of the current upstream trunk code is 
wrong or if we want different goals for the pages.



> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-05 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802317#comment-15802317
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

Code References:

*BlockManager#checkReplicaOnStorage:*
* {code}
  // excerpt: earlier branches of this if/else chain elided
  } else if (node.isDecommissionInProgress()) {
s = StoredReplicaState.DECOMMISSIONING;
  } else if (node.isDecommissioned()) {
s = StoredReplicaState.DECOMMISSIONED;
  } else if (node.isMaintenance()) {
if (node.isInMaintenance() || !node.isAlive()) {
  s = StoredReplicaState.MAINTENANCE_NOT_FOR_READ;
} else {
  s = StoredReplicaState.MAINTENANCE_FOR_READ;
}
  } else if (isExcess(node, b)) {
s = StoredReplicaState.EXCESS;
  } else {
s = StoredReplicaState.LIVE;
  }
  counters.add(s, 1);
{code}

*DecommissionManager#Monitor#processBlocksInternal:*
* {code}
if ((liveReplicas == 0) &&
(num.decommissionedAndDecommissioning() > 0)) {
  decommissionOnlyReplicas++;
}
if ((liveReplicas == 0) && (num.maintenanceReplicas() > 0)) {
  maintenanceOnlyReplicas++;
}
if ((liveReplicas == 0) && (num.outOfServiceReplicas() > 0)) {
  outOfServiceOnlyReplicas++;
}
{code}

*NumberReplicas:*
* {code}

  public int decommissionedAndDecommissioning() {
return decommissioned() + decommissioning();
  }

  public int maintenanceReplicas() {
return (int) (get(MAINTENANCE_NOT_FOR_READ) + get(MAINTENANCE_FOR_READ));
  }

  public int outOfServiceReplicas() {
return maintenanceReplicas() + decommissionedAndDecommissioning();
  }

{code}

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802216#comment-15802216
 ] 

Hadoop QA commented on HDFS-11273:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 19 unchanged - 3 fixed = 30 total (was 22) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 
unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11273 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845657/HDFS-11273.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f13b34b076a4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0a55bd8 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18039/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18039/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18039/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18039/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18039/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-

[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2017-01-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802135#comment-15802135
 ] 

Andrew Wang commented on HDFS-11072:


Thanks Allen for the quick turnaround! I retriggered precommit for this JIRA.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch, HDFS-11072-v5.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11072) Add ability to unset and change directory EC policy

2017-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802100#comment-15802100
 ] 

Allen Wittenauer edited comment on HDFS-11072 at 1/5/17 6:28 PM:
-

Wrote a job that applied a "scorched earth" policy to H9's docker situation. 
Looks like mesos' reviewbot is now properly building docker images again, so I 
suspect everything that is using yetus will work too.


was (Author: aw):
Wrote a job that applied a "scorched earth" policy to H9's docker situation.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch, HDFS-11072-v5.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7967) Reduce the performance impact of the balancer

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802106#comment-15802106
 ] 

Hadoop QA commented on HDFS-7967:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
24s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
49s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 457 unchanged - 13 fixed = 463 total (was 470) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_121 with JDK 
v1.7.0_121 generated 6 new + 7 unchanged - 0 fixed = 13 total (was 7) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | HDFS-7967 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845639/HDFS-7967-branch-2.8.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e00178d2dcfe 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided

[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2017-01-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802100#comment-15802100
 ] 

Allen Wittenauer commented on HDFS-11072:
-

Wrote a job that applied a "scorched earth" policy to H9's docker situation.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch, HDFS-11072-v5.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11285) Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)

2017-01-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15802046#comment-15802046
 ] 

Andrew Wang commented on HDFS-11285:


Ah. So the issue is that there is a DN that still has a few blocks waiting for 
decom (probably RBW blocks from an open file), and it transitions from (Live, 
Decomming) to (Dead, Decomming), not (Dead, Decommed). We don't immediately 
transition to (Dead, Decommed) since that could lead to unintentional data 
loss. If an operator wants to override this for a dead node, they can run your 
procedure.

Question: do these nodes truly *never* transition to (Dead, Decommed)? Once 
the straggler blocks are closed and minimally replicated, (Dead, Decomming) 
should transition to (Dead, Decommed).

Otherwise, I'm guessing the straggler blocks are caused by open-for-write 
files. Fixing that is more work; we could force the writing clients to do 
pipeline recovery to route around the decomming nodes and maintain minimal 
replication.

> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never 
> transition to (Dead, DECOMMISSIONED)
> --
>
> Key: HDFS-11285
> URL: https://issues.apache.org/jira/browse/HDFS-11285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lantao Jin
> Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead 
> or unresponsive, and not expected to rejoin the cluster. In a large cluster, 
> we found more than 100 nodes that were dead and decommissioning, and their 
> {panel} Under replicated blocks {panel} and {panel} Blocks with no live 
> replicas {panel} counts were all ZERO. This had actually been fixed in 
> [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]; after that fix, 
> running refreshNodes twice eliminates this case. But it seems the fix was 
> lost in the refactor in 
> [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]. We are using a 
> Hadoop version based on 2.7.1, and only the operations below can transition 
> the status from {panel} Dead, DECOMMISSION_INPROGRESS {panel} to 
> {panel} Dead, DECOMMISSIONED {panel}:
> # Remove it from hdfs-exclude
> # refreshNodes
> # Re-add it to hdfs-exclude
> # refreshNodes
> So, why was this code removed in the refactor into the new DecommissionManager?
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2017-01-05 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801999#comment-15801999
 ] 

Andrew Wang commented on HDFS-11072:


Looking at some other stalled precommits, it looks like the H9 build box is 
bad. I filed INFRA-13234, also ping [~aw] in case he has ideas.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch, HDFS-11072-v5.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2017-01-05 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-11193:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-10285
   Status: Resolved  (was: Patch Available)

Thank you [~rakeshr], I have just pushed the patch to the branch

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: HDFS-10285
>
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch, 
> HDFS-11193-HDFS-10285-03.patch, HDFS-11193-HDFS-10285-04.patch
>
>
> Erasure-coded striped files support the storage policies {{HOT, COLD, 
> ALLSSD}}. An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory 
> should consider all immediate files under that directory and check whether 
> those files actually match the namespace storage policy. All mismatched 
> striped blocks should be chosen for block movement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11294) libhdfs++: Segfault in HA failover if DNS lookup for both Namenodes fails

2017-01-05 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11294:
--

 Summary: libhdfs++: Segfault in HA failover if DNS lookup for both 
Namenodes fails
 Key: HDFS-11294
 URL: https://issues.apache.org/jira/browse/HDFS-11294
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


Hit while doing more manual testing on HDFS-11028.

The HANamenodeTracker takes an asio endpoint to figure out which endpoint on 
the other node to try next during a failover.  This is done by passing the 
element at index 0 of endpoints (a std::vector).

When DNS lookup fails, the endpoints vector for that node is empty, so 
endpoints\[0\] yields an invalid (effectively null) pointer that gets 
dereferenced and causes a segfault.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11291) Avoid unnecessary edit log for setStoragePolicy() and setReplication()

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801883#comment-15801883
 ] 

Hadoop QA commented on HDFS-11291:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 370 unchanged - 1 fixed = 371 total (was 371) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845814/HDFS-11291.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a16a7e9206fc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a605ff3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18036/testReport/ |
| modules | C: hadoop-

[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2017-01-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801846#comment-15801846
 ] 

Rakesh R commented on HDFS-11193:
-

It looks like the test failures are unrelated to the patch; please ignore them.

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch, 
> HDFS-11193-HDFS-10285-03.patch, HDFS-11193-HDFS-10285-04.patch
>
>
> Erasure-coded striped files support the storage policies {{HOT, COLD, 
> ALLSSD}}. An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory 
> should consider all immediate files under that directory and check whether 
> those files actually match the namespace storage policy. All mismatched 
> striped blocks should be chosen for block movement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7967) Reduce the performance impact of the balancer

2017-01-05 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-7967:
--
Status: Patch Available  (was: Open)

> Reduce the performance impact of the balancer
> -
>
> Key: HDFS-7967
> URL: https://issues.apache.org/jira/browse/HDFS-7967
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7967-branch-2.8.patch, HDFS-7967-branch-2.patch
>
>
> The balancer needs to query for blocks to move from overly full DNs.  The 
> block lookup is extremely inefficient.  An iterator of the node's blocks is 
> created from the iterators of its storages' blocks.  A random number is 
> chosen corresponding to how many blocks will be skipped via the iterator.  
> Each skip requires costly scanning of triplets.
> The current design also only considers node imbalances while ignoring 
> imbalances within the node's storages.  A more efficient and intelligent 
> design may eliminate the costly skipping of blocks via round-robin selection 
> of blocks from the storages based on remaining capacity.
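
One hedged sketch of the direction suggested in that last sentence, using 
illustrative data structures rather than the actual balancer code: visit the 
fullest storages first and take one block per visit, instead of skipping a 
random number of blocks through a single merged iterator.

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class StorageRoundRobinSketch {
  // Illustrative stand-in for a storage and its block iterator.
  static class Storage {
    final Iterator<String> blocks;
    final long remaining;  // bytes free; smaller = fuller
    Storage(List<String> blocks, long remaining) {
      this.blocks = blocks.iterator();
      this.remaining = remaining;
    }
  }

  public static void main(String[] args) {
    // Drain fuller storages first, one block per visit, re-queueing the
    // storage after each pick -- no random skipping, no triplet scans.
    PriorityQueue<Storage> order =
        new PriorityQueue<>(Comparator.comparingLong((Storage s) -> s.remaining));
    order.add(new Storage(Arrays.asList("blk_1", "blk_2"), 10L << 30));
    order.add(new Storage(Arrays.asList("blk_3"), 50L << 30));
    while (!order.isEmpty()) {
      Storage s = order.poll();
      if (s.blocks.hasNext()) {
        System.out.println("candidate to move: " + s.blocks.next());
        order.add(s);  // back in the queue for the next round
      }                // exhausted storages are simply dropped
    }
  }
}
{code}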



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11291) Avoid unnecessary edit log for setStoragePolicy() and setReplication()

2017-01-05 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-11291:
--
Attachment: HDFS-11291.002.patch

Thanks [~linyiqun] for review..
Attached updated patch..
Please review..

> Avoid unnecessary edit log for setStoragePolicy() and setReplication()
> --
>
> Key: HDFS-11291
> URL: https://issues.apache.org/jira/browse/HDFS-11291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-11291.001.patch, HDFS-11291.002.patch
>
>
> We are setting the storage policy for a file without checking the file's 
> current policy, to avoid an extra getStoragePolicy() RPC call. Currently the 
> namenode does not check the current storage policy before setting a new one 
> and writing an edit log entry. I think that if the old and new storage 
> policies are the same, we can avoid the set operation.
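
A minimal sketch of the proposed check, with illustrative names rather than 
the actual namenode code: compare the old and new policies first and skip the 
whole set operation (and its edit-log entry) on a no-op.

{code:java}
public class SetPolicySketch {
  static byte currentPolicyId = 7;  // stand-in for the inode's stored policy

  // Returns true only when the policy actually changed and was logged.
  static boolean setStoragePolicy(byte newPolicyId) {
    if (newPolicyId == currentPolicyId) {
      return false;                 // no-op: no state change, no edit log entry
    }
    currentPolicyId = newPolicyId;
    // ... the real namenode would write an edit-log entry here ...
    return true;
  }

  public static void main(String[] args) {
    System.out.println(setStoragePolicy((byte) 7));  // false: unchanged
    System.out.println(setStoragePolicy((byte) 2));  // true: changed and logged
  }
}
{code}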



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9283) DFS client retry trace should not be displayed in CLI

2017-01-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801570#comment-15801570
 ] 

Daniel Templeton commented on HDFS-9283:


Thanks for the patch.  I don't think the {{ProtobufHelper}} is the right place 
to make that change, though.  That code is used all over Hadoop, so adding an 
HDFS-specific, NN-specific special case there seems like a bad idea.  You 
should be able to make the same change in the calling code by looking at the 
cause.
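
A minimal sketch of that suggestion, with a stand-in exception class rather 
than the real org.apache.hadoop.ipc.StandbyException: unwrap the cause at the 
call site and handle the standby case there instead of in the shared helper.

{code:java}
import java.io.IOException;

public class CauseCheckSketch {
  // Stand-in for org.apache.hadoop.ipc.StandbyException.
  static class StandbyException extends IOException {}

  // Stand-in for an RPC call that wraps the remote exception.
  static void rpcCall() throws IOException {
    throw new IOException("remote call failed", new StandbyException());
  }

  public static void main(String[] args) throws IOException {
    try {
      rpcCall();
    } catch (IOException e) {
      if (e.getCause() instanceof StandbyException) {
        // expected during HA failover: retry quietly, no stack trace in the CLI
        System.out.println("standby NN, failing over quietly");
      } else {
        throw e;  // anything else still surfaces normally
      }
    }
  }
}
{code}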

> DFS client retry trace should not be displayed in CLI 
> --
>
> Key: HDFS-9283
> URL: https://issues.apache.org/jira/browse/HDFS-9283
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9283_00.patch
>
>
> In Secure HA Mode when we try to 
> {code} 
> ./hdfs fsck /
> {code}
> If more than 2 name nodes are present, it will print the below exception and 
> stack trace. 
> {code}
> 15/10/22 18:39:06 INFO retry.RetryInvocationHandler: Exception while invoking 
> getFileInfo of class ClientNamenodeProtocolTranslatorPB over 
> /10.18.111.177:65110 after 1 fail over attempts. Trying to fail over after 
> sleeping for 843ms.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1875)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1297)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3745)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:853)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:973)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2088)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2084)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2082)
> at org.apache.hadoop.ipc.Client.call(Client.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1442)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1789)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1387)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1383)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1383)
> at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:753)
> at org.apache.hadoop.hdfs.tools.DFSck.getResolvedPath(DFSck.java:232)
> at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:311)
> at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:73)
> at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:151)
> at org.apache.hado

[jira] [Commented] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly

2017-01-05 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801541#comment-15801541
 ] 

Kihwal Lee commented on HDFS-11160:
---

[~jojochuang], please make sure it is in 2.8.

> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> ---
>
> Key: HDFS-11160
> URL: https://issues.apache.org/jira/browse/HDFS-11160
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
> Environment: CDH5.7.4
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, 
> HDFS-11160.003.patch, HDFS-11160.004.patch, HDFS-11160.005.patch, 
> HDFS-11160.006.patch, HDFS-11160.007.patch, HDFS-11160.008.patch, 
> HDFS-11160.branch-2.patch, HDFS-11160.reproduce.patch
>
>
> Due to a race condition initially reported in HDFS-6804, VolumeScanner may 
> erroneously detect good replicas as corrupt. This is serious because in some 
> cases it results in data loss if all replicas are declared corrupt. This bug 
> is especially prominent when there are a lot of append requests via 
> HttpFs/WebHDFS.
> We are investigating an incident that caused a very high block corruption 
> rate in a relatively small cluster. Initially, we thought HDFS-11056 was to 
> blame. However, after applying HDFS-11056, we are still seeing VolumeScanner 
> report corrupt replicas.
> It turns out that if a replica is being appended to while VolumeScanner is 
> scanning it, VolumeScanner may use the new checksum to compare against old 
> data, causing a checksum mismatch.
> I have a unit test to reproduce the error; will attach later. A quick and 
> simple fix is to hold the FsDatasetImpl lock and read the checksum from disk.
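
A minimal sketch of the race pattern being fixed, with toy types rather than 
the actual DataNode code: the data and its checksum must be snapshotted under 
the same lock that appenders hold, or a scanner can pair a new checksum with 
old data.

{code:java}
import java.util.Arrays;

public class PairedReadSketch {
  private final Object lock = new Object();
  private byte[] data = {1, 2, 3};
  private int checksum = 6;  // toy checksum: sum of bytes

  void append(byte b) {
    synchronized (lock) {    // writer updates data and checksum together
      byte[] next = Arrays.copyOf(data, data.length + 1);
      next[data.length] = b;
      data = next;
      checksum += b;
    }
  }

  boolean scan() {
    synchronized (lock) {    // scanner snapshots both under the same lock
      int sum = 0;
      for (byte b : data) sum += b;
      return sum == checksum;  // always consistent when read together
    }
  }

  public static void main(String[] args) {
    PairedReadSketch r = new PairedReadSketch();
    r.append((byte) 4);
    System.out.println("consistent scan: " + r.scan());
  }
}
{code}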



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801481#comment-15801481
 ] 

Hadoop QA commented on HDFS-11193:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 126 unchanged - 2 fixed = 126 total (was 128) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11193 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845776/HDFS-11193-HDFS-10285-04.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d057b3f87894 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / 57193f7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18034/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18034/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS

[jira] [Commented] (HDFS-9283) DFS client retry trace should not be displayed in CLI

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801362#comment-15801362
 ] 

Hadoop QA commented on HDFS-9283:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-9283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845302/HDFS-9283_00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8f112005c15a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a605ff3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18035/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18035/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DFS client retry trace should not be displayed in CLI 
> --
>
> Key: HDFS-9283
> URL: https://issues.apache.org/jira/browse/HDFS-9283
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9283_00.patch
>
>
> In Secure HA Mode when we try to 
> {code} 
> ./hdfs fsck /
> {code}
> If more than 2 name n

[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2017-01-05 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801168#comment-15801168
 ] 

Rakesh R commented on HDFS-11193:
-

Attached a new patch addressing the checkstyle warning. It looks like the test 
case failures are unrelated to the patch. Perhaps I will rebase the branch onto 
the latest trunk code in the coming weekend, and I hope these failures will be 
fixed automatically after merging.

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch, 
> HDFS-11193-HDFS-10285-03.patch, HDFS-11193-HDFS-10285-04.patch
>
>
> Erasure coded striped files support the storage policies {{HOT, COLD, 
> ALLSSD}}. An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory 
> should consider all immediate files under that directory and check whether 
> each file actually matches its namespace storage policy. All mismatched 
> striped blocks should be chosen for block movement (see the usage sketch 
> below).
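
A usage sketch of the API in question ({{satisfyStoragePolicy}} comes from the 
HDFS-10285 SPS branch under review here; the path is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

// Ask the NameNode to schedule block movements so the immediate files under
// the directory (striped or contiguous) satisfy its storage policy.
Configuration conf = new Configuration();
HdfsAdmin admin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
admin.satisfyStoragePolicy(new Path("/data/ec-cold"));
{code}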



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy

2017-01-05 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11193:

Attachment: HDFS-11193-HDFS-10285-04.patch

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch, 
> HDFS-11193-HDFS-10285-01.patch, HDFS-11193-HDFS-10285-02.patch, 
> HDFS-11193-HDFS-10285-03.patch, HDFS-11193-HDFS-10285-04.patch
>
>
> Erasure coded striped files support the storage policies {{HOT, COLD, 
> ALLSSD}}. An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory 
> should consider all immediate files under that directory and check whether 
> each file actually matches its namespace storage policy. All mismatched 
> striped blocks should be chosen for block movement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11072) Add ability to unset and change directory EC policy

2017-01-05 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801125#comment-15801125
 ] 

Kai Zheng commented on HDFS-11072:
--

Hi [~andrew.wang], this hasn't built for some days due to an unknown Jenkins 
problem. I manually triggered the 
[build|https://builds.apache.org/job/PreCommit-HDFS-Build/18003/] by entering 
the JIRA number, but the task failed. Could you help check and give some 
hints? Nothing about this issue looks special. Thank you.

> Add ability to unset and change directory EC policy
> ---
>
> Key: HDFS-11072
> URL: https://issues.apache.org/jira/browse/HDFS-11072
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11072-v1.patch, HDFS-11072-v2.patch, 
> HDFS-11072-v3.patch, HDFS-11072-v4.patch, HDFS-11072-v5.patch
>
>
> Since the directory-level EC policy simply applies to files at create time, 
> it makes sense to make it more similar to storage policies and allow changing 
> and unsetting the policy.
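
A sketch of what the proposed behavior could look like from the client side. 
Treat the signatures as illustrative: {{setErasureCodingPolicy}} exists today, 
while {{unsetErasureCodingPolicy}} is the capability this issue proposes, so 
it is hypothetical here; {{conf}} and {{policy}} are assumed to be in scope.

{code:java}
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
Path dir = new Path("/warehouse/archive");

// Change the directory's EC policy; only files created afterwards use it,
// which is why changing or unsetting it is safe for existing data.
dfs.setErasureCodingPolicy(dir, policy);

// Proposed: drop the explicit policy so the directory inherits again, the
// same way storage policies can be unset.
dfs.unsetErasureCodingPolicy(dir);
{code}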



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801083#comment-15801083
 ] 

Hadoop QA commented on HDFS-10759:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 21 new 
+ 1093 unchanged - 3 fixed = 1114 total (was 1096) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
45s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10759 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845755/HDFS-10759.0003.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 2f2b884d4dc4 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a605ff3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18033/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18033/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console

[jira] [Commented] (HDFS-11293) FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation

2017-01-05 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800939#comment-15800939
 ] 

Yuanbo Liu commented on HDFS-11293:
---

[~umamaheswararao] Thanks for your response.
{quote}
The scheduling is wrong if that is happening, right?
{quote}
The current answer is yes, and I've encountered it when testing SPS.
But generally speaking, choosing A[SSD] as a target seems reasonable because 
the block replica exists on A[DISK], not A[SSD]. Are there any considerations 
against putting replicas on the same node under different storage types/dirs?

> FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
> ---
>
> Key: HDFS-11293
> URL: https://issues.apache.org/jira/browse/HDFS-11293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Critical
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get the replica 
> info by block pool id. But consider this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we start to change the file's storage policy and move the block replica 
> within the cluster. Very likely we have to move the block from B[DISK] to 
> A[SSD]; at this point, datanode A throws ReplicaAlreadyExistsException, which 
> is not correct behavior (see the sketch below).
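
A simplified sketch of the check in question, abridged from 
{{FsDatasetImpl#createTemporary}} (the surrounding retry logic is omitted). 
The lookup is keyed only by block pool id and block id, so an existing replica 
on A[DISK] rejects a transfer that targets A[SSD]:

{code:java}
// Abridged: the lookup ignores which storage the existing replica lives on.
ReplicaInfo current = volumeMap.get(b.getBlockPoolId(), b.getBlockId());
if (current != null
    && current.getGenerationStamp() >= b.getGenerationStamp()) {
  throw new ReplicaAlreadyExistsException("Block " + b
      + " already exists in state " + current.getState()
      + " and thus cannot be created.");
}
{code}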



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2017-01-05 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800934#comment-15800934
 ] 

Ewan Higgs commented on HDFS-10759:
---

The TestWebHDFS* test failures appear to have been affected by HDFS-11280: a 
patch was committed and then reverted, and the issue has since been fixed.

> Change fsimage bool isStriped from boolean to an enum
> -
>
> Key: HDFS-10759
> URL: https://issues.apache.org/jira/browse/HDFS-10759
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2
>Reporter: Ewan Higgs
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10759.0001.patch, HDFS-10759.0002.patch, 
> HDFS-10759.0003.patch
>
>
> The new erasure coding project has updated the protocol for fsimage such that 
> the {{INodeFile}} has a boolean '{{isStriped}}'. I think this is better as an 
> enum or integer since a boolean precludes any future block types. 
> For example:
> {code}
> enum BlockType {
>   CONTIGUOUS = 0,
>   STRIPED = 1,
> }
> {code}
> We can also make this more robust to future changes where there are different 
> block types supported in a staged rollout.  Here, we would use 
> {{UNKNOWN_BLOCK_TYPE}} as the first value since this is the default value. 
> See 
> [here|http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/] 
> for more discussion.
> {code}
> enum BlockType {
>   UNKNOWN_BLOCK_TYPE = 0,
>   CONTIGUOUS = 1,
>   STRIPED = 2,
> }
> {code}
> But I'm not convinced this is necessary since there are other enums that 
> don't use this approach.
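
A hedged sketch of why the first enum value matters under protobuf default 
semantics (the generated class and accessor names below are illustrative, not 
the actual fsimage protobuf API):

{code:java}
// If an fsimage predates the blockType field, parsing yields the enum's
// first value. With UNKNOWN_BLOCK_TYPE first, an old image is explicit
// about its age instead of silently reading as CONTIGUOUS.
INodeFileProto file = INodeFileProto.parseFrom(bytes);
if (file.getBlockType() == BlockType.UNKNOWN_BLOCK_TYPE) {
  // Image written before block types existed: assume the legacy
  // contiguous layout.
  handleLegacyContiguous(file);
}
{code}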



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11285) Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)

2017-01-05 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated HDFS-11285:
--
Attachment: DecomStatus.png

> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never 
> transition to (Dead, DECOMMISSIONED)
> --
>
> Key: HDFS-11285
> URL: https://issues.apache.org/jira/browse/HDFS-11285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lantao Jin
> Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead 
> or unresponsive and not expected to rejoin the cluster. In a large cluster, 
> we found more than 100 nodes that were dead and decommissioning, and their 
> {panel} Under replicated blocks {panel} and {panel} Blocks with no live 
> replicas {panel} counts were all ZERO. This was actually fixed in 
> [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]; after that, 
> running refreshNodes twice eliminates this case. But it seems this patch was 
> lost in the refactoring done in 
> [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]. We are using a 
> Hadoop version based on 2.7.1, and only the operations below can transition 
> the status from {panel} Dead, DECOMMISSION_INPROGRESS {panel} to {panel} 
> Dead, DECOMMISSIONED {panel}:
> # Remove it from hdfs-exclude
> # refreshNodes
> # Re-add it to hdfs-exclude
> # refreshNodes
> So why was the code below removed during the refactoring into the new 
> DecommissionManager? (A sketch of where it could be reinstated follows 
> below.)
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}
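
A sketch of where that check could be reinstated. {{startDecommission}} and 
{{DatanodeDescriptor}} match the 2.7-era DecommissionManager, but the 
surrounding details are illustrative:

{code:java}
// Hypothetical sketch: short-circuit dead nodes at the start of
// decommissioning so they never linger in DECOMMISSION_INPROGRESS.
void startDecommission(DatanodeDescriptor node) {
  if (!node.isAlive) {
    // A dead node has no blocks to drain; mark it decommissioned now.
    LOG.info("Dead node " + node + " is decommissioned immediately.");
    node.setDecommissioned();
    return;
  }
  // ... normal path: mark DECOMMISSION_INPROGRESS and track replication ...
}
{code}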



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2017-01-05 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-10759:
--
Attachment: HDFS-10759.0003.patch

Attached the correct patch.

> Change fsimage bool isStriped from boolean to an enum
> -
>
> Key: HDFS-10759
> URL: https://issues.apache.org/jira/browse/HDFS-10759
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2
>Reporter: Ewan Higgs
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10759.0001.patch, HDFS-10759.0002.patch, 
> HDFS-10759.0003.patch
>
>
> The new erasure coding project has updated the protocol for fsimage such that 
> the {{INodeFile}} has a boolean '{{isStriped}}'. I think this is better as an 
> enum or integer since a boolean precludes any future block types. 
> For example:
> {code}
> enum BlockType {
>   CONTIGUOUS = 0,
>   STRIPED = 1,
> }
> {code}
> We can also make this more robust to future changes where there are different 
> block types supported in a staged rollout.  Here, we would use 
> {{UNKNOWN_BLOCK_TYPE}} as the first value since this is the default value. 
> See 
> [here|http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/] 
> for more discussion.
> {code}
> enum BlockType {
>   UNKNOWN_BLOCK_TYPE = 0,
>   CONTIGUOUS = 1,
>   STRIPED = 2,
> }
> {code}
> But I'm not convinced this is necessary since there are other enums that 
> don't use this approach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10759) Change fsimage bool isStriped from boolean to an enum

2017-01-05 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-10759:
--
Attachment: (was: HDFS-11026.003.patch)

> Change fsimage bool isStriped from boolean to an enum
> -
>
> Key: HDFS-10759
> URL: https://issues.apache.org/jira/browse/HDFS-10759
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1, 3.0.0-beta1, 3.0.0-alpha2
>Reporter: Ewan Higgs
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10759.0001.patch, HDFS-10759.0002.patch
>
>
> The new erasure coding project has updated the protocol for fsimage such that 
> the {{INodeFile}} has a boolean '{{isStriped}}'. I think this is better as an 
> enum or integer since a boolean precludes any future block types. 
> For example:
> {code}
> enum BlockType {
>   CONTIGUOUS = 0,
>   STRIPED = 1,
> }
> {code}
> We can also make this more robust to future changes where there are different 
> block types supported in a staged rollout.  Here, we would use 
> {{UNKNOWN_BLOCK_TYPE}} as the first value since this is the default value. 
> See 
> [here|http://androiddevblog.com/protocol-buffers-pitfall-adding-enum-values/] 
> for more discussion.
> {code}
> enum BlockType {
>   UNKNOWN_BLOCK_TYPE = 0,
>   CONTIGUOUS = 1,
>   STRIPED = 2,
> }
> {code}
> But I'm not convinced this is necessary since there are other enums that 
> don't use this approach.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7343) HDFS smart storage management

2017-01-05 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-7343:
---
Attachment: HDFS-Smart-Storage-Management-update.pdf

Based on the discussion and feedback collected, we updated the design 
document. There are many changes compared with the previous version:
# The ultimate target is separated into two phases: in phase 1 we focus on 
implementing a rule-based automation engine that integrates the facilities in 
HDFS, and in phase 2 we will make it an end-to-end intelligent solution.
# The Kafka service dependency is removed; SSM gets info directly from the NN.
# DNs report metrics and events to the NN instead of being polled by SSM 
directly.
# Metrics, events and some other info are maintained in NN off-heap memory and 
stored in the NN.
# SSM is stateless and polls info from the NN when it starts up.

[~andrew.wang], [~anu], [~xiaochen], [~eddyxu] and anybody else, please help 
review it. Your suggestions and comments are appreciated. Thanks!

> HDFS smart storage management
> -
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HDFS-Smart-Storage-Management-update.pdf, 
> HDFS-Smart-Storage-Management.pdf
>
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine that considers file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preferences, etc.
> Modified the title to re-purpose this issue.
> We'd like to extend this effort a bit and aim to work on a comprehensive 
> solution that provides a smart storage management service for convenient, 
> intelligent and effective use of erasure coding or replicas, the HDFS cache 
> facility, HSM offerings, and all kinds of tools (balancer, mover, disk 
> balancer and so on) in a large cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11293) FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation

2017-01-05 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800742#comment-15800742
 ] 

Uma Maheswara Rao G commented on HDFS-11293:


[~yuanbo], I am wondering how 'A' was chosen as a target when a replica is 
already on that node. The scheduling is wrong if that is happening, right? Can 
you explain a little more what your scenario is?

> FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
> ---
>
> Key: HDFS-11293
> URL: https://issues.apache.org/jira/browse/HDFS-11293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>Priority: Critical
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get the replica 
> info by block pool id. But consider this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we start to change the file's storage policy and move the block replica 
> within the cluster. Very likely we have to move the block from B[DISK] to 
> A[SSD]; at this point, datanode A throws ReplicaAlreadyExistsException, which 
> is not correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


