[jira] [Resolved] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-18 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja resolved HDFS-13139.
-
Resolution: Workaround

The HDP default configuration had calculated the non-HDFS reserved storage 
"dfs.datanode.du.reserved" (approximately 3.5% of the total disk) for the 
smallest storage configured on a datanode (among the compute config groups), 
which had three drives, one of them in the TB range. Our default datanode data 
directory, "dfs.datanode.data.dir", was pointing to the drive with the lowest 
capacity (around 3% of the overall datanode storage). Because 3% < 3.5%, the 
reported HDFS capacity became 0%, and the supporting directories and files (a 
few KB) already on the datanode pushed its reported capacity to a negative 
value. To fix the downscaling issue, we either need to lower the non-HDFS 
reserved capacity (below 3% of the disk) or point the datanode at a disk with 
more capacity (greater than 3.5% of the total).
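
For reference, a minimal hdfs-site.xml sketch of the first option, lowering the 
per-volume non-DFS reservation so it no longer exceeds the size of the volume 
backing the data directory (the byte value below is only a placeholder, not our 
actual cluster setting):
{code:xml}
<!-- hdfs-site.xml (sketch; the value is a placeholder) -->
<property>
  <!-- Space in bytes reserved per volume for non-DFS use; keep it well below
       the capacity of the volume that holds dfs.datanode.data.dir. -->
  <name>dfs.datanode.du.reserved</name>
  <value>1073741824</value>
</property>
{code}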

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-18 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Comment: was deleted

(was: The HDP default configuration had calculated the non-HDFS reserved storage 
"dfs.datanode.du.reserved" (approximately 3.5% of the total disk) for the 
smallest storage configured on a datanode (among the compute config groups), 
which had three drives, one of them in the TB range. Our default datanode data 
directory, "dfs.datanode.data.dir", was pointing to the drive with the lowest 
capacity (around 3% of the overall datanode storage). Because 3% < 3.5%, the 
reported HDFS capacity became 0%, and the supporting directories and files (a 
few KB) already on the datanode pushed its reported capacity to a negative 
value. To fix the downscaling issue, we either need to lower the non-HDFS 
reserved capacity (below 3% of the disk) or point the datanode at a disk with 
more capacity (greater than 3.5% of the total).)

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Comment Edited] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-18 Thread Abhishek Sakhuja (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368797#comment-16368797
 ] 

Abhishek Sakhuja edited comment on HDFS-13139 at 2/19/18 5:32 AM:
--

The HDP default configuration had calculated the non-HDFS reserved storage 
"dfs.datanode.du.reserved" (approximately 3.5% of the total disk) for the 
smallest storage configured on a datanode (among the compute config groups), 
which had three drives, one of them in the TB range. Our default datanode data 
directory, "dfs.datanode.data.dir", was pointing to the drive with the lowest 
capacity (around 3% of the overall datanode storage). Because 3% < 3.5%, the 
reported HDFS capacity became 0%, and the supporting directories and files (a 
few KB) already on the datanode pushed its reported capacity to a negative 
value. To fix the downscaling issue, we either need to lower the non-HDFS 
reserved capacity (below 3% of the disk) or point the datanode at a disk with 
more capacity (greater than 3.5% of the total).


was (Author: abhi.sakhuja):
The HDP default configuration had calculated the non-HDFS reserved storage 
"dfs.datanode.du.reserved" (approximately 3.5% of the total disk) for the 
smallest storage configured on a datanode (among the compute config groups), 
which has three drives, one of them in the TB range. Our default datanode data 
directory, "dfs.datanode.data.dir", was pointing to the drive with the lowest 
capacity (around 3% of the overall datanode storage). Because 3% < 3.5%, the 
reported HDFS capacity became 0%, and the supporting directories and files (a 
few KB) already on the datanode pushed its reported capacity to a negative 
value. To fix the downscaling issue, we either need to lower the non-HDFS 
reserved capacity (below 3% of the disk) or point the datanode at a disk with 
more capacity (greater than 3.5% of the total).

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Commented] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-18 Thread Abhishek Sakhuja (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16368797#comment-16368797
 ] 

Abhishek Sakhuja commented on HDFS-13139:
-

The HDP default configuration had calculated the non-HDFS reserved storage 
"dfs.datanode.du.reserved" (approximately 3.5% of the total disk) for the 
smallest storage configured on a datanode (among the compute config groups), 
which has three drives, one of them in the TB range. Our default datanode data 
directory, "dfs.datanode.data.dir", was pointing to the drive with the lowest 
capacity (around 3% of the overall datanode storage). Because 3% < 3.5%, the 
reported HDFS capacity became 0%, and the supporting directories and files (a 
few KB) already on the datanode pushed its reported capacity to a negative 
value. To fix the downscaling issue, we either need to lower the non-HDFS 
reserved capacity (below 3% of the disk) or point the datanode at a disk with 
more capacity (greater than 3.5% of the total).
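
For the second option, a minimal hdfs-site.xml sketch pointing the datanode 
data directory at the larger mount instead (the path below is a placeholder, 
not the real drive layout):
{code:xml}
<!-- hdfs-site.xml (sketch; /grid/large-disk is a placeholder mount) -->
<property>
  <!-- Store HDFS block data on the larger volume so the per-volume
       reservation no longer swallows the whole data directory. -->
  <name>dfs.datanode.data.dir</name>
  <value>/grid/large-disk/hadoop/hdfs/data</value>
</property>
{code}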

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Labels: azureblob cloudbreak decommission hdfs  (was: )

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Labels: azureblob cloudbreak datanode decommission hdfs  (was: azureblob 
cloudbreak decommission hdfs)

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Description: 
Configuring Azure WASB storage as the default HDFS location means that the 
Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
I try to decommission a node, the datanode tries to rebalance some 28 KB of 
data to another available datanode. However, our HDFS has 0 capacity, so 
decommissioning fails with the error given below:
{code:java}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
Fetching the cluster information shows that the default local HDFS still holds 
a few KB, which is what gets rebalanced, while the available capacity is 0:
{code:java}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}

  was:
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS location, which means that the Hadoop HDFS capacity will be 0. The default 
replication factor is 1, but when I try to decommission a node, the datanode 
tries to rebalance some 28 KB of data to another available datanode. However, 
our HDFS has 0 capacity, so decommissioning fails with the error given below:
{code:java}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
Fetching the cluster information shows that the default local HDFS still holds 
a few KB, which is what gets rebalanced, while the available capacity is 0:
{code:java}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}


> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>
> Configuring Azure WASB storage as the default HDFS location means that the 
> Hadoop HDFS capacity will be 0. The default replication factor is 1, but when 
> I try to decommission a node, the datanode tries to rebalance some 28 KB of 
> data to another available datanode. However, our HDFS has 0 capacity, so 
> decommissioning fails with the error given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Description: 
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS location, which means that the Hadoop HDFS capacity will be 0. The default 
replication factor is 1, but when I try to decommission a node, the datanode 
tries to rebalance some 28 KB of data to another available datanode. However, 
our HDFS has 0 capacity, so decommissioning fails with the error given below:
{code:java}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
Fetching the cluster information shows that the default local HDFS still holds 
a few KB, which is what gets rebalanced, while the available capacity is 0:
{code:java}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}

  was:
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS location, which means that the Hadoop HDFS capacity will be 0. The default 
replication factor is 1, but when I try to decommission a node, the datanode 
tries to rebalance some 28 KB of data to another available datanode. However, 
our HDFS has 0 capacity, so decommissioning fails with the error given below:
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed

Fetching the cluster information shows that the default local HDFS still holds 
a few KB, which is what gets rebalanced, while the available capacity is 0:
{code:java}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}


> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>
> I created a Hadoop cluster and configured Azure WASB storage as the default 
> HDFS location, which means that the Hadoop HDFS capacity will be 0. The 
> default replication factor is 1, but when I try to decommission a node, the 
> datanode tries to rebalance some 28 KB of data to another available datanode. 
> However, our HDFS has 0 capacity, so decommissioning fails with the error 
> given below:
> {code:java}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Fetching the cluster information shows that the default local HDFS still 
> holds a few KB, which is what gets rebalanced, while the available capacity 
> is 0:
> {code:java}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Created] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)
Abhishek Sakhuja created HDFS-13139:
---

 Summary: Default HDFS as Azure WASB tries rebalancing datanode 
data to HDFS (0% capacity) and fails
 Key: HDFS-13139
 URL: https://issues.apache.org/jira/browse/HDFS-13139
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, fs/azure, hdfs
Affects Versions: 2.7.3
Reporter: Abhishek Sakhuja


I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS location, which means that the Hadoop HDFS capacity will be 0. The default 
replication factor is 1, but when I try to decommission a node, the datanode 
tries to rebalance some 28 KB of data to another available datanode. However, 
our HDFS has 0 capacity, so decommissioning fails with the error given below:
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed

Fetching the cluster information shows that the default local HDFS still holds 
a few KB, which is what gets rebalanced, while the available capacity is 0:
{code:java}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}


