[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Labels: azureblob cloudbreak decommission hdfs  (was: )

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> I configured Azure WASB storage as the default HDFS filesystem, which means 
> the local HDFS capacity is 0. Default replication is set to 1, but when I try 
> to decommission a node, the datanode tries to rebalance about 28 KB of data 
> to another available datanode. Since local HDFS has 0 capacity, 
> decommissioning fails with the following error:
> {code}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
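
The wording of the rejection suggests a pre-flight check of roughly the shape 
below, comparing the bytes that must be moved off the node against the 
cluster's remaining DFS capacity. This is a hypothetical sketch for 
illustration only, not the actual Cloudbreak/Ambari source:

{code:java}
// Hypothetical sketch of the pre-flight check implied by the error above;
// not the actual Cloudbreak/Ambari code.
public class DecommissionCheckSketch {
    public static void main(String[] args) {
        long bytesToMove = 28672L;    // DFS used on the decommissioning datanode
        long remainingCapacity = 0L;  // CapacityRemaining reported by the NameNode
        // With WASB as the default filesystem, remaining DFS capacity is
        // always 0, so any non-zero DFS usage on the node fails this check.
        if (bytesToMove > remainingCapacity) {
            throw new IllegalStateException("Trying to move '" + bytesToMove
                + "' bytes worth of data to nodes with '" + remainingCapacity
                + "' bytes of capacity is not allowed");
        }
    }
}
{code}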
> Querying the cluster metrics shows that the default local HDFS still reports 
> a few KB of used space, which it tries to rebalance, even though both total 
> and remaining capacity are 0:
> {code:json}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Labels: azureblob cloudbreak datanode decommission hdfs  (was: azureblob 
cloudbreak decommission hdfs)

> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>  Labels: azureblob, cloudbreak, datanode, decommission, hdfs
>
> I configured Azure WASB storage as the default HDFS filesystem, which means 
> the local HDFS capacity is 0. Default replication is set to 1, but when I try 
> to decommission a node, the datanode tries to rebalance about 28 KB of data 
> to another available datanode. Since local HDFS has 0 capacity, 
> decommissioning fails with the following error:
> {code}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
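
For context, "WASB as the default filesystem" as described above is typically 
configured along these lines in core-site.xml; the container, account, and key 
values here are placeholders:

{code:xml}
<!-- Sketch of a WASB-as-default-FS setup; CONTAINER/ACCOUNT/KEY are placeholders. -->
<property>
  <name>fs.defaultFS</name>
  <value>wasb://CONTAINER@ACCOUNT.blob.core.windows.net</value>
</property>
<property>
  <name>fs.azure.account.key.ACCOUNT.blob.core.windows.net</name>
  <value>STORAGE_ACCOUNT_KEY</value>
</property>
{code}

With this setting the cluster's working data lives in Azure Blob storage while 
the local DataNodes report 0 DFS capacity, which matches the metrics quoted 
below.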
> Querying the cluster metrics shows that the default local HDFS still reports 
> a few KB of used space, which it tries to rebalance, even though both total 
> and remaining capacity are 0:
> {code:json}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Description: 
I configured Azure WASB storage as the default HDFS filesystem, which means 
the local HDFS capacity is 0. Default replication is set to 1, but when I try 
to decommission a node, the datanode tries to rebalance about 28 KB of data to 
another available datanode. Since local HDFS has 0 capacity, decommissioning 
fails with the following error:
{code}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
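(For reference, the decommission itself follows the standard HDFS exclude-file 
flow; a sketch with a placeholder hostname and exclude-file path, assuming 
dfs.hosts.exclude already points at that file:)

{code:bash}
# Add the node to the exclude file referenced by dfs.hosts.exclude
# (the path is a placeholder), then have the NameNode re-read it.
echo 'worker-node-1.example.com' >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
# The NameNode then re-replicates the node's blocks elsewhere, which is
# the step that fails here because DFS capacity is 0.
{code}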
Querying the cluster metrics shows that the default local HDFS still reports a 
few KB of used space, which it tries to rebalance, even though both total and 
remaining capacity are 0:
{code:json}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}

  was:
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS filesystem, which means the local HDFS capacity is 0. Default replication 
is set to 1, but when I try to decommission a node, the datanode tries to 
rebalance about 28 KB of data to another available datanode. Since local HDFS 
has 0 capacity, decommissioning fails with the following error:
{code}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
Querying the cluster metrics shows that the default local HDFS still reports a 
few KB of used space, which it tries to rebalance, even though both total and 
remaining capacity are 0:
{code:json}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}


> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>
> I configured Azure WASB storage as the default HDFS filesystem, which means 
> the local HDFS capacity is 0. Default replication is set to 1, but when I try 
> to decommission a node, the datanode tries to rebalance about 28 KB of data 
> to another available datanode. Since local HDFS has 0 capacity, 
> decommissioning fails with the following error:
> {code}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Querying the cluster metrics shows that the default local HDFS still reports 
> a few KB of used space, which it tries to rebalance, even though both total 
> and remaining capacity are 0:
> {code:json}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}






[jira] [Updated] (HDFS-13139) Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% capacity) and fails

2018-02-13 Thread Abhishek Sakhuja (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Sakhuja updated HDFS-13139:

Description: 
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS filesystem, which means the local HDFS capacity is 0. Default replication 
is set to 1, but when I try to decommission a node, the datanode tries to 
rebalance about 28 KB of data to another available datanode. Since local HDFS 
has 0 capacity, decommissioning fails with the following error:
{code}
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not 
allowed{code}
Querying the cluster metrics shows that the default local HDFS still reports a 
few KB of used space, which it tries to rebalance, even though both total and 
remaining capacity are 0:
{code:json}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}

  was:
I created a Hadoop cluster and configured Azure WASB storage as the default 
HDFS filesystem, which means the local HDFS capacity is 0. Default replication 
is set to 1, but when I try to decommission a node, the datanode tries to 
rebalance about 28 KB of data to another available datanode. Since local HDFS 
has 0 capacity, decommissioning fails with the following error:
New node(s) could not be removed from the cluster. Reason Trying to move 
'28672' bytes worth of data to nodes with '0' bytes of capacity is not allowed

Querying the cluster metrics shows that the default local HDFS still reports a 
few KB of used space, which it tries to rebalance, even though both total and 
remaining capacity are 0:
{code:json}
"CapacityRemaining" : 0,
 "CapacityTotal" : 0,
 "CapacityUsed" : 131072,
 "DeadNodes" : "{}",
 "DecomNodes" : "{}",
 "HeapMemoryMax" : 1060372480,
 "HeapMemoryUsed" : 147668152,
 "NonDfsUsedSpace" : 0,
 "NonHeapMemoryMax" : -1,
 "NonHeapMemoryUsed" : 75319744,
 "PercentRemaining" : 0.0,
 "PercentUsed" : 100.0,
 "Safemode" : "",
 "StartTime" : 1518241019502,
 "TotalFiles" : 1,
 "UpgradeFinalized" : true,{code}


> Default HDFS as Azure WASB tries rebalancing datanode data to HDFS (0% 
> capacity) and fails
> --
>
> Key: HDFS-13139
> URL: https://issues.apache.org/jira/browse/HDFS-13139
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, fs/azure, hdfs
>Affects Versions: 2.7.3
>Reporter: Abhishek Sakhuja
>Priority: Major
>
> I created a Hadoop cluster and configured Azure WASB storage as the default 
> HDFS filesystem, which means the local HDFS capacity is 0. Default 
> replication is set to 1, but when I try to decommission a node, the datanode 
> tries to rebalance about 28 KB of data to another available datanode. Since 
> local HDFS has 0 capacity, decommissioning fails with the following error:
> {code}
> New node(s) could not be removed from the cluster. Reason Trying to move 
> '28672' bytes worth of data to nodes with '0' bytes of capacity is not 
> allowed{code}
> Querying the cluster metrics shows that the default local HDFS still reports 
> a few KB of used space, which it tries to rebalance, even though both total 
> and remaining capacity are 0:
> {code:json}
> "CapacityRemaining" : 0,
>  "CapacityTotal" : 0,
>  "CapacityUsed" : 131072,
>  "DeadNodes" : "{}",
>  "DecomNodes" : "{}",
>  "HeapMemoryMax" : 1060372480,
>  "HeapMemoryUsed" : 147668152,
>  "NonDfsUsedSpace" : 0,
>  "NonHeapMemoryMax" : -1,
>  "NonHeapMemoryUsed" : 75319744,
>  "PercentRemaining" : 0.0,
>  "PercentUsed" : 100.0,
>  "Safemode" : "",
>  "StartTime" : 1518241019502,
>  "TotalFiles" : 1,
>  "UpgradeFinalized" : true,{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org