[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2024-02-11 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HDFS-15386:
--
  Component/s: datanode
 Hadoop Flags: Reviewed
 Target Version/s: 3.3.1, 2.10.1, 3.2.2, 3.4.0
Affects Version/s: 3.3.1
   2.10.1
   3.2.2
   3.4.0

> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.2.2, 2.10.1, 3.3.1, 3.4.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 2.10.1, 3.3.1, 3.4.0
>
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map. However, because the key of the 
> map is the *bpid* (Block Pool ID), each removed volume overwrites the entry 
> written by the previous one. As a result, the map holds only the blocks of 
> the last volume we are removing, and only those blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]
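
For context, one way to avoid losing the blocks collected for earlier volumes 
is to merge the per-bpid lists instead of replacing them. The sketch below is 
illustrative only and is not necessarily the patch that was committed for this 
issue; it reuses the identifiers from the snippet above:
{code:java}
// Illustrative sketch: accumulate into any list already stored for this bpid
// rather than overwriting it, so blocks from previously removed volumes are kept.
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> blocks =
      blkToInvalidate.computeIfAbsent(bpid, k -> new ArrayList<>());
  for (Iterator<ReplicaInfo> it =
        volumeMap.replicas(bpid).iterator(); it.hasNext();) {
    ReplicaInfo block = it.next();
    if (block.getVolume().getStorageLocation().equals(sdLocation)) {
      blocks.add(block);   // appended to the merged list, never dropped
      it.remove();
    }
  }
}
{code}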






[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-09 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15386:
-
Fix Version/s: 2.10.1

> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map. However, because the key of the 
> map is the *bpid* (Block Pool ID), each removed volume overwrites the entry 
> written by the previous one. As a result, the map holds only the blocks of 
> the last volume we are removing, and only those blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]
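
The overwrite itself is plain java.util.Map behaviour: a second put with the 
same key replaces the value stored by the first. A self-contained illustration 
follows; the block pool ID and block IDs are made up for the example:
{code:java}
import java.util.*;

public class PutOverwriteDemo {
  public static void main(String[] args) {
    Map<String, List<Long>> blkToInvalidate = new HashMap<>();
    String bpid = "BP-1";  // hypothetical block pool ID

    // Removing the first volume: its blocks are stored under the bpid key.
    blkToInvalidate.put(bpid, new ArrayList<>(Arrays.asList(1001L, 1002L)));

    // Removing a second volume: the same key is reused, so the first
    // volume's list is replaced and its blocks are never invalidated.
    blkToInvalidate.put(bpid, new ArrayList<>(Arrays.asList(2001L)));

    System.out.println(blkToInvalidate.get(bpid));  // prints [2001]
  }
}
{code}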






[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15386:
-
Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
   3.0.4

> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map. However, because the key of the 
> map is the *bpid* (Block Pool ID), each removed volume overwrites the entry 
> written by the previous one. As a result, the map holds only the blocks of 
> the last volume we are removing, and only those blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]






[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-03 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HDFS-15386:

Description: 
When removing volumes, we need to invalidate all the blocks in those volumes. 
In the following code (FsDatasetImpl), we keep the blocks to be invalidated in 
the *blkToInvalidate* map. However, because the key of the map is the *bpid* 
(Block Pool ID), each removed volume overwrites the entry written by the 
previous one. As a result, the map holds only the blocks of the last volume we 
are removing, and only those blocks are invalidated:
{code:java}
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> blocks = new ArrayList<>();
  for (Iterator<ReplicaInfo> it =
        volumeMap.replicas(bpid).iterator(); it.hasNext();) {
    ReplicaInfo block = it.next();
    final StorageLocation blockStorageLocation =
        block.getVolume().getStorageLocation();
    LOG.trace("checking for block " + block.getBlockId() +
        " with storageLocation " + blockStorageLocation);
    if (blockStorageLocation.equals(sdLocation)) {
      blocks.add(block);
      it.remove();
    }
  }
  blkToInvalidate.put(bpid, blocks);
}
{code}
[https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]

  was:
When removing volumes, we need to invalidate all the blocks in those volumes. 
In the following code (FsDatasetImpl), we keep the blocks to be invalidated in 
the *blkToInvalidate* map. However, because the key of the map is the *bpid* 
(Block Pool ID), each removed volume overwrites the entry written by the 
previous one. As a result, the map holds only the blocks of the last volume, 
and only those blocks are invalidated:
{code:java}
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> blocks = new ArrayList<>();
  for (Iterator<ReplicaInfo> it =
        volumeMap.replicas(bpid).iterator(); it.hasNext();) {
    ReplicaInfo block = it.next();
    final StorageLocation blockStorageLocation =
        block.getVolume().getStorageLocation();
    LOG.trace("checking for block " + block.getBlockId() +
        " with storageLocation " + blockStorageLocation);
    if (blockStorageLocation.equals(sdLocation)) {
      blocks.add(block);
      it.remove();
    }
  }
  blkToInvalidate.put(bpid, blocks);
}
{code}
[https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]


> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map. However, because the key of the 
> map is the *bpid* (Block Pool ID), each removed volume overwrites the entry 
> written by the previous one. As a result, the map holds only the blocks of 
> the last volume we are removing, and only those blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]






[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-03 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HDFS-15386:

Description: 
When removing volumes, we need to invalidate all the blocks in those volumes. 
In the following code (FsDatasetImpl), we keep the blocks to be invalidated in 
the *blkToInvalidate* map. However, because the key of the map is the *bpid* 
(Block Pool ID), each removed volume overwrites the entry written by the 
previous one. As a result, the map holds only the blocks of the last volume, 
and only those blocks are invalidated:
{code:java}
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> blocks = new ArrayList<>();
  for (Iterator<ReplicaInfo> it =
        volumeMap.replicas(bpid).iterator(); it.hasNext();) {
    ReplicaInfo block = it.next();
    final StorageLocation blockStorageLocation =
        block.getVolume().getStorageLocation();
    LOG.trace("checking for block " + block.getBlockId() +
        " with storageLocation " + blockStorageLocation);
    if (blockStorageLocation.equals(sdLocation)) {
      blocks.add(block);
      it.remove();
    }
  }
  blkToInvalidate.put(bpid, blocks);
}
{code}
[https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]

  was:
When removing volumes, we need to invalidate all the blocks in those volumes. 
In the following code (FsDatasetImpl), we keep the blocks to be invalidated in 
the *blkToInvalidate* map.
 However, because the key of the map is the *bpid* (Block Pool ID), each 
removed volume overwrites the entry written by the previous one. As a result, 
the map holds only the blocks of the last volume, and only those blocks are 
invalidated:
{code:java}
for (String bpid : volumeMap.getBlockPoolList()) {
  List<ReplicaInfo> blocks = new ArrayList<>();
  for (Iterator<ReplicaInfo> it =
        volumeMap.replicas(bpid).iterator(); it.hasNext();) {
    ReplicaInfo block = it.next();
    final StorageLocation blockStorageLocation =
        block.getVolume().getStorageLocation();
    LOG.trace("checking for block " + block.getBlockId() +
        " with storageLocation " + blockStorageLocation);
    if (blockStorageLocation.equals(sdLocation)) {
      blocks.add(block);
      it.remove();
    }
  }
  blkToInvalidate.put(bpid, blocks);
}
{code}
[https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]


> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map. However, because the key of the 
> map is the *bpid* (Block Pool ID), each removed volume overwrites the entry 
> written by the previous one. As a result, the map holds only the blocks of 
> the last volume, and only those blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]






[jira] [Updated] (HDFS-15386) ReplicaNotFoundException keeps happening in DN after removing multiple DN's data directories

2020-06-03 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HDFS-15386:

Summary: ReplicaNotFoundException keeps happening in DN after removing 
multiple DN's data directories  (was: ReplicaNotFoundException keeps happening 
in DN after removing multiple DN's data direcotries)

> ReplicaNotFoundException keeps happening in DN after removing multiple DN's 
> data directories
> 
>
> Key: HDFS-15386
> URL: https://issues.apache.org/jira/browse/HDFS-15386
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> When removing volumes, we need to invalidate all the blocks in those 
> volumes. In the following code (FsDatasetImpl), we keep the blocks to be 
> invalidated in the *blkToInvalidate* map.
>  However, because the key of the map is the *bpid* (Block Pool ID), each 
> removed volume overwrites the entry written by the previous one. As a 
> result, the map holds only the blocks of the last volume, and only those 
> blocks are invalidated:
> {code:java}
> for (String bpid : volumeMap.getBlockPoolList()) {
>   List<ReplicaInfo> blocks = new ArrayList<>();
>   for (Iterator<ReplicaInfo> it =
>         volumeMap.replicas(bpid).iterator(); it.hasNext();) {
>     ReplicaInfo block = it.next();
>     final StorageLocation blockStorageLocation =
>         block.getVolume().getStorageLocation();
>     LOG.trace("checking for block " + block.getBlockId() +
>         " with storageLocation " + blockStorageLocation);
>     if (blockStorageLocation.equals(sdLocation)) {
>       blocks.add(block);
>       it.remove();
>     }
>   }
>   blkToInvalidate.put(bpid, blocks);
> }
> {code}
> [https://github.com/apache/hadoop/blob/704409d53bf7ebf717a3c2e988ede80f623bbad3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L580-L595]


