[ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=646689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646689
 ]

ASF GitHub Bot logged work on HDFS-16198:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 05/Sep/21 10:55
            Start Date: 05/Sep/21 10:55
    Worklog Time Spent: 10m 
      Work Description: Hexiaoqiao commented on a change in pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359#discussion_r702405022



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithShortCircuitRead.java
##########
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager;
+import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShm;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.PerDatanodeVisitorInfo;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.Visitor;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.Slot;
+import org.apache.hadoop.net.unix.DomainSocket;
+import org.apache.hadoop.net.unix.TemporarySocketDirectory;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.event.Level;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+
+public class TestBlockTokenWithShortCircuitRead {
+
+  private static final int BLOCK_SIZE = 1024;
+  private static final int FILE_SIZE = 2 * BLOCK_SIZE;
+  private static final String FILE_TO_SHORT_CIRCUIT_READ = "/fileToSSR.dat";
+
+  static {
+    GenericTestUtils.setLogLevel(DFSClient.LOG, Level.TRACE);
+  }
+
+  private void readFile(FSDataInputStream in) throws IOException {
+    byte[] toRead = new byte[FILE_SIZE];
+    int totalRead = 0;
+    int nRead;
+    while ((nRead = in.read(toRead, totalRead,
+      toRead.length - totalRead)) > 0) {
+      totalRead += nRead;
+    }
+    assertEquals("Cannot read file.", toRead.length, totalRead);
+  }
+
+  @Test
+  public void testShortCircuitReadWithInvalidToken() throws Exception {
+    MiniDFSCluster cluster = null;
+    short numDataNodes = 1;
+    Configuration conf = new Configuration();
+    conf.setBoolean(DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, true);
+    conf.setLong(DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
+    conf.setInt("io.bytes.per.checksum", BLOCK_SIZE);

Review comment:
       It would be cleaner to use `DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY`
here instead of the raw string `io.bytes.per.checksum`.
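
For reference, a minimal sketch of the suggested change, assuming the
`DFS_BYTES_PER_CHECKSUM_KEY` constant in `DFSConfigKeys` (which maps to
`dfs.bytes-per-checksum`, the non-deprecated form of `io.bytes.per.checksum`):

```java
// Sketch of the suggested change: prefer the DFSConfigKeys constant
// over the hard-coded configuration key string.
conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, BLOCK_SIZE);
```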

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithShortCircuitRead.java
##########
@@ -0,0 +1,205 @@
+  @Test
+  public void testShortCircuitReadWithInvalidToken() throws Exception {
+    MiniDFSCluster cluster = null;
+    short numDataNodes = 1;
+    Configuration conf = new Configuration();
+    conf.setBoolean(DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, true);
+    conf.setLong(DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
+    conf.setInt("io.bytes.per.checksum", BLOCK_SIZE);
+    conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+    conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, numDataNodes);
+    conf.setInt(IPC_CLIENT_CONNECT_MAX_RETRIES_KEY, 0);
+    // Set short retry timeouts so this test runs faster
+    conf.setInt(HdfsClientConfigKeys.Retry.WINDOW_BASE_KEY, 10);
+    TemporarySocketDirectory sockDir = new TemporarySocketDirectory();
+    conf.set(DFS_DOMAIN_SOCKET_PATH_KEY, new File(sockDir.getDir(),
+        "testShortCircuitReadWithInvalidToken").getAbsolutePath());
+    conf.setBoolean(HdfsClientConfigKeys.Read.ShortCircuit.KEY, true);
+    // avoid caching
+    conf.setInt(HdfsClientConfigKeys.Read.ShortCircuit.STREAMS_CACHE_SIZE_KEY, 0);
+    DomainSocket.disableBindPathValidation();
+
+    try {
+      cluster = new MiniDFSCluster.Builder(conf)
+        .numDataNodes(numDataNodes).format(true).build();
+      cluster.waitActive();
+
+      final NameNode nn = cluster.getNameNode();
+      final NamenodeProtocols nnProto = nn.getRpcServer();
+      final BlockManager bm = nn.getNamesystem().getBlockManager();
+      final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();
+      // set a short token lifetime (1 second) initially
+      SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
+
+      Path fileToRead = new Path(FILE_TO_SHORT_CIRCUIT_READ);
+      DistributedFileSystem fs = cluster.getFileSystem();
+
+      final ShortCircuitCache cache =
+          fs.getClient().getClientContext().getShortCircuitCache();
+      final DatanodeInfo datanode =
+          new DatanodeInfo.DatanodeInfoBuilder()
+          .setNodeID(cluster.getDataNodes().get(0).getDatanodeId())
+          .build();
+
+      cache.getDfsClientShmManager().visit(new Visitor() {
+        @Override
+        public void visit(HashMap<DatanodeInfo, PerDatanodeVisitorInfo> info) {
+          // The ClientShmManager starts off empty
+          Assert.assertEquals(0, info.size());
+        }
+      });
+
+      // create file to read
+      DFSTestUtil.createFile(fs, fileToRead, FILE_SIZE, numDataNodes, 0);
+
+      try(FSDataInputStream in = fs.open(fileToRead)) {
+        // acquire access token
+        readFile(in);
+
+        // verify token is not expired
+        List<LocatedBlock> locatedBlocks = nnProto.getBlockLocations(
+            FILE_TO_SHORT_CIRCUIT_READ, 0, FILE_SIZE).getLocatedBlocks();
+        LocatedBlock lblock = locatedBlocks.get(0); // first block
+        Token<BlockTokenIdentifier> myToken = lblock.getBlockToken();
+        assertFalse(SecurityTestUtil.isBlockTokenExpired(myToken));
+
+        // wait token expiration
+        while (!SecurityTestUtil.isBlockTokenExpired(myToken)) {

Review comment:
       The `wait token expiration` logic here and the next occurrence look like
duplicated code segments. Could they be extracted into a shared helper?
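
One possible shape for such a helper, as a sketch only (the quoted diff
truncates the loop body, so the 10 ms polling interval is an assumption):

```java
// Hypothetical helper to deduplicate the two wait loops. The polling
// interval is an assumption; the original loop body is not shown in the
// quoted diff.
private void waitForTokenExpiration(Token<BlockTokenIdentifier> token)
    throws Exception {
  while (!SecurityTestUtil.isBlockTokenExpired(token)) {
    Thread.sleep(10);
  }
}
```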






Issue Time Tracking
-------------------

    Worklog Id:     (was: 646689)
    Time Spent: 2h 10m  (was: 2h)

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-16198
>                 URL: https://issues.apache.org/jira/browse/HDFS-16198
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Eungsop Yoo
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-16198.patch, screenshot-2.png
>
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With
> this configuration, a SecretManager.InvalidToken exception may be thrown if
> the access token has expired when we do short-circuit reads. The read failure
> itself is harmless, because failed reads are retried, but it leaks
> ShortCircuitShm.Slot objects. We found this problem in our secure HBase
> clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is
> thrown.
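
A minimal sketch of the fix shape the description suggests (illustrative only,
not the committed patch: `requestFileDescriptors` is a hypothetical stand-in
for the client's short-circuit file-descriptor request, while
`ShortCircuitCache#freeSlot` and `SecretManager.InvalidToken` are existing
Hadoop APIs):

```java
// Illustrative sketch, not the committed patch: when the DataNode rejects
// the block token during a short-circuit read, free the shared-memory slot
// allocated for the request instead of leaking it (the read is retried anyway).
private ShortCircuitReplicaInfo requestReplicaFreeingSlotOnBadToken(
    ShortCircuitCache cache, ShortCircuitShm.Slot slot) throws IOException {
  try {
    return requestFileDescriptors(slot);  // hypothetical request helper
  } catch (SecretManager.InvalidToken e) {
    if (slot != null) {
      cache.freeSlot(slot);  // return the Slot to the ShortCircuitCache
    }
    throw e;
  }
}
```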


