[ https://issues.apache.org/jira/browse/HADOOP-19474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17938508#comment-17938508 ]

ASF GitHub Bot commented on HADOOP-19474:
-----------------------------------------

bhattmanish98 commented on code in PR #7421:
URL: https://github.com/apache/hadoop/pull/7421#discussion_r2013589793


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##########
@@ -1277,57 +1275,14 @@ public String listStatus(final Path path, final String startFrom,
 
     do {
       try (AbfsPerfInfo perfInfo = startTracking("listStatus", "listPath")) {
-        AbfsRestOperation op = listingClient.listPath(relativePath, false,
-            abfsConfiguration.getListMaxResults(), continuation,
-            tracingContext);
+        ListResponseData listResponseData = listingClient.listPath(relativePath,

Review Comment:
   Should we perform a null check on listResponseData in this case?



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsAHCHttpOperation.java:
##########
@@ -194,26 +192,14 @@ String getConnResponseMessage() throws IOException {
   public void processResponse(final byte[] buffer,
       final int offset,
       final int length) throws IOException {
-    try {
-      if (!isPayloadRequest) {
-        prepareRequest();
-        LOG.debug("Sending request: {}", httpRequestBase);
-        httpResponse = executeRequest();
-        LOG.debug("Request sent: {}; response {}", httpRequestBase,
-            httpResponse);
-      }
-      parseResponseHeaderAndBody(buffer, offset, length);
-    } finally {

Review Comment:
   Any reason for removing this part of code?



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1753,4 +1772,87 @@ protected AbfsRestOperation getSuccessOp(final AbfsRestOperationType operationTy
     successOp.hardSetResult(HttpURLConnection.HTTP_OK);
     return successOp;
   }
+
+  private String getPrimaryUserGroup() throws AzureBlobFileSystemException {
+    String primaryUserGroup;
+    if (!getAbfsConfiguration().getSkipUserGroupMetadataDuringInitialization()) {
+      try {
+        primaryUserGroup = UserGroupInformation.getCurrentUser().getPrimaryGroupName();
+      } catch (IOException ex) {
+        LOG.error("Failed to get primary group for {}, using user name as primary group name",
+            getPrimaryUser());
+        primaryUserGroup = getPrimaryUser();
+      }
+    } else {
+      //Provide a default group name
+      primaryUserGroup = getPrimaryUser();
+    }
+    return primaryUserGroup;
+  }
+
+  private String getPrimaryUser() throws AzureBlobFileSystemException {

Review Comment:
   Java doc missing



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/VersionedFileStatus.java:
##########
@@ -0,0 +1,127 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.EtagSource;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * A File status with version info extracted from the etag value returned
+ * in a LIST or HEAD request.
+ * The etag is included in the java serialization.
+ */
+public class VersionedFileStatus extends FileStatus implements EtagSource {
+
+  /**
+   * The superclass is declared serializable; this subclass can also
+   * be serialized.
+   */
+  private static final long serialVersionUID = -2009013240419749458L;
+
+  /**
+   * The etag of an object.
+   * Not-final so that serialization via reflection will preserve the value.
+   */
+  private String version;
+
+  private String encryptionContext;
+
+  public VersionedFileStatus(
+      final String owner, final String group, final FsPermission fsPermission, final boolean hasAcl,
+      final long length, final boolean isdir, final int blockReplication,
+      final long blocksize, final long modificationTime, final Path path,
+      final String version, final String encryptionContext) {
+    super(length, isdir, blockReplication, blocksize, modificationTime, 0,
+        fsPermission,
+        owner,
+        group,
+        null,
+        path,
+        hasAcl, false, false);
+
+    this.version = version;
+    this.encryptionContext = encryptionContext;
+  }
+
+  /** Compare if this object is equal to another object.
+   * @param   obj the object to be compared.
+   * @return  true if two file status has the same path name; false if not.
+   */
+  @Override
+  public boolean equals(Object obj) {
+    if (!(obj instanceof FileStatus)) {
+      return false;
+    }
+
+    FileStatus other = (FileStatus) obj;
+
+    if (!this.getPath().equals(other.getPath())) {// compare the path
+      return false;
+    }
+
+    if (other instanceof VersionedFileStatus) {
+      return this.version.equals(((VersionedFileStatus) other).version);
+    }
+
+    return true;
+  }
+
+  /**
+   * Returns a hash code value for the object, which is defined as
+   * the hash code of the path name.
+   *
+   * @return  a hash code value for the path name and version
+   */
+  @Override
+  public int hashCode() {
+    int hash = getPath().hashCode();
+    hash = 89 * hash + (this.version != null ? this.version.hashCode() : 0);

Review Comment:
   What is the logic behind multiplying hash with 89?
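For context on the question above: multiplying the running hash by a small odd prime before adding the next field is the standard field-combining pattern; `java.util.Objects.hash` and IDE-generated `hashCode` methods use 31, and 89 is just another odd prime. The prime spreads earlier fields across bit positions so that swapping field values changes the result. A standalone sketch of the same shape (hypothetical inputs, not the actual ABFS class):

```java
public class HashCombineDemo {

    // Same shape as VersionedFileStatus#hashCode: start from the path hash,
    // then fold in the version. The odd-prime multiplier (31, 89, ...) shifts
    // earlier fields so the combination is order-sensitive; the exact prime
    // is an arbitrary stylistic choice.
    static int combine(int pathHash, String version) {
        int hash = pathHash;
        hash = 89 * hash + (version != null ? version.hashCode() : 0);
        return hash;
    }

    public static void main(String[] args) {
        int noVersion = combine("a/b".hashCode(), null);
        int v1 = combine("a/b".hashCode(), "etag-1");
        int v2 = combine("a/b".hashCode(), "etag-2");
        // Null-safe, and distinct versions produce distinct hashes here.
        System.out.println(v1 != v2);
        System.out.println(noVersion == 89 * "a/b".hashCode());
    }
}
```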



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsDfsClient.java:
##########
@@ -1464,20 +1472,41 @@ public Hashtable<String, String> getXMSProperties(AbfsHttpOperation result)
 
   /**
    * Parse the list file response from DFS ListPath API in Json format
-   * @param stream InputStream contains the list results.
-   * @throws IOException if parsing fails.
+   * @param result InputStream contains the list results.
+   * @param uri to be used for path conversion.
+   * @return {@link ListResponseData} containing listing response.
+   * @throws AzureBlobFileSystemException if parsing fails.
    */
   @Override
-  public ListResultSchema parseListPathResults(final InputStream stream) throws IOException {
-    DfsListResultSchema listResultSchema;
-    try {
-      final ObjectMapper objectMapper = new ObjectMapper();
      listResultSchema = objectMapper.readValue(stream, DfsListResultSchema.class);
+  public ListResponseData parseListPathResults(AbfsHttpOperation result, URI uri) throws AzureBlobFileSystemException {
+    try (InputStream listResultInputStream = result.getListResultStream()) {
+      DfsListResultSchema listResultSchema;
+      try {
+        final ObjectMapper objectMapper = new ObjectMapper();
+        listResultSchema = objectMapper.readValue(listResultInputStream,
+            DfsListResultSchema.class);
+        result.setListResultSchema(listResultSchema);
+        LOG.debug("ListPath listed {} paths with {} as continuation token",
+            listResultSchema.paths().size(),
+            getContinuationFromResponse(result));
+      } catch (IOException ex) {
+        throw new AbfsDriverException(ex);

Review Comment:
   Please add some error message along with exception.
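Seconding this: wrapping the cause with a message that names the failed operation makes stack traces self-explanatory. A generic sketch of the pattern using plain `IOException` (the exact `AbfsDriverException` constructors are not shown in this hunk, so this only illustrates the shape):

```java
import java.io.IOException;
import java.io.InputStream;

public class WrapWithMessageDemo {

    // Instead of `throw new SomeException(ex)`, say what was being attempted
    // when the failure occurred, and keep the original cause attached.
    static void parseListResults(InputStream stream) throws IOException {
        try {
            stream.read(); // stand-in for objectMapper.readValue(...)
        } catch (RuntimeException | IOException ex) {
            throw new IOException(
                "Unable to deserialize list results from ListPath response", ex);
        }
    }

    public static void main(String[] args) {
        try {
            parseListResults(null); // null stream -> NPE, wrapped with context
        } catch (IOException e) {
            System.out.println(e.getMessage());
            System.out.println(e.getCause() instanceof NullPointerException);
        }
    }
}
```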



##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRename.java:
##########
@@ -1780,19 +1782,19 @@ private Configuration createConfig(String producerQueueSize, String consumerMaxL
   private void validateRename(AzureBlobFileSystem fs, Path src, Path dst,
       boolean isSrcExist, boolean isDstExist, boolean isJsonExist)
       throws IOException {
-    Assertions.assertThat(fs.exists(dst))
-        .describedAs("Renamed Destination directory should exist.")
-        .isEqualTo(isDstExist);
     Assertions.assertThat(fs.exists(new Path(src.getParent(), src.getName() + SUFFIX)))
         .describedAs("Renamed Pending Json file should exist.")
         .isEqualTo(isJsonExist);
     Assertions.assertThat(fs.exists(src))
-        .describedAs("Renamed Destination directory should exist.")
+        .describedAs("Renamed Source directory should exist.")

Review Comment:
   Could you please correct it to `Renamed Source directory should not exist.`



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1753,4 +1772,87 @@ protected AbfsRestOperation getSuccessOp(final AbfsRestOperationType operationTy
     successOp.hardSetResult(HttpURLConnection.HTTP_OK);
     return successOp;
   }
+
+  private String getPrimaryUserGroup() throws AzureBlobFileSystemException {
+    String primaryUserGroup;
+    if (!getAbfsConfiguration().getSkipUserGroupMetadataDuringInitialization()) {

Review Comment:
   We can simplify this method
   ```
   private String getPrimaryUserGroup() throws AzureBlobFileSystemException {
      if (!getAbfsConfiguration().getSkipUserGroupMetadataDuringInitialization()) {
         try {
           return UserGroupInformation.getCurrentUser().getPrimaryGroupName();
         } catch (IOException ex) {
           LOG.error("Failed to get primary group for {}, using user name as primary group name",
               getPrimaryUser());
         }
       }
       return getPrimaryUser();
   }
   ```



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##########
@@ -1277,57 +1275,14 @@ public String listStatus(final Path path, final String startFrom,
 
     do {
       try (AbfsPerfInfo perfInfo = startTracking("listStatus", "listPath")) {
-        AbfsRestOperation op = listingClient.listPath(relativePath, false,
-            abfsConfiguration.getListMaxResults(), continuation,
-            tracingContext);
+        ListResponseData listResponseData = listingClient.listPath(relativePath,
+            false, abfsConfiguration.getListMaxResults(), continuation,
+            tracingContext, this.uri);
+        AbfsRestOperation op = listResponseData.getOp();
         perfInfo.registerResult(op.getResult());
-        continuation = listingClient.getContinuationFromResponse(op.getResult());
-        ListResultSchema retrievedSchema = op.getResult().getListResultSchema();
-        if (retrievedSchema == null) {
-          throw new AbfsRestOperationException(
-                  AzureServiceErrorCode.PATH_NOT_FOUND.getStatusCode(),
-                  AzureServiceErrorCode.PATH_NOT_FOUND.getErrorCode(),
-                  "listStatusAsync path not found",
-                  null, op.getResult());
-        }
-
-        long blockSize = abfsConfiguration.getAzureBlockSize();
-
-        for (ListResultEntrySchema entry : retrievedSchema.paths()) {
-          final String owner = identityTransformer.transformIdentityForGetRequest(entry.owner(), true, userName);
-          final String group = identityTransformer.transformIdentityForGetRequest(entry.group(), false, primaryUserGroup);
-          final String encryptionContext = entry.getXMsEncryptionContext();
-          final FsPermission fsPermission = entry.permissions() == null
-                  ? new AbfsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL)
-                  : AbfsPermission.valueOf(entry.permissions());
-          final boolean hasAcl = AbfsPermission.isExtendedAcl(entry.permissions());
-
-          long lastModifiedMillis = 0;
-          long contentLength = entry.contentLength() == null ? 0 : entry.contentLength();
-          boolean isDirectory = entry.isDirectory() == null ? false : entry.isDirectory();
-          if (entry.lastModified() != null && !entry.lastModified().isEmpty()) {
-            lastModifiedMillis = DateTimeUtils.parseLastModifiedTime(
-                entry.lastModified());
-          }
-
-          Path entryPath = new Path(File.separator + entry.name());
-          entryPath = entryPath.makeQualified(this.uri, entryPath);
-
-          fileStatuses.add(
-                  new VersionedFileStatus(
-                          owner,
-                          group,
-                          fsPermission,
-                          hasAcl,
-                          contentLength,
-                          isDirectory,
-                          1,
-                          blockSize,
-                          lastModifiedMillis,
-                          entryPath,
-                          entry.eTag(),
-                          encryptionContext));
-        }
+        continuation = listResponseData.getContinuationToken();
+        List<FileStatus> fileStatusListInCurrItr = listResponseData.getFileStatusList();
+        fileStatuses.addAll(fileStatusListInCurrItr);

Review Comment:
   Before adding it to fileStatuses, shouldn't we check if 
fileStatusListInCurrItr is null or empty?
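A guard along the suggested lines could look like the sketch below (plain `java.util` types; whether `getFileStatusList` can actually return null depends on the parser, so this is purely defensive):

```java
import java.util.ArrayList;
import java.util.List;

public class PageMergeGuard {

    // List.addAll(null) throws NullPointerException, so check before merging
    // a page of results into the accumulated list; an empty page is a no-op
    // either way, but skipping it avoids a pointless call.
    static <T> void addPage(List<T> accumulated, List<T> page) {
        if (page != null && !page.isEmpty()) {
            accumulated.addAll(page);
        }
    }

    public static void main(String[] args) {
        List<String> all = new ArrayList<>();
        addPage(all, null);               // survives a null page
        addPage(all, List.of());          // survives an empty page
        addPage(all, List.of("a", "b"));
        System.out.println(all);
    }
}
```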



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResponseData.java:
##########
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.contracts.services;
+
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+
+/**
+ * This class is used to hold the response data for list operations.
+ * It contains a list of FileStatus objects, a map of rename pending JSON paths,
+ * continuation token and the executed REST operation.
+ */
+public class ListResponseData {
+
+  private List<FileStatus> fileStatusList;
+  private Map<Path, Integer> renamePendingJsonPaths;

Review Comment:
   Since content Length is long, we should keep it Map<Path, Long>.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -285,6 +298,18 @@ private AbfsClient(final URL baseUrl,
           metricIdlePeriod);
     }
     this.abfsMetricUrl = abfsConfiguration.getMetricUri();
+
+    final Class<? extends IdentityTransformerInterface> identityTransformerClass =
+        abfsConfiguration.getRawConfiguration().getClass(FS_AZURE_IDENTITY_TRANSFORM_CLASS, IdentityTransformer.class,
+            IdentityTransformerInterface.class);
+    try {
+      this.identityTransformer =
+          identityTransformerClass.getConstructor(Configuration.class).newInstance(abfsConfiguration.getRawConfiguration());
+    } catch (IllegalAccessException | InstantiationException | IllegalArgumentException
+             | InvocationTargetException | NoSuchMethodException e) {
+      throw new IOException(e);

Review Comment:
   We can add some message as well along with exception for better 
understanding.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1753,4 +1772,87 @@ protected AbfsRestOperation getSuccessOp(final AbfsRestOperationType operationTy
     successOp.hardSetResult(HttpURLConnection.HTTP_OK);
     return successOp;
   }
+
+  private String getPrimaryUserGroup() throws AzureBlobFileSystemException {
+    String primaryUserGroup;
+    if (!getAbfsConfiguration().getSkipUserGroupMetadataDuringInitialization()) {
+      try {
+        primaryUserGroup = UserGroupInformation.getCurrentUser().getPrimaryGroupName();
+      } catch (IOException ex) {
+        LOG.error("Failed to get primary group for {}, using user name as primary group name",
+            getPrimaryUser());
+        primaryUserGroup = getPrimaryUser();
+      }
+    } else {
+      //Provide a default group name
+      primaryUserGroup = getPrimaryUser();
+    }
+    return primaryUserGroup;
+  }
+
+  private String getPrimaryUser() throws AzureBlobFileSystemException {
+    try {
+      return UserGroupInformation.getCurrentUser().getUserName();
+    } catch (IOException ex) {
+      throw new AbfsDriverException(ex);
+    }
+  }
+
+  /**
+   * Creates a VersionedFileStatus object from the ListResultEntrySchema.
+   * @param entry ListResultEntrySchema object.
+   * @param uri to be used for the path conversion.
+   * @return VersionedFileStatus object.
+   * @throws AzureBlobFileSystemException if transformation fails.
+   */
+  protected VersionedFileStatus getVersionedFileStatusFromEntry(
+      ListResultEntrySchema entry, URI uri) throws AzureBlobFileSystemException {
+    long blockSize = abfsConfiguration.getAzureBlockSize();
+    final String owner, group;
+    try{
+      if (identityTransformer != null) {
+        owner = identityTransformer.transformIdentityForGetRequest(
+            entry.owner(), true, getPrimaryUser());
+        group = identityTransformer.transformIdentityForGetRequest(
+            entry.group(), false, getPrimaryUserGroup());
+      } else {
+        owner = null;

Review Comment:
   The default value for owner and group is null. Do we need to explicitly 
assign it to null here?
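One detail that may answer this: `owner` and `group` are declared as final local variables, and locals (unlike fields) have no implicit default value in Java; every branch must assign them before use, so the explicit null is required for the code to compile at all. A minimal illustration:

```java
public class DefiniteAssignmentDemo {

    static String resolveOwner(boolean haveTransformer) {
        final String owner; // local variable: no implicit null, unlike a field
        if (haveTransformer) {
            owner = "resolved-owner";
        } else {
            owner = null; // removing this assignment is a compile error:
                          // "variable owner might not have been initialized"
        }
        return owner;
    }

    public static void main(String[] args) {
        System.out.println(resolveOwner(true));
        System.out.println(resolveOwner(false));
    }
}
```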



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1753,4 +1772,87 @@ protected AbfsRestOperation getSuccessOp(final AbfsRestOperationType operationTy
     successOp.hardSetResult(HttpURLConnection.HTTP_OK);
     return successOp;
   }
+
+  private String getPrimaryUserGroup() throws AzureBlobFileSystemException {

Review Comment:
   Java doc for this method



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##########
@@ -1753,4 +1772,87 @@ protected AbfsRestOperation getSuccessOp(final AbfsRestOperationType operationTy
     successOp.hardSetResult(HttpURLConnection.HTTP_OK);
     return successOp;
   }
+
+  private String getPrimaryUserGroup() throws AzureBlobFileSystemException {
+    String primaryUserGroup;
+    if (!getAbfsConfiguration().getSkipUserGroupMetadataDuringInitialization()) {
+      try {
+        primaryUserGroup = UserGroupInformation.getCurrentUser().getPrimaryGroupName();
+      } catch (IOException ex) {
+        LOG.error("Failed to get primary group for {}, using user name as primary group name",
+            getPrimaryUser());
+        primaryUserGroup = getPrimaryUser();
+      }
+    } else {
+      //Provide a default group name
+      primaryUserGroup = getPrimaryUser();
+    }
+    return primaryUserGroup;
+  }
+
+  private String getPrimaryUser() throws AzureBlobFileSystemException {
+    try {
+      return UserGroupInformation.getCurrentUser().getUserName();
+    } catch (IOException ex) {
+      throw new AbfsDriverException(ex);
+    }
+  }
+
+  /**
+   * Creates a VersionedFileStatus object from the ListResultEntrySchema.
+   * @param entry ListResultEntrySchema object.
+   * @param uri to be used for the path conversion.
+   * @return VersionedFileStatus object.
+   * @throws AzureBlobFileSystemException if transformation fails.
+   */
+  protected VersionedFileStatus getVersionedFileStatusFromEntry(
+      ListResultEntrySchema entry, URI uri) throws AzureBlobFileSystemException {
+    long blockSize = abfsConfiguration.getAzureBlockSize();
+    final String owner, group;
+    try{
+      if (identityTransformer != null) {
+        owner = identityTransformer.transformIdentityForGetRequest(
+            entry.owner(), true, getPrimaryUser());
+        group = identityTransformer.transformIdentityForGetRequest(
+            entry.group(), false, getPrimaryUserGroup());
+      } else {
+        owner = null;
+        group = null;
+      }
+    } catch (IOException ex) {
+      LOG.error("Failed to get owner/group for path {}", entry.name(), ex);
+      throw new AbfsDriverException(ex);
+    }
+    final String encryptionContext = entry.getXMsEncryptionContext();

Review Comment:
   Should we perform a null check on the entry before this, or will the entry 
always have a non-null value?



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##########
@@ -668,6 +660,10 @@ protected boolean isConnectionDisconnectedOnError() {
     return connectionDisconnectedOnError;
   }
 
+  protected void setListResultSchema(final ListResultSchema listResultSchema) {

Review Comment:
   Java doc missing



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsBlobClient.java:
##########
@@ -374,63 +380,68 @@ public AbfsRestOperation listPath(final String relativePath, final boolean recur
         requestHeaders);
 
     op.execute(tracingContext);
-    // Filter the paths for which no rename redo operation is performed.
-    fixAtomicEntriesInListResults(op, tracingContext);
-    if (isEmptyListResults(op.getResult()) && is404CheckRequired) {
+    ListResponseData listResponseData = parseListPathResults(op.getResult(), uri);
+    listResponseData.setOp(op);
+
+    // Perform Pending Rename Redo Operation on Atomic Rename Paths.
+    // Crashed HBase log rename recovery can be done by Filesystem.listStatus.
+    if (tracingContext.getOpType() == FSOperationType.LISTSTATUS
+        && op.getResult() != null
+        && op.getResult().getStatusCode() == HTTP_OK) {
+      retryRenameOnAtomicEntriesInListResults(tracingContext,
+          listResponseData.getRenamePendingJsonPaths());

Review Comment:
   We are calculating renamePendingJsonPath in this method after this line. How 
listResponseData.getRenamePendingJsonPaths() will work here is not clear.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/VersionedFileStatus.java:
##########
@@ -0,0 +1,127 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.EtagSource;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * A File status with version info extracted from the etag value returned
+ * in a LIST or HEAD request.
+ * The etag is included in the java serialization.
+ */
+public class VersionedFileStatus extends FileStatus implements EtagSource {
+
+  /**
+   * The superclass is declared serializable; this subclass can also
+   * be serialized.
+   */
+  private static final long serialVersionUID = -2009013240419749458L;
+
+  /**
+   * The etag of an object.
+   * Not-final so that serialization via reflection will preserve the value.
+   */
+  private String version;
+
+  private String encryptionContext;
+
+  public VersionedFileStatus(
+      final String owner, final String group, final FsPermission fsPermission, final boolean hasAcl,
+      final long length, final boolean isdir, final int blockReplication,
+      final long blocksize, final long modificationTime, final Path path,
+      final String version, final String encryptionContext) {
+    super(length, isdir, blockReplication, blocksize, modificationTime, 0,
+        fsPermission,
+        owner,
+        group,
+        null,
+        path,
+        hasAcl, false, false);
+
+    this.version = version;
+    this.encryptionContext = encryptionContext;
+  }
+
+  /** Compare if this object is equal to another object.
+   * @param   obj the object to be compared.
+   * @return  true if two file status has the same path name; false if not.
+   */
+  @Override
+  public boolean equals(Object obj) {
+    if (!(obj instanceof FileStatus)) {
+      return false;
+    }
+
+    FileStatus other = (FileStatus) obj;
+
+    if (!this.getPath().equals(other.getPath())) {// compare the path
+      return false;
+    }
+
+    if (other instanceof VersionedFileStatus) {
+      return this.version.equals(((VersionedFileStatus) other).version);
+    }
+
+    return true;
+  }
+
+  /**
+   * Returns a hash code value for the object, which is defined as
+   * the hash code of the path name.
+   *
+   * @return  a hash code value for the path name and version
+   */
+  @Override
+  public int hashCode() {
+    int hash = getPath().hashCode();
+    hash = 89 * hash + (this.version != null ? this.version.hashCode() : 0);
+    return hash;
+  }
+
+  /**
+   * Returns the version of this FileStatus
+   *
+   * @return  a string value for the FileStatus version
+   */
+  public String getVersion() {
+    return this.version;
+  }
+
+  @Override
+  public String getEtag() {

Review Comment:
   Java doc missing



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResponseData.java:
##########
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.contracts.services;
+
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation;
+
+/**
+ * This class is used to hold the response data for list operations.
+ * It contains a list of FileStatus objects, a map of rename pending JSON paths,
+ * continuation token and the executed REST operation.
+ */
+public class ListResponseData {
+
+  private List<FileStatus> fileStatusList;
+  private Map<Path, Integer> renamePendingJsonPaths;

Review Comment:
   Keeping it as an Integer is also fine. Since the JSON file will be small, 
it's okay to keep it as an int.



##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/ListResponseData.java:
##########
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.contracts.services;

Review Comment:
   I think we should either create a new package and keep this file there or 
move it to the service package since there are many similar files present there.





> ABFS: [FnsOverBlob] Listing Optimizations to avoid multiple iteration over 
> list response.
> -----------------------------------------------------------------------------------------
>
>                 Key: HADOOP-19474
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19474
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.5.0, 3.4.1
>            Reporter: Anuj Modi
>            Assignee: Anuj Modi
>            Priority: Major
>              Labels: pull-request-available
>
> On the blob endpoint, there is some handling that needs to be done on the 
> client side.
> This involves:
>  # Parsing the XML response and converting it to a VersionedFileStatus list
>  # Removing duplicate entries for non-empty explicit directories that appear 
> due to the presence of marker files
>  # Triggering rename recovery on a previously failed rename, indicated by 
> the presence of a pending JSON file
> Currently all three are done in separate iterations over the whole list. This 
> change brings all of them to a common place so that a single iteration over 
> the list response can handle all three.
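The single-pass idea described in the issue can be sketched generically as one loop that builds the listed-paths collection, drops marker-file duplicates, and collects pending-rename JSON paths at the same time (all names below are illustrative stand-ins, not the actual ABFS types):

```java
import java.util.ArrayList;
import java.util.List;

public class SinglePassListingSketch {

    // Stand-in for a raw list entry from the server response.
    record Entry(String name, boolean isMarkerFile, boolean isPendingJson) {}

    // One iteration does the work of three: filter pending-rename JSON files
    // out into their own collection, skip directory marker files, and keep
    // everything else as a listed path.
    static List<String> process(List<Entry> entries, List<String> pendingJsonPaths) {
        List<String> listed = new ArrayList<>();
        for (Entry e : entries) {
            if (e.isPendingJson()) {
                pendingJsonPaths.add(e.name()); // rename-recovery candidates
            } else if (!e.isMarkerFile()) {
                listed.add(e.name());           // real entry, keep it
            }
        }
        return listed;
    }

    public static void main(String[] args) {
        List<String> pending = new ArrayList<>();
        List<String> listed = process(List.of(
                new Entry("dir1", true, false),              // marker, dropped
                new Entry("dir1/file1", false, false),
                new Entry("dir2-RenamePending.json", false, true)), pending);
        System.out.println(listed);
        System.out.println(pending);
    }
}
```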



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
