[
https://issues.apache.org/jira/browse/HADOOP-19233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17922700#comment-17922700
]
ASF GitHub Bot commented on HADOOP-19233:
-----------------------------------------
bhattmanish98 commented on code in PR #7265:
URL: https://github.com/apache/hadoop/pull/7265#discussion_r1937122362
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsPermission.java:
##########
@@ -68,7 +69,8 @@ public boolean equals(Object obj) {
* @return a permission object for the provided string representation
*/
public static AbfsPermission valueOf(final String abfsSymbolicPermission) {
- if (abfsSymbolicPermission == null) {
+ if (abfsSymbolicPermission == null
Review Comment:
Taken
##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemCreate.java:
##########
@@ -166,6 +169,147 @@ public void testCreateNonRecursive2() throws Exception {
assertIsFile(fs, testFile);
}
+ /**
+ * Test createNonRecursive when the parent exists.
+ *
+ * @throws Exception in case of failure
+ */
+ @Test
+ public void testCreateNonRecursiveWhenParentExist() throws Exception {
+ AzureBlobFileSystem fs = getFileSystem();
+ assumeBlobServiceType();
+ fs.setWorkingDirectory(new Path(ROOT_PATH));
+ Path createDirectoryPath = new Path("hbase/A");
+ fs.mkdirs(createDirectoryPath);
+ fs.createNonRecursive(new Path(createDirectoryPath, "B"), FsPermission
+ .getDefault(), false, 1024,
+ (short) 1, 1024, null);
+ Assertions.assertThat(fs.exists(new Path(createDirectoryPath, "B")))
+ .describedAs("File should be created").isTrue();
+ fs.close();
+ }
+
+ /**
+ * Test createNonRecursive when the parent does not exist.
+ *
+ * @throws Exception in case of failure
+ */
+ @Test
+ public void testCreateNonRecursiveWhenParentNotExist() throws Exception {
+ AzureBlobFileSystem fs = getFileSystem();
+ assumeBlobServiceType();
+ fs.setWorkingDirectory(new Path(ROOT_PATH));
+ Path createDirectoryPath = new Path("A/");
Review Comment:
Both work.
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsBlobClient.java:
##########
@@ -1694,16 +1976,24 @@ private boolean isNonEmptyListing(String path,
* @return True if empty results without continuation token.
*/
private boolean isEmptyListResults(AbfsHttpOperation result) {
Review Comment:
Taken
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##########
@@ -1144,9 +1158,9 @@ public void delete(final Path path, final boolean recursive,
boolean shouldContinue = true;
LOG.debug("delete filesystem: {} path: {} recursive: {}",
- getClient().getFileSystem(),
- path,
- String.valueOf(recursive));
+ getClient().getFileSystem(),
Review Comment:
Same as above: the formatting was not as per our XML style, so I corrected it in the delete flow as well.
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsPermission.java:
##########
@@ -100,7 +102,8 @@ public static AbfsPermission valueOf(final String abfsSymbolicPermission) {
* extended ACL; otherwise false.
*/
public static boolean isExtendedAcl(final String abfsSymbolicPermission) {
- if (abfsSymbolicPermission == null) {
+ if (abfsSymbolicPermission == null
Review Comment:
Taken
##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##########
@@ -1155,9 +1169,10 @@ public void delete(final Path path, final boolean recursive,
do {
try (AbfsPerfInfo perfInfo = startTracking("delete", "deletePath")) {
AbfsRestOperation op = getClient().deletePath(relativePath, recursive,
- continuation, tracingContext, getIsNamespaceEnabled(tracingContext));
+ continuation, tracingContext);
perfInfo.registerResult(op.getResult());
- continuation = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+ continuation = op.getResult()
Review Comment:
Corrected formatting only in rename and delete flow as earlier formatting
was not as per what we follow.
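The delete loop in the hunk above pages through results by re-issuing the operation until the service stops returning an `x-ms-continuation` header. A minimal standalone sketch of that do/while pattern (hypothetical interface and class names, not the actual AbfsClient API):

```java
/**
 * Hypothetical sketch of continuation-token paging, mirroring the do/while
 * loop in AzureBlobFileSystemStore#delete: keep issuing the operation and
 * reading the continuation token from the response until the service stops
 * returning one.
 */
public class ContinuationPagingSketch {

  /** Stand-in for one service round trip (one REST call). */
  interface PagedOp {
    /** Returns the next continuation token, or null when paging is done. */
    String run(String continuation);
  }

  /** Drives the paging loop and returns how many calls were needed. */
  public static int countCalls(PagedOp op) {
    int calls = 0;
    String continuation = null;
    do {
      continuation = op.run(continuation);  // one REST call per iteration
      calls++;
    } while (continuation != null && !continuation.isEmpty());
    return calls;
  }
}
```

The loop body executes at least once even for an empty directory, which matches a do/while rather than a while loop.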
> ABFS: [FnsOverBlob] Implementing Rename and Delete APIs over Blob Endpoint
> --------------------------------------------------------------------------
>
> Key: HADOOP-19233
> URL: https://issues.apache.org/jira/browse/HADOOP-19233
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Anuj Modi
> Assignee: Manish Bhatt
> Priority: Major
> Labels: pull-request-available
>
> Currently, we only support rename and delete operations on the DFS endpoint.
> The reason for supporting rename and delete operations on the Blob endpoint
> is that the Blob endpoint does not account for hierarchy. We need to ensure
> that the HDFS contracts are maintained when performing rename and delete
> operations. Renaming or deleting a directory over the Blob endpoint requires
> the client to handle the orchestration and rename or delete all the blobs
> within the specified directory.
>
> The task outlines the considerations for implementing rename and delete
> operations for the FNS-blob endpoint to ensure compatibility with HDFS
> contracts.
> * {*}Blob Endpoint Usage{*}: The task addresses the need for abstraction in
> the code to maintain HDFS contracts while performing rename and delete
> operations on the blob endpoint, which does not support hierarchy.
> * {*}Rename Operations{*}: The {{AzureBlobFileSystem#rename()}} method will
> use a {{RenameHandler}} instance to handle rename operations, with separate
> handlers for the DFS and blob endpoints. This method includes prechecks,
> destination adjustments, and orchestration of directory renaming for blobs.
> * {*}Atomic Rename{*}: Atomic renaming is essential for blob endpoints, as
> it requires orchestration to copy or delete each blob within the directory. A
> configuration will allow developers to specify directories for atomic
> renaming, with a JSON file to track the status of renames.
> * {*}Delete Operations{*}: Delete operations are simpler than renames,
> requiring fewer HDFS contract checks. For blob endpoints, the client must
> handle orchestration, including managing orphaned directories created by
> AzCopy.
> * {*}Orchestration for Rename/Delete{*}: Orchestration for rename and delete
> operations over blob endpoints involves listing blobs and performing actions
> on each blob. The process must be optimized to handle large numbers of blobs
> efficiently.
> * {*}Need for Optimization{*}: Optimization is crucial because the
> {{ListBlob}} API can return a maximum of 5000 blobs at once, necessitating
> multiple calls for large directories. The task proposes a producer-consumer
> model to handle blobs in parallel, thereby reducing processing time and
> memory usage.
> * {*}Producer-Consumer Design{*}: The proposed design includes a producer to
> list blobs, a queue to store the blobs, and a consumer to process them in
> parallel. This approach aims to improve efficiency and mitigate memory issues.
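> The producer-consumer flow described above can be sketched roughly as
> follows. All names here are hypothetical illustrations, not the actual
> Hadoop classes: a producer feeds blob names page by page (ListBlob caps a
> page at 5000 entries) into a bounded queue, and consumer threads act on
> each blob in parallel, with one sentinel per consumer to signal completion.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical sketch of the producer-consumer model, not Hadoop code. */
public class BlobBatchSketch {
  private static final String POISON = "\u0000";  // end-of-stream marker

  public static int processAll(List<List<String>> pages, int consumers) {
    // Bounded queue caps memory use: the producer blocks once it is full.
    BlockingQueue<String> queue = new LinkedBlockingQueue<>(5000);
    AtomicInteger processed = new AtomicInteger();
    ExecutorService pool = Executors.newFixedThreadPool(consumers);
    for (int i = 0; i < consumers; i++) {
      pool.submit(() -> {
        try {
          for (String blob = queue.take(); !POISON.equals(blob);
               blob = queue.take()) {
            processed.incrementAndGet();  // stand-in for per-blob rename/delete
          }
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });
    }
    try {
      for (List<String> page : pages) {  // one page per ListBlob call
        for (String blob : page) {
          queue.put(blob);               // blocks if consumers lag behind
        }
      }
      for (int i = 0; i < consumers; i++) {
        queue.put(POISON);               // one sentinel per consumer
      }
      pool.shutdown();
      pool.awaitTermination(1, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return processed.get();
  }
}
```

> The bounded queue is what keeps memory flat for very large directories:
> listing and acting on blobs overlap instead of materializing the full
> listing first.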
> More details will follow
> Prerequisites for this patch:
> 1. HADOOP-19187 ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting
> both DFS and Blob Endpoint - ASF JIRA (apache.org)
> 2. HADOOP-19226 ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob
> Endpoint for AbfsBlobClient - ASF JIRA (apache.org)
> 3. HADOOP-19207 ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs
> and Metadata APIs - ASF JIRA (apache.org)
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]