[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=564569&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-564569 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 11/Mar/21 12:47
Start Date: 11/Mar/21 12:47
Worklog Time Spent: 10m

Work Description: steveloughran merged pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 564569)
Time Spent: 3h (was: 2h 50m)

> Improve S3A rename resilience
> -----------------------------
>
> Key: HADOOP-16721
> URL: https://issues.apache.org/jira/browse/HADOOP-16721
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Time Spent: 3h
> Remaining Estimate: 0h
>
> h3. race condition in delete/rename overlap
> If you have multiple threads on a system doing rename operations, then one
> thread doing a delete(dest/subdir) may delete the last file under a subdir
> and, before it has listed and recreated any parent dir marker, other threads
> may conclude there's an empty dest dir and fail.
> This is most likely on an overloaded system with many threads executing
> rename operations: with parallel copying taking place there are many threads
> to schedule and HTTPS connections to pool.
> h3. failure reporting
> The classic {{rename(source, dest)}} operation returns {{false}} on certain
> failures, which, while somewhat consistent with the POSIX APIs, turns out to
> be useless for identifying the cause of problems. Applications tend to have
> code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed");
> {code}
> While ultimately the rename/3 call needs to be made public (HADOOP-11452),
> it would then need adoption across applications. We can do this in the
> Hadoop modules, but for Hive, Spark etc. it will take a long time.
> Proposed: a switch to tell S3A to stop downgrading certain failures (source
> is dir, dest is file, src == dest, etc.) into "false". This can be turned on
> when trying to diagnose why things like Hive are failing.
> Production code: trivial
> * change in rename()
> * new option
> * docs
> Test code:
> * need to clear this option for rename contract tests
> * need to create a new FS with this set to verify the various failure modes
> trigger it
>
> If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
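The loss of diagnostics that the boolean contract causes can be seen in a small standalone sketch. This is plain `java.nio.file`, not the S3A connector, and the method names `renameQuietly`/`renameOrThrow` are invented for illustration; it only contrasts the two contracts the issue describes.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class RenameResilienceSketch {

  // classic contract: downgrade failures to false, discarding the cause
  static boolean renameQuietly(Path src, Path dest) {
    try {
      Files.move(src, dest);
      return true;
    } catch (IOException e) {
      return false;   // the reason for the failure is lost here
    }
  }

  // proposed diagnostic mode: let the underlying exception surface
  static void renameOrThrow(Path src, Path dest) throws IOException {
    Files.move(src, dest);   // NoSuchFileException etc. propagate
  }

  public static void main(String[] args) throws Exception {
    Path dir = Files.createTempDirectory("rename-sketch");
    Path missing = dir.resolve("does-not-exist");
    Path dest = dir.resolve("dest");

    // boolean contract: all the caller learns is "it failed"
    System.out.println(renameQuietly(missing, dest)); // prints false

    // exception contract: the cause is preserved for diagnosis
    try {
      renameOrThrow(missing, dest);
    } catch (NoSuchFileException e) {
      System.out.println("cause: " + e.getClass().getSimpleName());
    }
  }
}
```

This is exactly why the proposed switch helps with systems like Hive: the `if (!fs.rename(...)) throw new IOException("rename failed")` pattern can only report *that* the rename failed, never *why*.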
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563830&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563830 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 10/Mar/21 15:34
Start Date: 10/Mar/21 15:34
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-795612632

ah, thanks, lovely. Just tweaked the imports slightly for better backporting; I'll merge once the next compile is good, then do a cherry-pick and retest for branch-3.3.

Issue Time Tracking
---
Worklog Id: (was: 563830)
Time Spent: 2h 50m (was: 2h 40m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563819 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 10/Mar/21 15:10
Start Date: 10/Mar/21 15:10
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-795576948

testing: s3 london, most recently with `-Dparallel-tests -DtestsThreadCount=6 -Dmarkers=keep -Ds3guard -Ddynamo -Dscale`

Issue Time Tracking
---
Worklog Id: (was: 563819)
Time Spent: 2h 40m (was: 2.5h)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563741&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563741 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 10/Mar/21 13:34
Start Date: 10/Mar/21 13:34
Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r591511493

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestRenameDeleteRace.java
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService;

Review comment: no need, the compiler finds it. Will fix

Issue Time Tracking
---
Worklog Id: (was: 563741)
Time Spent: 2.5h (was: 2h 20m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563632&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563632 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 10/Mar/21 10:34
Start Date: 10/Mar/21 10:34
Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r591334439

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestRenameDeleteRace.java
##
@@ -0,0 +1,248 @@
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService;

Review comment: gotcha. Can we add some style checking to block this?

Issue Time Tracking
---
Worklog Id: (was: 563632)
Time Spent: 2h 20m (was: 2h 10m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563627&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563627 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 10/Mar/21 10:26
Start Date: 10/Mar/21 10:26
Worklog Time Spent: 10m

Work Description: iwasakims commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r591325915

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestRenameDeleteRace.java
##
@@ -0,0 +1,248 @@
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService;

Review comment: We need a fix similar to #2758 after #2575.

Issue Time Tracking
---
Worklog Id: (was: 563627)
Time Spent: 2h 10m (was: 2h)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563113 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:57
Start Date: 09/Mar/21 14:57
Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r590442351

## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
##
@@ -1126,6 +1126,26 @@
 We also recommend using applications/application options which do not rename
 files when committing work or when copying data to S3, but instead write
 directly to the final destination.
+
+## Rename not behaving as "expected"
+
+S3 is not a filesystem. The S3A connector mimics rename by
+
+* HEAD then LIST of source path
+* HEAD then LIST of destination path
+* File-by-file copy of source objects to destination.
+  Parallelized, with page listings of directory objects and issuing of DELETE requests.
+* Post-delete recreation of destination parent directory marker, if needed.

Review comment: oops. fixed

Issue Time Tracking
---
Worklog Id: (was: 563113)
Time Spent: 1h 50m (was: 1h 40m)
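Because S3 has a flat namespace, a "directory" exists only as a marker object or as a prefix of other keys; delete() recreates the parent marker only after removing the last key under it, which is the window the race exploits. A toy in-memory model (an illustrative sketch, not the connector; the store and method names are invented) makes that window visible:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class DeleteRenameRaceSketch {

  // key -> value; mimics S3's flat namespace: a "directory" exists if it
  // has a trailing-slash marker object or at least one key under its prefix
  static final SortedMap<String, String> store = new TreeMap<>();

  static boolean dirExists(String dir) {
    String p = dir.endsWith("/") ? dir : dir + "/";
    return store.containsKey(p)
        || store.keySet().stream().anyMatch(k -> k.startsWith(p) && !k.equals(p));
  }

  public static void main(String[] args) {
    store.put("dest/subdir/file1", "data");   // only key under dest/

    // step 1 of delete(dest/subdir): remove the last file under the subdir
    store.remove("dest/subdir/file1");

    // --- race window: a concurrent rename probes the destination here ---
    System.out.println(dirExists("dest"));    // prints false: dest "vanished"

    // step 2 of delete(): recreate the parent directory marker
    store.put("dest/", "");
    System.out.println(dirExists("dest"));    // prints true again
  }
}
```

A rename thread that lists the destination inside that window concludes the destination directory is gone and fails, exactly as the issue description reports for overloaded systems.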
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563114&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563114 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:57
Start Date: 09/Mar/21 14:57
Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r590443239

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestlRenameDeleteRace.java
##
@@ -0,0 +1,200 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService;
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.BlockingThreadPoolExecutorService;
+
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY;
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_DELETE;
+import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_NULL;
+import static org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.submit;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;
+import static org.apache.hadoop.io.IOUtils.cleanupWithLogger;
+
+/**
+ * HADOOP-16721: race condition with delete and rename underneath the same
+ * destination directory.
+ * This test suite recreates the failure using semaphores to guarantee the
+ * failure condition is encountered, then verifies that the rename operation
+ * is successful.
+ */
+public class ITestlRenameDeleteRace extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestlRenameDeleteRace.class);
+
+  /** Many threads for scale performance: {@value}. */
+  public static final int EXECUTOR_THREAD_COUNT = 2;
+
+  /**
+   * For submitting work.
+   */
+  private static final ListeningExecutorService EXECUTOR =
+      BlockingThreadPoolExecutorService.newInstance(
+          EXECUTOR_THREAD_COUNT,
+          EXECUTOR_THREAD_COUNT * 2,
+          30, TimeUnit.SECONDS,
+          "test-operations");
+
+  @Override
+  protected Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    String bucketName = getTestBucketName(conf);
+
+    removeBaseAndBucketOverrides(bucketName, conf,
+        S3_METADATA_STORE_IMPL,
+        DIRECTORY_MARKER_POLICY);
+    // use the keep policy to ensure that surplus markers exist
+    // to complicate failures
+    conf.set(DIRECTORY_MARKER_POLICY, DIRECTORY_MARKER_POLICY_DELETE);
+    conf.set(S3_METADATA_STORE_IMPL, S3GUARD_METASTORE_NULL);
+
+    return conf;
+  }
+
+  @Test
+  public void testDeleteRenameRaceCondition() throws Throwable {
+    describe("verify no race between delete and rename");
+    final S3AFileSystem fs = getFileSystem();
+    final Path path = path(getMethodName());
+    Path srcDir = new Path(path, "src");
+
+    // this dir must exist throughout the rename
+    Path destDir = new Path(path, "dest");
+    // this dir tree will be deleted in a thread which does not
+    // complete before the rename exists
+    Path destSubdir1 = new Path(destDir, "subdir1");
+    Path subfile1 = new Path(destSubdir1, "subfile1");
+
+    // this is the
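The quoted test's javadoc says it "recreates the failure using semaphores to guarantee the failure condition is encountered". A minimal standalone sketch of that blocking pattern (an assumption-laden illustration, not the ITest code: plain executors instead of Hadoop's `BlockingThreadPoolExecutorService`, and no S3 calls) shows how two semaphores pin one operation inside the other's race window:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SemaphoreInterleavingSketch {
  public static void main(String[] args) throws Exception {
    Semaphore deleteStarted = new Semaphore(0);
    Semaphore renameDone = new Semaphore(0);
    ExecutorService pool = Executors.newFixedThreadPool(2);

    Future<?> delete = pool.submit(() -> {
      // phase 1: "delete the last file under the subdir"
      deleteStarted.release();              // signal: race window is open
      renameDone.acquireUninterruptibly();  // block until rename has probed it
      // phase 2: "recreate the parent directory marker"
    });

    Future<?> rename = pool.submit(() -> {
      deleteStarted.acquireUninterruptibly();  // wait for the window to open
      // the rename probes the destination exactly inside the window here
      renameDone.release();                    // let the delete finish
    });

    // deterministic: the rename always observes the intermediate state
    rename.get(10, TimeUnit.SECONDS);
    delete.get(10, TimeUnit.SECONDS);
    pool.shutdown();
    System.out.println("interleaving completed deterministically");
  }
}
```

The point of the technique is determinism: rather than looping and hoping to hit the race, the test forces the rename to run while the delete is paused mid-flight, so the regression check is repeatable.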
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563112 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:55
Start Date: 09/Mar/21 14:55
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-794005013

> 3. don't downgrade FileNotFoundException to 'false' if source doesn't exist
> 4. raise FileAlreadyExistsException if dest path exists or parent is a file.

Should this JIRA be marked as incompatible for applications assuming the existing behavior?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 563112)
Time Spent: 1h 40m (was: 1.5h)

> Improve S3A rename resilience
> -
>
> Key: HADOOP-16721
> URL: https://issues.apache.org/jira/browse/HADOOP-16721
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Time Spent: 1h 40m
> Remaining Estimate: 0h
>
> h3. race condition in delete/rename overlap
> If you have multiple threads on a system doing rename operations, then one thread doing a delete(dest/subdir) may delete the last file under a subdir, and, before it has listed and recreated any parent dir marker, other threads may conclude there's an empty dest dir and fail.
> This is most likely on an overloaded system with many threads executing rename operations, as with parallel copying taking place there are many threads to schedule and https connections to pool.
> h3. failure reporting
> The classic {{rename(source, dest)}} operation returns {{false}} on certain failures, which, while somewhat consistent with the POSIX APIs, turns out to be useless for identifying the cause of problems. Applications tend to have code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed");
> {code}
> While ultimately the rename/3 call needs to be made public (HADOOP-11452), it would then need adoption across applications. We can do this in the hadoop modules, but for Hive, Spark etc it will take a long time.
> Proposed: a switch to tell S3A to stop downgrading certain failures (source is dir, dest is file, src==dest, etc) into "false". This can be turned on when trying to diagnose why things like Hive are failing.
> Production code: trivial
> * change in rename(),
> * new option
> * docs.
> Test code:
> * need to clear this option for rename contract tests
> * need to create a new FS with this set to verify the various failure modes trigger it.
>
> If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS

--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
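Once such a switch lands, turning it on would be a one-line configuration change. A sketch, assuming the option name `fs.s3a.rename.raises.exceptions` used in the draft PR; the final name and semantics may differ:

```xml
<!-- core-site.xml: diagnostic switch; option name taken from the draft PR -->
<property>
  <name>fs.s3a.rename.raises.exceptions</name>
  <value>true</value>
  <description>
    When true, S3A rename() raises exceptions such as FileNotFoundException
    and FileAlreadyExistsException instead of downgrading those failures to
    a "false" return value. Intended for diagnosing rename problems in
    applications like Hive which lose the failure cause.
  </description>
</property>
```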
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563110 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:54
Start Date: 09/Mar/21 14:54
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-794003742

I got an error on ITestS3AContractDistCp, but this seems to be unrelated. I cannot reproduce the error by running ITestS3AContractDistCp alone via `-Dit.test=ITestS3AContractDistCp`.

```
$ mvn verify -Dtest=x -Dit.test=ITestS3AContract*
...
[INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
[ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 47.658 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp
[ERROR] testNonDirectWrite(org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp) Time elapsed: 4.055 s <<< FAILURE!
java.lang.AssertionError: Expected 2 renames for a non-direct write distcp expected:<2> but was:<0>
at org.apache.hadoop.fs.contract.s3a.ITestS3AContractDistCp.testNonDirectWrite(ITestS3AContractDistCp.java:98)
```

Issue Time Tracking
---
Worklog Id: (was: 563110)
Time Spent: 1.5h (was: 1h 20m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563107 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:48
Start Date: 09/Mar/21 14:48
Worklog Time Spent: 10m

Work Description: iwasakims commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-793993687

LGTM overall. Tests against the Tokyo region worked. I just left comments for nits.

```
$ mvn verify -Dtest=x -Dit.test=ITestS3AContractRename,ITestlRenameDeleteRace -Ds3guard
...
[INFO] Running org.apache.hadoop.fs.s3a.impl.ITestlRenameDeleteRace
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.044 s - in org.apache.hadoop.fs.s3a.impl.ITestlRenameDeleteRace
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.603 s - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency
[INFO] Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
[WARNING] Tests run: 22, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.523 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRename
```

Issue Time Tracking
---
Worklog Id: (was: 563107)
Time Spent: 1h 20m (was: 1h 10m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=563106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-563106 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 14:46
Start Date: 09/Mar/21 14:46
Worklog Time Spent: 10m

Work Description: iwasakims commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r590433369

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestlRenameDeleteRace.java
##
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+import com.amazonaws.AmazonClientException;
+import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ListeningExecutorService;
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.util.BlockingThreadPoolExecutorService;
+
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY;
+import static org.apache.hadoop.fs.s3a.Constants.DIRECTORY_MARKER_POLICY_DELETE;
+import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_NULL;
+import static org.apache.hadoop.fs.s3a.Constants.S3_METADATA_STORE_IMPL;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.submit;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;
+import static org.apache.hadoop.io.IOUtils.cleanupWithLogger;
+
+/**
+ * HADOOP-16721: race condition with delete and rename underneath the same
+ * destination directory.
+ * This test suite recreates the failure using semaphores to guarantee the
+ * failure condition is encountered -then verifies that the rename operation
+ * is successful.
+ */
+public class ITestlRenameDeleteRace extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(ITestlRenameDeleteRace.class);
+
+  /** Many threads for scale performance: {@value}. */
+  public static final int EXECUTOR_THREAD_COUNT = 2;
+
+  /**
+   * For submitting work.
+   */
+  private static final ListeningExecutorService EXECUTOR =
+      BlockingThreadPoolExecutorService.newInstance(
+          EXECUTOR_THREAD_COUNT,
+          EXECUTOR_THREAD_COUNT * 2,
+          30, TimeUnit.SECONDS,
+          "test-operations");
+
+  @Override
+  protected Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    String bucketName = getTestBucketName(conf);
+
+    removeBaseAndBucketOverrides(bucketName, conf,
+        S3_METADATA_STORE_IMPL,
+        DIRECTORY_MARKER_POLICY);
+    // use the keep policy to ensure that surplus markers exist
+    // to complicate failures
+    conf.set(DIRECTORY_MARKER_POLICY, DIRECTORY_MARKER_POLICY_DELETE);
+    conf.set(S3_METADATA_STORE_IMPL, S3GUARD_METASTORE_NULL);
+
+    return conf;
+  }
+
+  @Test
+  public void testDeleteRenameRaceCondition() throws Throwable {
+    describe("verify no race between delete and rename");
+    final S3AFileSystem fs = getFileSystem();
+    final Path path = path(getMethodName());
+    Path srcDir = new Path(path, "src");
+
+    // this dir must exist throughout the rename
+    Path destDir = new Path(path, "dest");
+    // this dir tree will be deleted in a thread which does not
+    // complete before the rename exists
+    Path destSubdir1 = new Path(destDir, "subdir1");
+    Path subfile1 = new Path(destSubdir1, "subfile1");
+
+    // this is the
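The technique the quoted test relies on — semaphores that hold the "delete" thread at its critical point while the "rename" runs in the window — can be shown in miniature with plain Java. This is a standalone sketch of the interleaving pattern, not the S3A test code; the "delete" and "rename" bodies are stand-in comments:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicBoolean;

public class InterleavingDemo {

  /**
   * Force a specific interleaving: the "deleter" thread pauses after its
   * destructive step, the "renamer" (here, the calling thread) runs inside
   * that window, then the deleter is released to finish.
   * Returns true when the rename step provably ran inside the window.
   */
  public static boolean reproduceRace() {
    Semaphore deleterAtCriticalPoint = new Semaphore(0);
    Semaphore resumeDeleter = new Semaphore(0);
    AtomicBoolean renameRanInWindow = new AtomicBoolean(false);

    Thread deleter = new Thread(() -> {
      // ...the "delete" removes the last file under dest/subdir here...
      deleterAtCriticalPoint.release();  // window open: marker not yet recreated
      try {
        resumeDeleter.acquire();         // hold here until the "rename" has run
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
      // ...the "delete" recreates the parent dir marker here...
    });
    deleter.start();

    try {
      deleterAtCriticalPoint.acquire();  // wait for the window to open
      renameRanInWindow.set(true);       // the "rename" executes in the window
      resumeDeleter.release();           // let the deleter finish
      deleter.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return false;
    }
    return renameRanInWindow.get();
  }

  public static void main(String[] args) {
    System.out.println(reproduceRace()); // prints true, deterministically
  }
}
```

Unlike sleeps or retries, this gating makes the race deterministic: the interleaving under test happens on every run, which is what lets the suite "guarantee the failure condition is encountered".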
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=562986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562986 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 10:33
Start Date: 09/Mar/21 10:33
Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r590194822

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestlRenameDeleteRace.java
##
+public class ITestlRenameDeleteRace extends AbstractS3ATestBase {

Review comment: yeah. well spotted

Issue Time Tracking
---
Worklog Id: (was: 562986)
Time Spent: 1h (was: 50m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=562838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562838 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 05:38
Start Date: 09/Mar/21 05:38
Worklog Time Spent: 10m

Work Description: iwasakims commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r589957067

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestlRenameDeleteRace.java
##
+public class ITestlRenameDeleteRace extends AbstractS3ATestBase {

Review comment: Is the class name intended to be ITestRenameDeleteRace?

Issue Time Tracking
---
Worklog Id: (was: 562838)
Time Spent: 50m (was: 40m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=562837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562837 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 09/Mar/21 05:37
Start Date: 09/Mar/21 05:37
Worklog Time Spent: 10m

Work Description: iwasakims commented on a change in pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#discussion_r589956618

## File path: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
##
@@ -1126,6 +1126,26 @@
We also recommend using applications/application options which do not rename files when committing work or when copying data to S3, but instead write directly to the final destination.
+## Rename not behaving as "expected"
+
+S3 is not a filesystem. The S3A connector mimics rename by
+
+* HEAD then LIST of source path
+* HEAD then LIST of destination path
+* File-by-file copy of source objects to destination.
+  Parallelized, with page listings of directory objects and issuing of DELETE requests.
+* Post-delete recreation of destination parent directory marker, if needed.

Review comment: recreation of **source** parent directory marker?

Issue Time Tracking
---
Worklog Id: (was: 562837)
Time Spent: 40m (was: 0.5h)
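The mimicked-rename sequence described in that doc excerpt can be sketched against a toy flat key-to-value store standing in for an S3 bucket. This is a minimal illustration of the list/copy/delete shape and its non-atomic window, not the S3A implementation (which parallelizes the copies and handles markers, paging and retries):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MimickedRename {

  /**
   * "Rename" srcPrefix to destPrefix over a flat object store,
   * the way an object-store connector must: list, copy, then delete.
   * Nothing here is atomic; another thread can observe the store
   * between any two of these steps.
   */
  public static void rename(Map<String, String> store,
      String srcPrefix, String destPrefix) {
    // 1. list the source objects (the real connector: HEAD + LIST calls)
    List<String> keys = new ArrayList<>();
    for (String k : store.keySet()) {
      if (k.startsWith(srcPrefix)) {
        keys.add(k);
      }
    }
    // 2. copy file-by-file to the destination
    for (String k : keys) {
      store.put(destPrefix + k.substring(srcPrefix.length()), store.get(k));
    }
    // 3. delete the source objects
    for (String k : keys) {
      store.remove(k);
    }
  }

  public static void main(String[] args) {
    Map<String, String> store = new TreeMap<>();
    store.put("src/a", "1");
    store.put("src/b", "2");
    rename(store, "src/", "dest/");
    System.out.println(store.keySet()); // prints [dest/a, dest/b]
  }
}
```

The window between steps 2 and 3 (and, in the real connector, the marker-recreation step after the deletes) is exactly where a concurrent delete or listing can draw the wrong conclusion — the race this JIRA addresses.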
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=562556&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562556 ]

ASF GitHub Bot logged work on HADOOP-16721:
---
Author: ASF GitHub Bot
Created on: 08/Mar/21 19:55
Start Date: 08/Mar/21 19:55
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-793030939

Testing: S3 London with markers==keep and delete, S3Guard on and off.

* change in error reporting of the s3a FS matched with relevant changes in s3a.xml for contract tests.
* skip tests verifying that you can't rename 2+ levels under a file.
* Failures related to endpoints of common-crawl and ITestAssumeRole.testAssumeRoleBadInnerAuth: known and fixed in #2675.

Issue Time Tracking
---
Worklog Id: (was: 562556)
Time Spent: 0.5h (was: 20m)
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=561033=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561033 ]

ASF GitHub Bot logged work on HADOOP-16721:
---

Author: ASF GitHub Bot
Created on: 04/Mar/21 17:52
Start Date: 04/Mar/21 17:52
Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742#issuecomment-790807398

Testing with -Dparallel-tests -DtestsThreadCount=8 -Dmarkers=keep -Dscale.

If the store is set to raise exceptions, a lot of tests which expect rename(bad options) to return false now get exceptions. One of the contract tests would downgrade if the raised exception was FileAlreadyExistsException and the contract XML said that was OK.

I'm reluctant to go with a bigger patch: this PR is so that Hive and friends can get better reporting on errors, rather than have them lost. It will be optional.

Issue Time Tracking
---

Worklog Id: (was: 561033)
Time Spent: 20m (was: 10m)

> Improve S3A rename resilience
> -----------------------------
>
>                 Key: HADOOP-16721
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16721
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 20m
>   Remaining Estimate: 0h
>
> Improve rename resilience in two ways.
>
> h3. parent dir probes
>
> Allow an option to skip the LIST for the parent and just do a HEAD on the object to make sure it is not a file.
>
> h3. failure reporting
>
> The classic {{rename(source, dest)}} operation returns {{false}} on certain failures, which, while somewhat consistent with the POSIX APIs, turns out to be useless for identifying the cause of problems. Applications tend to have code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed");
> {code}
> While ultimately the rename/3 call needs to be made public (HADOOP-11452), it would then need adoption across applications. We can do this in the Hadoop modules, but for Hive, Spark etc. it will take a long time.
>
> Proposed: a switch to tell S3A to stop downgrading certain failures (source is dir, dest is file, src == dest, etc.) into "false". This can be turned on when trying to diagnose why things like Hive are failing.
>
> Production code: trivial
> * change in rename(),
> * new option
> * docs.
>
> Test code:
> * need to clear this option for rename contract tests
> * need to create a new FS with this set to verify the various failure modes trigger it.
>
> If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS.
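The "downgrade to false" behaviour described above, and the effect of the proposed switch, can be sketched with a minimal standalone class. This is NOT the real S3AFileSystem code: the class name, the constructor flag, and the destExistsAsFile parameter are hypothetical stand-ins used only to illustrate the two reporting modes.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;

/**
 * Illustrative sketch of rename failure reporting, not Hadoop code.
 * With raiseExceptions == false, precondition failures are collapsed
 * into a bare "false"; with it true, the actual cause is surfaced.
 */
public class RenameSketch {

  /** Stands in for the proposed "stop downgrading failures" option. */
  private final boolean raiseExceptions;

  public RenameSketch(boolean raiseExceptions) {
    this.raiseExceptions = raiseExceptions;
  }

  /**
   * Simulated rename. destExistsAsFile stands in for one of the
   * failure preconditions (dest is a file, src == dest, ...).
   */
  public boolean rename(String src, String dest, boolean destExistsAsFile)
      throws IOException {
    if (destExistsAsFile) {
      if (raiseExceptions) {
        // Diagnosis mode: the caller sees why the rename failed.
        throw new FileAlreadyExistsException(dest);
      }
      // Classic behaviour: the cause of the failure is lost.
      return false;
    }
    // Happy path: pretend the copy + delete succeeded.
    return true;
  }
}
```

With the flag off, a caller is back to the `if (!fs.rename(...))` guessing game quoted above; with it on, the exception type and message identify the failing precondition.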
[jira] [Work logged] (HADOOP-16721) Improve S3A rename resilience
[ https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=561031=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561031 ]

ASF GitHub Bot logged work on HADOOP-16721:
---

Author: ASF GitHub Bot
Created on: 04/Mar/21 17:50
Start Date: 04/Mar/21 17:50
Worklog Time Spent: 10m

Work Description: steveloughran opened a new pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742

S3A rename to support:
* fs.s3a.rename.raises.exceptions: raise exceptions on rename failures
* fs.s3a.rename.reduced.probes: don't look for the parent dir (LIST), just verify it isn't a file.

The reduced probe not only saves money, it avoids race conditions where one thread deleting a subdir can cause the LIST to fail before a dir marker is recreated.

Note:
* file:// rename() creates parent dirs, so this isn't too dangerous.
* tests will switch modes. We could always just do the HEAD; topic for discussion.

This patch: optional

Change-Id: Ic0f8a410b45fef14ff522cb5aa1ae2bc19c8

Issue Time Tracking
---

Worklog Id: (was: 561031)
Remaining Estimate: 0h
Time Spent: 10m

> Improve S3A rename resilience
> -----------------------------
>
>                 Key: HADOOP-16721
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16721
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>          Time Spent: 10m
>   Remaining Estimate: 0h
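If the two options land under the names given in the PR description, enabling them would look roughly like the core-site.xml fragment below. The property names are taken verbatim from the PR text; the merged patch may use different names or defaults, so treat this purely as a sketch.

```xml
<!-- Sketch only: property names are from the PR description above;
     the final merged patch may differ. -->
<property>
  <name>fs.s3a.rename.raises.exceptions</name>
  <value>true</value>
  <description>Raise exceptions on rename failures instead of
    downgrading them to a "false" return value.</description>
</property>
<property>
  <name>fs.s3a.rename.reduced.probes</name>
  <value>true</value>
  <description>Skip the LIST probe for the destination parent
    directory; only HEAD to verify it is not a file.</description>
</property>
```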