[ 
https://issues.apache.org/jira/browse/HADOOP-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16721:
------------------------------------
    Description: 

h3. race condition in delete/rename overlap

If multiple threads on a system are doing rename operations, then one 
thread doing a delete(dest/subdir) may delete the last file under a subdir 
and, before it has listed the parent and recreated its directory marker, other 
threads may conclude the dest dir is empty or missing and fail.

This is most likely on an overloaded system with many threads executing rename 
operations: with parallel copying taking place there are many threads to 
schedule and HTTPS connections to pool. 
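
A rough sketch of the timing window, purely for illustration -the bucket name, 
paths and thread counts here are made up, not from a real reproduction:

{code}
import java.net.URI;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameDeleteRace {
  public static void main(String[] args) throws Exception {
    // made-up bucket; any busy S3A filesystem would do
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), new Configuration());
    Path dest = new Path("/dest");
    ExecutorService pool = Executors.newFixedThreadPool(8);

    // thread 1: delete the last file under dest/subdir; S3A must then LIST
    // the parent and recreate its directory marker
    pool.submit(() -> fs.delete(new Path(dest, "subdir/last-file"), false));

    // other threads: rename into dest while that marker may be missing;
    // a probe of dest in the window sees neither files nor marker and fails
    for (int i = 0; i < 7; i++) {
      final Path src = new Path("/src-" + i);
      pool.submit(() -> fs.rename(src, dest));
    }
    pool.shutdown();
  }
}
{code}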

h3. failure reporting
The classic {{rename(source, dest)}} operation returns {{false}} on certain 
failures, which, while somewhat consistent with the POSIX APIs, turns out to be 
useless for identifying the cause of problems. Applications tend to have code 
which goes

{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed");
{code}

While ultimately the rename/3 call needs to be made public (HADOOP-11452), it 
would then need adoption across applications. We can do this in the Hadoop 
modules, but for Hive, Spark etc. it will take a long time.

Proposed: a switch to tell S3A to stop downgrading certain failures (source is 
a dir, dest is a file, src == dest, etc.) into {{false}}. This can be turned on 
when trying to diagnose why things like Hive are failing.
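
For illustration, this is roughly how a caller could enable such a switch while 
keeping the existing "throw on false" pattern; the property name below is a 
placeholder, not the actual option:

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StrictRenameDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // placeholder property name, just to show the shape of the switch
    conf.setBoolean("fs.s3a.rename.raises.exceptions", true);

    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    // with the switch on, failures such as "dest is a file" or "source does
    // not exist" surface as exceptions here rather than as a silent false
    if (!fs.rename(new Path("/src"), new Path("/dest"))) {
      throw new IOException("rename failed");
    }
  }
}
{code}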

Production code: trivial
* change in rename()
* new option
* docs

Test code: 
* need to clear this option for the rename contract tests
* need to create a new FS instance with the option set to verify that the 
various failure modes trigger it (see the sketch below)
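
A rough sketch of what that second test could look like, assuming a JUnit 4 
test against a made-up test bucket and the same placeholder property name as 
above:

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import static org.junit.Assert.fail;

public class TestStrictRenameFailures {

  @Test
  public void testRenameOntoFileRaises() throws Exception {
    Configuration conf = new Configuration();
    // same placeholder property name as above; real contract tests would
    // pick the bucket up from the usual test configuration
    conf.setBoolean("fs.s3a.rename.raises.exceptions", true);
    FileSystem fs = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);

    Path srcDir = new Path("/test/src");
    Path destFile = new Path("/test/dest-file");
    fs.mkdirs(srcDir);
    fs.create(destFile, true).close();

    try {
      fs.rename(srcDir, destFile);
      fail("expected rename of a directory onto a file to raise an exception");
    } catch (IOException expected) {
      // this is the failure mode we want surfaced instead of a false return
    }
  }
}
{code}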

 

If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS

  was:

Improve rename resilience in two ways.

h3. parent dir probes

allow an option to skip the LIST for the parent and just do a HEAD object to 
make sure it is not a file. 

h3. failure reporting
The classic {{rename(source, dest)}} operation returns {{false}} on certain 
failures, which, while somewhat consistent with the POSIX APIs, turns out to be 
useless for identifying the cause of problems. Applications tend to have code 
which goes

{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed");
{code}

While ultimately the rename/3 call needs to be made public (HADOOP-11452), it 
would then need adoption across applications. We can do this in the Hadoop 
modules, but for Hive, Spark etc. it will take a long time.

Proposed: a switch to tell S3A to stop downgrading certain failures (source is 
dir, dest is file, src==dest, etc) into "false". This can be turned on when 
trying to diagnose why things like Hive are failing.

Production code: trivial 
* change in rename(), 
* new option
* docs.

Test code: 
* need to clear this option for rename contract tests
* need to create a new FS with this set to verify the various failure modes 
trigger it.

 

If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS


> Improve S3A rename resilience
> -----------------------------
>
>                 Key: HADOOP-16721
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16721
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>



