[ 
https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16732602#comment-16732602
 ] 

wujinhu commented on HADOOP-15323:
----------------------------------

Thanks [~cheersyang].

However, there is no API to tell whether singleCopy is supported or not. The 
OSS SDK decides whether to retry a failed request based on the retry count and 
the exception type. For example, if singleCopy throws an exception, the OSS SDK 
executes the logic below:

 
{code:java}
private boolean shouldRetry(Exception exception, RequestMessage request,
        ResponseMessage response, int retries, RetryStrategy retryStrategy) {

    if (retries >= config.getMaxErrorRetry()) {
        return false;
    }

    if (!request.isRepeatable()) {
        return false;
    }

    if (retryStrategy.shouldRetry(exception, request, response, retries)) {
        getLog().debug("Retrying on " + exception.getClass().getName()
                + ": " + exception.getMessage());
        return true;
    }
    return false;
}

public boolean shouldRetry(Exception ex, RequestMessage request,
        ResponseMessage response, int retries) {
    if (ex instanceof ClientException) {
        String errorCode = ((ClientException) ex).getErrorCode();
        if (errorCode.equals(ClientErrorCode.CONNECTION_TIMEOUT)
                || errorCode.equals(ClientErrorCode.SOCKET_TIMEOUT)
                || errorCode.equals(ClientErrorCode.CONNECTION_REFUSED)
                || errorCode.equals(ClientErrorCode.UNKNOWN_HOST)
                || errorCode.equals(ClientErrorCode.SOCKET_EXCEPTION)) {
            return true;
        }

        // Don't retry when request input stream is non-repeatable
        if (errorCode.equals(ClientErrorCode.NONREPEATABLE_REQUEST)) {
            return false;
        }
    }

    if (ex instanceof OSSException) {
        String errorCode = ((OSSException) ex).getErrorCode();
        // No need retry for invalid responses
        if (errorCode.equals(OSSErrorCode.INVALID_RESPONSE)) {
            return false;
        }
    }

    if (response != null) {
        int statusCode = response.getStatusCode();
        if (statusCode == HttpStatus.SC_INTERNAL_SERVER_ERROR
                || statusCode == HttpStatus.SC_SERVICE_UNAVAILABLE) {
            return true;
        }
    }

    return false;
}
{code}
 

If it is a client exception and the request is repeatable, the OSS SDK will 
retry the request. If it is a server exception, retrying depends on the status 
code, but the OSS SDK will not retry the exception we mentioned (single copy 
not supported), so we can fall back to multipart copy; a rough sketch of that 
fallback is below.
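
To make the fallback flow concrete, here is a minimal sketch (not the actual 
patch): the field and helper names such as {{multipartCopy}} are illustrative 
only, and only the catch structure matters.
{code:java}
import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSException;

// Sketch only: try the single-request copy first; if the server rejects it
// (e.g. shallow copy not supported for this object), fall back to multipart copy.
boolean copyFile(OSS ossClient, String bucket, String srcKey, String dstKey,
    long size) {
  try {
    ossClient.copyObject(bucket, srcKey, bucket, dstKey);
    return true;
  } catch (OSSException e) {
    // Server-side error: the SDK does not retry it, so fall back immediately.
    return multipartCopy(ossClient, bucket, srcKey, dstKey, size);  // hypothetical helper
  } catch (ClientException e) {
    // Client-side error: the SDK has already retried up to the configured
    // maximum (fs.oss.attempts.maximum) before this surfaces.
    return multipartCopy(ossClient, bucket, srcKey, dstKey, size);  // hypothetical helper
  }
}
{code}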

So, there is only one case we do not cover: if the OSS SDK throws client 
exceptions more than *fs.oss.attempts.maximum* (default is 10) times while 
singleCopy is actually supported, the copy will fall back to multipart copy 
anyway. I think we can ignore this case because it is rare.
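
For context, *fs.oss.attempts.maximum* is what feeds the SDK retry ceiling 
({{config.getMaxErrorRetry()}} in {{shouldRetry()}} above) when the client is 
created. A rough sketch of that wiring, assuming the standard 
{{ClientConfiguration}} API (endpoint and credentials are placeholders):
{code:java}
import com.aliyun.oss.ClientConfiguration;
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClient;
import org.apache.hadoop.conf.Configuration;

// Sketch only: fs.oss.attempts.maximum (default 10) becomes the SDK's
// max error retry count, i.e. config.getMaxErrorRetry() in shouldRetry() above.
Configuration conf = new Configuration();
ClientConfiguration clientConf = new ClientConfiguration();
clientConf.setMaxErrorRetry(conf.getInt("fs.oss.attempts.maximum", 10));
OSS ossClient = new OSSClient("<endpoint>", "<accessKeyId>", "<accessKeySecret>",
    clientConf);
{code}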

> AliyunOSS: Improve copy file performance for AliyunOSSFileSystemStore
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-15323
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15323
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/oss
>    Affects Versions: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 3.3.0
>            Reporter: wujinhu
>            Assignee: wujinhu
>            Priority: Major
>         Attachments: HADOOP-15323.001.patch, HADOOP-15323.002.patch
>
>
> Aliyun OSS will support shallow copy, which means the server will only copy 
> metadata when a copy object operation occurs.
> With shallow copy, we can use the copyObject API instead of the multipart 
> copy API if we do not change the object storage type or encryption type, and 
> the source object was uploaded via the Put / Multipart upload API.
> We will try to use the copyObject API and check the result. If shallow copy 
> is disabled for this object, then we will use multipart copy. So, I will 
> remove the fs.oss.multipart.upload.threshold configuration.


