[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-16 Thread Yuming Wang (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794371#comment-16794371 ]

Yuming Wang commented on HADOOP-16152:
--

Maybe we should move this ticket to be a subtask of HADOOP-15338, since Jetty 
9.4.x adds support for Java 11:
https://www.eclipse.org/lists/jetty-announce/msg00124.html

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have already upgraded Jetty to 9.4.x, which causes 
> some compatibility issues:
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16188) s3a rename failed during copy, "Unable to copy part" + 200 error code

2019-03-16 Thread Steve Loughran (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794339#comment-16794339 ]

Steve Loughran commented on HADOOP-16188:
-

The AWS SDK transfer manager is meant to do the retries itself, hence the 
once() invocation of the operation: we don't bother retrying ourselves.

We need to question that assumption, but at the same time we must not 
double-retry on retry failures.

I'm starting to wonder if it's time to stop relying on the transfer manager 
and take on some of its work ourselves? Or is that a distraction? 

For now: what about invoking the copy call with a retry policy which only 
retries on the 200-with-server-error case? For everything else we assume the 
transfer manager has already made a best effort.
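
A minimal sketch of that idea (not the actual S3A code): a predicate that 
classifies a copy failure as retriable only when the exception text matches 
the "Status Code: 200" + "InternalError" pattern from the bug report, leaving 
everything else as a best-effort failure the transfer manager already retried. 
The class name and helper are hypothetical; the message format mirrors the 
SdkClientException text quoted in the issue.

```java
// Hypothetical sketch of a narrow retry predicate for the S3A copy call.
// It does not depend on the AWS SDK; it only inspects the exception message.
public class CopyRetryPredicate {

    // Retry only the S3 "200 OK but InternalError in the body" case.
    // Anything else is assumed to have already been retried by the
    // transfer manager, so we fail fast and avoid double-retrying.
    static boolean isRetriableCopyFailure(Exception e) {
        String msg = e.getMessage();
        return msg != null
            && msg.contains("Status Code: 200")
            && msg.contains("InternalError");
    }

    public static void main(String[] args) {
        // Message text taken from the stack trace in this issue.
        Exception copyFailure = new RuntimeException(
            "Unable to copy part: We encountered an internal error. "
            + "Please try again. (Service: Amazon S3; Status Code: 200; "
            + "Error Code: InternalError;");
        System.out.println(isRetriableCopyFailure(copyFailure));   // true
        System.out.println(isRetriableCopyFailure(
            new RuntimeException("Status Code: 503")));            // false
    }
}
```

In the real code this check would plug into S3A's retry/invoker machinery 
rather than string-matching at the call site, but the classification logic is 
the same: retry only the anomalous 200 response, pass everything else through.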


To backport this I'm going to cherry-pick the invoker code from the S3A 
committer into 3.0 and branch-2, *but only the invoke/retry classes, none of 
the actual usages*. It just sets things up for a fix for this.





> s3a rename failed during copy, "Unable to copy part" + 200 error code
> -
>
> Key: HADOOP-16188
> URL: https://issues.apache.org/jira/browse/HADOOP-16188
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Error during a rename where AWS S3 seems to hit some internal error *which 
> is not retried and which returns status code 200*:
> {code}
> com.amazonaws.SdkClientException: Unable to copy part: We encountered an 
> internal error. Please try again. (Service: Amazon S3; Status Code: 200; 
> Error Code: InternalError;
> {code}
