[ https://issues.apache.org/jira/browse/HADOOP-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15275757#comment-15275757 ]

Chris Nauroth commented on HADOOP-12709:
----------------------------------------

[~liuml07], thank you for the updated patch.

I spotted a few more files that need clean-ups because of the configuration 
property changes:
* hadoop-common-project/hadoop-common/src/test/resources/core-site.xml
* hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/resources/job_1329348432655_0001_conf.xml
* hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json

The Checkstyle and Javadoc warnings aren't really introduced by this patch, but 
since we're touching these files anyway, it would be good to go ahead and clean 
them up.

Aside from that, this patch looks like the right overall approach to me.  I 
would like a second review from [~ste...@apache.org] before we proceed with any 
commits.  In particular, I'd like a second opinion on the configuration 
property renames and the class renames.  Some of these are 
backwards-incompatible for S3N.  I think it's the right thing to do, and we can 
make a backwards-incompatible change like this in trunk/3.x, but I'd like a 
second opinion.  I know Steve won't be available to comment until mid-next week 
at the earliest.
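
If we end up wanting to soften the incompatibility for S3N users, Hadoop's usual 
mechanism for renamed keys is {{Configuration.addDeprecation}}, which maps an old 
property name to its replacement and warns when the old name is read or set. A 
minimal sketch, using {{fs.s3.buffer.dir}} -> {{fs.s3n.buffer.dir}} purely as an 
illustrative pair (the actual renames in the patch may differ):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3nDeprecatedKeys {

  /** Register the old-to-new key mapping once, e.g. from a static initializer. */
  public static void register() {
    // Illustrative pair only; substitute the keys actually renamed by the patch.
    Configuration.addDeprecation("fs.s3.buffer.dir", "fs.s3n.buffer.dir");
  }

  public static void main(String[] args) {
    register();
    Configuration conf = new Configuration(false);
    conf.set("fs.s3.buffer.dir", "/tmp/s3n");           // old name still accepted...
    System.out.println(conf.get("fs.s3n.buffer.dir"));  // ...and resolves to the new name
  }
}
{code}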

> Deprecate s3:// in branch-2; cut from trunk
> --------------------------------------------
>
>                 Key: HADOOP-12709
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12709
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Assignee: Mingliang Liu
>         Attachments: HADOOP-12709.000.patch, HADOOP-12709.001.patch, 
> HADOOP-12709.002.patch
>
>
> The fact that s3:// was broken in Hadoop 2.7 *and nobody noticed until now* 
> shows that it's not being used. While invaluable at the time, s3n and 
> especially s3a render it obsolete except for reading existing data.
> I propose:
> # Mark the Java source as {{@deprecated}}.
> # Warn the first time in a JVM that an S3 instance is created: "deprecated; 
> will be removed in a future release" (sketched below).
> # In Hadoop trunk, cut it for real. Maybe have an attic project (external?) 
> which holds it for anyone who still wants it. Or: retain the code but remove 
> the {{fs.s3.impl}} config option, so you have to explicitly add it in order 
> to use it.
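
A minimal sketch of the once-per-JVM warning proposed in item 2, assuming a 
hypothetical class name and log message (this is not the attached patch):

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative only: annotate the class as deprecated and warn once per JVM. */
@Deprecated
public class S3FileSystemSketch {
  private static final Logger LOG = LoggerFactory.getLogger(S3FileSystemSketch.class);

  /** Flips to true on first instantiation so the warning is logged at most once per JVM. */
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  public S3FileSystemSketch() {
    if (WARNED.compareAndSet(false, true)) {
      LOG.warn("The s3:// filesystem is deprecated and will be removed in a future release;"
          + " consider migrating existing data to s3a:// or s3n://.");
    }
  }
}
{code}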


