Chris Douglas wrote:
Thus far the changes suggested for a 1.0 branch are:
- de-deprecate classic mapred APIs (no Jira issue yet)
Why? Tom and Owen's proposal preserves compatibility with the
deprecated FileSystem and mapred APIs up to 1.0. After Tom cuts a
release- from either the 0.21 branch
Allen Wittenauer wrote:
On Apr 5, 2010, at 5:06 PM, Chris K Wensel wrote:
we need a well-heeled 1.0 sooner than later.
Why?
I think it would be good for a 0.21 with the newly renamed artifacts
hadoop-common, hadoop-hdfs and hadoop-mapred out there; I think the new
APIs should be made
We've long-delayed declaring 1.0 because we were afraid to commit to
supporting a given API for a longer term. Now folks are willing to make
that long-term commitment to an API, yet seem reluctant to call it 1.0.
The commitment is to the new APIs. Folks are reluctant to cut a
release without
From the Java SE 7 JavaDocs:
A program element annotated @Deprecated is one that programmers are discouraged
from using, typically because it is dangerous, or because a better alternative
exists. Compilers warn when a deprecated program element is used or overridden
in non-deprecated code.
So, yes, deprecation is just a warning to avoid these APIs, but deprecation
is a stronger statement than you're portraying. It's not fair notice that
the API may go away. It's final notice that the API should go away but for
backward compatibility reasons it can't. Deprecated := don't use.
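The compiler behavior described in the JavaDoc quote above can be seen in a minimal sketch. The class and method names here are illustrative only, not actual Hadoop APIs: javac flags the call to the deprecated method with a warning, but the call still compiles and runs, which is exactly the backward-compatibility contract being discussed.

```java
// Minimal sketch of @Deprecated semantics; names are hypothetical, not from Hadoop.
class LegacyApi {
    /** @deprecated use {@link #newMethod()} instead */
    @Deprecated
    static String oldMethod() { return "legacy"; }

    static String newMethod() { return "current"; }
}

public class DeprecationDemo {
    public static void main(String[] args) {
        // javac warns here ("uses or overrides a deprecated API"),
        // but the deprecated method remains callable for compatibility.
        System.out.println(LegacyApi.oldMethod());
        System.out.println(LegacyApi.newMethod());
    }
}
```

De-deprecating an API, as proposed in this thread, amounts to removing the `@Deprecated` annotation (and the `@deprecated` JavaDoc tag) so callers stop seeing this warning.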
The APIs at
issue have preferred alternatives and will be retained for backwards
compatibility reasons.
Actually, from my perspective, re the 0.20 branch, they are not preferred
alternatives and are not complete, as more were introduced in 0.21 (of which
many are wrappers around the stable APIs for the sake of transition).
Sorry, I must have been unclear, because this is part of the argument.
Summarily: given that the APIs are *not* fully functional, preferred
alternatives in 0.20- we shouldn't base our 1.0 release on it. Do you
agree? -C
well said, but I still think a release is fine off .20 if we remove the
deprecation warnings (and drop the new apis completely as they add
Owen O'Malley wrote:
In my experience with releasing Hadoop, the bare minimum of scale
testing is a couple of weeks on 500 nodes (and more is far better) with
a team of people testing it. I think that releasing a 1.0 that has never
been tested at scale would be disastrous.
For the record, I
Chris Douglas wrote:
Speaking of the release vote process, I renew my request that we
formalize both the RM role and the bylaws. -C
I think the HTTPD release rules are non-controversial and would support
adoption of something similar. Someone needs to draft a proposal,
initiate a
Our org (Trend Micro) will be using an internal build based on 0.20 for at
least the rest of this year. It is, really, already 1.0 from our point of
view, the first ASF Hadoop release officially adopted into our production
environment. I hope other users of Hadoop will speak up on this thread
Chris K Wensel wrote:
are we saying we will de-deprecate the stable APIs in .20, or make the new APIs
introduced in .20 stable?
+1 on removing the deprecations on the stable APIs.
Yes. I too am +1 on removing deprecations in stable, public APIs in a
1.0 release. Code that uses only public
Todd Lipcon wrote:
With HDFS-200 we'd also need HDFS-142
Good to know. I have to admit to being puzzled by HDFS-200, since
Nicholas resolved it as a duplicate on 7 January, yet Dhruba's continued
to post patches to it.
Dhruba, Stack: do you have any thoughts on the appropriateness of
Hi Guys,
To throw in my 2 cents: it would be really nice to get out a 1.0 branch
based off of 0.20. It's not perfect, but releases never are. That's why you
can make more of them. :)
In terms of the significance of the 1.0 labeling, I think it's important for
adoption. I was telling someone at
LOL, I want a v100! :)
On 4/1/10 2:31 PM, Allen Wittenauer awittena...@linkedin.com wrote:
On 4/1/10 2:15 PM, Mattmann, Chris A (388J)
chris.a.mattm...@jpl.nasa.gov wrote:
In terms of the significance of the 1.0 labeling, I think it's important for
adoption.
Companies wanting a 1.0
We have been testing the HDFS append code for 0.20 (using HDFS-200,
HDFS-142), but I believe it is not ready for production yet. I am guessing
that there would be another two months of testing before I would classify
0.20.3 + HDFS-200 as production quality. HDFS-200 touches code paths that
would
On Apr 1, 2010, at 10:50 AM, Doug Cutting wrote:
If it takes months, it is a failure. It should take weeks, if that.
On Apr 1, 2010, at 9:31 PM, Dhruba Borthakur wrote:
We have been testing the HDFS append code for 0.20 (using HDFS-200,
HDFS-142), but I believe it is not ready for
Hi,
I'm glad we're heading towards a release. We'd like to better understand some
aspects regarding the release plan.
What would be the tentative release schedule, and what affects particular
releases? We could either continue with our current version or plan based on
what's going to be
Owen O'Malley wrote:
It is tempting and I think that 0.20 is *really* our 1.0, but I think
re-labeling a release a year after it came out would be confusing.
I wasn't proposing just a re-labeling. I was proposing a new release,
branched from 0.20 rather than trunk. We'd introduce some
HDFS 0.20 does not have a reliable append.
Also it is (was last time I looked) incompatible with the 0.21 append HDFS-256.
That wouldn't be a problem if that was the only incompatibility. But it's not.
If 1.0 is re-labeled or re-branched from 0.20 we will have too many
incompatibilities going
If I may pitch in briefly here: believe it or not, there are a lot of
enterprises out there who think that anything that isn't version 1.0
isn't worth considering, let alone deploying (it doesn't make sense, but
some people are like that). Hence, from a market adoption point of view,
Apache
Konstantin Shvachko wrote:
I would like to propose a straightforward release of 0.21 from current
0.21 branch.
That could be done too. Tom's volunteered to drive a release from trunk
in a few weeks. Owen's volunteered to drive another release from trunk
in about six months. Would you like
On 3/31/2010 2:19 PM, Doug Cutting wrote:
Konstantin Shvachko wrote:
I would like to propose a straightforward release of 0.21 from current
0.21 branch.
That could be done too. Would you like to volunteer to drive a release from
the current 0.21 branch?
I would if I could.
I intended to
Tom White wrote:
I think the focus should be on getting an alpha release
out, so I suggest we create a new 0.21 branch from trunk
Another release we might consider is 1.0 based on 0.20. We'd then have
releases that correspond to what folks are actually using in production.
This would also
A 1.0 release based off 0.20 would give us a chance to state more precisely
the 1.0 API that we intend to support long-term. For example, we might
un-mark the old mapreduce APIs as deprecated in a 1.0 release, and mark the
new mapreduce APIs as experimental and unstable there. Programs
On Mar 30, 2010, at 3:40 PM, Doug Cutting wrote:
Another release we might consider is 1.0 based on 0.20.
It is tempting and I think that 0.20 is *really* our 1.0, but I think
re-labeling a release a year after it came out would be confusing.
I think that we should change the rules so
Stack wrote:
Getting a release out is critical. Otherwise, IMO, the project is
dead but for the stiffening.
Thanks Tom for stepping up to play the RM role for a 0.21.
Regarding Steve's call for what we can offer Tom to help along the
release, the little flea hbase can test its use case on
On Wed, Mar 24, 2010 at 01:27PM, Brian Bockelman wrote:
a) Have a stable/unstable series (0.19.x is unstable, 0.20.x is stable,
0.21.x is unstable). For the unstable releases, lower the bar for code
acceptance for less-risky patches.
I can see how the different criteria of patch acceptance
On Mar 24, 2010, at 4:25 PM, Tom White wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will
Getting a release out is critical. Otherwise, IMO, the project is
dead but for the stiffening.
Thanks Tom for stepping up to play the RM role for a 0.21.
Regarding Steve's call for what we can offer Tom to help along the
release, the little flea hbase can test its use case on 0.21.0
candidates
On Fri, Mar 26, 2010 at 11:43 AM, Owen O'Malley omal...@apache.org wrote:
On Mar 24, 2010, at 4:25 PM, Tom White wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the
Tom White wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will undoubtedly be some issues
to work
On 3/15/10 9:06 AM, Owen O'Malley o...@yahoo-inc.com wrote:
From our 0.21 experience, it looks like our old release strategy is
failing.
Maybe this is a dumb question but... Are we sure it isn't the community
failing?
From where I stand, the major committers (PMC?) have essentially
Hey Allen,
Your post provoked a few thoughts:
1) Hadoop is a large, but relatively immature project (as in, there's still a
lot of major features coming down the pipe). If we wait to release on
features, especially when there are critical bugs, we end up with a large
number of patches between
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for the next release. This will be
the first release post-split, so there will undoubtedly be some issues
to work out. I think the
Hey Tom,
That sounds like a great idea. +1.
Thanks,
Jeff
On Wed, Mar 24, 2010 at 4:25 PM, Tom White t...@cloudera.com wrote:
I agree that getting the release process restarted is of utmost
importance to the project. To help make that happen I'm happy to
volunteer to be a release manager for
From our 0.21 experience, it looks like our old release strategy is
failing. In looking around, I found that HTTPD's release strategy is
extremely different and seems much more likely to produce usable
releases. It is well worth reading, in my opinion.