I think I speak for all the other mrunit committers when I say we're
happy to be the guinea pigs on this.
On May 6, 2011, at 5:45 AM, Steve Loughran <ste...@apache.org> wrote:
On 05/05/11 18:52, Todd Lipcon wrote:
On Thu, May 5, 2011 at 10:32 AM, Eric Yang <ey...@yahoo-inc.com> wrote:
Git is
Arun,
I believe you're reading too much into my comment - it was absolutely
well intended.
And I think you have to abstain from such reprimands in the future.
Should you have anything to say to me personally - pl. send me a
separate email next time.
With regards,
Cos
On Fri, May 6, 2011 at
[I am not on PMC, but seeing that PMC may be busy with other issues, I
will try to answer your questions.]
Eric,
I think the thread
http://mail-archives.apache.org/mod_mbox/hadoop-general/201101.mbox/%3C18C5c999-4680-4684-bc55-a430c40fd...@yahoo-inc.com%3E
will answer your questions. Here is
On May 6, 2011, at 11:18 PM, Milind Bhandarkar wrote:
Allen, there are per job limits, and per user limits in this branch. (So,
max capacity of -1 is for the queue, but within the queue, the per-user
limits come into the picture.) If I remember right, the defaults were based on
a certain
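For context on the queue-level versus per-user limits discussed above, the 0.20-era capacity scheduler exposed them through entries in capacity-scheduler.xml along these lines (a sketch with illustrative values, not shipped defaults; property names assume the 0.20.203 CapacityTaskScheduler and a queue named "default"):

```xml
<!-- Sketch of conf/capacity-scheduler.xml; values are illustrative. -->
<configuration>
  <!-- Guaranteed share of the cluster for this queue, in percent. -->
  <property>
    <name>mapred.capacity-scheduler.queue.default.capacity</name>
    <value>50</value>
  </property>
  <!-- Hard ceiling for the queue; -1 means no ceiling, the case
       mentioned above. -->
  <property>
    <name>mapred.capacity-scheduler.queue.default.maximum-capacity</name>
    <value>-1</value>
  </property>
  <!-- Per-user limit inside the queue: under contention, each active
       user is guaranteed at least this percentage of the queue. -->
  <property>
    <name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name>
    <value>25</value>
  </property>
</configuration>
```

So even with no queue ceiling (-1), a single user cannot monopolize the queue once other users submit jobs, because the user-limit percentage kicks in.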
On May 8, 2011, at 9:50 AM, Eric Sammer wrote:
do we permit
backward incompatible changes between 0.22.0 and 0.22.1 or is this
something we've allowed just for the 203 release?
good question.
do we allow incompatible (smallish) features to be added to a 20.x release,
hoping that they will
-1 for rc1
I downloaded and ran the test target 3 times.
First run failed because my umask defaults to 0002, which is a known
problem (HADOOP-5050, committed to 0.21 but not 0.20).
Set umask to 0022 and re-ran test twice. Both resulted in failure. Here is
the list of failed tests:
[junit]
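For anyone reproducing the umask workaround above, the shell steps amount to (a minimal sketch; the actual test invocation, e.g. the ant test target, is omitted):

```shell
# A default umask of 0002 is known to break some tests on the 0.20 branch
# (HADOOP-5050; the fix went into 0.21 but was not backported to 0.20).
umask        # show the current value, e.g. 0002
umask 0022   # tighten it for this shell session only
umask        # now reports 0022; re-run the test target from this shell
```

Note that umask is a per-process setting, so it must be changed in the same shell session that launches the tests.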
[Mentioning again: I am not on the PMC, and this email contains
non-binding opinions based on my reading the general@hadoop.apache.org
emails.]
It is my understanding that, from the beginning, the 0.20+security was
always treated as an exception to the normal (i.e. pre-0.20) release
process.