+1 (binding)
Arun
On Fri, Apr 10, 2015 at 4:44 PM -0700, "Vinod Kumar Vavilapalli"
<vino...@apache.org> wrote:
Hi all,
I've created a release candidate RC0 for Apache Hadoop 2.7.0.
The RC is available at: http://people.apache.org/~vinodkv/hadoop-2.7.0-RC0/
The RC tag in git is: r
Colin,
Do you have a list of incompatible changes other than the shell-script
rewrite? If we do have others we'd have to fix them anyway for the current plan
on hadoop-3.x right? So, I don't see the difference?
Arun
From: Colin P. McCabe
Sent: Monday,
Steve,
From: Steve Loughran
Sent: Monday, March 09, 2015 2:15 PM
To: mapreduce-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
common-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Hadoop 3.x: what about shipping trunk as a 2.x release in 2
Over the last few days, we have had lots of discussions that have intertwined
several major themes:
# When/why do we make major Hadoop releases?
# When/how do we move to major JDK versions?
# To a lesser extent, we have debated another theme: what do we do about trunk?
For now, let's park
Awesome, looks like we can just do this in a compatible manner - nothing else
on the list seems like it warrants a (premature) major release.
Thanks Vinod.
Arun
From: Vinod Kumar Vavilapalli
Sent: Tuesday, March 03, 2015 2:30 PM
To: common-...@hadoop.ap
Andrew,
Thanks for bringing up this discussion.
I'm a little puzzled, for I feel like we are rehashing the same discussion from
last year, where we agreed on a different course of action w.r.t. the switch to
JDK7.
IAC, breaking compatibility for hadoop-3 is a pretty big cost - particularly
for
Sounds good, thanks for the help Vinod!
Arun
From: Vinod Kumar Vavilapalli
Sent: Sunday, March 01, 2015 11:43 AM
To: Hadoop Common; Jason Lowe; Arun Murthy
Subject: Re: 2.7 status
Agreed. How about we roll an RC end of this week? As a Java 7+ release
My bad, been sort of distracted.
I agree, we should just roll fwd a 2.7 ASAP with all the goodies.
What sort of timing makes sense? 2 weeks hence?
thanks,
Arun
From: Jason Lowe
Sent: Friday, February 13, 2015 8:11 AM
To: common-...@hadoop.apache.org
Subje
Folks,
With hadoop-2.6 out it's time to think ahead.
As we've discussed in the past, 2.6 was the last release which supports JDK6.
I'm thinking it's best to try to get 2.7 out in a few weeks (maybe by the
holidays) with just the switch to JDK7 (HADOOP-10530) and possibly
support for JDK-1.8 (as a r
Duh!
$ chmod a+r *
Please try now. Thanks!
Arun
On Mon, Nov 10, 2014 at 7:06 PM, Tsuyoshi OZAWA
wrote:
> Hi Arun,
>
> Could you confirm that the link and permissions to the files are correct? I
> got the following error:
>
>
> Forbidden
> You don't have permission to access
> /~acmurthy/hadoop-2.6.0-
>>>>> as already indicated on the dev mailing list. Hopefully HDFS-6581 gets
>>>>> ready sooner. Both of these features have been in development for some
>>>>> time.
>>>>>
>>>>> On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang <
Looks like most of the content is in and hadoop-2.6 is shaping up nicely.
I'll create branch-2.6 by end of the week and we can go from there to
stabilize it - hopefully in the next few weeks.
Thoughts?
thanks,
Arun
On Tue, Aug 12, 2014 at 1:34 PM, Arun C Murthy wrote:
> Folks,
>
> With hadoo
Sorry, coming to discussion late.
We all agreed that 2.6 would be the *last* release supporting JDK6 and
hadoop-2.7 would drop support for JDK6. We could easily do 2.7 right after
2.6 (maybe with a few critical bug-fixes) with the defining feature of 2.7
being *JDK7 only*. I've checked with HBase, Pig
Chris,
I think you really can use the events via the YARN Timeline Server:
http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/TimelineServer.html
Many YARN applications (MR, Tez, Spark soon) will emit events to the
Timeline Server.
hth,
Arun
On Wed, Aug 20, 2014 at 1:48 AM,
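As a hedged sketch of pulling those events back out: the Timeline Server exposes a REST API under /ws/v1/timeline. The host, the port 8188 (the usual default for yarn.timeline-service.webapp.address), and the TEZ_DAG_ID entity type below are all assumptions for illustration; check your cluster configuration and your application's entity types.

```shell
# Hypothetical Timeline Server endpoint; 8188 is the common default web
# port, but verify yarn.timeline-service.webapp.address on your cluster.
TIMELINE="http://localhost:8188/ws/v1/timeline"
# Entities are fetched per entity type; TEZ_DAG_ID is one example type
# that Tez emits. Substitute the type your application publishes.
URL="${TIMELINE}/TEZ_DAG_ID"
echo "GET ${URL}"
# Against a live server, the JSON listing would be fetched with:
#   curl -s "${URL}" | head
```

The same pattern works for MR job history events once the relevant entity type is known.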
I suggest we do a 2.5.1 (with potentially other bug fixes) rather than fix
existing tarballs.
thanks,
Arun
On Mon, Aug 18, 2014 at 12:42 PM, Karthik Kambatla
wrote:
> Hi devs
>
> Tsuyoshi just brought it to my notice that the published tarballs don't
> have LICENSE, NOTICE and README at the to
Alejandro,
On Tue, Jun 24, 2014 at 4:44 PM, Alejandro Abdelnur
wrote:
> After reading this thread and thinking a bit about it, I think it should be
> OK such move up to JDK7 in Hadoop 2 for the following reasons:
>
> * Existing Hadoop 2 releases and related projects are running
> on JDK7 in p
Thanks everyone. I'll start a vote tmrw if there are no objections.
Arun
On Mon, Jun 23, 2014 at 12:06 PM, Jitendra Pandey
wrote:
> +1, sounds good!
>
>
> On Mon, Jun 23, 2014 at 12:02 PM, Andrew Wang
> wrote:
>
> > +1 here as well, let's do a vote thread (for 7 days, maybe for the last
> > t
Akira,
Waiting for one more issue, stay tuned.
thanks,
Arun
On Mon, May 19, 2014 at 11:40 PM, Akira AJISAKA
wrote:
> Hi Arun,
>
> I'd like to know when to release Hadoop 2.4.1.
> It looks like all of the blockers have been resolved.
>
> Thanks,
> Akira
>
>
> (2014/04/24 5:59), Arun C Murthy w
Based on the discussion at common-dev@, we've decided to target 2.3
off the tip of branch-2 based on the 2 major HDFS features which are
Heterogenous Storage (HDFS-2832) and HDFS Cache (HDFS-4949).
I'll create a new branch-2.3 on (1/24) at 6pm PST.
thanks,
Arun
That makes sense too.
On Aug 16, 2013, at 10:39 AM, Vinod Kumar Vavilapalli
wrote:
>
> We need to make a call on what blockers will be. From my limited
> understanding, this doesn't seem like an API or a compatibility issue. Can we
> not fix it in subsequent bug-fix releases?
>
> I do see a lo
Ok, I'll spin rc1 after. Thanks.
Sent from my iPhone
On Apr 10, 2013, at 11:44 AM, Siddharth Seth wrote:
> Arun, MAPREDUCE-5094 would be a useful jira to include in the 2.0.4-alpha
> release. It's not an absolute blocker since the values can be controlled
> explicitly by changing tests which us
Sounds great! Thanks for the update Ralph!
Sent from my iPhone
On Dec 20, 2011, at 10:22 PM, Ralph Castain wrote:
> Just a quick update on this notion. Several of us in the OMPI community got
> together and successfully integrated Java bindings into the OMPI code base,
> and we have enough su
Praveen,
There are many ways to prevent what you described...
I'm in the process of adding more docs, for now pls take a look at the
following older blog post for more details:
http://developer.yahoo.com/blogs/hadoop/posts/2011/03/mapreduce-nextgen-scheduler/
Arun
Sent from my iPhone
On Nov 2
> On Sat, Nov 26, 2011 at 3:56 AM, Patrick Wendell wrote:
>
>> Hey All,
>>
>> Two questions about the MR2 scheduler code, for anyone more familiar.
>>
>> - The return type of allocate() suggests that the AM will get some
>> instantaneous allocation of resources. Reading through the two current
>>
Try blowing away your ivy cache?
Sent from my iPhone
On Oct 15, 2011, at 12:29 PM, Alejandro Abdelnur wrote:
> Running 'ant examples -Dresolvers=internal' from trunk fails with.
>
> Any ideas?
>
> Thanks.
>
> Alejandro
>
> --
> compile-mapred-classes:
> [jsp-compile] log4j:WARN No appenders
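The cache wipe suggested above can be sketched as follows. The ~/.ivy2/cache location is Ivy's default cache directory, so adjust the path if the build overrides it.

```shell
# Ivy resolves artifacts into ~/.ivy2/cache by default; a stale or
# corrupt cache is a common cause of odd resolve/compile failures.
# Removing it forces a clean re-resolve on the next build.
IVY_CACHE="${HOME}/.ivy2/cache"
rm -rf "${IVY_CACHE}"
# Then re-run the failing target, e.g.:
#   ant examples -Dresolvers=internal
```

The first build afterwards will be slow while Ivy re-downloads everything.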
Is this secure mode or unsecured? Cluster or single node? Tx
Sent from my iPhone
On Sep 23, 2011, at 8:37 AM, Ravi Prakash wrote:
> Hi Arun/Vinod,
>
> After commit d4dca4eabf83a97d158f1e1caa4801020679d5e2
> Date: Wed Sep 21 18:52:27 2011 +
> MAPREDUCE-2880. svn merge -c r1173783 --ignore-
You need to run 'mvn install' first to get MR2 installed. Use
-P-cbuild to skip the LTC build.
Sent from my iPhone
On Aug 31, 2011, at 7:45 PM, Eli Collins wrote:
> How do you build the old MR code nowadays?
>
> The wiki suggests using ant -Dresolvers=internal veryclean test in
> hadoop-m
That means you don't have the autotools chain necessary to build the
native code.
For now pass -P-cbuild to skip them.
Arun
Sent from my iPhone
On Aug 18, 2011, at 11:26 PM, rajesh putta wrote:
> Hi,
> I am using apache-maven-3.0.3 and i have set LD_LIBRARY_PATH=/usr/local/lib
> which has goo
Pls open a jira and file the patch. Thanks!
Sent from my iPhone
On Jul 22, 2011, at 6:56 PM, sp yu wrote:
> I've recently been using hadoop *(version 0.21.0)* for some data processing,
> but sometimes the reducer crashed. The log is always like below (at the end
> of this mail), which tells when multi fe
Moving to mapreduce-dev@, bcc general@.
Yes, as described in the bug, the CS supports high-RAM jobs, which is a
better model for shared multi-tenant clusters. The hadoop-0.20.203
release from Apache has the most current and tested version of the
CapacityScheduler.
Arun
Sent from my iPhone
On Jul 22,
Yuri,
Do you have the stack trace?
Pls file a jira. Thanks.
Sent from my iPhone
On Jul 13, 2011, at 7:44 PM, y...@isi.edu wrote:
> Greetings,
>
> I'm running common/hdfs/mapreduce trunk version
> -r1146503; I'm getting the following error at the reduce phase:
>
> Error: tried to access class