Outside these it's Apache Hadoop. Here is a list of patches EMR applied
> on top of Hadoop 1.0.3:
> >
> http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/EnvironmentConfig_AMIHadoopPatches.html
> >
> > Regards,
> > Peter
> >
> >
Hi,
I am trying to set up a Hadoop cluster and I was wondering what the
difference is between specifying the JobTracker IP in *mapred.job.tracker* in
*mapred-site.xml* and noting the same IP in the *conf/masters* file. Do I need
to do both or just one? If I need to do both, is there a difference in how
the two
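For what it's worth, the two settings do different things in Hadoop 1.x, so
here is a hedged sketch with placeholder hostnames. As far as I understand the
1.x scripts, mapred.job.tracker is the host:port that TaskTrackers and job
clients connect to, while conf/masters is only read by the start scripts (it
picks the host where the SecondaryNameNode is launched), so filling in one
does not replace the other.

  <!-- mapred-site.xml: where TaskTrackers and job clients find the
       JobTracker; "jobtracker.example.com" is a placeholder -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:9001</value>
  </property>

  # conf/masters: one hostname per line, read only by the start scripts
  # (in 1.x this is where the SecondaryNameNode runs, not the JobTracker)
  jobtracker.example.com

The worker hosts go in conf/slaves, one per line, and that file is likewise
only used by the start/stop scripts.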
Hi Momina,
>
> Maybe the problem is your DNS resolution. You must have IP-to-hostname
> entries for all nodes in the /etc/hosts file, like this:
>
> 127.0.0.1 localhost
>
>
> On Fri, Jul 6, 2012 at 2:49 PM, Momina Khan wrote:
>
> hi Ivan,
> >
> > I have tried with
ta/
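A hedged sketch of the kind of /etc/hosts entries the DNS advice above refers
to; the IPs and hostnames are made up, and every node in the cluster should
carry the same mappings:

  127.0.0.1      localhost
  192.168.1.10   master.example.com   master
  192.168.1.11   slave1.example.com   slave1
  192.168.1.12   slave2.example.com   slave2

One common gotcha is a distro-added line that maps the machine's own hostname
to 127.0.1.1, which can make the Hadoop daemons bind to loopback; mapping the
hostname to the real interface address usually avoids that.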
>
> Ivan
>
> -Original Message-
> From: Momina Khan [mailto:momina.a...@gmail.com]
> Sent: Thursday, July 05, 2012 10:30 PM
> To: common-dev@hadoop.apache.org
> Subject: HDFS to S3 copy issues
>
> hi ... hope someone is able to help me out with this ... have
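The message is truncated here, but for copying data from HDFS into S3 the
usual tool is distcp. A hedged example with placeholder namenode, bucket and
credentials (s3n is the Hadoop 1.x native S3 filesystem; the keys can also be
set through fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey in
core-site.xml instead of embedding them in the URI):

  hadoop distcp \
      hdfs://namenode.example.com:9000/user/momina/data \
      s3n://ACCESS_KEY:SECRET_KEY@my-bucket/data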
all communication regarding the health of the task tracker and the tasks
assigned to the TT by the JT, info on failed tasks, completed tasks, all of it
is communicated through the heartbeat ... what other communication are you
talking about? Hadoop uses the regular heartbeat and all other info piggybacks
on it ... from TT to JT on
hi
the periodic heartbeat call that task trackers make to the jobtracker is
through RPC; the same goes for the namenode and data nodes.
momina
On Fri, Aug 13, 2010 at 5:39 AM, Ahmad Shahzad wrote:
> Hi ALL,
>    Can anyone tell me the purpose of the RPC server and RPC client
> that are used in Hadoop?
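To make the piggybacking idea concrete, a simplified illustrative sketch in
Java. These are NOT the real Hadoop classes (the actual 1.x protocol lives in
the JobTracker/TaskTracker code); it only shows the shape of a single periodic
RPC whose request carries the tracker's status and whose response carries the
actions the JobTracker wants it to take:

  import java.util.List;

  // stand-in for what the TaskTracker reports on every heartbeat
  class TrackerStatus {
      String trackerName;
      List<String> taskReports;   // per-task progress, failures, completions
      boolean askForNewTask;      // the TT has a free slot
  }

  // stand-in for what the JobTracker sends back on the same call
  class HeartbeatResponse {
      List<String> actions;       // e.g. "launch task X", "kill task Y"
      int nextHeartbeatIntervalMs;
  }

  interface TrackerProtocol {
      // the single periodic call; all other TT -> JT info rides on it,
      // and the JT's instructions ride back on the response
      HeartbeatResponse heartbeat(TrackerStatus status);
  }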
hi
I posted earlier on the mailing list asking for help as my
start_thrift_server.sh script just couldn't start the HadoopThriftServer ... a
friend resolved the problem for me; the following are the steps ... put here
so that they might help someone else wanting to use thriftfs ...
(1) ran ant on th
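The steps are cut off above, but the general shape of the fix, as a hedged
reconstruction (paths, jar locations and the port are examples and depend on
the Hadoop version), is to build the thriftfs contrib with ant and get the
resulting jar plus its libs onto the classpath before launching the server:

  # paths and port below are examples, not taken from the thread
  cd $HADOOP_HOME/src/contrib/thriftfs
  ant                # builds the contrib jar under build/contrib/thriftfs

  for j in $HADOOP_HOME/build/contrib/thriftfs/*.jar \
           $HADOOP_HOME/src/contrib/thriftfs/lib/*.jar; do
    CLASSPATH=$CLASSPATH:$j
  done
  export CLASSPATH

  ./scripts/start_thrift_server.sh 10090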
hi!
I have to use HDFS in my Python project and am trying to use thriftfs for it,
but I can't seem to launch the HadoopThriftServer via the
start_thrift_server.sh script.
I keep getting this error:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/hadoop/thriftfs/HadoopThriftServer
method, which is passed on for job scheduling along with the
> split info.
>
> Hope this is what you were looking for.
>
> Amogh
>
>
> On 5/7/10 4:22 PM, "momina khan" wrote:
>
> hi,
>
> I am trying to figure out how Hadoop uses data locality to schedule m
hi,
I am trying to figure out how Hadoop uses data locality to schedule maps on
nodes which locally store the map input ... going through the code I am going
in circles between a couple of files but not really getting anywhere ... that
is to say that I can't locate the HDFS API or function that can commu
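For what it's worth, the locality information comes from the input splits:
FileInputFormat asks the FileSystem for the block locations of each split and
exposes them through InputSplit.getLocations(), and the JobTracker prefers
TaskTrackers whose host appears in that list. A hedged Java sketch against the
Hadoop 1.x mapred API (the class and method names are the standard ones,
everything else is illustrative and assumes the Hadoop jars on the classpath):

  import java.io.IOException;
  import java.util.Arrays;

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.FileInputFormat;
  import org.apache.hadoop.mapred.InputSplit;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.TextInputFormat;

  public class PrintSplitLocations {
      public static void main(String[] args) throws IOException {
          JobConf conf = new JobConf();
          FileInputFormat.setInputPaths(conf, new Path(args[0]));

          TextInputFormat format = new TextInputFormat();
          format.configure(conf);

          // each split knows which hosts hold its HDFS blocks; that list
          // is what drives map placement on the JobTracker side
          for (InputSplit split : format.getSplits(conf, 1)) {
              System.out.println(split + " -> "
                      + Arrays.toString(split.getLocations()));
          }
      }
  }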
hi, could anyone point me to where I can download the LATE scheduler API
plug-in for Hadoop ... I distinctly remember reading about work on the API in
this mailing list but can't locate it on the web!
thanks
momina
the best place to start is the MapReduce paper by Jeff Dean and Sanjay
Ghemawat ... and try googling a talk by Google's Aron on MapReduce
momina
On Thu, Dec 10, 2009 at 4:12 PM, Neo Anderson
wrote:
> Hi
>
> I am interested in distributed computing and would like to learn the core
> concepts, e.g. MapReduce. However, I am
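Since the question is about the core MapReduce concept, a minimal word-count
mapper and reducer in the standard Hadoop style may help make it concrete;
this is the textbook example (assuming the Hadoop jars on the classpath), not
anything specific to this thread:

  import java.io.IOException;
  import java.util.StringTokenizer;

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  public class WordCount {
      // map: (offset, line) -> (word, 1) for every word in the line
      public static class TokenizerMapper
              extends Mapper<LongWritable, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          @Override
          protected void map(LongWritable key, Text value, Context context)
                  throws IOException, InterruptedException {
              StringTokenizer itr = new StringTokenizer(value.toString());
              while (itr.hasMoreTokens()) {
                  word.set(itr.nextToken());
                  context.write(word, ONE);
              }
          }
      }

      // reduce: (word, [1, 1, ...]) -> (word, total count)
      public static class IntSumReducer
              extends Reducer<Text, IntWritable, Text, IntWritable> {
          @Override
          protected void reduce(Text key, Iterable<IntWritable> values,
                                Context context)
                  throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable v : values) {
                  sum += v.get();
              }
              context.write(key, new IntWritable(sum));
          }
      }
  }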