Sent: Monday, July 30, 2012 1:45 PM
Subject: Re: Reducer - getMapOutput
Hey Robert,
Can you pls share what you are trying to do?
Arun
On Jul 30, 2012, at 9:10 AM, Grandl Robert wrote:
Hi,
I am trying to modify the code for data transfer of intermediate output.
In this respect, on the reduce side in getMapOuput I want to have the
connection with the TaskTracker(setupSecureConnection), but then on doGet the
TaskTracker to be able to delay the response back to the client. I t
Robert
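A minimal sketch of the kind of delay Robert describes, assuming you are patching the servlet that serves map output on the TaskTracker side (in Hadoop 1.x that is MapOutputServlet.doGet, which streams map output to the reducer's HTTP fetch). The class and method names below (DelayedShuffleSketch, serveMapOutput, DELAY_MILLIS) are illustrative stand-ins, not real Hadoop APIs:

```java
// Illustrative sketch only: holding back the shuffle response before
// streaming the map output, as one might do inside MapOutputServlet.doGet().
public class DelayedShuffleSketch {
    static final long DELAY_MILLIS = 500; // assumed tunable delay

    // Stand-in for the doGet() body: wait, then return the payload that
    // would normally be streamed back to the fetching reducer.
    static String serveMapOutput(String mapId) throws InterruptedException {
        Thread.sleep(DELAY_MILLIS);      // delay the response to the client
        return "payload-for-" + mapId;   // then serve the data as usual
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime();
        String out = serveMapOutput("attempt_201207_0001_m_000000_0");
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(out + " served after " + elapsedMs + " ms");
    }
}
```

Note that in a real servlet the delay would hold a Jetty worker thread, so a long sleep can starve other shuffle fetches; an asynchronous continuation would scale better.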
Sent: Tuesday, July 17, 2012 7:38 PM
Subject: Re: Hadoop compile - delete conf files
This is how it is supposed to be. It copies the files from src and creates the
package.
Thanks,
Mayank
On Tue, Jul 17, 2012 at 1:16 PM, Grandl Robert wrote:
Hi,
I am trying to compile hadoop from command line doing something like:
ant compile jar run
However, it always deletes the contents of the conf files (hadoop-env.sh,
core-site.xml, mapred-site.xml, hdfs-site.xml), so I have to restore these
files from backup every time.
Does anybody face similar
Hi,
Is it possible to write to an HDFS DataNode without relying on the NameNode,
i.e. to find the locations of DataNodes from somewhere else?
Thanks,
Robert
Subject: Re: Basic question on how reducer works
On Jul 9, 2012, at 12:55 PM, Grandl Robert wrote:
Thanks a lot guys for answers.
>
>
>
>Still I am not able to find exactly the code for the following things:
>
>
>1. how the reducer reads only its own partition from a map output. I looked
>into ReduceTas
e a bit on how the data is written to which partition ?
Thanks,
Robert
From: Arun C Murthy
To: mapreduce-user@hadoop.apache.org
Sent: Monday, July 9, 2012 9:24 AM
Subject: Re: Basic question on how reducer works
Robert,
On Jul 7, 2012, at 6:37 PM, Grandl Ro
I see. I was looking into tasktracker log :).
Thanks a lot,
Robert
From: Harsh J
To: Grandl Robert ; mapreduce-user
Sent: Sunday, July 8, 2012 9:16 PM
Subject: Re: Basic question on how reducer works
The changes should appear in your Task's userlogs
Hi,
I have some questions related to basic functionality in Hadoop.
1. When a Mapper processes the intermediate output data, how does it know how
many partitions to create (i.e. how many reducers there will be) and how much
data goes into each partition for each reducer?
2. When a JobTracker assigns a task to a r
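On question 1: the number of partitions equals the configured number of reduce tasks, and by default each key is routed by hashing, as in Hadoop's HashPartitioner. A minimal sketch of that logic (the PartitionDemo class and sample key are illustrative; only the one-line hashing formula mirrors the actual default partitioner):

```java
// Sketch of the default partitioning logic: every key maps deterministically
// to one of numReduceTasks buckets, so a reducer receives all values for
// the keys that hash into its partition.
public class PartitionDemo {
    static int getPartition(Object key, int numReduceTasks) {
        // mask with Integer.MAX_VALUE to force a non-negative hash
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int numReduceTasks = 4; // assumed reducer count for this sketch
        int p = getPartition("hello", numReduceTasks);
        System.out.println("key 'hello' -> partition " + p);
    }
}
```

Because the same key always hashes to the same partition, every value for a given key ends up at a single reducer, regardless of which mapper produced it.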