Hello,
I am using Hadoop MapReduce version 0.20.2, and soon 0.21.
I wanted to use the JobClient class to circumvent the use of the command
line interface.
I noticed that JobClient still uses the deprecated JobConf class for
job submissions.
Are there any alternatives to JobClient not using
Hello,
I am trying to move to Hadoop MapReduce 0.21.0.
The corresponding tutorial still uses Tool and ToolRunner.
Yet both are deprecated. What would be the correct way to implement,
configure and submit a Job now? I was thinking in terms of:
Configuration configuration = new Configura
Hi,
I am using Hadoop MapReduce 0.21.0. The usual process of starting
Hadoop/HDFS/MapReduce was to use the "start-all.sh" script. Now when
calling that script, it tells me that its usage is deprecated and that I
should use "start-{dfs,mapred}.sh". But when I do so, the error message
"Hadoop common
White wrote:
Hi Martin,
Neither Tool nor ToolRunner is deprecated in 0.21.0. I don't think
they have ever been deprecated. You should be able to use them without
problems.
Tom
On Wed, Sep 22, 2010 at 6:52 AM, Martin Becker<_martinbec...@web.de> wrote:
Hello,
I am trying to mov
Hi,
so the same package in both jars is not the problem. Should have known that.
I do not know why this happens. Any ideas?
Regards,
Martin
On 22.09.2010 17:45, Martin Becker wrote:
Hi,
Tom, thanks for your answer.
OK, so the problem is that when I add both hadoop-common-0.21.0.jar
AND
Hi Tom,
I see. Thanks.
Martin
On 22.09.2010 18:27, Tom White wrote:
Hi Martin,
This is a known bug, see https://issues.apache.org/jira/browse/HADOOP-6953.
Cheers
Tom
On Wed, Sep 22, 2010 at 8:17 AM, Martin Becker<_martinbec...@web.de> wrote:
Hi,
I am using Hadoop MapReduce 0.21.
su
wrote:
In 0.21, JobClient methods are available in org.apache.hadoop.mapreduce.Job
and org.apache.hadoop.mapreduce.Cluster classes.
On 9/22/10 3:07 PM, "Martin Becker"<_martinbec...@web.de> wrote:
Hello,
I am using the Hadoop MapReduce version 0.20.2 and soon 0.21.
I w
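For reference, a minimal sketch of what submission through the 0.21 org.apache.hadoop.mapreduce.Job API might look like, with no JobClient/JobConf involved. The job name is arbitrary, the base Mapper/Reducer classes are used as identity placeholders, and the input/output paths come from the command line:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitFromJava {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "submit-from-java"); // Job replaces JobClient/JobConf in 0.21
        job.setJarByClass(SubmitFromJava.class);     // lets Hadoop locate the jar containing this class
        job.setMapperClass(Mapper.class);            // the base classes act as identity map/reduce
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);   // identity mapper passes through offset/line pairs
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion(true) submits the job and polls with progress output until it finishes
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The Cluster class mentioned above serves the monitoring/administration side of the old JobClient (listing jobs, querying status); for plain submission, Job alone is enough.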
:
Martin,
Can you give more information about how you compiled and ran your job?
It probably makes sense to open a JIRA
(https://issues.apache.org/jira/browse/MAPREDUCE) to track this.
Cheers
Tom
On Wed, Sep 22, 2010 at 9:59 AM, Martin Becker<_martinbec...@web.de> wrote:
Hello Tom,
But c
ut warnings.
Tom
ation to submit a job, without having to call any script
files. Can you give me a pointer?
Martin
On 23.09.2010 18:54, Tom White wrote:
This tutorial should help:
http://hadoop.apache.org/mapreduce/docs/r0.21.0/mapred_tutorial.html
Tom
On Thu, Sep 23, 2010 at 1:24 AM, Martin Becker<_m
Hi James,
I am trying to avoid calling any command-line command. I want to submit
a job from within a Java application, if possible without packing any
jar file at all. But I guess that will be necessary to allow Hadoop to
load the specific classes. The tutorial definitely does not contain an
On Fri, Sep 24, 2010 at 4:12 PM, Martin Becker <_martinbec...@web.de
<mailto:martinbec...@web.de>> wrote:
Hi James,
I am trying to avoid calling any command-line command. I want to
submit a job from within a Java application. If possible without
packing any jar file
on is basically
the same as your template. And using the Hadoop executable with my main
jar and the additional jars loaded by -libjars works fine.
Regards,
Martin
On 24.09.2010 17:29, David Rosenstrauch wrote:
On 09/24/2010 11:12 AM, Martin Becker wrote:
Hi James,
I am trying to avoid to call any co
Hello David,
This will at best run my MapReduce process on the local Hadoop instance.
What do I do to submit it to a remote Hadoop cluster using Java code?
Martin
On 24.09.2010 18:53, David Rosenstrauch wrote:
On 09/24/2010 12:42 PM, Martin Becker wrote:
Hello David,
Thanks for your
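One hedged sketch of submitting to a remote cluster from Java: set the filesystem and JobTracker addresses on the client-side Configuration before building the Job. The property names shown are the classic 0.20-style keys (0.21 still accepts them alongside the newer mapreduce.* names), and the hostnames/ports are placeholders for an actual cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder addresses: point the client at the remote cluster
        // instead of the local defaults picked up from core-site.xml.
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");
        conf.set("mapred.job.tracker", "jobtracker.example.com:9001");

        Job job = new Job(conf, "remote-submit");
        job.setJarByClass(RemoteSubmit.class); // the class must live in a jar the cluster can fetch
        // ... set mapper, reducer, input and output paths as usual ...
        job.submit(); // returns immediately; use waitForCompletion(true) to block instead
    }
}
```

Note the job classes still have to be packaged in a jar that the remote cluster can load, which matches the observation above that running entirely jar-free does not work.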
problem using a more accurate header.
Thank you,
Martin
On 24.09.2010 20:44, David Rosenstrauch wrote:
On 09/24/2010 01:26 PM, Martin Becker wrote:
Hello David,
This will at best run my MapReduce process on the local Hadoop instance.
What do I do to submit it to a remote Hadoop cluster using Java
Hello everybody,
I am wondering if there is a feature allowing (in my case) reduce
tasks to communicate. For example by some volatile variables at some
centralized point. Or maybe just notify other running or to-be-running
reduce tasks of a completed reduce task featuring some arguments.
In my cas
ds) to pass this information between reducers.
>
> On Sat, Dec 18, 2010 at 8:04 AM, Martin Becker <_martinbec...@web.de> wrote:
>>
>> Hello everybody,
>>
>> I am wondering if there is a feature allowing (in my case) reduce
>> tasks to communicate. For example b
am afraid you will have to run a single
> reducer.
>
> Sent from my iPhone 4
>
> On Dec 18, 2010, at 10:33 AM, Martin Becker <_martinbec...@web.de> wrote:
>
>> Thank you Ted,
>>
>> I am using the 0.21.0 API so I would be drawing Counters from the
>> Cont
ers is what you really need here (assuming they get really
> updated, but I never tried that)
>
> Sent from my iPhone 4
>
> On Dec 18, 2010, at 3:19 PM, Martin Becker <_martinbec...@web.de> wrote:
>
>> Hello Jason,
>>
>> real time values are not required. So
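For what it's worth, a sketch of the counter idea in the 0.21 API. The group and counter names are arbitrary labels, and, per the caveat above, running tasks do not get a reliable live view of the aggregated total; only the driver does, after the job completes:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CountingReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    // User-defined counter group and name; both strings are arbitrary labels.
    static final String GROUP = "app";
    static final String DONE = "reducersDone";

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws java.io.IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
        // Record progress in a counter. The framework aggregates these values,
        // but other running reduce tasks cannot rely on seeing a live total.
        context.getCounter(GROUP, DONE).increment(1);
    }
}
```

After waitForCompletion(true), the driver can read the aggregated value with job.getCounters().findCounter("app", "reducersDone").getValue().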
Hello everybody,
is there a possibility to make sure that certain/all reduce tasks,
i.e. the reducers for certain keys, are executed in a specified order?
This is Job internal, so the Job Scheduler is probably the wrong place to start?
Does the order induced by the Comparable interface influence th
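On the Comparable question: the sort comparator only fixes the order in which key groups are fed to a single reduce task; it gives no start-order guarantee across separate reduce tasks. A sketch under those assumptions, with Text keys and the reducer count forced to one:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Sorts keys in descending order. With a single reduce task, the key groups
// then reach the reducer sequentially in exactly this order.
public class DescendingTextComparator extends WritableComparator {
    protected DescendingTextComparator() {
        super(Text.class, true); // true: instantiate keys so compare() can deserialize them
    }

    @Override
    @SuppressWarnings("rawtypes")
    public int compare(WritableComparable a, WritableComparable b) {
        return -super.compare(a, b); // invert the natural Text ordering
    }
}
```

Wired up with job.setSortComparatorClass(DescendingTextComparator.class) and job.setNumReduceTasks(1); with more than one reduce task the per-task orders are still sorted but the tasks themselves start in no defined order.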
, Dec 19, 2010 at 9:09 PM, Martin Becker <_martinbec...@web.de> wrote:
>> Hello everybody,
>>
>> is there a possibility to make sure that certain/all reduce tasks,
>> i.e. the reducers for certain keys, are executed in a specified order?
>> This is Job internal,
I just reread my first post. Maybe I was not clear enough:
It is only important to me that the Reduce tasks _start_ in a
specified order based on their key. That is the only additional
constraint I need.
On Mon, Dec 20, 2010 at 9:51 AM, Martin Becker <_martinbec...@web.de> wrote:
> As
ing mechanism (making sure the dormant reduce tasks
> stay alive by sending out some status reports).
>
> Although, how is ordering going to affect your reducer's processing? :)
>
> On Mon, Dec 20, 2010 at 2:37 PM, Martin Becker <_martinbec...@web.de> wrote:
>> I just r
I would like to know the answer to this question as well.
The reason I could think of from a theoretical point of view is:
* Each Mapper can theoretically output every possible Key.
* This means that the complete set of pairs processed by
a Reducer is not known until every Mapper has finished.
*