rue&attemptid=attempt_201107180916_0030_m_03_0&filter=stderr
11/07/18 14:53:17 INFO mapreduce.Job: map 100% reduce 29%
11/07/18 14:53:19 INFO mapreduce.Job: map 96% reduce 29%
11/07/18 14:53:25 INFO mapreduce.Job: map 98% reduce 29%
--
Geoffry Roberts
Here is the relevant code:
// Prepare the output.
IntWritable[] out = new IntWritable[5];
// ... populate the output array ...
ctx.write(key, new ArrayWritable(IntWritable.class, out));
// If I comment the above line out, the job runs without issue.
Can anyone see what I'm doing wrong?
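A guess, for the archive (not confirmed by the log above): ArrayWritable has no
zero-argument constructor, so if it is used as a map output or intermediate
value, the framework cannot instantiate it reflectively during deserialization
and the tasks fail. The usual workaround is a trivial subclass, roughly:

// Minimal sketch: give the framework a no-arg constructor to call.
public static class IntArrayWritable extends ArrayWritable {
    public IntArrayWritable() {
        super(IntWritable.class);
    }
}

Then declare IntArrayWritable as the value class instead of raw ArrayWritable.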
--
Geoffry Roberts
gotten value
> every iteration.
>
> The solution, when you want to persist a particular key or value
> object, is to .clone() it into the list so that the list does store
> real, new objects in it and not multiple references of the same
> object.
>
> On Sat, Jun 18, 2011 at
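Concretely, the fix looks something like this inside reduce() (a sketch
assuming Text values; WritableUtils.clone copies any Writable):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableUtils;

List<Text> copies = new ArrayList<Text>();
for (Text v : values) {
    // The framework reuses one value instance across iterations,
    // so store a copy rather than the reference.
    copies.add(WritableUtils.clone(v, context.getConfiguration()));
}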
361442340113 more==>
2005-09-16=3361442340113 more==>
2005-09-16=4413542324489 more==>
2005-09-16=44135 42324489 more==>
Thanks in advance
--
Geoffry Roberts
configuration and log file output
will ensue.
Does anyone know anything about this?
Thanks.
--
Geoffry Roberts
on the same node as any other cluster
> machine, but there was no firewall between it and the cluster.
>
> Oh, I also had to create a tomcat6 user on the namenode/jobtracker and
> create a home directory in HDFS. I could have probably called
> set("user.name", "existin
RMI thing.
Now here's an idea: Use an aglet. ;-) If I get into a funky mood I might
just give this a try.
On 18 May 2011 08:07, Lior Schachter wrote:
> Another machine in the cluster.
>
>
> On Wed, May 18, 2011 at 6:05 PM, Geoffry Roberts <
> geoffry.robe...@gmail.com>
Is Tomcat installed on your Hadoop name node, or on another machine?
On 18 May 2011 07:58, Lior Schachter wrote:
> Hi,
> I have my application installed on Tomcat and I wish to submit M/R jobs
> programmatically.
> Is there any standard way to do that?
>
> Thanks,
> Lior
>
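FWIW, in the new API this comes down to building a Job against a Configuration
that points at the cluster, then calling submit(). A minimal sketch (the host
names, paths, and MyJob class are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://namenode:9000"); // hypothetical
conf.set("mapred.job.tracker", "jobtracker:9001");   // hypothetical
Job job = new Job(conf, "submitted-from-tomcat");
job.setJarByClass(MyJob.class); // the job jar must be visible to Tomcat
FileInputFormat.addInputPath(job, new Path("/in"));    // hypothetical
FileOutputFormat.setOutputPath(job, new Path("/out")); // hypothetical
job.submit(); // non-blocking; job.waitForCompletion(true) blocks instead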
--
Geoffry Roberts
ccumulate and convince
> ourselves that it would produce the same results, but it worked for us. I
> am not familiar with what you are trying to compute but it could work for
> you too.
>
> --Bobby Evans
>
>
>
> On 5/12/11 12:44 PM, "Geoffry Roberts" wrote:
>
rolling values from all prior records. An ultra simple example of this
would be a balance-forward situation. (I'm not doing accounting, I'm doing
epidemiology, but the concept is the same.)
Is a single reducer the best way to go in this?
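(For reference, forcing one reducer is a single line in the driver; whether it
scales is the real question:

job.setNumReduceTasks(1); // every key reaches the one reducer, in sorted key order

The sorted-key ordering inside that lone reducer is what makes a running total
straightforward.)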
Thanks
--
Geoffry Roberts
ed, May 11, 2011 at 8:40 PM, Geoffry Roberts
> wrote:
> > All,
> >
> > I am attempting to pass a string value from my driver to each one of my
> > mappers and it is not working. I can set the value, but when I read it
> > back it returns null. The value is n
protected void setup(Context ctx) throws IOException, InterruptedException {
    ...
    String scenarioName = ctx.getConfiguration().get("scenarioName");
    // scenarioName is NOT null
    String scenarioText = ctx.getConfiguration().get("scenario");
    // scenarioText is null
    ...
}
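One thing worth checking, an assumption on my part since the driver isn't
shown: the Job constructor takes a private copy of the Configuration, so any
conf.set() made after constructing the Job never reaches the tasks. The
working order is:

Configuration conf = new Configuration();
conf.set("scenarioName", scenarioName); // set BEFORE constructing the Job
conf.set("scenario", scenarioText);
Job job = new Job(conf, "scenario-job"); // later changes to conf are invisible to tasks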
--
Geoffry Roberts
d it as a set of lines to
> the config - alternatively serialize it (maybe xml) to a known spot in HDFS
> and read it in in the setup code in the reducer - I assume this is an object
> known at the start of the job and not modified by the mapper
>
>
> On Fri, May 6, 20
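A sketch of the serialize-to-HDFS suggestion quoted above (the HDFS path and
the format are hypothetical):

@Override
protected void setup(Context ctx) throws IOException, InterruptedException {
    FileSystem fs = FileSystem.get(ctx.getConfiguration());
    FSDataInputStream in = fs.open(new Path("/shared/scenario.xml")); // hypothetical
    try {
        // deserialize the shared object here (e.g. the XML suggested above)
    } finally {
        in.close();
    }
}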
    }
} catch (NullPointerException e) {
    LOG.error("blankless=" + blankless);
    LOG.error("values=" + values.toString());
}
// In my logs, I see reasonable counts even when the output file is empty.
LOG.info("key=" + path + " count=" + k);
}
--
Geoffry Roberts
All,
I need each one of my reducers to have read access to a certain object, or a
clone thereof. I can instantiate this object at start up. How can I give my
reducers a copy?
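In case it helps anyone searching the archive, DistributedCache is the stock
answer here (a sketch; the path is hypothetical, and addCacheFile must run on
the job's Configuration before the job is submitted):

// Driver: ship a serialized copy of the object to every task node.
DistributedCache.addCacheFile(new URI("/shared/myobject.ser"), conf);

// Reducer setup(): locate the local copy and deserialize it.
Path[] cached = DistributedCache.getLocalCacheFiles(ctx.getConfiguration());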
--
Geoffry Roberts
catch on to the fact that I was using the wrong Context.
Hope this helps somebody.
On 3 May 2011 15:08, David Rosenstrauch wrote:
> On 05/03/2011 05:49 PM, Geoffry Roberts wrote:
>
>> David,
>>
>> Thanks for the response.
>>
>
ements in the reduce() method
yield nada.
On 3 May 2011 13:39, David Rosenstrauch wrote:
> On 05/03/2011 01:21 PM, Geoffry Roberts wrote:
>
>> All,
>>
>> I have three questions I would appreciate if anyone could weigh in on. I
>> apologise in advance if I sound whiny.
>
king (see question 2), need I go on? Is anyone else
having either trouble or success with multiple outputs using the
aforementioned class?
Thanks
--
Geoffry Roberts
ng the
>>> getOutputfileFromInputFile() method.
>>>
>>>
>>> On Thu, Apr 14, 2011 at 5:19 PM, Harsh J wrote:
>>>
>>>> Hello again Hari,
>>>>
>>>> On Thu, Apr 14, 2011 at 5:10 PM, Hari Sreekumar
>>>> wrote:
>>>> > Here is a part of the code I am using:
>>>> > jobConf.setOutputFormat(MultipleTextOutputFormat.class);
>>>>
>>>> You need to subclass the OF and use it properly, else the abstract
>>>> class takes over with the default name always used (Thus, 'part'). You
>>>> can see a good, complete example at [1].
>>>>
>>>> I'd still recommend using MultipleOutputs for better portability
>>>> reasons. Its javadocs explain how to go about using it well enough
>>>> [2].
>>>>
>>>> [1] -
>>>> https://sites.google.com/site/hadoopandhive/home/how-to-write-output-to-multiple-named-files-in-hadoop-using-multipletextoutputformat
>>>> [2] -
>>>> http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/mapred/lib/MultipleOutputs.html
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>
>
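For the archive, the subclassing Harsh describes comes down to overriding one
method; a sketch for the old mapred API, assuming Text keys and values:

public static class KeyNamedOutput extends MultipleTextOutputFormat<Text, Text> {
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        return key.toString(); // one output file per key, named after the key
    }
}

Wire it in with jobConf.setOutputFormat(KeyNamedOutput.class).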
--
Geoffry Roberts
9:30 PM, Amogh Vasekar wrote:
> Yes. Also attached is an old thread I have kept handy with me. Hope this
> helps you.
>
>
> Thanks,
> Amogh
>
>
> On 12/11/09 10:07 PM, "Geoffry Roberts" wrote:
>
> Amogh,
>
> I don't have experience with patches fo
org/jira/browse/MAPREDUCE-370
>
> You’ll have to work around for now / try to apply patch.
>
> Amogh
>
>
>
> On 12/9/09 8:54 PM, "Geoffry Roberts" wrote:
>
> Aaron,
>
> I am using 0.20.1 and I'm not finding org.apache.hadoop.mapreduce.lib.o
ing 0.20, you should probably stick with the old API for your
> process.
>
> Cheers,
> - Aaron
>
>
> On Tue, Dec 8, 2009 at 12:40 PM, Geoffry Roberts <
> geoffry.robe...@gmail.com> wrote:
>
>> All,
>>
>> This one has me stumped.
>>
>> What
>
> The new API comes with a related OF, called MultipleOutputs
> (o.a.h.mapreduce.lib.output.MultipleOutputs). You may want to look into
> using this instead.
>
> - Aaron
>
>
> On Tue, Sep 29, 2009 at 4:44 PM, Geoffry Roberts <
> geoffry.robe...@gmail.com> wrote:
>
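(Where that class exists, usage is short; a sketch assuming Text/IntWritable
output:

// Driver:
MultipleOutputs.addNamedOutput(job, "byKey",
    TextOutputFormat.class, Text.class, IntWritable.class);

// Reducer:
private MultipleOutputs<Text, IntWritable> mos;
protected void setup(Context ctx) {
    mos = new MultipleOutputs<Text, IntWritable>(ctx);
}
// in reduce(): mos.write("byKey", key, value);
protected void cleanup(Context ctx) throws IOException, InterruptedException {
    mos.close(); // flush the named outputs
}

As the MAPREDUCE-370 exchange above notes, stock 0.20.1 does not ship it.)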
wrote:
> Are you perhaps creating large numbers of files, and running out of file
> descriptors in your tasks.
>
>
> On Wed, Oct 7, 2009 at 1:52 PM, Geoffry Roberts wrote:
>
>> All,
>>
>> I have a MapRed job that ceases to produce output about halfway t
All,
I have a MapRed job that ceases to produce output about halfway through.
The obvious question is why?
This job reads a file and uses MultipleTextOutputFormat to generate output
files named with the output key. At about the halfway point, the job
continues to create files, but they are all o
All,
What I want to do is output multiple files from my reducer, one for each key
value.
Can this still be done in the current API?
It seems that using MultipleTextOutputFormat requires one to use deprecated
parts of the API.
Is this correct?
I would like to use the class or its equivalent and stay
All,
I have an issue wrt common file access from within a MapReduce job. I have
tried to do this two ways and wind up with either a FileNotFoundException or
an EOFException.
1. I copy the file into HDFS using the -copyFromLocal utility.
I then attempt the following:
JobConf conf = n
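(The snippet is cut off above. For comparison, a minimal read of an HDFS file
through the old API looks like this; MyJob and the path are hypothetical:

JobConf conf = new JobConf(MyJob.class);
FileSystem fs = FileSystem.get(conf);
FSDataInputStream in = fs.open(new Path("/data/common.txt"));
try {
    // read the common file here
} finally {
    in.close();
}

A FileNotFoundException at this stage often means the path resolved against
the local filesystem instead of HDFS, so fs.default.name is worth checking
first.)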
Thanks for the response.
Now how do I fix this? Is the problem most likely in my MR code, in my
Hadoop configuration, or somewhere else?
On Mon, Jul 27, 2009 at 9:33 AM, Harish Mallipeddi <
harish.mallipe...@gmail.com> wrote:
>
> On Mon, Jul 27, 2009 at 9:42 PM, Geoffry Roberts <
All,
I am attempting to run my first map reduce job and I am getting the
following error. Does anyone know what it means?
Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
Full text of the output follows:
09/07/27 07:06:54 WARN mapred.JobClient: Use GenericOptionsParser for
pars