Brian
Thank you very much.
The version of Hadoop is 0.19.0; I think the 4616 and 4635 patches are necessary.
I will try it.
-Original Message-
From: Brian Bockelman [mailto:bbock...@cse.unl.edu]
Sent: Monday, December 15, 2008 10:00 PM
To: core-user@hadoop.apache.org
Subject: Re: The error
On Dec 16, 2008, at 4:10 PM, Raghu Angadi wrote:
Brian Bockelman wrote:
Hey,
I hit a bit of a roadbump in solving the "truncated block issue" at
our site: namely, some of the blocks appear perfectly valid to the
datanode. The block verifies, but it is still the wrong size (it
appears that the metadata is too small too).
What's the best way to proceed? It ap
Silly me... my processes were only bound to my external IPs. :-/
On Dec 16, 2008, at 12:49, Brandon Dimcheff wrote:
I'm having some trouble on one node of a 5-node cluster. I can
successfully run maps on all of them, but the reduce phase always
stalls on one particular host. It throws a connection refused
exception when attempting to connect to itself to get the data from
the map outputs. The only dif
Owen O'Malley wrote:
>
> On Dec 16, 2008, at 9:14 AM, David Coe wrote:
>
>> Does the SequenceFileOutputFormat work with NullWritable as the value?
>
> Yes.
Owen O'Malley wrote:
> It means you are trying to write a null value. Your reduce is doing
> something like:
>
> output.collect(key, null);
>
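For the record, the usual fix for this NPE is to emit the NullWritable singleton rather than a bare null, i.e. output.collect(key, NullWritable.get()). Here is a stdlib-only sketch (the Value, NullValue, and NullValueDemo names are invented for illustration, standing in for Hadoop's Writable and NullWritable) of why the null object survives where a bare null does not:

```java
import java.io.*;

// Sketch only: these class names are invented to mimic Hadoop's
// Writable/NullWritable contract without Hadoop on the classpath.
class NullValueDemo {
    // Mimics SequenceFile.Writer.append, which dereferences the value to serialize it.
    static void append(Value value, DataOutput out) {
        try {
            value.write(out); // a bare null value throws NullPointerException here
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static boolean appendThrowsOnNull() {
        try {
            append(null, new DataOutputStream(new ByteArrayOutputStream()));
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("bare null throws NPE: " + appendThrowsOnNull());
        append(NullValue.get(), new DataOutputStream(new ByteArrayOutputStream()));
        System.out.println("NullValue.get() appends cleanly");
    }
}

// Stand-in for the Writable value contract.
interface Value {
    void write(DataOutput out) throws IOException;
}

// Analogue of NullWritable: a singleton whose serialization is zero bytes.
final class NullValue implements Value {
    private static final NullValue INSTANCE = new NullValue();
    private NullValue() {}
    static NullValue get() { return INSTANCE; }
    public void write(DataOutput out) { /* writes nothing */ }
}
```

The null object carries "no value" through the same code path that a real value takes, so the writer never has to special-case null.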
On Dec 16, 2008, at 8:58 AM, David Coe wrote:
Thank you for your swift response. I am getting this error when I try
your suggestion:
java.lang.NullPointerException
at
org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:987)
Owen O'Malley wrote:
It is interesting, but it would be more interesting to track the authors
of the patch rather than the committer. The two are rarely the same.
Indeed. There was a period of over a year where I wrote hardly anything
but committed almost everything. So I am vastly overrepr
On Dec 16, 2008, at 12:36 AM, Stefan Groschupf wrote:
It is a neat way of visualizing who is behind the Hadoop source code
and how the project code base grew over the years.
On Dec 16, 2008, at 8:28 AM, David Coe wrote:
Is there a way to output the write() instead?
Use SequenceFileOutputFormat. It writes binary files using write(). The reverse is SequenceFileInputFormat, which reads the sequence files back using readFields().
-- Owen
I've defined a custom key class that implements Writable. I've noticed
that between the mapper and reducer the write and readFields methods are
actually used. However, when I use an identity reducer, toString is
called when I do something like output.collect(myClass, null).
Is there a way to outp
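The write/readFields pair discussed above is just symmetric serialization over java.io.DataOutput and DataInput, the same stdlib interfaces Hadoop's Writable uses. A minimal stdlib-only sketch of that round trip (MyKey and its fields are invented for illustration, not taken from this thread):

```java
import java.io.*;

class WritableRoundTrip {
    // Serializes a key and reads it back: the same round trip that
    // SequenceFileOutputFormat / SequenceFileInputFormat perform.
    static MyKey roundTrip(MyKey key) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            key.write(new DataOutputStream(buf));
            MyKey copy = new MyKey();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        MyKey copy = roundTrip(new MyKey("part-00000", 42));
        System.out.println(copy.name + " " + copy.count);
    }
}

// Hypothetical key class; mirrors Hadoop's Writable contract
// (write/readFields over DataOutput/DataInput) without Hadoop deps.
class MyKey {
    String name;
    int count;

    MyKey() {}
    MyKey(String name, int count) { this.name = name; this.count = count; }

    // Write the fields in a fixed order...
    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(count);
    }

    // ...and read them back in exactly the same order.
    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        count = in.readInt();
    }
}
```

With a text output format it is the key's toString that lands on disk, which is why switching to SequenceFileOutputFormat preserves the binary write path.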
I tried on my Mac; the same thing happened, and the PDF viewer occupied more
than 92% CPU.
On 08-12-16, 11:05 PM, "Brian Bockelman" wrote:
Hey,
Does anyone else check out the code on Mac OS X? I noticed that docs/
cn/hdfs_design.pdf causes the Spotlight daemon, which does the
automatic indexing, to peg at 100% CPU. Also, when I try to open the
file, the PDF viewer is unable to render things in a reasonable amount
of time (the
I've opened https://issues.apache.org/jira/browse/HADOOP-4881 and
attached a patch to fix this.
Tom
On Fri, Dec 12, 2008 at 2:18 AM, Tarandeep Singh wrote:
> The example is just to illustrate how one should implement one's own
> WritableComparable class and in the compareTo method, it is just sho
Hello,
Your mail messages are not going through to the list.
Brian
On Dec 16, 2008, at 3:52 AM, Sandeep Dhawan, Noida wrote:
DISCLAIMER:
---
The contents of this e-mail and any attachment(s) are confidential and intended
for the named recipient(s) only.
It shall not attach any liability on the originator
Hi friends of Hadoop,
we at ScaleUnlimited.com put together a video that visualizes the
code commit history of the Hadoop core project.
It is a neat way of visualizing who is behind the Hadoop source code
and how the project code base grew over the years.
Check it out here:
http://www.scal