My combiner and reducer are doing the same thing.
Issue:
My job is running fine when I don't use a combiner.
But when I run with the combiner, I am getting an EOFException.
java.io.EOFException
at java.io.DataInputStream.readUnsignedShort(Unknown Source)
at java.io.DataInputStream.readUTF(Unknown Source)
at java.io.DataInputStream.readUTF(Unknown Source)
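For illustration, a minimal sketch of the driver setup being described
(MyMapper and MyReducer are hypothetical class names); a reducer reused as
a combiner must consume and emit exactly the map output types:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch: reusing the reducer as a combiner, as described above.
// This is safe only when the reduce function's input and output
// key/value types are identical and the operation is associative
// and commutative.
public class CombinerDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "combiner-demo");
        job.setJarByClass(CombinerDemo.class);
        job.setMapperClass(MyMapper.class);        // hypothetical mapper
        job.setCombinerClass(MyReducer.class);     // combiner == reducer
        job.setReducerClass(MyReducer.class);      // hypothetical reducer
        job.setMapOutputKeyClass(Text.class);      // the combiner must read
        job.setMapOutputValueClass(IntWritable.class); // and write these types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}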
Subject: Re: MapReduce combiner issue : EOFException while reading Value
Hi Guys,
Can anyone please provide any suggestions on this?
I am still facing this issue when running with the combiner.
Please give your valuable inputs.
Regards,
Arpit Wanchoo | Sr. Software Engineer
Guavus Network Systems
I have got a problem which I am unable to solve. I need to apply a filter
for the _SUCCESS file while using the FileSystem.listStatus method. Can
someone please guide me on how to filter out _SUCCESS files. Thanks
On Tue, May 29, 2012 at 1:42 PM, waqas latif waqas...@gmail.com wrote:
So my question is: do hadoop 0.20 and 1.0.3 differ in their support of
writing or reading SequenceFiles?
When your code does a listStatus, you can pass a PathFilter object
along that can do this filtering for you. See
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html#listStatus(org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.PathFilter)
for the API javadocs on this method.
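Something along these lines should work (a sketch; the class name and
output path are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Lists a job's output directory while skipping _SUCCESS (and any
// other side files whose names start with "_" or ".").
public class OutputLister {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path outputDir = new Path(args[0]); // e.g. the job's output dir

        FileStatus[] files = fs.listStatus(outputDir, new PathFilter() {
            @Override
            public boolean accept(Path path) {
                String name = path.getName();
                return !name.startsWith("_") && !name.startsWith(".");
            }
        });

        for (FileStatus status : files) {
            System.out.println(status.getPath());
        }
    }
}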
Thanks Harsh. I got it running.
On Wed, May 30, 2012 at 5:58 PM, Harsh J ha...@cloudera.com wrote:
When your code does a listStatus, you can pass a PathFilter object
along that can do this filtering for you. See
So my question is: do hadoop 0.20 and 1.0.3 differ in their support of
writing or reading SequenceFiles? The same code works fine with hadoop 0.20,
but the problem occurs when running it under hadoop 1.0.3.
On Sun, May 27, 2012 at 6:15 PM, waqas latif waqas...@gmail.com wrote:
But the thing is, it works with hadoop 0.20.
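For anyone trying to reproduce this, a minimal SequenceFile round trip
(the path and types are illustrative) that can be compiled against both
versions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;

// Writes a handful of key/value pairs and reads them back; if two
// Hadoop versions handle SequenceFiles differently for these types,
// the read side should expose it.
public class SeqFileRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/roundtrip.seq"); // illustrative path

        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, file, IntWritable.class, DoubleWritable.class);
        try {
            for (int i = 0; i < 9; i++) { // e.g. the cells of a 3x3 matrix
                writer.append(new IntWritable(i), new DoubleWritable(i));
            }
        } finally {
            writer.close();
        }

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
        try {
            IntWritable key = new IntWritable();
            DoubleWritable val = new DoubleWritable();
            while (reader.next(key, val)) {
                System.out.println(key + "\t" + val);
            }
        } finally {
            reader.close();
        }
    }
}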
I have seen this issue with large file writes using the SequenceFile writer.
I have not found the same issue when testing with writing fairly small
files (< 1GB).
On Fri, May 25, 2012 at 10:33 PM, Kasi Subrahmanyam
kasisubbu...@gmail.com wrote:
Hi,
If you are using a custom writable object while passing data between the
mapper and reducer, check that its serialization is consistent.
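The usual pitfall with a custom Writable is an asymmetric write/readFields
pair; when the combiner deserializes map output, a mismatch surfaces as
exactly this kind of EOFException. A sketch of a consistent implementation
(the class is illustrative):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Every field written by write() is read back by readFields() in the
// same order and with the matching read method (writeUTF/readUTF,
// writeLong/readLong, ...).
public class MyMetric implements Writable {
    private long count;
    private String label;

    public MyMetric() {} // Hadoop needs the no-arg constructor

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(count);
        out.writeUTF(label); // paired with readUTF below
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        count = in.readLong();
        label = in.readUTF(); // same order as write()
    }
}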
But the thing is, it works with hadoop 0.20, even with 100x100 (and even
bigger) matrices, but when it comes to hadoop 1.0.3 there is a problem
even with a 3x3 matrix.
On Sun, May 27, 2012 at 12:00 PM, Prashant Kommireddi
prash1...@gmail.com wrote:
I have seen this issue with large file writes using the SequenceFile writer.
Regards, waqas.
I think that you have to ask the MapR experts.
On 05/25/2012 05:42 AM, waqas latif wrote:
Hi Experts,
I am fairly new to hadoop MapR and I was trying to run a matrix
multiplication example presented by Mr. Norstadt under the following link
I have been running several MapReduce jobs on some input text files. They
were working fine earlier and then I suddenly started getting EOFException
every time. Even the jobs that ran fine before (on the exact same input
files) aren't running now. I am a bit perplexed as to what is causing this
error. Here is the error:
12/04/30 12:55:55 INFO mapred.JobClient: Task Id :
attempt_201202240659_6328_m_01_1
On 03/15/2012 03:06 PM, Mohit Anchlia wrote:
When I start a job to read data from HDFS I start getting these errors.
Does anyone know what this means and how to resolve it?
2012-03-15 10:41:31,402 [Thread-5] INFO org.apache.hadoop.hdfs.DFSClient -
Exception in createBlockOutputStream 164.28.62.204:50010 java.io.EOFException
2012-03-15 10:41:31,402 [Thread-5] INFO org.apache.hadoop.hdfs.DFSClient -
Abandoning block blk_-6402969611996946639_11837
2012-03-15 10:41:31,403 [Thread-5] INFO org.apache.hadoop.hdfs.DFSClient -
Excluding datanode 164.28.62.204:50010
, Hadoop will throw an EOFException, and I can't see the
reason. Below is the stack trace:
10/12/08 23:04:04 INFO mapred.JobClient: Running job:
job_201012081252_0016
10/12/08 23:04:05 INFO mapred.JobClient:  map 0% reduce 0%
10/12/08 23:04:16 INFO mapred.JobClient: Task Id
Yes, you're likely to see an error in the DN log. Do you see anything
about max number of xceivers?
-Todd
On Thu, Feb 4, 2010 at 11:42 PM, Meng Mao meng...@gmail.com wrote:
not sure what else I could be checking to see where the problem lies. Should
I be looking in the datanode logs? I looked briefly in there and didn't see
anything from around the time exceptions started getting reported.
ack, after looking at the logs again, there are definitely xcievers errors.
It's set to 256!
I had thought I had cleared this as a possible cause, but I guess I was wrong.
Gonna retest right away.
Thanks!
On Fri, Feb 5, 2010 at 11:05 AM, Todd Lipcon t...@cloudera.com wrote:
Yes, you're likely to see an error in the DN log.
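For reference, the transceiver limit is raised in hdfs-site.xml on the
datanodes (4096 is a commonly suggested value; tune it for your cluster,
and note the property name really is spelled "xcievers"):

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>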
I wrote a hadoop job that checks for ulimits across the nodes, and every
node is reporting:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals
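For reference, one way such a per-node check can be implemented is a
map-only job whose tasks shell out to ulimit; a sketch, with all names
illustrative:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.InetAddress;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Each map task runs `ulimit -a` on whatever node it lands on and
// emits hostname-tagged lines, so per-node limits end up in the job
// output. One task per node is not guaranteed; run with enough
// splits to cover the grid.
public class UlimitMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
        String host = InetAddress.getLocalHost().getHostName();
        Process p = Runtime.getRuntime()
                .exec(new String[] {"bash", "-c", "ulimit -a"});
        BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {
            ctx.write(new Text(host + "\t" + line), NullWritable.get());
        }
        p.waitFor();
    }
}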
not sure what else I could be checking to see where the problem lies. Should
I be looking in the datanode logs? I looked briefly in there and didn't see
anything from around the time exceptions started getting reported.
Should I check lsof during the job execution? The number of open threads?
I'm at a loss here.
Also, which is the ulimit that's important: the one for the user who is
running the job, or the hadoop user that owns the Hadoop processes?
On Tue, Feb 2, 2010 at 7:29 PM, Meng Mao meng...@gmail.com wrote:
I've been trying to run a fairly small input file (300MB) on Cloudera
Hadoop 0.20.1. The job I'm using probably writes on the order of over 1000
part-files at once, across the whole grid. The grid has 33 nodes in it. I
get the following exception in the run logs:
10/01/30 17:24:25 INFO