Re: No space left on device

2012-05-27 Thread yingnan.ma
OK, I found it: the JobTracker server's disk is full.
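
For anyone else who hits this: the trace below points at RawLocalFileSystem, i.e. a local disk rather than HDFS, so it is worth checking the local directories (mapred.local.dir, hadoop.log.dir) on the JobTracker host. A minimal sketch of a free-space check; the path is a placeholder for your own configured directory:

import java.io.File;

public class DiskSpaceCheck {
    public static void main(String[] args) {
        // Placeholder path; point it at your mapred.local.dir / log dir.
        File dir = new File("/var/log/hadoop");
        long mb = 1024 * 1024;
        System.out.printf("%s: %d MB usable of %d MB total%n",
                dir, dir.getUsableSpace() / mb, dir.getTotalSpace() / mb);
    }
}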


2012-05-28 



yingnan.ma 



From: yingnan.ma 
Sent: 2012-05-28 13:01:56 
To: common-user 
Cc: 
Subject: No space left on device 
 
Hi,
I encountered the following problem:
 Error - Job initialization failed:
org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:201)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:348)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
    at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1344)
..
So I think that HDFS is full or something similar, but I cannot find a way to 
address the problem. If you have any suggestions, please share them. Thank you.
Best Regards


MapReduce combiner issue : EOFException while reading Value

2012-05-27 Thread Arpit Wanchoo
Hi

I have been trying to set up a MapReduce job with Hadoop 0.20.203.1.

Scenario:
My mapper writes key-value pairs; there are 13 key types in total, each with 
a corresponding value class. For each input record I write all 13 key-value 
pairs to the context.

Also, for one specific key (say K1) I want its mapper output to go to one 
file, and the output for all other keys to go to the rest of the files. 
To do this, I have defined my partitioner as:
public int getPartition(DimensionSet key, MeasureSet value, int numPartitions) {
    // With fewer than two partitions there is nothing to separate out,
    // so just hash across whatever is available.
    if (numPartitions < 2) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
    // K1 (the AT_COutgoing cube) always goes to partition 0 ...
    int cubeId = key.getCubeId();
    if (cubeId == CubeName.AT_COutgoing.ordinal()) {
        return 0;
    }
    // ... and every other key is spread over partitions 1..numPartitions-1.
    return ((key.hashCode() & Integer.MAX_VALUE) % (numPartitions - 1)) + 1;
}
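
For reference, the partitioner is registered on the job roughly like this; CubePartitioner and CubeReducer are placeholder names for my actual classes:

Job job = new Job(conf, "cube-aggregation");
job.setMapOutputKeyClass(DimensionSet.class);
job.setMapOutputValueClass(MeasureSet.class);
job.setPartitionerClass(CubePartitioner.class); // the getPartition above
job.setCombinerClass(CubeReducer.class);        // combiner and reducer share a class
job.setReducerClass(CubeReducer.class);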
My combiner and reducer do the same thing.

Issue:
My job runs fine when I don't use a combiner, but when I run with the 
combiner I get an EOFException:

java.io.EOFException
    at java.io.DataInputStream.readUnsignedShort(Unknown Source)
    at java.io.DataInputStream.readUTF(Unknown Source)
    at java.io.DataInputStream.readUTF(Unknown Source)
    at com.guavus.mapred.common.collection.ValueCollection.readFieldsLong(ValueCollection.java:40)
    at com.guavus.mapred.common.collection.ValueCollection.readFields(ValueCollection.java:21)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
    at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
    at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:116)
    at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:92)
    at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:175)
    at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1420)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1435)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$1800(MapTask.java:852)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1343)


My finding:
On checking and debugging, what I found is that for the particular key-value 
pair (K1, the one I want to route to reducer 0), the combiner reads the key 
successfully, but while trying to read the values it throws an EOFException 
because it finds nothing in the DataInput stream. This happens only when the 
data is large and the combiner runs more than once: the combiner fails to 
read the value for this key on its second run. (I have read that the combiner 
starts once the mapper has written a certain amount of data, even while the 
mapper is still writing to the context.)
In fact, the issue occurs with any key that the partitioner assigns to 
partition 0.

I have verified many times that my mapper writes no null values. The issue 
looks really strange because the combiner is able to read the key but finds 
no value in the data stream.
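
For what it's worth, a classic cause of exactly this pattern (the key deserializes but the value stream hits EOF) is an asymmetry between write() and readFields() in the value class, for example a field that is written conditionally but always read back. A minimal sketch of the symmetric pattern, with hypothetical fields rather than the real ValueCollection:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class MeasureValue implements Writable {
    private long count;      // hypothetical fields, for illustration only
    private String label;

    @Override
    public void write(DataOutput out) throws IOException {
        // Every field is written unconditionally, in a fixed order...
        out.writeLong(count);
        out.writeUTF(label);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // ...and read back in exactly the same order. A branch that skips
        // a write but not the matching read (or vice versa) surfaces as an
        // EOFException when the combiner re-reads spilled records.
        count = in.readLong();
        label = in.readUTF();
    }
}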

Please suggest what the root cause could be, or what I can do to track it 
down.



Regards,
Arpit Wanchoo



Re: EOFException at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)......

2012-05-27 Thread waqas latif
But the thing is, it works with hadoop 0.20, even with 100x100 (and even
bigger) matrices, but when it comes to hadoop 1.0.3 there is a problem
even with a 3x3 matrix.
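
A quick way to narrow this down is to open one of the generated files directly with the 1.0.3 SequenceFile reader and see whether the header itself parses; this is only a sketch (the path comes from the command line, and the constructor used is the Hadoop 1.x API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class SeqFileProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Reader.init() is where the EOFException above is thrown, so if
        // this constructor succeeds, the file header is readable.
        SequenceFile.Reader reader =
                new SequenceFile.Reader(fs, new Path(args[0]), conf);
        System.out.println("key class:   " + reader.getKeyClassName());
        System.out.println("value class: " + reader.getValueClassName());
        reader.close();
    }
}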

On Sun, May 27, 2012 at 12:00 PM, Prashant Kommireddi wrote:

> I have seen this issue with large file writes using the SequenceFile
> writer. I have not found the same issue when testing with fairly small
> files (< 1GB).
>
> > On Fri, May 25, 2012 at 10:33 PM, Kasi Subrahmanyam wrote:
>
> > Hi,
> > If you are using a custom writable object while passing data from the
> > mapper to the reducer, make sure that readFields and write handle the
> > same number of variables. It might be possible that you wrote data to a
> > file using the custom writable but later modified it (like adding a new
> > attribute to the writable) which the old data doesn't have.
> >
> > It might be the cause; please check once.
> >
> > On Friday, May 25, 2012, waqas latif wrote:
> >
> > > Hi Experts,
> > >
> > > I am fairly new to Hadoop MapReduce and I was trying to run the matrix
> > > multiplication example presented by Mr. Norstadt under the following
> > > link: http://www.norstad.org/matrix-multiply/index.html. I can run it
> > > successfully with hadoop 0.20.2, but when I run it with hadoop 1.0.3 I
> > > get the following error. Is it a problem with my hadoop configuration,
> > > or is it a compatibility problem in the code, which the author wrote
> > > for hadoop 0.20? Also, please guide me on how I can fix this error in
> > > either case. Here is the error I am getting:
> > >
> > > Exception in thread "main" java.io.EOFException
> > >    at java.io.DataInputStream.readFully(DataInputStream.java:180)
> > >    at java.io.DataInputStream.readFully(DataInputStream.java:152)
> > >    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
> > >    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
> > >    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
> > >    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
> > >    at TestMatrixMultiply.fillMatrix(TestMatrixMultiply.java:60)
> > >    at TestMatrixMultiply.readMatrix(TestMatrixMultiply.java:87)
> > >    at TestMatrixMultiply.checkAnswer(TestMatrixMultiply.java:112)
> > >    at TestMatrixMultiply.runOneTest(TestMatrixMultiply.java:150)
> > >    at TestMatrixMultiply.testRandom(TestMatrixMultiply.java:278)
> > >    at TestMatrixMultiply.main(TestMatrixMultiply.java:308)
> > >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >    at java.lang.reflect.Method.invoke(Method.java:597)
> > >    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> > > Thanks in advance
> > >
> > > Regards,
> > > waqas
> > >
> >
>


Re: EOFException at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)......

2012-05-27 Thread Prashant Kommireddi
I have seen this issue with large file writes using the SequenceFile
writer. I have not found the same issue when testing with fairly small
files (< 1GB).
