Hi all,
We are facing issues while porting some logs to HDFS. We do this with a simple Java program that reads the log file and writes it to HDFS using an OutputStream. It was working perfectly fine, but recently we have been getting the error messages below once in a while, and when we try t
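(For reference, a minimal sketch of the kind of copy loop described above, using the standard FileSystem API; the paths are placeholders, not the poster's actual code:)

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogPorter {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder paths -- the real source/target are not in the post.
        InputStream in = new FileInputStream("/var/log/app.log");
        OutputStream out = fs.create(new Path("/logs/app.log"));

        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
        }
        in.close();
        out.close(); // an unclosed stream is a common cause of partial files on HDFS
    }
}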
Hi all,
We have a cluster of 50 machines and we had to restart Hadoop for some reason. After the restart the JobTracker is up, and I can see its UI showing everything perfectly fine, but the DFS UI is stuck. When I look into the NameNode logs, it says it reached 0.990 and safe mode would be turned off i
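(A minimal sketch of checking safe mode programmatically, assuming the DistributedFileSystem API of this Hadoop generation; the same information is available from "hadoop dfsadmin -safemode get":)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class SafeModeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

        // SAFEMODE_GET only reports the current state; it does not change it.
        boolean inSafeMode = dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
        System.out.println("NameNode in safe mode: " + inSafeMode);

        // Forcing safe mode off is possible, but only sensible once the
        // reason it is stuck (e.g. missing block replicas) is understood:
        // dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
    }
}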
Hi all,
I am not able to subscribe to the Pig mailing lists (both dev and user). Here is the error message that I got when I tried to confirm the subscription:
Your message did not reach some or all of the intended recipients.
Subject: pig-dev-sc.1239701669.ohbefaiphgajdgbcjmjg-pallavi.
string is represented in 5 bytes) + 5 (bytes for C1:[]) + size required to store that the object is of type Text
Thanks
Pallavi
Devaraj Das wrote:
> On 9/17/08 6:06 PM, "Pallavi Palleti" <[EMAIL PROTECTED]> wrote:
>> Hi all,
Hi all,
I am getting an OutOfMemoryError, as shown below, when I run map-reduce over a huge amount of data:
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:52)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutp
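(A minimal sketch of the heap-related settings involved, assuming the old JobConf API; the values are illustrative, not a recommendation:)

import org.apache.hadoop.mapred.JobConf;

public class HeapTuning {
    public static void main(String[] args) {
        JobConf conf = new JobConf(HeapTuning.class);
        // Heap for each child map/reduce JVM; the default of -Xmx200m is
        // easy to exhaust on large records.
        conf.set("mapred.child.java.opts", "-Xmx1024m");
        // The map-side sort buffer, whose serialization path the trace
        // above points at, must fit well inside that heap.
        conf.setInt("io.sort.mb", 100);
        // ... set mapper/reducer classes and submit as usual.
    }
}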
Hi,
I am running a map-reduce job on huge data on a 40-machine Hadoop cluster (Quad Core, 16GB RAM per machine). Since my map tasks run comfortably within a 1GB heap, I set the map and reduce tasks' heap size to 1GB, and the total number of tasks per node is set to 9 (5 map tasks and 4 reduce tasks). It took huge a
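(For reference, a hadoop-site.xml sketch of the setup described above, using the pre-0.21 property names; the values mirror the post:)

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>5</value>   <!-- 5 map slots per node -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>   <!-- 4 reduce slots per node -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>   <!-- 1GB heap per task: 9 x 1GB fits in 16GB RAM -->
</property>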
Hi,
I have a dependency on the completion of the map tasks in my reducer's configure() method, where I am building a dictionary by collecting it in pieces from the map outputs. I will be using this dictionary in the reduce() method. Can someone please help me with whether I can put a constraint on when the reducers star
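(One knob that speaks to exactly this, assuming a Hadoop version recent enough to have it: mapred.reduce.slowstart.completed.maps sets what fraction of maps must finish before reducers are scheduled, so 1.0 delays reducer launch, and hence configure(), until every map is done. A minimal sketch:)

import org.apache.hadoop.mapred.JobConf;

public class DelayedReducers {
    public static void main(String[] args) {
        JobConf conf = new JobConf(DelayedReducers.class);
        // Launch reducers only after 100% of the maps have completed.
        conf.setFloat("mapred.reduce.slowstart.completed.maps", 1.0f);
        // ... configure mapper/reducer and submit as usual.
    }
}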