Does anyone know in what situation we need a CN rather than a BN?
Shen
You may want to check the importCheckpoint option too:
http://hadoop.apache.org/hdfs/docs/current/hdfs_user_guide.html#Import+Checkpoint
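For reference, the flow behind that link is a one-flag namenode restart; a minimal sketch, assuming fs.checkpoint.dir in your config already points at a copy of the latest checkpoint (adjust paths to your layout):

```shell
# Start the namenode with -importCheckpoint so it loads its image
# from the directory configured in fs.checkpoint.dir instead of
# from dfs.name.dir.
bin/hadoop namenode -importCheckpoint
```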
Boris.
On 9/17/10 1:36 AM, "ChingShen" wrote:
I think I got it!
Step by step as follows:
1. put two files into hdfs.
2. kill -9 ${namenode_pid}.
3. delete t
Hi,
Thanks a lot! I just removed the elements in the cache and the incomplete
blocks, and it worked.
Regards,
Krishnakumar.
On Sep 21, 2010, at 12:54 PM, Allen Wittenauer wrote:
>
> It is a "not enough information" error.
>
> Check the tasks, jobtracker, tasktracker, datanode, and namenode logs.
It is a "not enough information" error.
Check the tasks, jobtracker, tasktracker, datanode, and namenode logs.
On Sep 21, 2010, at 12:30 PM, C.V.Krishnakumar wrote:
>
> Hi,
> Just wanted to know if anyone has any idea about this one? This happens every
> time I run a job.
> Is this issue hardware related?
Hi,
Just wanted to know if anyone has any idea about this one? This happens every
time I run a job.
Is this issue hardware related?
Thanks in advance,
Krishnakumar.
Begin forwarded message:
> From: "C.V.Krishnakumar"
> Date: September 17, 2010 1:32:49 PM PDT
> To: common-user@hadoop.apache.
Hi,
I'm writing a fairly simple client application which basically concatenates
the output files of a MapReduce job (Hadoop 0.20.2). The code is as follows:
DFSClient client = new DFSClient(new Configuration());
FileStatus[] listing = client.listPaths("/myoutputdir");
int read = 0;
byte[] buffer =
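For what it's worth, the concatenation loop itself can be sketched with plain java.io as a local-filesystem stand-in for the DFS streams above; the class name, the "part-" prefix, and the buffer size here are all illustrative, not the poster's actual code:

```java
import java.io.*;
import java.util.*;

// Local-filesystem sketch of concatenating MapReduce output files:
// list the output directory, sort the part files by name so they are
// appended in order, and stream each one into a single merged file.
public class ConcatParts {
    public static void concat(File outputDir, File merged) throws IOException {
        File[] parts = outputDir.listFiles((dir, name) -> name.startsWith("part-"));
        if (parts == null) {
            return; // not a directory, or an I/O error listing it
        }
        Arrays.sort(parts); // part-00000, part-00001, ... lexicographic order
        try (OutputStream out = new FileOutputStream(merged)) {
            byte[] buffer = new byte[4096];
            for (File part : parts) {
                try (InputStream in = new FileInputStream(part)) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }
    }
}
```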
Thanks, was going mad with this. It's working properly with 0.20.2
Once the patch is fully done I will apply it so I can keep using the
MarkableIterator, as it simplifies many of my MapReduce jobs.
--
View this message in context:
http://lucene.472066.n3.nabble.com/can-not-report-progress-from-r
I am not sure, but try 2 things:
1. Give your proper IP address in the value,
e.g. 123.154.0.122:9001
2. Change the port to 9001, and your /etc/hosts file must have these
entries for the master and slaves:
123.154.0.122 hostname
123.154.0.111 hostname
123.154.0.112 hostname
David Ro
On 09/21/2010 03:17 AM, Jing Tie wrote:
I am still suffering from the problem... Has anyone encountered it
before? Or any suggestions?
Many thanks in advance!
Jing
On Fri, Sep 17, 2010 at 5:19 PM, Jing Tie wrote:
Dear all,
I am having this exception when starting jobtracker, and I checked by
netstat that the port is not in use
This is a bug in 0.21. MAPREDUCE-1905 (
https://issues.apache.org/jira/browse/MAPREDUCE-1905) is open for this.
On 9/21/10 4:29 PM, "Marc Sturlese" wrote:
I am using hadoop 0.21
I have a reducer task which takes more time to finish than
mapreduce.task.timeout, so it's being killed:
Task att
Hi all ,
I am working on a Sort function which takes in records of 40 bytes (8-byte
LongWritable key and 32-byte BytesWritable value), sorts them,
and outputs them. For this I have got a modified Terasort working (thanks to
Jeff!). Since the long long type in C and the Java long are
In case it can help someone, the problem was that I was not calling libjars
properly to add all the lib jars. I sorted it out by putting all the lib jars
inside a lib folder in the main jar (the one containing the mapreduce app).
I am using hadoop 0.21
I have a reducer task which takes more time to finish than
mapreduce.task.timeout, so it's being killed:
Task attempt_201009211103_0001_r_00_0 failed to report status for 602
seconds. Killing!
I have implemented a thread which is supposed to send progress and update the
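The background-progress idea can be sketched with a plain daemon thread. This is a self-contained stand-in, not the poster's code: in a real reducer the Runnable passed in would call context.progress() (or reporter.progress()), and the class name and interval here are illustrative.

```java
// A daemon thread that periodically invokes a progress callback while
// the main work runs, so the framework keeps seeing the task as alive.
public class ProgressPinger implements AutoCloseable {
    private final Thread pinger;
    private volatile boolean running = true;

    public ProgressPinger(Runnable progressCallback, long intervalMillis) {
        pinger = new Thread(() -> {
            while (running) {
                progressCallback.run(); // e.g. context.progress() in a reducer
                try {
                    Thread.sleep(intervalMillis);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return; // close() interrupts us to stop promptly
                }
            }
        });
        pinger.setDaemon(true); // never blocks JVM exit
        pinger.start();
    }

    @Override
    public void close() {
        running = false;
        pinger.interrupt();
    }
}
```

The try-with-resources shape (AutoCloseable) makes it hard to leak the thread: the pinger stops as soon as the reduce work in the try block finishes or throws.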
I am still suffering from the problem... Has anyone encountered it
before? Or any suggestions?
Many thanks in advance!
Jing
On Fri, Sep 17, 2010 at 5:19 PM, Jing Tie wrote:
> Dear all,
>
> I am having this exception when starting jobtracker, and I checked by
> netstat that the port is not in use