Hi Keren,
This looks like a bug. Can you file a JIRA ticket for it, giving more
details about the scenario?
Thanks and Regards,
Ravi teja
From: Keren Ouaknine [ker...@gmail.com]
Sent: 29 November 2011 08:38:53
To: mapreduce-user
Subject: negative map
Hi guys!
I see that Hadoop captures only map runtime and reduce runtime, not the
Map task I/O time or Reduce task I/O time. Am I right?
By I/O time for a map task I mean the time taken by the map task to read the
input chunk allocated to it for processing, and the time for it to
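For what it's worth, the built-in counters (e.g. HDFS_BYTES_READ) report bytes read, not time; to get read *time* you would have to wrap timing around the record-reading loop yourself and publish it via a custom counter. A minimal plain-Java sketch of that measurement (no Hadoop dependency; in a real job the timing would go around the RecordReader calls instead):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadTimer {
    // Times one full pass over the input, record by record -- the same
    // measurement one would take around a RecordReader to approximate
    // map-side I/O time, separately from the map-function runtime.
    public static long timeReadNanos(Path input) throws IOException {
        long start = System.nanoTime();
        try (BufferedReader r = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
            while (r.readLine() != null) {
                // consume records; real map logic would be timed separately
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("chunk", ".txt");
        Files.write(tmp, "r1\nr2\nr3\n".getBytes(StandardCharsets.UTF_8));
        System.out.println("read time (ns): " + timeReadNanos(tmp));
        Files.delete(tmp);
    }
}
```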
Update on this: I've shut down all the servers multiple times, cleared the
data directories, and reformatted the namenode. After restarting, the
result is the same: 100% CPU and millions of these calls to isBPServiceAlive.
2011/11/29 Stephen Boesch java...@gmail.com
I am just trying to get off
It looks like you are hitting HDFS-2553.
The cause might be that you cleared the data directories directly without
restarting the DN. The workaround would be to restart the DNs.
Regards,
Uma
From: Stephen Boesch [java...@gmail.com]
Sent: Tuesday, November 29, 2011 8:53 PM
To:
Hi Uma,
I mentioned that I have restarted the datanode *many* times, and in fact
the entire cluster more than ten times.
2011/11/29 Uma Maheswara Rao G mahesw...@huawei.com
It looks like you are hitting HDFS-2553.
The cause might be that you cleared the data directories directly without
restarting the DN.
I verified the DN was down via both jps and java. Anyway, top was enough
to see it, since as mentioned the DN was consuming 100% of one CPU when
running.
2011/11/29 Stephen Boesch java...@gmail.com
Hi Uma,
I mentioned that I have restarted the datanode *many* times, and in
fact the entire
Hi,
I have a computation to run over a large input: a single large sequence file.
Ideally I would like to set a specific number of mappers and designate each to
process a specific range of records in the input sequence file. For
various reasons, the record ranges that I would want to pass
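The message is cut off above, but assuming the goal is one mapper per explicit record range: Hadoop assigns work to mappers via InputFormat.getSplits(), so you would subclass the sequence-file input format and return one split per desired range. The range bookkeeping itself would look something like this sketch (plain Java, no Hadoop dependency; the class and method names are illustrative, not Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSplitter {
    // Partitions [0, totalRecords) at the given boundaries, producing one
    // [start, end) pair per mapper -- the shape of what a custom
    // InputFormat.getSplits() would return, one split per record range.
    public static List<long[]> splits(long totalRecords, long[] boundaries) {
        List<long[]> out = new ArrayList<>();
        long start = 0;
        for (long b : boundaries) {
            out.add(new long[]{start, b});
            start = b;
        }
        out.add(new long[]{start, totalRecords});  // final range to end of input
        return out;
    }

    public static void main(String[] args) {
        // 100 records split at 30 and 60 -> three ranges, hence three mappers
        for (long[] r : splits(100, new long[]{30, 60})) {
            System.out.println(r[0] + " .. " + r[1]);
        }
    }
}
```

Each mapper would then seek to its range's start record and stop at the end boundary, so the number of mappers and their record ranges are fully under your control rather than derived from block boundaries.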