Thank you for your reply.
See my responses inline.
On Thu, Jul 16, 2009 at 4:23 AM, Jothi Padmanabhan joth...@yahoo-inc.com wrote:
See some responses inline.
My idea is that on each node there will be a special DataAssign thread
which will take care of assigning data to each map thread.
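That design could be sketched roughly as follows. This is only my reading of the proposal, not anything in Hadoop itself; the DataAssign class, the split strings, and the queue wiring are all assumptions made for illustration:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a per-node "DataAssign" thread that hands pending
// data splits to map worker threads through a shared blocking queue.
// None of these names are Hadoop APIs.
class DataAssign extends Thread {
    private final List<String> pending;          // splits local to this node
    private final BlockingQueue<String> outbox;  // map threads take() from here

    DataAssign(List<String> pending, BlockingQueue<String> outbox) {
        this.pending = pending;
        this.outbox = outbox;
    }

    @Override
    public void run() {
        for (String split : pending) {
            try {
                outbox.put(split);   // blocks if all map threads are busy
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

Each map thread would then simply take() from the queue in a loop until the assigner signals it is done.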
Thanks for your reply.
I am working on the namenode master machine, and SSH to all the other
machines. Yes, I can ping the master node from the slave node, it works
fine.
And I don't know how to open task web UI on the slave. I can open the
namenode web UI from the master node, it said : There
It seems that your datanode is dead. My suggestion is to reformat the
namenode and restart the cluster.
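For reference, on a 0.19-era install that sequence usually looks roughly like this, run from the Hadoop install directory on the master. Be aware that reformatting erases everything in HDFS, so only do it on a cluster whose data you can lose:

```shell
# Stop all daemons first
bin/stop-all.sh

# Reformat the namenode (this DESTROYS existing HDFS data)
bin/hadoop namenode -format

# Restart the cluster, then check that the datanodes registered
bin/start-all.sh
bin/hadoop dfsadmin -report
```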
2009/7/16 Boyu Zhang boyuzhan...@gmail.com
Thanks for your reply.
I am working on the namenode master machine, and SSH to all the other
machines. Yes, I can ping the master
Just reporting back that after patching my cluster (base install of 0.19.1
with the single patch listed below) and using it heavily for more than a
day, the patch seems to have done the trick. Every map still has all slots
available.
For anyone who has only used the released versions of
Hi
I am relatively new to using Hadoop. After installing Hadoop on 3 machines,
I tried running the word count example on one of the machines, running as a
single node only. However, when I try to run the word count example using the
following command on the terminal:
had...@user5:~$
Make sure every machine is able to talk to every other one, especially
if you use hostnames defined in /etc/hosts on the master.
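Concretely, a common failure is a master whose /etc/hosts maps its own hostname to 127.0.0.1 while the slaves cannot resolve it at all. A consistent setup (the IPs and hostnames here are made up for illustration) looks like:

```
# /etc/hosts -- identical entries on every node
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
```

Then verify with ping from each machine in both directions (master to each slave, and each slave back to master) before starting the daemons.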
J-D
On Thu, Jul 16, 2009 at 1:04 PM, Pooja Dave davepo...@gmail.com wrote:
Hi
I am relatively new to using hadoop. After installing hadoop on 3 machines
i tried
Hi Raakhi,
This might be one possibility.
Are you using hadoop-0.19.1 or hadoop-0.19.0? Give the jar with the correct
version number, or simply use hadoop-*-index.jar.
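For example, something along these lines; the exact jar name and arguments depend on your install, so treat this as a sketch:

```shell
# Match the jar name to your installed version, e.g. for 0.19.1:
bin/hadoop jar hadoop-0.19.1-index.jar <args>

# Or let the shell glob pick up whichever version is present:
bin/hadoop jar hadoop-*-index.jar <args>
```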
Hope that works!!
Pankil
On Wed, Jul 15, 2009 at 2:04 AM, Rakhi Khatwani rakhi.khatw...@gmail.com wrote:
Hi Bhushan,
The next Hadoop User Group DC meeting is scheduled. You can RSVP at
http://www.meetup.com/Hadoop-DC/calendar/10796121/
When:
Thursday, July 23rd 6:30 pm
Where:
Univ of Maryland College Park
Computer Science Instructional Center (Building #406)
Paint Branch Drive
College Park, MD 20742
Hi Asif,
Just install the Hadoop package onto the external node to use it as a
client. On that node, you set your fs.default.name parameter to point to the
cluster, but you don't start any daemons locally, nor do you add that node
to the slaves file.
Then just do hadoop fs -put localfile
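In 0.19 terms, that amounts to a hadoop-site.xml on the client node that only names the cluster. The host and port values below are placeholders for your actual namenode and jobtracker:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Point the client at the cluster's namenode -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
  <!-- Only needed if you also submit jobs from this node -->
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:9001</value>
  </property>
</configuration>
```

With that in place, hadoop fs and hadoop jar commands run on the client go against the remote cluster, even though no daemons run locally.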