You can use the hadoop-daemon.sh script provided in the bin folder. The steps are:

On the new machine to be added,
1.) ensure the Hadoop config is pointing to the right namenode.
2.) run bin/hadoop-daemon.sh start datanode

This should add the datanode without needing a restart of the complete cluster.
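As a sketch of step 1, the new node's conf/hadoop-site.xml just needs to name the same namenode as the rest of the cluster; the host and port below are placeholders, not values from this thread:

```xml
<!-- conf/hadoop-site.xml on the new node; host and port are placeholders -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
</configuration>
```

With that in place, running bin/hadoop-daemon.sh start datanode on the new node registers it with the running namenode.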
Hello,

The report shows your dfs is not started yet. Sometimes it may take a minute or two to start dfs on a small cluster. Did you wait for some time for dfs to start and leave safe mode?

- Prasad.
On Wednesday 15 October 2008 01:57:44 pm ZhiHong Fu wrote:
Hello:
I have installed ...
Actually, no.

As you said, I understand that dfs -put breaks the data into blocks and then copies them to the datanodes, but scp does not break the data into blocks; it just copies the data to the namenode.
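As a rough illustration of why a put can take longer than a plain scp: each DFS block is written through a pipeline of datanodes, one per replica, so every byte crosses the network once per replica. The block size and replication factor below are the common Hadoop defaults of that era, assumed here purely for illustration:

```python
# Back-of-the-envelope model of network traffic for scp vs. dfs -put.
# BLOCK_SIZE and REPLICATION are assumptions (the usual Hadoop defaults
# of the time); real timings also depend on checksumming, pipelining,
# and per-block setup overhead.

BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB per DFS block (assumed)
REPLICATION = 3                 # replicas of each block (assumed)

def scp_traffic(file_size):
    """scp streams the file once to a single machine."""
    return file_size

def dfs_put_traffic(file_size):
    """dfs -put splits the file into blocks and writes each block
    through a replication pipeline, so each byte is sent
    REPLICATION times in total across the cluster."""
    num_blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    return num_blocks, file_size * REPLICATION

one_gb = 1024 ** 3
print(scp_traffic(one_gb))    # 1073741824 bytes over the wire
blocks, total = dfs_put_traffic(one_gb)
print(blocks, total)          # 16 blocks, 3221225472 bytes total
```

So even with all nodes on one switch, a put moves roughly replication-factor times the data that a single scp does, which is consistent with scp to one machine looking faster.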
2008/9/17, Prasad Pingali [EMAIL PROTECTED]:

Hello,
I observe that scp of data to the namenode is faster than actually putting it into dfs (all nodes coming from the same switch and having the same ethernet cards, homogeneous nodes). I would expect dfs to be at least as fast as copying data to the namenode from a single machine, if not faster?
thanks and regards,
Prasad Pingali,
IIIT Hyderabad.
(Sorry for the bad formatting below.)
thanks,
Prasad Pingali,
IIIT Hyderabad.
Task                         Complete  Status         Start Time            Finish Time  Errors  Counters
task_200809151235_0026_r_00  90.91%    reduce reduce  15-Sep-2008 18:40:06                       0
on disk got corrupted (org.apache.hadoop.fs.FSError:
java.io.IOException: Input/output error); could you check the disks?
Arun
- Prasad Pingali.
IIIT, Hyderabad.
2008-09-11 06:31:19,837 INFO org.apache.hadoop.mapred.ReduceTask:
attempt_200809101353_0021_r_04_0: Got 1 new map