I never configure the ssh feature.
Not for running on a single node and not for a full-size cluster.
I simply start all the required daemons (name/data/job/task) and configure
the ports on which each one can be reached.
Niels Basjes
On May 16, 2013 4:55 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Without passwordless ssh you can start it in pseudo-distributed mode.
Instead of start-all.sh, start-dfs.sh, etc., use
hadoop-daemon.sh start/stop daemon-name
eg: for starting jobtracker
hadoop-daemon.sh start jobtracker
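Put together, starting every daemon of a Hadoop 1.x pseudo-distributed setup
could look like the sketch below. It is untested against a live cluster;
HADOOP_HOME and the classic MRv1 daemon names (namenode, datanode, jobtracker,
tasktracker) are assumptions that should match a Hadoop 1.x install:

```shell
#!/bin/sh
# Start each Hadoop 1.x daemon individually -- no SSH involved.
# HADOOP_HOME is an assumption; point it at your own installation.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}

for daemon in namenode datanode jobtracker tasktracker; do
  echo "starting $daemon"
  "$HADOOP_HOME/bin/hadoop-daemon.sh" start "$daemon" \
    || echo "could not start $daemon (check HADOOP_HOME)"
done
```

Stopping works the same way with "stop" instead of "start", in reverse order.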
On Fri, May 17, 2013 at 4:45 PM, Bertrand Dechoux decho...@gmail.com wrote:
The scripts themselves will use ssh to connect to every machine (even
localhost).
It's up to you if you want to type the password every time. For a
pseudo-distributed system, I don't see the issue with configuring local
ssh access.
BUT Hadoop in itself does not require ssh. If you have a more
When you start the Hadoop processes, each process will ask for a password to
start. To overcome this we configure passwordless SSH, whether you use a
single node or multiple nodes. If you are willing to enter the password for
each process, it's not mandatory even if you use multiple systems.
Thanks,
Kishore.
Actually, I should amend my statement -- SSH is required, but passwordless
ssh (I guess) you can live without if you are willing to enter your
password for each process that gets started.
But why wouldn't you want to implement passwordless ssh in a
pseudo-distributed cluster? It's very easy to set up.
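For reference, "very easy" really is just a few commands on a typical Linux
box with OpenSSH. A sketch (paths are the OpenSSH defaults; it only
authorizes logins back to the same machine, which is all a pseudo-distributed
setup needs):

```shell
#!/bin/sh
# Passwordless SSH to localhost, enough for a pseudo-distributed cluster.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"

# Generate a key pair with an empty passphrase (only if none exists yet).
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Authorize that key for logins to this same machine.
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"

# Verify -- this should no longer prompt for a password:
# ssh localhost echo ok
```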
Hi,
I am a bit confused here. I am planning to run on a single machine.
So what should I do to start the Hadoop processes? How should I do SSH? Can
you please briefly explain to me what SSH is?
Thanks,
Raj
From: Jay Vyas jayunit...@gmail.com
To:
Hello Raj,
ssh is actually 2 things:
1- ssh : The command we use to connect to remote machines - the client.
2- sshd : The daemon that is running on the server and allows clients to
connect to the server.
The ssh client is usually pre-installed on Linux, but in order to run the
sshd daemon, we need to install the SSH server (on Debian/Ubuntu, the
openssh-server package).
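To check which of the two halves you already have, something like this works
(a sketch; package names vary by distribution):

```shell
#!/bin/sh
# Check for the two halves of SSH described above.

# 1) The client ("ssh"): almost always preinstalled.
if command -v ssh >/dev/null 2>&1; then
  echo "ssh client: installed"
else
  echo "ssh client: missing"
fi

# 2) The server ("sshd"): often lives in /usr/sbin, outside a normal
#    user's PATH, so check both places.
if command -v sshd >/dev/null 2>&1 || [ -x /usr/sbin/sshd ]; then
  echo "sshd daemon: installed"
else
  echo "sshd daemon: missing -- install your distribution's SSH server package"
fi
```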