Gerardo,
Thanks for your information.
I've had success with remote writing on HDFS using the following steps:
1. Installed the latest stable version (hadoop 0.17.2.1) on the data nodes
and the client machine.
2. Opened ports 50010, 50070, 54310 and 54311 on the data node machines for
access from the client machine, and
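A minimal sketch of the client-side setup this describes, assuming the
master is reachable as "master" and hadoop is installed under
/usr/local/hadoop-0.17.2 (both hypothetical names):

# reuse the cluster's own configuration so the client resolves the same
# namenode address the data nodes do
scp master:/usr/local/hadoop-0.17.2/conf/hadoop-site.xml conf/
# quick smoke test: with the ports above open, this should list HDFS
bin/hadoop dfs -ls /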
Hi Victor!
I've got a problem with remote writing as well, so I tried to go a little
further with it, and I'd like to share what I did; maybe you'll have more
luck than me.
1) As I'm working as user gvelez on the remote host, I had to give write
access to all, like this:
bin/hadoop dfs -chmod -R a+w input
2)
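A quick way to confirm that step, as a sketch (the listing assumes the same
input directory as above):

bin/hadoop dfs -ls
# the permissions shown for input should now include write for all (a+w)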
Jeff,
Thanks for the detailed instructions, but on a machine that is not a hadoop
server I get this error:
~/hadoop-0.17.2$ ./bin/hadoop dfs -copyFromLocal NOTICE.txt test
08/08/29 19:33:07 INFO dfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
08/08/29 19:33:07
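One plausible reading of that trace: createBlockOutputStream runs after the
namenode has already answered, so it is a datanode's data-transfer port
(50010 in 0.17) that the client cannot reach. A quick check from the
client, with datanode1 standing in for one of your data node host names:

telnet datanode1 50010
# "Connection refused" here points at a firewall or a datanode bound to a
# private interface, not at the hadoop configuration itself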
Thanks Jeff and sorry for bothering you again!
Remote writing into HDFS is clear to me now, but what about the hadoop
process itself? Once the file has been copied to HDFS, do I still need to
run hadoop -jarfile input output every time?
If I need to do it every time, should I do it from the remote server as
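For reference, the submission syntax in this release is bin/hadoop jar
rather than -jarfile; a run over the copied-in data might look like this
(jar and class names are hypothetical):

bin/hadoop jar wordcount.jar org.myorg.WordCount input output

Each job run processes its input once, so a new batch of files does mean
submitting the job again; the HDFS copy and the job run are separate steps.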
You can use the hadoop command line on machines that aren't hadoop servers.
If you copy the hadoop configuration from one of your master servers or
data nodes to the client machine and run the command-line dfs tools, they
will copy files directly to the data nodes.
Or, you could use one of the cli
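A sketch of the copy-the-configuration approach, assuming the cluster's
conf files were copied to /home/gvelez/cluster-conf (a hypothetical path);
the --config switch points the client at them without touching a local
install:

bin/hadoop --config /home/gvelez/cluster-conf dfs -put NOTICE.txt /user/gvelez/test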
Hi Jeff, thank you for answering!
What about remote writing on HDFS? Let's suppose I've got an application
server on a Linux server A, and a Hadoop cluster on servers B (master), C
(slave) and D (slave).
What I would like is to send some files from server A to be processed by
hadoop. So in order to do
Gerardo:
I can't really speak to all of your questions, but the master/slave issue is
a common concern with hadoop. A cluster has a single namenode and therefore
a single point of failure. There is also a secondary namenode process
which runs on the same machine as the namenode in most default
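For what it's worth, where that secondary process starts is controlled by
the conf/masters file read by start-dfs.sh; moving it to its own machine is
a one-line change (host name hypothetical):

# conf/masters lists the hosts where start-dfs.sh launches the secondary
# namenode; it does not affect where the namenode itself runs
echo checkpoint-host > conf/masters

Note that the secondary namenode only checkpoints the filesystem image and
edit log; it is not a hot standby, so the namenode remains a single point
of failure.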