fs.default.name should point to the machine that runs the namenode. Is that 192.168.1.103?
zjffdu wrote:
>
> Hi all,
>
>
>
> I have two computers. In hadoop-site.xml I set fs.default.name to
> localhost:9000, and then I cannot access the cluster with the Java API
> from another machine.
>
> But if I change it to the machine's real IP, 192.168.1.103:9000, then I can
> access the cluster with the Java API from another machine.
On Mon, Aug 24, 2009 at 8:55 PM, Matt Massie wrote:
> Jeff-
>
> If you look in /etc/hosts, you'll see that "localhost" is 127.0.0.1 (and, if
> you use IPv6, ::1). This address is strictly loopback and can only be used
> for inter-process communication on a single machine.
>
> See: http://en.wikipedia.org/wiki/Localhost
Jeff-
If you look in /etc/hosts, you'll see that "localhost" is 127.0.0.1 (and, if you
use IPv6, ::1). This address is strictly loopback and can only be used for
inter-process communication on a single machine.
See: http://en.wikipedia.org/wiki/Localhost
-Matt
On Mon, Aug 24, 2009 at 5:47 PM, zhan
Jeff,
Hadoop (HDFS in particular) is overly strict about machine names. The
filesystem's id is based on the DNS name used to access it, and this needs to
be consistent across all nodes and all configurations in your cluster. You
should always use the fully-qualified domain name of the namenode in your
configuration.
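
Following that advice, a minimal hadoop-site.xml fragment might look like the
sketch below. The hostname namenode.example.com is a placeholder (only the
port 9000 comes from this thread); substitute your namenode's actual
fully-qualified domain name.

```xml
<!-- Sketch of a hadoop-site.xml entry; "namenode.example.com" is a
     placeholder for the namenode's fully-qualified domain name. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:9000/</value>
</property>
```

Every node and every client should use this same name; mixing "localhost" on
the namenode with an IP address on clients gives the filesystem inconsistent
identities.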
Hi all,
I have two computers. In hadoop-site.xml I set fs.default.name to
localhost:9000, and then I cannot access the cluster with the Java API from
another machine.
But if I change it to the machine's real IP, 192.168.1.103:9000, then I can
access the cluster with the Java API from another machine.
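
To make the failure mode concrete, here is a hedged sketch of remote access
with the Hadoop Java API of that era, using FileSystem.get with an explicit
URI. The class name and the listed path are illustrative; the address is the
one from this thread. A client that resolved fs.default.name to "localhost"
would instead try to connect to itself, which is why the first setup failed.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative client: connects to the namenode by an explicit URI rather
// than whatever fs.default.name the local configuration happens to contain.
public class ListRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "hdfs://localhost:9000" here would point at the client machine
        // itself; the loopback address never reaches the namenode host.
        FileSystem fs =
                FileSystem.get(URI.create("hdfs://192.168.1.103:9000"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}
```

This requires a reachable namenode at that address, so it is a sketch of the
access pattern rather than a self-contained test.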