Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-16 Thread Dan Dong
I installed Hadoop by untarring the hadoop-2.6.0.tar.gz archive; I will check
further. Thanks.

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-16 Thread Jiayu Ji
The cluster is running Hadoop 2.x while your client side is under Hadoop 1.x
(that is what "Server IPC version 9 cannot communicate with client version 4"
indicates).

I would guess you have installed 1.x on your client machine before and your
env variable is still pointing to it.
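To see how that happens: an older install that comes earlier on PATH silently shadows the new one. A minimal sketch with hypothetical stand-in paths and stub scripts (not a real Hadoop install):

```shell
# Simulate a stale 1.x 'hadoop' shadowing the 2.6.0 one on PATH.
# Both install paths below are hypothetical stand-ins.
mkdir -p /tmp/hadoop-1.0/bin /tmp/hadoop-2.6.0/bin
printf '#!/bin/sh\necho "Hadoop 1.0.4"\n' > /tmp/hadoop-1.0/bin/hadoop
printf '#!/bin/sh\necho "Hadoop 2.6.0"\n' > /tmp/hadoop-2.6.0/bin/hadoop
chmod +x /tmp/hadoop-1.0/bin/hadoop /tmp/hadoop-2.6.0/bin/hadoop
PATH="/tmp/hadoop-1.0/bin:/tmp/hadoop-2.6.0/bin:$PATH"
export PATH
which hadoop        # the stale 1.x copy wins the PATH lookup
hadoop version      # prints "Hadoop 1.0.4", not 2.6.0
```

Checking `which hadoop` and `echo $HADOOP_HOME` on the client machine should show which install the shell is actually picking up.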


Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-16 Thread Dan Dong
Thanks, the error now changes to the following:
$ hadoop dfsadmin -report
report: Server IPC version 9 cannot communicate with client version 4

It's not clear which server and which client are conflicting. All Hadoop
components come from the same hadoop-2.6.0.tar.gz package, so what's going wrong?

Cheers,
Dan


2014-12-15 22:30 GMT-06:00 Susheel Kumar Gadalay :
>
> Give complete hostname with domain name not just master-node.
>
> 
>   fs.defaultFS
>   hdfs://master-node.domain.name:9000
> 
>
> Else give IP address also
>
>
> On 12/16/14, Dan Dong  wrote:
> > Hi, Johny,
> >   Yes, they have been turned off from the beginning. Guess the problem is
> > still in the conf files, it would be helpful if some example *.xml could
> be
> > shown.
> >
> >   Cheers,
> >   Dan
> >
> >
> > 2014-12-15 12:24 GMT-06:00 johny casanova :
> >>
> >> do you have selinux and iptables turned off?
> >>
> >>  ----------
> >> Date: Mon, 15 Dec 2014 09:54:41 -0600
> >> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
> file
> >> system"
> >> From: dongda...@gmail.com
> >> To: user@hadoop.apache.org
> >>
> >>
> >>   Found in the log file:
> >> 2014-12-12 15:51:10,434 ERROR
> >> org.apache.hadoop.hdfs.server.namenode.NameNode:
> >> java.lang.IllegalArgumentException: Does not contain a valid host:port
> >> authority: file:///
> >> at
> >> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:569)
> >> at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
> >> at
> >> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
> >>
> >> But I have set it in core-site.xml already:
> >> 
> >>   fs.defaultFS
> >>   hdfs://master-node:9000
> >> 
> >>
> >> Other settings:
> >> $ cat mapred-site.xml
> >> 
> >> 
> >> mapred.job.tracker
> >> master-node:9002
> >> 
> >> 
> >> mapreduce.jobhistory.address
> >> master-node:10020
> >> 
> >> 
> >> mapreduce.jobhistory.webapp.address
> >> master-node:19888
> >> 
> >> 
> >>
> >> $ cat yarn-site.xml
> >> 
> >>
> >> 
> >> 
> >>mapreduce.framework.name
> >>yarn
> >> 
> >> 
> >>yarn.resourcemanager.address
> >>master-node:18040
> >> 
> >> 
> >>yarn.resourcemanager.scheduler.address
> >>master-node:18030
> >> 
> >> 
> >>yarn.resourcemanager.webapp.address
> >>master-node:18088
> >> 
> >> 
> >>yarn.resourcemanager.resource-tracker.address
> >>master-node:18025
> >> 
> >> 
> >>    yarn.resourcemanager.admin.address
> >>master-node:18141
> >> 
> >> 
> >>yarn.nodemanager.aux-services
> >>mapreduce_shuffle
> >> 
> >> 
> >>yarn.nodemanager.aux-services.mapreduce.shuffle.class
> >>org.apache.hadoop.mapred.ShuffleHandler
> >> 
> >> 
> >>
> >> Cheers,
> >> Dan
> >>
> >>
> >> 2014-12-15 9:17 GMT-06:00 Dan Dong :
> >>
> >> Thank you all, but still the same after change file:/ to file://, and
> >> HADOOP_CONF_DIR points to the correct position already:
> >> $ echo $HADOOP_CONF_DIR
> >> /home/dong/import/hadoop-2.6.0/etc/hadoop
> >>
> >>
> >> 2014-12-15 8:57 GMT-06:00 johny casanova :
> >>
> >>  Don't you have to use file:// instead of just one /?
> >>
> >>  --
> >> From: brahmareddy.batt...@huawei.com
> >> To: user@hadoop.apache.org
> >> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
> file
> >> system"
> >> Date: Sat, 13 Dec 2014 05:48:18 +00

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread Susheel Kumar Gadalay
Give the complete hostname with the domain name, not just master-node:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node.domain.name:9000</value>
</property>

Or else, give the IP address instead.




Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread Dan Dong
Hi, Johny,
  Yes, they have been turned off from the beginning. I guess the problem is
still in the conf files; it would be helpful if some example *.xml files could
be shown.

  Cheers,
  Dan




RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread johny casanova
Do you have SELinux and iptables turned off?




Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread Dan Dong
Found in the log file:
2014-12-12 15:51:10,434 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
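The exception means the NameNode resolved fs.defaultFS to the built-in default of file:///, i.e. it never saw the intended core-site.xml. A minimal sanity check, written here against a throwaway demo copy of the file (in practice, point CONF at the directory the daemons actually read, e.g. $HADOOP_CONF_DIR):

```shell
# Demo of what a usable core-site.xml must contain; /tmp is a stand-in
# for the real configuration directory.
CONF=/tmp/demo-conf
mkdir -p "$CONF"
cat > "$CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-node:9000</value>
  </property>
</configuration>
EOF
# If this prints nothing, the daemons fall back to file:/// and fail with
# exactly the "valid host:port authority" error above.
grep -A1 '<name>fs.defaultFS</name>' "$CONF/core-site.xml" | grep -o 'hdfs://[^<]*'
```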

But I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master-node:9002</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master-node:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master-node:19888</value>
  </property>
</configuration>

$ cat yarn-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master-node:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master-node:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master-node:18088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master-node:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master-node:18141</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Cheers,
Dan




Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread Dan Dong
Thank you all, but it's still the same after changing file:/ to file://, and
HADOOP_CONF_DIR already points to the correct location:
$ echo $HADOOP_CONF_DIR
/home/dong/import/hadoop-2.6.0/etc/hadoop




RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-15 Thread johny casanova
Don't you have to use file:// instead of just one /?




RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

2014-12-12 Thread Brahma Reddy Battula
Hi Dong,

HADOOP_CONF_DIR might be referring to the default location. You can export
HADOOP_CONF_DIR to point to the directory where the following configuration
files are present.
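For example (the path is the one reported later in this thread; adjust it to your own layout):

```shell
# Point HADOOP_CONF_DIR at the directory holding core-site.xml,
# hdfs-site.xml, mapred-site.xml and yarn-site.xml, then restart the daemons.
export HADOOP_CONF_DIR=/home/dong/import/hadoop-2.6.0/etc/hadoop
echo "$HADOOP_CONF_DIR"
```

Putting the export in hadoop-env.sh or the login shell profile makes it stick across sessions.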


Thanks & Regards

Brahma Reddy Battula



From: Dan Dong [dongda...@gmail.com]
Sent: Saturday, December 13, 2014 3:43 AM
To: user@hadoop.apache.org
Subject: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Hi,
  I installed Hadoop 2.6.0 on my cluster with 2 nodes, and I got the following
error when I run:
$ hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What does this mean? I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

and in hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.dataname.data.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
  <final>true</final>
</property>

The Java processes running on the master are:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan