Sai,

Just use 127.0.0.1 in all the URIs you have. It's less complicated and easily
replaceable later.
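
For example, a minimal sketch of what that would look like (the class name
LoopbackCopy is just for illustration; I'm assuming the NameNode port 9000 from
your fsURI, and the copy paths are the ones from your program):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoopbackCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Loopback address instead of a hostname, so no /etc/hosts entry is needed.
        FileSystem fs = FileSystem.get(URI.create("hdfs://127.0.0.1:9000"), conf);
        // Copy a local file into HDFS, same paths as in your program.
        fs.copyFromLocalFile(new Path("/home/kosmos/Work/input/wordpair.txt"),
                new Path("/input/raptor/trade1.txt"));
        fs.close();
    }
}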


On Sun, Feb 24, 2013 at 5:37 PM, sudhakara st <sudhakara...@gmail.com> wrote:

> Hi,
>
> Execute ifconfig to find the IP of your system,
> and add a line to /etc/hosts:
> (your ip) ubuntu
>
> Then use the URI string: public static String fsURI = "hdfs://ubuntu:9000";
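>
> For example, if ifconfig reports 192.168.56.101 (only a placeholder; use
> whatever address your VM actually shows), the /etc/hosts line would be:
>
> 192.168.56.101    ubuntu
>
> After that, "hdfs://ubuntu:9000" resolves to your VM's address.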
>
>
> On Sun, Feb 24, 2013 at 5:23 PM, Sai Sai <saigr...@yahoo.in> wrote:
>
>> Many Thanks Nitin for your quick reply.
>>
>> Here's what I have in my hosts file. I am running in a VM, and I'm assuming it
>> is pseudo-distributed mode:
>>
>> *********************
>> 127.0.0.1    localhost.localdomain    localhost
>> #::1    ubuntu    localhost6.localdomain6    localhost6
>> #127.0.1.1    ubuntu
>> 127.0.0.1   ubuntu
>>
>> # The following lines are desirable for IPv6 capable hosts
>> ::1     localhost ip6-localhost ip6-loopback
>> fe00::0 ip6-localnet
>> ff00::0 ip6-mcastprefix
>> ff02::1 ip6-allnodes
>> ff02::2 ip6-allrouters
>> ff02::3 ip6-allhosts
>> *********************
>> In my masters i have:
>> ubuntu
>> In my slaves i have:
>> localhost
>> ***********************
>> My question is: in my variable below,
>> public static String fsURI = "hdfs://master:9000";
>>
>> what would be the right value so I can connect to Hadoop successfully?
>> Please let me know if you need more info.
>> Thanks
>> Sai
>>
>>
>>
>>
>>
>>    ------------------------------
>> *From:* Nitin Pawar <nitinpawar...@gmail.com>
>> *To:* user@hadoop.apache.org; Sai Sai <saigr...@yahoo.in>
>> *Sent:* Sunday, 24 February 2013 3:42 AM
>> *Subject:* Re: Trying to copy file to Hadoop file system from a program
>>
>> If you want to use master as your hostname, then add an entry for it to your
>> /etc/hosts file,
>>
>> or change hdfs://master to hdfs://localhost.
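>>
>> For example, a minimal sketch of the first option, assuming a pseudo-distributed
>> setup where everything runs on one machine (so master can simply point at the
>> loopback address), is a line like this in /etc/hosts:
>>
>> 127.0.0.1    master
>>
>> With that entry, "hdfs://master:9000" resolves and the UnknownHostException
>> should go away.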
>>
>>
>> On Sun, Feb 24, 2013 at 5:10 PM, Sai Sai <saigr...@yahoo.in> wrote:
>>
>>
>> Greetings,
>>
>> Below is the program I am trying to run; I am getting this exception:
>>  ***************************************
>> Test Start.....
>> java.net.UnknownHostException: unknown host: master
>>     at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1050)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>     at $Proxy1.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>     at
>> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>>     at kelly.hadoop.hive.test.HadoopTest.main(HadoopTest.java:54)
>>
>>
>> ********************
>>
>> import java.io.IOException;
>> import java.net.URI;
>>
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.fs.FileSystem;
>> import org.apache.hadoop.fs.Path;
>> import org.apache.hadoop.hdfs.DistributedFileSystem;
>>
>> public class HdpTest {
>>
>>     public static String fsURI = "hdfs://master:9000";
>>
>>
>>     public static void copyFileToDFS(FileSystem fs, String srcFile,
>>             String dstFile) throws IOException {
>>         try {
>>             System.out.println("Initialize copy...");
>>             URI suri = new URI(srcFile);
>>             URI duri = new URI(fsURI + "/" + dstFile);
>>             Path dst = new Path(duri.toString());
>>             Path src = new Path(suri.toString());
>>             System.out.println("Start copy...");
>>             fs.copyFromLocalFile(src, dst);
>>             System.out.println("End copy...");
>>         } catch (Exception e) {
>>             e.printStackTrace();
>>         }
>>     }
>>
>>     public static void main(String[] args) {
>>         try {
>>             System.out.println("Test Start.....");
>>             Configuration conf = new Configuration();
>>             DistributedFileSystem fs = new DistributedFileSystem();
>>             URI duri = new URI(fsURI);
>>             fs.initialize(duri, conf); // the exception occurs here
>>             long start = 0, end = 0;
>>             start = System.nanoTime();
>>             //writing data from local to HDFS
>>             copyFileToDFS(fs, "/home/kosmos/Work/input/wordpair.txt",
>>                     "/input/raptor/trade1.txt");
>>             //Writing data from HDFS to Local
>> //             copyFileFromDFS(fs, "/input/raptor/trade1.txt",
>> "/home/kosmos/Work/input/wordpair1.txt");
>>             end = System.nanoTime();
>>             System.out.println("Total Execution times: " + (end - start));
>>             fs.close();
>>         } catch (Throwable t) {
>>             t.printStackTrace();
>>         }
>>     }
>> }
>> ******************************
>> I am trying to access this URL in Firefox:
>>  hdfs://master:9000
>>
>>  I get an error message saying Firefox does not know how to display it.
>>
>>  I can successfully access my admin page:
>>
>>  http://localhost:50070/dfshealth.jsp
>>
>> Just wondering if anyone can give me any suggestions; your help will be
>> really appreciated.
>> Thanks
>> Sai
>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>
>
> --
>
> Regards,
> .....  Sudhakara.st
>
>



-- 
Nitin Pawar
