Re: Error:Hdfs Client for hadoop using native java api

2012-07-22 Thread shashwat shriparv
There may be a problem with hostname resolution. Check that the hostname is
both forward and backward resolvable, and if you have an IP address anywhere
in your configuration, replace it with the hostname and then try again.
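
For example, a quick check from the client machine (the hostname is the one
from this thread; the IP shown is only a placeholder for whatever the forward
lookup returns):

    nslookup hadoop1.devqa.local    # forward lookup: name -> IP
    nslookup 10.0.0.5               # reverse lookup on the returned IP: IP -> name
    ping hadoop1.devqa.local        # confirm the resolved address is reachable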

On Sun, Jul 22, 2012 at 9:56 PM, Minh Duc Nguyen mdngu...@gmail.com wrote:

 As Shashwat mentioned previously, your problem may be related to your
 configuration.

 Is core-site.xml on your classpath?  For example, what is the value of
 conf.get("fs.default.name")?

 Alternatively, you can set this property directly in your code:

 conf.set("fs.default.name", "hdfs://hadoop1.devqa.local:8020");
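
 Putting it together, a minimal client sketch (untested; the class name Hdfs
 and the local path are illustrative assumptions, while the hostname and the
 HDFS path are the ones from this thread):

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.fs.FileSystem;
     import org.apache.hadoop.fs.Path;

     public class Hdfs {
         public static void main(String[] args) throws Exception {
             Configuration conf = new Configuration();
             // Point the client at the NameNode instead of the default file:///
             conf.set("fs.default.name", "hdfs://hadoop1.devqa.local:8020");
             FileSystem fs = FileSystem.get(conf); // DistributedFileSystem now
             Path local = new Path("C:/data/input.txt");  // assumed local file
             Path remote = new Path("/user/hdfs/java/input.txt");
             fs.copyFromLocalFile(local, remote);
             fs.close();
         }
     }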

 HTH,
 Minh

 On Thu, Jul 19, 2012 at 1:18 PM, Sandeep Reddy P 
 sandeepreddy.3...@gmail.com wrote:

  Hi Shashwat,
  Here is the snippet of code which is throwing error
  Path phdfs = new Path(
      "hdfs://hadoop1.devqa.local:8020/user/hdfs/java/");
  java.lang.IllegalArgumentException: Wrong FS:
  hdfs://hadoop1.devqa.local:8020/user/hdfs/java, expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
      at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:56)
      at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:404)
      at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:797)
      at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
      at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
      at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:157)
      at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:55)
      at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292)
      at Hdfs.main(Hdfs.java:18)
 
  On Thu, Jul 19, 2012 at 12:02 PM, shashwat shriparv 
  dwivedishash...@gmail.com wrote:
 
   Can you provide the code (or the relevant part of it, if you don't want
   to post the full code here) and let us know which part of your code is
   throwing this error.
  
   Regards
  
   ∞
   Shashwat Shriparv
  
  
  
   On Thu, Jul 19, 2012 at 6:46 PM, Sandeep Reddy P 
   sandeepreddy.3...@gmail.com wrote:
  
Hi John,
     We have applications on Windows, so our devs need to connect to HDFS
     from Eclipse installed on Windows. I'm trying to put data from a local
     file to an HDFS file using Java code from Windows.
   
On Thu, Jul 19, 2012 at 5:41 AM, John Hancock 
 jhancock1...@gmail.com
wrote:
   
 Sandeep,

 I don't understand your situation completely, but why not just use
 bin/hadoop dfs -copyFromLocal local-file-name hdfs-file-name ?

 -John

 On Wed, Jul 18, 2012 at 11:33 AM, Sandeep Reddy P 
 sandeepreddy.3...@gmail.com wrote:

  Hi,
   I'm trying to load data into HDFS from the local Linux file system using
   Java code from a Windows machine, but I'm getting the error:
 
   java.lang.IllegalArgumentException: Wrong FS:
   hdfs://hadoop1.devqa.local:8020/user/hdfs/java, expected: file:///
       at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
       at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:56)
       at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:404)
       at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
       at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:797)
       at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:157)
       at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:55)
       at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292)
       at Hdfs.main(Hdfs.java:18)
   File not found
 
   Can anyone please help me with this issue?
 
  --
  Thanks,
  sandeep
 

   
   
   
--
Thanks,
sandeep
   
  
  
  
   --
  
  
   ∞
   Shashwat Shriparv
  
 
 
 
  --
  Thanks,
  sandeep
 




-- 


∞
Shashwat Shriparv


Re: Error:Hdfs Client for hadoop using native java api

2012-07-19 Thread shashwat shriparv
Can you provide the code (or the relevant part of it, if you don't want to
post the full code here) and let us know which part of your code is throwing
this error.

Regards

∞
Shashwat Shriparv



On Thu, Jul 19, 2012 at 6:46 PM, Sandeep Reddy P 
sandeepreddy.3...@gmail.com wrote:

 Hi John,
 We have applications on Windows, so our devs need to connect to HDFS from
 Eclipse installed on Windows. I'm trying to put data from a local file to
 an HDFS file using Java code from Windows.

 On Thu, Jul 19, 2012 at 5:41 AM, John Hancock jhancock1...@gmail.com
 wrote:

  Sandeep,
 
  I don't understand your situation completely, but why not just use
  bin/hadoop dfs -copyFromLocal local-file-name hdfs-file-name ?
 
  -John
 
  On Wed, Jul 18, 2012 at 11:33 AM, Sandeep Reddy P 
  sandeepreddy.3...@gmail.com wrote:
 
   Hi,
  I'm trying to load data into HDFS from the local Linux file system using
  Java code from a Windows machine, but I'm getting the error:
  
  java.lang.IllegalArgumentException: Wrong FS:
  hdfs://hadoop1.devqa.local:8020/user/hdfs/java, expected: file:///
      at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
      at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:56)
      at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:404)
      at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:797)
      at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
      at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
      at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:157)
      at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:55)
      at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292)
      at Hdfs.main(Hdfs.java:18)
  File not found
  
  Can anyone please help me with this issue?
  
   --
   Thanks,
   sandeep
  
 



 --
 Thanks,
 sandeep




-- 


∞
Shashwat Shriparv


Re: Error:Hdfs Client for hadoop using native java api

2012-07-19 Thread shashwat shriparv
By the way, you have these two options:


   1. Put the configuration files on the classpath, so that the code picks
      them up.
   2. Use Configuration.set()
      (http://hadoop.apache.org/common/docs/r0.21.0/api/org/apache/hadoop/conf/Configuration.html#set%28java.lang.String,%20java.lang.String%29)
      to set the required parameters in the code.

because the configuration may be pointing to a local file which the code is
not able to find.
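
For option 1, you can also be explicit about it in code (a sketch; the file
locations are assumptions, point them at your copies of the cluster config):

    Configuration conf = new Configuration();
    // Illustrative locations; use wherever your cluster config files live,
    // or simply put their directory on the classpath instead.
    conf.addResource(new Path("/path/to/conf/core-site.xml"));
    conf.addResource(new Path("/path/to/conf/hdfs-site.xml"));
    FileSystem fs = FileSystem.get(conf); // should now be an HDFS filesystem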


On Thu, Jul 19, 2012 at 9:32 PM, shashwat shriparv 
dwivedishash...@gmail.com wrote:

 Can you provide the code (or the relevant part of it, if you don't want to
 post the full code here) and let us know which part of your code is
 throwing this error.

 Regards

 ∞
 Shashwat Shriparv




 On Thu, Jul 19, 2012 at 6:46 PM, Sandeep Reddy P 
 sandeepreddy.3...@gmail.com wrote:

 Hi John,
 We have applications on Windows, so our devs need to connect to HDFS from
 Eclipse installed on Windows. I'm trying to put data from a local file to
 an HDFS file using Java code from Windows.

 On Thu, Jul 19, 2012 at 5:41 AM, John Hancock jhancock1...@gmail.com
 wrote:

  Sandeep,
 
  I don't understand your situation completely, but why not just use
  bin/hadoop dfs -copyFromLocal local-file-name hdfs-file-name ?
 
  -John
 
  On Wed, Jul 18, 2012 at 11:33 AM, Sandeep Reddy P 
  sandeepreddy.3...@gmail.com wrote:
 
   Hi,
   I'm trying to load data into HDFS from the local Linux file system using
   Java code from a Windows machine, but I'm getting the error:
  
   java.lang.IllegalArgumentException: Wrong FS:
   hdfs://hadoop1.devqa.local:8020/user/hdfs/java, expected: file:///
       at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
       at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:56)
       at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:404)
       at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
       at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:797)
       at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:157)
       at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:55)
       at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292)
       at Hdfs.main(Hdfs.java:18)
   File not found
  
   Can anyone please help me with this issue?
  
   --
   Thanks,
   sandeep
  
 



 --
 Thanks,
 sandeep




 --


 ∞
 Shashwat Shriparv





-- 


∞
Shashwat Shriparv


Re: Error:Hdfs Client for hadoop using native java api

2012-07-19 Thread shashwat shriparv
Or try something like this:

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);

HFile.Writer hwriter = new HFileWriterV2(conf, new CacheConfig(conf), fs,
    new Path(fs.getWorkingDirectory() + "/foo"));



On Thu, Jul 19, 2012 at 9:35 PM, shashwat shriparv 
dwivedishash...@gmail.com wrote:

 By the way, you have these two options:

    1. Put the configuration files on the classpath, so that the code picks
       them up.
    2. Use Configuration.set()
       (http://hadoop.apache.org/common/docs/r0.21.0/api/org/apache/hadoop/conf/Configuration.html#set%28java.lang.String,%20java.lang.String%29)
       to set the required parameters in the code.

 because the configuration may be pointing to a local file which the code
 is not able to find.


 On Thu, Jul 19, 2012 at 9:32 PM, shashwat shriparv 
 dwivedishash...@gmail.com wrote:

 Can you provide the code (or the relevant part of it, if you don't want to
 post the full code here) and let us know which part of your code is
 throwing this error.

 Regards

 ∞
 Shashwat Shriparv




 On Thu, Jul 19, 2012 at 6:46 PM, Sandeep Reddy P 
 sandeepreddy.3...@gmail.com wrote:

 Hi John,
  We have applications on Windows, so our devs need to connect to HDFS from
  Eclipse installed on Windows. I'm trying to put data from a local file to
  an HDFS file using Java code from Windows.

 On Thu, Jul 19, 2012 at 5:41 AM, John Hancock jhancock1...@gmail.com
 wrote:

  Sandeep,
 
  I don't understand your situation completely, but why not just use
  bin/hadoop dfs -copyFromLocal local-file-name hdfs-file-name ?
 
  -John
 
  On Wed, Jul 18, 2012 at 11:33 AM, Sandeep Reddy P 
  sandeepreddy.3...@gmail.com wrote:
 
   Hi,
   I'm trying to load data into HDFS from the local Linux file system using
   Java code from a Windows machine, but I'm getting the error:
  
   java.lang.IllegalArgumentException: Wrong FS:
   hdfs://hadoop1.devqa.local:8020/user/hdfs/java, expected: file:///
       at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:410)
       at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:56)
       at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:404)
       at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
       at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:797)
       at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
       at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:157)
       at org.apache.hadoop.fs.LocalFileSystem.copyFromLocalFile(LocalFileSystem.java:55)
       at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1292)
       at Hdfs.main(Hdfs.java:18)
   File not found
  
   Can anyone please help me with this issue?
  
   --
   Thanks,
   sandeep
  
 



 --
 Thanks,
 sandeep




 --


 ∞
 Shashwat Shriparv





 --


 ∞
 Shashwat Shriparv





-- 


∞
Shashwat Shriparv


Re: Problem with running hadoop jar demo.jar in local mode.

2012-06-23 Thread shashwat shriparv
In addition to Harsh's answer, configure the Hadoop tmp directory, and set
full permissions for the user who is trying to run the jar.
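
For example (a sketch; the path matches the default hadoop.tmp.dir layout,
adjust it to your own setting):

    # create the local tmp dir and make it writable by the submitting user
    mkdir -p /tmp/hadoop-$USER
    chown -R $USER /tmp/hadoop-$USER
    chmod -R u+rwx /tmp/hadoop-$USER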

Regards
Shashwat Shriparv


On Sat, Jun 23, 2012 at 2:39 PM, Harsh J ha...@cloudera.com wrote:

 Your local /tmp directory needs to be writable by your user, for the
 hadoop jar method to execute properly out of the box.

 If that is not possible, edit your conf/core-site.xml to change the
 hadoop.tmp.dir default of /tmp/hadoop-${user.name} to somewhere
 that is writable by you, perhaps ${user.home}/tmp for your user
 alone.
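
 A sketch of that override in conf/core-site.xml:

     <property>
       <name>hadoop.tmp.dir</name>
       <value>${user.home}/tmp</value>
     </property>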

 On Sat, Jun 23, 2012 at 1:10 PM, Sheng Guo enigma...@gmail.com wrote:
  Hi all,
 
  Sorry to bother you. I have a simple Hadoop job that was running well both
  in local mode and on a real Hadoop cluster. Recently I tried to run it
  again on a single-node cluster, and I got the following error:
 
  $ hadoop-1.0.1/bin/hadoop jar CarDemo.jar
 
  Exception in thread "main" java.io.IOException: Mkdirs failed to create
  /tmp/hadoop-sguo/hadoop-unjar6763909861121801460/META-INF/license
      at org.apache.hadoop.util.RunJar.unJar(RunJar.java:47)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:132)
 
 
  I tried this both on 0.20.2 and 1.0.0. Both of them exit with an
  exception like the above.
  Can anyone help me on this?
 
  Thanks!!
 
  Sheng



 --
 Harsh J




-- 


∞
Shashwat Shriparv


Re: [Newbie] How to make Multi Node Cluster from Single Node Cluster

2012-06-14 Thread shashwat shriparv
Just follow this:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/

On Thu, Jun 14, 2012 at 3:39 PM, ramon@accenture.com wrote:

 At the newbie level it is just the same.

 -Original Message-
 From: Alpha Bagus Sunggono [mailto:bagusa...@gmail.com]
 Sent: jueves, 14 de junio de 2012 12:01
 To: common-user@hadoop.apache.org
 Subject: [Newbie] How to make Multi Node Cluster from Single Node Cluster

 Dear All.

 I've been configuring 3 servers using Hadoop 1.0.x, each as a single node.
 How do I assemble them into one multi-node cluster?

 When I search for documentation, I've only found configuration for Hadoop
 0.20.x.

 Would you mind to assist me?

 
 www.accenture.com




-- 


∞
Shashwat Shriparv


Re: master and slaves are running but they seem disconnected

2012-06-09 Thread shashwat shriparv
Please send the contents of the hosts file from all the machines, and the
contents of the masters and slaves files from both the master and the slave
machines.
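
For reference, a typical layout looks like this (all names and addresses
here are placeholders):

    # /etc/hosts on every machine
    192.168.0.1   master
    192.168.0.2   slave1

    # conf/masters on the master machine
    master

    # conf/slaves on the master machine
    master
    slave1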

On Sun, Jun 10, 2012 at 1:39 AM, Joey Krabacher jkrabac...@gmail.com wrote:

 Not sure, but I did notice that safe mode is still on. I would investigate
 that and see if the other nodes show up.

 /* Joey */
 On Jun 9, 2012 2:52 PM, Pierre Antoine DuBoDeNa pad...@gmail.com
 wrote:

  Hello everyone..
 
  I have a cluster of 5 VMs, 1 as master/slave and the rest as slaves. I run
  bin/start-all.sh and everything seems OK; I get no errors.
 
  I check with jps on all the servers; they show:
 
  master:
  22418 Jps
  21497 NameNode
  21886 SecondaryNameNode
  21981 JobTracker
  22175 TaskTracker
  21688 DataNode
 
  slave:
  3161 Jps
  2953 DataNode
  3105 TaskTracker
 
  But in the web interface I get only 1 server connected; it is like the
  others are ignored. Any clue why this can happen? Where should I look for
  errors?
 
  The hdfs web interface:
 
  Live Nodes
  (http://fusemaster.cs.columbia.edu:50070/dfsnodelist.jsp?whatNodes=LIVE): 1
  Dead Nodes
  (http://fusemaster.cs.columbia.edu:50070/dfsnodelist.jsp?whatNodes=DEAD): 0

  It doesn't even show the rest of the slaves as dead.
 
  Can it be a networking issue? (But I start all the processes from the
  master and it starts the processes on all the others.)
 
  best,
  PA
 




-- 


∞
Shashwat Shriparv


Re: Nutch hadoop integration

2012-06-08 Thread shashwat shriparv
Check out these links :

http://wiki.apache.org/nutch/NutchHadoopTutorial

http://wiki.apache.org/nutch/NutchTutorial
http://joey.mazzarelli.com/2007/07/25/nutch-and-hadoop-as-user-with-nfs/
http://stackoverflow.com/questions/5301883/run-nutch-on-existing-hadoop-cluster

Regards

∞
Shashwat Shriparv

On Fri, Jun 8, 2012 at 1:29 PM, abhishek tiwari 
abhishektiwari.u...@gmail.com wrote:

 How can I integrate Hadoop and Nutch? Can anyone please brief me?




-- 


∞
Shashwat Shriparv


Re: Hadoop-Git-Eclipse

2012-06-08 Thread shashwat shriparv
Check out this link:
http://www.cloudera.com/blog/2009/04/configuring-eclipse-for-hadoop-development-a-screencast/

Regards

∞
Shashwat Shriparv




On Fri, Jun 8, 2012 at 1:32 PM, Prajakta Kalmegh prkal...@in.ibm.com wrote:

 Hi

 I have done MapReduce programming using Eclipse before but now I need to
 learn the Hadoop code internals for one of my projects.

 I have forked Hadoop from github (https://github.com/apache/hadoop-common
 ) and need to configure it to work with Eclipse. All the links I could
 find list steps for earlier versions of Hadoop. I am right now following
 instructions given in these links:
 - http://wiki.apache.org/hadoop/GitAndHadoop
 - http://wiki.apache.org/hadoop/EclipseEnvironment
 - http://wiki.apache.org/hadoop/HowToContribute

 Can someone please give me a link to the steps to be followed for getting
 Hadoop (latest from trunk) started in Eclipse? I need to be able to commit
 changes to my forked repository on github.

 Thanks in advance.
 Regards,
 Prajakta




-- 


∞
Shashwat Shriparv


Re: Hadoop-Git-Eclipse

2012-06-08 Thread shashwat shriparv
Check out these thread :

http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/22976
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201012.mbox/%3c4cff292d.3090...@corp.mail.ru%3E


On Fri, Jun 8, 2012 at 6:24 PM, Prajakta Kalmegh pkalm...@gmail.com wrote:

 Hi

 Yes, I did configure it using the wiki link at
 http://wiki.apache.org/hadoop/EclipseEnvironment.
 I am facing a new problem while setting up Hadoop in pseudo-distributed
 mode on my laptop.  I am trying to execute the following commands for
 setting up Hadoop:
 hdfs namenode -format
 hdfs namenode
 hdfs datanode
 yarn resourcemanager
 yarn nodemanager

 It gives me a "Hadoop Common not found." error for all the commands. When
 I try to use "hadoop namenode -format" instead, it gives me a
 deprecated-command warning.

 I am following the instructions for setting up Hadoop with Eclipse given in
 - http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
 -

 http://hadoop.apache.org/common/docs/r2.0.0-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html

 This issue is discussed in JIRA 
 https://issues.apache.org/jira/browse/HDFS-2014  and is resolved. Not
 sure
 why I am getting the error.

 My environment variables look something like:

 HADOOP_COMMON_HOME=/home/Projects/hadoop-common/hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0-SNAPSHOT

 HADOOP_CONF_DIR=/home/Projects/hadoop-common/hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0-SNAPSHOT/etc/hadoop

 HADOOP_HDFS_HOME=/home/Projects/hadoop-common/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT

 HADOOP_MAPRED_HOME=/home/Projects/hadoop-common/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT

 YARN_HOME=/home/Projects/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/target/hadoop-yarn-common-3.0.0-SNAPSHOT

 YARN_CONF_DIR=/home/Projects/hadoop-common/hadoop-mapreduce-project/hadoop-yarn/conf

 I have included them in the PATH. I am trying to build and set up from the
 apache-hadoop-common git repository (my own cloned fork). Any idea why the
 'Hadoop Common Not found' error is coming? Do I have to add anything to
 hadoop-config.sh or hdfs-config.sh?

 Regards,
 Prajakta





 From: Deniz Demir denizde...@me.com
 Date: 06/08/2012 05:35 PM
 Reply-To: common-user@hadoop.apache.org
 To: common-user@hadoop.apache.org
 Subject: Re: Hadoop-Git-Eclipse


 I did not find that screencast useful. This one worked for me:

 http://wiki.apache.org/hadoop/EclipseEnvironment

 Best,
 Deniz

 On Jun 8, 2012, at 1:08 AM, shashwat shriparv wrote:

  Check out this link:
 

 http://www.cloudera.com/blog/2009/04/configuring-eclipse-for-hadoop-development-a-screencast/
 
  Regards
 
  ∞
  Shashwat Shriparv
 
 
 
 
  On Fri, Jun 8, 2012 at 1:32 PM, Prajakta Kalmegh prkal...@in.ibm.com
 wrote:
 
  Hi
 
  I have done MapReduce programming using Eclipse before but now I need to
  learn the Hadoop code internals for one of my projects.
 
  I have forked Hadoop from github (
 https://github.com/apache/hadoop-common
  ) and need to configure it to work with Eclipse. All the links I could
  find list steps for earlier versions of Hadoop. I am right now following
  instructions given in these links:
  - http://wiki.apache.org/hadoop/GitAndHadoop
  - http://wiki.apache.org/hadoop/EclipseEnvironment
  - http://wiki.apache.org/hadoop/HowToContribute
 
  Can someone please give me a link to the steps to be followed for
 getting
  Hadoop (latest from trunk) started in Eclipse? I need to be able to
 commit
  changes to my forked repository on github.
 
  Thanks in advance.
  Regards,
  Prajakta
 
 
 
 
  --
 
 
  ∞
  Shashwat Shriparv




-- 


∞
Shashwat Shriparv


Re: Pseudo Distributed: ERROR org.apache.hadoop.hbase.HServerAddress: Could not resolve the DNS name of localhost.localdomain

2012-06-07 Thread shashwat shriparv
Are you able to ping:

   - your PC's IP address
   - the domain name you gave the machine
   - the hostname of the machine

HBase stopping means it is not able to start itself on the IP or hostname
which you are giving.


On Thu, Jun 7, 2012 at 2:48 PM, Manu S manupk...@gmail.com wrote:

 Hi All,

 In pseudo-distributed mode, HBaseMaster is stopping automatically when we
 start HBaseRegion.

 I have changed all the configuration files of Hadoop, HBase and Zookeeper
 to set the exact hostname of the machine. I also commented out the
 localhost entry from /etc/hosts and cleared the cache. There is no
 localhost.localdomain entry in these configurations, but it is still
 resolving to localhost.localdomain.

 Please find the error:
 2012-06-07 12:13:11,995 INFO org.apache.hadoop.hbase.master.MasterFileSystem: No logs to split
 2012-06-07 12:13:12,103 ERROR org.apache.hadoop.hbase.HServerAddress: Could not resolve the DNS name of localhost.localdomain
 2012-06-07 12:13:12,104 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
 java.lang.IllegalArgumentException: hostname can't be null
     at java.net.InetSocketAddress.<init>(InetSocketAddress.java:121)
     at org.apache.hadoop.hbase.HServerAddress.getResolvedAddress(HServerAddress.java:108)
     at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:64)
     at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
     at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
     at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForRoot(CatalogTracker.java:222)
     at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForRootServerConnection(CatalogTracker.java:240)
     at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:487)
     at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:455)
     at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:406)
     at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:293)
 2012-06-07 12:13:12,106 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
 2012-06-07 12:13:12,106 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads

 Thanks,
 Manu S




-- 


∞
Shashwat Shriparv


Re: Pseudo Distributed: ERROR org.apache.hadoop.hbase.HServerAddress: Could not resolve the DNS name of localhost.localdomain

2012-06-07 Thread shashwat shriparv
Hey Manu, which Linux distribution are you using?

On Thu, Jun 7, 2012 at 8:18 PM, Manu S manupk...@gmail.com wrote:

 Thank you Harsh & Shashwat.

 I gave the hostname in /etc/sysconfig/network as pseudo-distributed. The
 hostname command returns this name as well. I added this name to the
 /etc/hosts file and changed all the configuration accordingly. But
 Zookeeper is still trying to resolve localhost.localdomain, even though
 there were no entries for localhost.localdomain in any conf files or
 hostname-related files.

 Yes, everything is pinging with the names I put in /etc/hosts.
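
 For reference, a minimal /etc/hosts for this setup might look like the
 following (the IP address is a placeholder; use the machine's real address,
 and avoid mapping the hostname to 127.0.0.1):

     127.0.0.1        localhost
     192.168.1.10     pseudo-distributed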

 On Thu, Jun 7, 2012 at 7:13 PM, shashwat shriparv 
 dwivedishash...@gmail.com
  wrote:

  Are you able to ping:

     - your PC's IP address
     - the domain name you gave the machine
     - the hostname of the machine

  HBase stopping means it is not able to start itself on the IP or hostname
  which you are giving.
 
 
  On Thu, Jun 7, 2012 at 2:48 PM, Manu S manupk...@gmail.com wrote:
 
   Hi All,
  
   In pseudo-distributed mode, HBaseMaster is stopping automatically when
   we start HBaseRegion.

   I have changed all the configuration files of Hadoop, HBase and
   Zookeeper to set the exact hostname of the machine. I also commented out
   the localhost entry from /etc/hosts and cleared the cache. There is no
   localhost.localdomain entry in these configurations, but it is still
   resolving to localhost.localdomain.
  
   Please find the error:
   2012-06-07 12:13:11,995 INFO org.apache.hadoop.hbase.master.MasterFileSystem: No logs to split
   2012-06-07 12:13:12,103 ERROR org.apache.hadoop.hbase.HServerAddress: Could not resolve the DNS name of localhost.localdomain
   2012-06-07 12:13:12,104 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
   java.lang.IllegalArgumentException: hostname can't be null
       at java.net.InetSocketAddress.<init>(InetSocketAddress.java:121)
       at org.apache.hadoop.hbase.HServerAddress.getResolvedAddress(HServerAddress.java:108)
       at org.apache.hadoop.hbase.HServerAddress.<init>(HServerAddress.java:64)
       at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.dataToHServerAddress(RootRegionTracker.java:82)
       at org.apache.hadoop.hbase.zookeeper.RootRegionTracker.waitRootRegionLocation(RootRegionTracker.java:73)
       at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForRoot(CatalogTracker.java:222)
       at org.apache.hadoop.hbase.catalog.CatalogTracker.waitForRootServerConnection(CatalogTracker.java:240)
       at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRootRegionLocation(CatalogTracker.java:487)
       at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:455)
       at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:406)
       at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:293)
   2012-06-07 12:13:12,106 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
   2012-06-07 12:13:12,106 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
  
   Thanks,
   Manu S
  
 
 
 
  --
 
 
  ∞
  Shashwat Shriparv
 




-- 


∞
Shashwat Shriparv


Re: java.lang.NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException

2012-06-07 Thread shashwat shriparv
If you have

  <property>
    <name>hadoop.tmp.dir</name>
    <value>../Hadoop/hdfs/tmp</value>
  </property>

in your configuration file, then remove it and try again.
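
Also, the "file:/" in your second error suggests the job is still using the
local file system, i.e. the cluster configuration is not on the classpath. A
quick way to check this from your code (a sketch, assuming the Hadoop 1.x
API):

    Configuration conf = new Configuration();
    // Prints file:/// when core-site.xml is not being picked up,
    // and hdfs://namenode:port when it is.
    System.out.println(conf.get("fs.default.name"));
    System.out.println(FileSystem.get(conf).getUri());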

Thanks and regards

∞
Shashwat Shriparv


On Thu, Jun 7, 2012 at 1:56 PM, huanchen.zhang
huanchen.zh...@ipinyou.com wrote:

 Hi,

 I wrote a MapReduce program with the Hadoop Java API.

 When I submitted the job to the cluster, I got the following errors:

 Exception in thread "main" java.lang.NoClassDefFoundError:
 org/codehaus/jackson/map/JsonMappingException
     at org.apache.hadoop.mapreduce.Job$1.run(Job.java:489)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:396)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
     at org.apache.hadoop.mapreduce.Job.connect(Job.java:487)
     at org.apache.hadoop.mapreduce.Job.submit(Job.java:475)
     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
     at com.ipinyou.data.preprocess.mapreduce.ExtractFeatureFromURLJob.main(ExtractFeatureFromURLJob.java:52)
 Caused by: java.lang.ClassNotFoundException:
 org.codehaus.jackson.map.JsonMappingException
     at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
     at java.security.AccessController.doPrivileged(Native Method)
     at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
     at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
     ... 8 more


 I found that the missing classes are in jackson-core-asl-1.5.2 and
 jackson-mapper-asl-1.5.2, so I added these two jars to the project and
 resubmitted the job. But then I got the following errors:

 Jun 7, 2012 4:18:55 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
 INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
 Jun 7, 2012 4:18:55 PM org.apache.hadoop.util.NativeCodeLoader <clinit>
 WARNING: Unable to load native-hadoop library for your platform... using
 builtin-java classes where applicable
 Jun 7, 2012 4:18:55 PM org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
 WARNING: Use GenericOptionsParser for parsing the arguments. Applications
 should implement Tool for the same.
 Jun 7, 2012 4:18:55 PM org.apache.hadoop.mapred.JobClient$2 run
 INFO: Cleaning up the staging area
 file:/tmp/hadoop-huanchen/mapred/staging/huanchen757608919/.staging/job_local_0001
 Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException:
 Input path does not exist: file:/data/huanchen/pagecrawler/url
     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
     at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:944)
     at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:961)
     at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
     at java.security.AccessController.doPrivileged(Native Method)
     at javax.security.auth.Subject.doAs(Subject.java:396)
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
     at org.apache.hadoop.mapreduce.Job.submit(Job.java:476)
     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
     at com.ipinyou.data.preprocess.mapreduce.ExtractFeatureFromURLJob.main(ExtractFeatureFromURLJob.java:51)


 Note that the error is "Input path does not exist: file:/" instead of
 "Input path does not exist: hdfs:/". So does it mean the job does not
 successfully connect to the Hadoop cluster? Is the first
 NoClassDefFoundError: org/codehaus/jackson/map/JsonMappingException error
 also due to this?

 Does anyone have any ideas?

 Thank you !


 Best,
 Huanchen

 2012-06-07



 huanchen.zhang




-- 


∞
Shashwat Shriparv


Re: SecondaryNameNode not connecting to NameNode : PriviledgedActionException

2012-06-04 Thread shashwat shriparv
)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
        at sun.net.www.http.HttpClient.New(HttpClient.java:321)
        at sun.net.www.http.HttpClient.New(HttpClient.java:338)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801)

 Is there any configuration I'm missing? At this point my mapred-site.xml
 is very simple, just:

 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <configuration>
   <property>
     <name>mapred.job.tracker</name>
     <value>hadoop00:9001</value>
   </property>
   <property>
     <name>mapred.system.dir</name>
     <value>/home/hadoop/mapred/system</value>
   </property>
   <property>
     <name>mapred.local.dir</name>
     <value>/home/hadoop/mapred/local</value>
   </property>
   <property>
     <name>mapred.jobtracker.taskScheduler</name>
     <value>org.apache.hadoop.mapred.FairScheduler</value>
   </property>
   <property>
     <name>mapred.fairscheduler.allocation.file</name>
     <value>/home/hadoop/hadoop/conf/fairscheduler.xml</value>
   </property>
 </configuration>







-- 


∞
Shashwat Shriparv


Re: Small glitch with setting up two node cluster...only secondary node starts (datanode and namenode don't show up in jps)

2012-05-30 Thread shashwat shriparv
Please send your conf file contents and hosts file contents too.


On Tue, May 29, 2012 at 11:08 PM, Harsh J ha...@cloudera.com wrote:

 Rohit,

 The SNN may start and run indefinitely without doing any work. The NN and
 DN have probably not started because the NN has an issue (perhaps the NN
 name directory isn't formatted) and the DN can't find the NN (or has data
 directory issues as well).

 So this isn't a glitch but a real issue you'll have to take a look at
 your logs for.

 On Sun, May 27, 2012 at 10:51 PM, Rohit Pandey rohitpandey...@gmail.com
 wrote:
  Hello Hadoop community,
 
  I have been trying to set up a two-node Hadoop cluster (following the
  instructions in
  http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/)
  and am very close to running it apart from one small glitch - when I
  start the dfs (using start-dfs.sh), it says:
 
  10.63.88.53: starting datanode, logging to
  /usr/local/hadoop/bin/../logs/hadoop-pandro51-datanode-ubuntu.out
  10.63.88.109: starting datanode, logging to
 
 /usr/local/hadoop/bin/../logs/hadoop-pandro51-datanode-pandro51-OptiPlex-960.out
  10.63.88.109: starting secondarynamenode, logging to
 
 /usr/local/hadoop/bin/../logs/hadoop-pandro51-secondarynamenode-pandro51-OptiPlex-960.out
  starting jobtracker, logging to
 
 /usr/local/hadoop/bin/../logs/hadoop-pandro51-jobtracker-pandro51-OptiPlex-960.out
  10.63.88.109: starting tasktracker, logging to
 
 /usr/local/hadoop/bin/../logs/hadoop-pandro51-tasktracker-pandro51-OptiPlex-960.out
  10.63.88.53: starting tasktracker, logging to
  /usr/local/hadoop/bin/../logs/hadoop-pandro51-tasktracker-ubuntu.out
 
  which looks like it's been successful in starting all the nodes.
  However, when I check them out by running 'jps', this is what I see:
  27531 SecondaryNameNode
  27879 Jps
 
  As you can see, there is no datanode or namenode. I have been racking my
  brains at this for quite a while now. I've checked all the inputs and
  everything. Anyone know what the problem might be?
 
  --
 
  Thanks in advance,
 
  Rohit



 --
 Harsh J




-- 


∞
Shashwat Shriparv