Re: Hadoop Installation Path problem

2014-11-25 Thread Anand Murali
Dear Alex:
If I make changes to .bashrc, the above variables, will it not conflict with 
hadoop-env.sh. And I was advised other then just JAVA_HOME, no other 
environment variables should be set. Please advise.
Thanks
 Anand Murali  11/7, 'Anand Vihar', Kandasamy St, MylaporeChennai - 600 004, 
IndiaPh: (044)- 28474593/ 43526162 (voicemail) 

 On Tuesday, November 25, 2014 1:23 PM, AlexWang wangxin...@gmail.com 
wrote:

An example set of Hadoop environment variables:

echo "
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
#export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
export HADOOP_COMMON_HOME=\${HADOOP_HOME}
export HADOOP_LIBEXEC_DIR=\${HADOOP_HOME}/libexec
export HADOOP_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
export YARN_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=\${HADOOP_HOME}/lib/native
export LD_LIBRARY_PATH=\${HADOOP_HOME}/lib/native
export HADOOP_OPTS=\"\${HADOOP_OPTS} -Djava.library.path=\${HADOOP_HOME}/lib:\${LD_LIBRARY_PATH}\"
export PATH=\${HADOOP_HOME}/bin:\${HADOOP_HOME}/sbin:\$PATH
" >> ~/.bashrc

. ~/.bashrc




On Nov 24, 2014, at 21:25, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
After hadoop namenode -format I do the following with errors.
anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> hadoop start-dfs.sh
Error: Could not find or load main class start-dfs.sh
anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> start-dfs.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or 
dfs.namenode.rpc-address is not configured.
Starting namenodes on [2014-11-24 18:47:27,717 WARN  [main] 
util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable]
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop
Starting secondary namenodes [2014-11-24 18:47:28,457 WARN  [main] 
util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
0.0.0.0]
Error: Cannot find configuration directory: /etc/hadoop
But in my hadoop-env.sh I have set 

export JAVA_HOME=/usr/lib64/jdk1.7.1_71/jdk7u71
export HADOOP_HOME=/anand_vihar/hadoop
export PATH=:PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/share
Would anyone know how to fix this problem?
Thanks
Regards,

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

On Monday, November 24, 2014 6:30 PM, Anand Murali anand_vi...@yahoo.com 
wrote:


It works, thanks.

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

On Monday, November 24, 2014 6:19 PM, Anand Murali anand_vi...@yahoo.com 
wrote:


Ok. Many thanks I shall try.
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

On Monday, November 24, 2014 6:13 PM, Rohith Sharma K S 
rohithsharm...@huawei.com wrote:


The problem is with setting JAVA_HOME. There is a .(Dot) before /usr, which 
causes the current directory to be prepended.

export JAVA_HOME=./usr/lib64/jdk1.7.0_71/jdk7u71

Do not use .(Dot) before /usr.

Thanks & Regards
Rohith Sharma K S

This e-mail and its attachments contain confidential information from HUAWEI, 
which is intended only for the person or entity whose address is listed above. 
Any use of the information contained herein in any way (including, but not 
limited to, total or partial disclosure, reproduction, or dissemination) by 
persons other than the intended recipient(s) is prohibited. If you receive 
this e-mail in error, please notify the sender by phone or email immediately 
and delete it!

From: Anand Murali [mailto:anand_vi...@yahoo.com] 
Sent: 24 November 2014 17:44
To: user@hadoop.apache.org; user@hadoop.apache.org
Subject: Hadoop Installation Path problem

Hi All:

I have done the following in hadoop-env.sh:

export JAVA_HOME=./usr/lib64/jdk1.7.0_71/jdk7u71
export HADOOP_HOME=/home/anand_vihar/hadoop
export PATH=:$PATH:$JAVA_HOME:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Now when I run hadoop-env.sh and type hadoop version, I get this error:
/home/anand_vihar/hadoop/bin/hadoop: line 133: 
/home/anand_vihar/hadoop/etc/hadoop/usr/lib64/jdk1.7.0_71/jdk7u71/bin/java: No 
such file or directory
/home/anand_vihar/hadoop/bin/hadoop: line 133: exec: 
/home/anand_vihar/hadoop/etc/hadoop/usr/lib64/jdk1.7.0_71/jdk7u71/bin/java: 
cannot execute: No such file or directory

Can somebody advise? I have asked many people; they all say it is the obvious 
path problem, but I cannot work out where to debug it. This has become a 
show-stopper for me. Help most welcome.

Thanks

Regards

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, 

[blog] How to do Update operation in hive-0.14.0

2014-11-25 Thread unmesha sreeveni
Hi

Hope this link helps those who are trying to practise the ACID
operations in Hive 0.14:

http://unmeshasreeveni.blogspot.in/2014/11/updatedeleteinsert-in-hive-0140.html
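
For quick orientation before the post: in Hive 0.14, UPDATE and DELETE work
only on bucketed ORC tables marked transactional, with the Hive transaction
manager settings enabled (the post covers them). A minimal sketch, run from
the shell; the table name and values are illustrative only:

# Sketch: create an ACID-capable table, then update a row (Hive 0.14+).
hive -e "
CREATE TABLE demo_acid (id INT, name STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
UPDATE demo_acid SET name = 'updated' WHERE id = 1;
"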

-- 
*Thanks & Regards*


*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/


Re: Hadoop Installation Path problem

2014-11-25 Thread AlexWang
Normally we only need to configure the environment variables in the ~/.bashrc 
or /etc/profile file; you can also configure them in the hadoop-env.sh file, 
and the two do not conflict.
I think hadoop-env.sh variables will override .bashrc variables.
For your question, you can try setting the HDFS_CONF_DIR variable, then try again.
For a Cloudera Hadoop installation you can use the Cloudera Manager tool:

http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_install_path_a.html

To install Apache Hadoop, unzip the tar.gz file and configure the 
Hadoop-related configuration files and environment variables.
An Apache Hadoop installation tool: http://ambari.apache.org/
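
A quick way to check which values a session actually has (a sketch; the paths
assume the layout in the example quoted below):

# Compare what the current shell exports with what hadoop-env.sh will
# re-export when the hadoop scripts source it.
echo "shell: HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-<unset>}"
grep -E '^[[:space:]]*export' "${HADOOP_HOME}/etc/hadoop/hadoop-env.sh"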


 On Nov 25, 2014, at 16:12, Anand Murali anand_vi...@yahoo.com wrote:
 
 Dear Alex:
 
 If I set the above variables in .bashrc, will they not conflict with 
 hadoop-env.sh? I was also advised that no environment variables other than 
 JAVA_HOME should be set. Please advise.
 
 Thanks
  
 Anand Murali  
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)
 
 
 On Tuesday, November 25, 2014 1:23 PM, AlexWang wangxin...@gmail.com wrote:
 
 
 An example set of Hadoop environment variables:
 
 echo "
 export HADOOP_HOME=/usr/lib/hadoop
 export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
 export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
 #export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
 export HADOOP_COMMON_HOME=\${HADOOP_HOME}
 export HADOOP_LIBEXEC_DIR=\${HADOOP_HOME}/libexec
 export HADOOP_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
 export HDFS_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
 export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
 export YARN_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
 export HADOOP_COMMON_LIB_NATIVE_DIR=\${HADOOP_HOME}/lib/native
 export LD_LIBRARY_PATH=\${HADOOP_HOME}/lib/native
 export HADOOP_OPTS=\"\${HADOOP_OPTS} -Djava.library.path=\${HADOOP_HOME}/lib:\${LD_LIBRARY_PATH}\"
 export PATH=\${HADOOP_HOME}/bin:\${HADOOP_HOME}/sbin:\$PATH
 " >> ~/.bashrc
 
 . ~/.bashrc
 
 
 
 
 On Nov 24, 2014, at 21:25, Anand Murali anand_vi...@yahoo.com wrote:
 
 Dear All:
 
 After hadoop namenode -format I do the following with errors.
 
 anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> hadoop start-dfs.sh
 Error: Could not find or load main class start-dfs.sh
 anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> start-dfs.sh
 Incorrect configuration: namenode address dfs.namenode.servicerpc-address or 
 dfs.namenode.rpc-address is not configured.
 Starting namenodes on [2014-11-24 18:47:27,717 WARN  [main] 
 util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
 native-hadoop library for your platform... using builtin-java classes where 
 applicable]
 Error: Cannot find configuration directory: /etc/hadoop
 Error: Cannot find configuration directory: /etc/hadoop
 Starting secondary namenodes [2014-11-24 18:47:28,457 WARN  [main] 
 util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
 native-hadoop library for your platform... using builtin-java classes where 
 applicable
 0.0.0.0]
 Error: Cannot find configuration directory: /etc/hadoop
 
 But in my hadoop-env.sh I have set 
 
 export JAVA_HOME=/usr/lib64/jdk1.7.1_71/jdk7u71
 export HADOOP_HOME=/anand_vihar/hadoop
 export PATH=:PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/share
 
 Would anyone know how to fix this problem?
 
 Thanks
 
 Regards,
 
  
 Anand Murali  
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)
 
 
 On Monday, November 24, 2014 6:30 PM, Anand Murali anand_vi...@yahoo.com wrote:
 
 
 It works, thanks.
  
 Anand Murali  
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)
 
 
 On Monday, November 24, 2014 6:19 PM, Anand Murali anand_vi...@yahoo.com wrote:
 
 
 Ok. Many thanks I shall try.
  
 Anand Murali  
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)
 
 
 On Monday, November 24, 2014 6:13 PM, Rohith Sharma K S 
 rohithsharm...@huawei.com wrote:
 
 
 The problem is with setting JAVA_HOME. There is a .(Dot) before /usr, which 
 causes the current directory to be prepended.
 export JAVA_HOME=./usr/lib64/jdk1.7.0_71/jdk7u71
  
 Do not use .(Dot) before /usr.
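 
 To see the effect concretely, a small shell sketch (the paths mirror the
 error output quoted earlier in this thread):
 
 # With the leading dot, the JVM path is resolved relative to wherever the
 # hadoop script runs from (here the config directory), which gives exactly
 # the broken path in the error message.
 cd /home/anand_vihar/hadoop/etc/hadoop
 JAVA_HOME=./usr/lib64/jdk1.7.0_71/jdk7u71
 echo "$PWD/${JAVA_HOME#./}/bin/java"   # the broken path hadoop tries to exec
 echo /usr/lib64/jdk1.7.0_71/jdk7u71/bin/java   # the intended absolute path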
  
 Thanks & Regards
 Rohith Sharma K S
  
 This e-mail and its attachments contain confidential information from 
 HUAWEI, which is intended only for the person or entity whose address is 
 listed above. Any use of the information contained herein in any way 
 (including, but not limited to, total or partial disclosure, reproduction, 
 or dissemination) by persons other than the intended recipient(s) is 
 prohibited. If you receive this e-mail in error, please notify the sender by 
 phone or email immediately and delete it!
  
 From: Anand 

Re: Hadoop Installation Path problem

2014-11-25 Thread Anand Murali
Dear Alex:
I am trying to install Hadoop-2.5.2 on Suse Enterprise Desktop 11, ONLY in 
standalone/pseudo-distributed mode; Ambari needs a server. These are the 
changes I have made in hadoop-env.sh, based on Tom White's textbook Hadoop: 
The Definitive Guide.

export JAVA_HOME=/usr/lib64/jdk1.7.0_71/jdk7u71
export HADOOP_HOME=/home/anand_vihar/hadoop
export PATH=:$PATH:$JAVA_HOME:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
All other variables are left untouched, as they are supposed to pick up the 
right defaults. Once this is done,
$hadoop version
runs and shows the version, so the first step is successful. Then
$hadoop namenode -format
is successful except for some warnings. I have set defaults in core-site.xml, 
hdfs-site.xml and yarn-site.xml.
then 

$start-dfs.sh
I get plenty of errors. I am wondering if there is a clear-cut install 
procedure, or do you think Suse Enterprise Desktop 11 does not support Hadoop? 
Reply welcome.
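
In case it helps, a minimal sketch that supplies the two things the errors
above say are missing: HADOOP_CONF_DIR (so the scripts stop falling back to
/etc/hadoop) and fs.defaultFS (so start-dfs.sh can derive the namenode RPC
address). The port is the usual single-node default; adjust paths as needed.

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

# Give HDFS a namenode address; without fs.defaultFS, start-dfs.sh
# reports "dfs.namenode.rpc-address is not configured".
cat > $HADOOP_CONF_DIR/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# One machine, so a single replica is enough.
cat > $HADOOP_CONF_DIR/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF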
Thanks
Regards,
Anand Murali.

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

 On Tuesday, November 25, 2014 2:22 PM, AlexWang wangxin...@gmail.com 
wrote:

 Normally we only need to configure the environment variables in the ~/.bashrc 
or /etc/profile file; you can also configure them in the hadoop-env.sh file, 
and the two do not conflict.
I think hadoop-env.sh variables will override .bashrc variables.
For your question, you can try setting the HDFS_CONF_DIR variable, then try again.
For a Cloudera Hadoop installation you can use the Cloudera Manager tool:
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_install_path_a.html
To install Apache Hadoop, unzip the tar.gz file and configure the 
Hadoop-related configuration files and environment variables.
An Apache Hadoop installation tool: http://ambari.apache.org/


On Nov 25, 2014, at 16:12, Anand Murali anand_vi...@yahoo.com wrote:
Dear Alex:
If I set the above variables in .bashrc, will they not conflict with 
hadoop-env.sh? I was also advised that no environment variables other than 
JAVA_HOME should be set. Please advise.
Thanks

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

 On Tuesday, November 25, 2014 1:23 PM, AlexWang wangxin...@gmail.com 
wrote:

An example set of Hadoop environment variables:

echo "
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
#export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
export HADOOP_COMMON_HOME=\${HADOOP_HOME}
export HADOOP_LIBEXEC_DIR=\${HADOOP_HOME}/libexec
export HADOOP_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
export YARN_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=\${HADOOP_HOME}/lib/native
export LD_LIBRARY_PATH=\${HADOOP_HOME}/lib/native
export HADOOP_OPTS=\"\${HADOOP_OPTS} -Djava.library.path=\${HADOOP_HOME}/lib:\${LD_LIBRARY_PATH}\"
export PATH=\${HADOOP_HOME}/bin:\${HADOOP_HOME}/sbin:\$PATH
" >> ~/.bashrc

. ~/.bashrc




On Nov 24, 2014, at 21:25, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
After hadoop namenode -format I do the following with errors.
anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> hadoop start-dfs.sh
Error: Could not find or load main class start-dfs.sh
anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> start-dfs.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or 
dfs.namenode.rpc-address is not configured.
Starting namenodes on [2014-11-24 18:47:27,717 WARN  [main] 
util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable]
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop
Starting secondary namenodes [2014-11-24 18:47:28,457 WARN  [main] 
util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
0.0.0.0]
Error: Cannot find configuration directory: /etc/hadoop
But in my hadoop-env.sh I have set 

export JAVA_HOME=/usr/lib64/jdk1.7.1_71/jdk7u71
export HADOOP_HOME=/anand_vihar/hadoop
export PATH=:PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/share
Would anyone know how to fix this problem?
Thanks
Regards,

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

On Monday, November 24, 2014 6:30 PM, Anand Murali anand_vi...@yahoo.com 
wrote:


It works, thanks.

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

On Monday, November 24, 2014 6:19 PM, Anand Murali anand_vi...@yahoo.com 
wrote:


Ok. Many thanks I 

Job object toString() is throwing an exception

2014-11-25 Thread Corey Nolet
I was playing around in the Spark shell and newing up an instance of Job
that I could use to configure the inputformat for a job. By default, the
Scala shell println's the result of every command typed. It throws an
exception when it printlns the newly created instance of Job because it
looks like it's setting a state upon allocation and it's not happy with the
state that it's in when toString() is called before the job is submitted.

I'm using Hadoop 2.5.1. I don't see any tickets for this for 2.6. Has
anyone else run into this?


RE: Job object toString() is throwing an exception

2014-11-25 Thread Rohith Sharma K S
Could you give the error message or stack trace?

From: Corey Nolet [mailto:cjno...@gmail.com]
Sent: 26 November 2014 07:54
To: user@hadoop.apache.org
Subject: Job object toString() is throwing an exception

I was playing around in the Spark shell and newing up an instance of Job that I 
could use to configure the inputformat for a job. By default, the Scala shell 
println's the result of every command typed. It throws an exception when it 
printlns the newly created instance of Job because it looks like it's setting a 
state upon allocation and it's not happy with the state that it's in when 
toString() is called before the job is submitted.

I'm using Hadoop 2.5.1. I don't see any tickets for this for 2.6. Has anyone 
else run into this?


Re: Job object toString() is throwing an exception

2014-11-25 Thread Corey Nolet
Here's the stack trace. I was going to file a ticket for this but wanted to
check on the user list first to make sure there wasn't already a fix in the
works. It has to do with the Scala shell doing a toString() each time a
command is typed in. The exception stops the instance of Job from ever
being assigned.


scala> val job = new org.apache.hadoop.mapreduce.Job

warning: there were 1 deprecation warning(s); re-run with -deprecation for
details

java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING

at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:283)

at org.apache.hadoop.mapreduce.Job.toString(Job.java:452)

at
scala.runtime.ScalaRunTime$.scala$runtime$ScalaRunTime$$inner$1(ScalaRunTime.scala:324)

at scala.runtime.ScalaRunTime$.stringOf(ScalaRunTime.scala:329)

at scala.runtime.ScalaRunTime$.replStringOf(ScalaRunTime.scala:337)

at .<init>(<console>:10)

at .<clinit>(<console>)

at $print(<console>)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:789)

at
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1062)

at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:615)

at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:646)

at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:610)

at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:814)

at
org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:859)

at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:771)

at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:616)

at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:624)

at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:629)

at
org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:954)

at
org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)

at
org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)

at
scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)

at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:902)

at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:997)

at org.apache.spark.repl.Main$.main(Main.scala:31)

at org.apache.spark.repl.Main.main(Main.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
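
Until there's a fix, a workaround sketch for the shell (untested here; the
idea is just to keep the REPL printer from calling toString() on the
unsubmitted Job by making each evaluated line result in Unit):

// Assign the Job inside a block that evaluates to (), so the REPL has
// nothing to print (it suppresses Unit results).
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job

var job: Job = null                                 // prints "job: Job = null", harmless
{ job = Job.getInstance(new Configuration()); () }  // Unit result, not printed

Alternatively, the REPL's :silent command turns result printing off entirely.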



On Tue, Nov 25, 2014 at 9:39 PM, Rohith Sharma K S 
rohithsharm...@huawei.com wrote:

  Could you give the error message or stack trace?



 *From:* Corey Nolet [mailto:cjno...@gmail.com]
 *Sent:* 26 November 2014 07:54
 *To:* user@hadoop.apache.org
 *Subject:* Job object toString() is throwing an exception



 I was playing around in the Spark shell and newing up an instance of Job
 that I could use to configure the inputformat for a job. By default, the
 Scala shell println's the result of every command typed. It throws an
 exception when it printlns the newly created instance of Job because it
 looks like it's setting a state upon allocation and it's not happy with the
 state that it's in when toString() is called before the job is submitted.



 I'm using Hadoop 2.5.1. I don't see any tickets for this for 2.6. Has
 anyone else run into this?



Re: Hadoop Installation Path problem

2014-11-25 Thread Hamza Zafar
Please list the compute nodes in the slaves file at
$HADOOP_HOME/etc/hadoop/slaves

Run the following commands from $HADOOP_HOME/sbin to start the HDFS and YARN
services:

hadoop-daemon.sh start namenode  //start the namenode service
hadoop-daemons.sh start datanode //start datanode on all nodes listed in
slaves file

yarn-daemon.sh start resourcemanager //start the resourcemanager
yarn-daemons.sh start nodemanager // start nodemanager service on all nodes
listed in slaves file
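
Once the daemons are up, a quick sanity check (a sketch; jps ships with the
JDK):

jps   # should list NameNode, DataNode, ResourceManager and NodeManager

If one of them is missing, its startup log under $HADOOP_HOME/logs usually
says why.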



On Tue, Nov 25, 2014 at 2:22 PM, Anand Murali anand_vi...@yahoo.com wrote:

 Dear Alex:

 I am trying to install Hadoop-2.5.2 on Suse Enterprise Desktop 11, ONLY in
 standalone/pseudo-distributed mode; Ambari needs a server. These are the
 changes I have made in hadoop-env.sh, based on Tom White's textbook
 Hadoop: The Definitive Guide.

 export JAVA_HOME=/usr/lib64/jdk1.7.0_71/jdk7u71
 export HADOOP_HOME=/home/anand_vihar/hadoop
 export PATH=:$PATH:$JAVA_HOME:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

 All other variables are left untouched, as they are supposed to pick up the
 right defaults. Once this is done,

 $hadoop version

 Hadoop runs and shows the version, so the first step is successful. Then

 $hadoop namenode -format

 is successful except for some warnings. I have set defaults in
 core-site.xml, hdfs-site.xml and yarn-site.xml.

 then

 $start-dfs.sh

 I get plenty of errors. I am wondering if there is a clear-cut install
 procedure, or do you think Suse Enterprise Desktop 11 does not support
 Hadoop? Reply welcome.

 Thanks

 Regards,

 Anand Murali.

 Anand Murali
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)


   On Tuesday, November 25, 2014 2:22 PM, AlexWang wangxin...@gmail.com
 wrote:


 Normally we only need to configure the environment variables in the ~/.bashrc
 or /etc/profile file; you can also configure them in hadoop-env.sh, and the
 two do not conflict.
 I think hadoop-env.sh variables will override .bashrc variables.
 For your question, you can try setting the HDFS_CONF_DIR variable, then try again.
 For a Cloudera Hadoop installation you can use the Cloudera Manager tool:

 http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_install_path_a.html
 To install Apache Hadoop, unzip the tar.gz file and configure the
 Hadoop-related configuration files and environment variables.
 An Apache Hadoop installation tool: http://ambari.apache.org/


 On Nov 25, 2014, at 16:12, Anand Murali anand_vi...@yahoo.com wrote:

 Dear Alex:

 If I set the above variables in .bashrc, will they not conflict with
 hadoop-env.sh? I was also advised that no environment variables other than
 JAVA_HOME should be set. Please advise.

 Thanks

 Anand Murali
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)


   On Tuesday, November 25, 2014 1:23 PM, AlexWang wangxin...@gmail.com
 wrote:


 An example set of Hadoop environment variables:

 echo "
 export HADOOP_HOME=/usr/lib/hadoop
 export HADOOP_HDFS_HOME=/usr/lib/hadoop-hdfs
 export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
 #export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
 export HADOOP_COMMON_HOME=\${HADOOP_HOME}
 export HADOOP_LIBEXEC_DIR=\${HADOOP_HOME}/libexec
 export HADOOP_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
 *export HDFS_CONF_DIR=\${HADOOP_HOME}/etc/hadoop*
 export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
 export YARN_CONF_DIR=\${HADOOP_HOME}/etc/hadoop
 export HADOOP_COMMON_LIB_NATIVE_DIR=\${HADOOP_HOME}/lib/native
 export LD_LIBRARY_PATH=\${HADOOP_HOME}/lib/native
 export HADOOP_OPTS=\"\${HADOOP_OPTS} -Djava.library.path=\${HADOOP_HOME}/lib:\${LD_LIBRARY_PATH}\"
 export PATH=\${HADOOP_HOME}/bin:\${HADOOP_HOME}/sbin:\$PATH
 " >> ~/.bashrc

 . ~/.bashrc




 On Nov 24, 2014, at 21:25, Anand Murali anand_vi...@yahoo.com wrote:

 Dear All:

 After hadoop namenode -format I do the following with errors.

 anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> hadoop start-dfs.sh
 Error: Could not find or load main class start-dfs.sh
 anand_vihar@linux-v4vm:~/hadoop/etc/hadoop> start-dfs.sh
 Incorrect configuration: namenode address dfs.namenode.servicerpc-address
 or dfs.namenode.rpc-address is not configured.
 Starting namenodes on [2014-11-24 18:47:27,717 WARN  [main]
 util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load
 native-hadoop library for your platform... using builtin-java classes where
 applicable]
 Error: Cannot find configuration directory: /etc/hadoop
 Error: Cannot find configuration directory: /etc/hadoop
 Starting secondary namenodes [2014-11-24 18:47:28,457 WARN  [main]
 util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load
 native-hadoop library for your platform... using builtin-java classes where
 applicable
 0.0.0.0]
 Error: Cannot find configuration directory: /etc/hadoop

 But in my hadoop-env.sh I have set

 export