OK, I will rebuild it myself. If I want to use Spark with Hadoop 2.7.2, what should I pass to the --hadoop parameter when building Spark: 2.7.2 or something else?
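
For context, I was guessing at something like the following (the hadoop-2.6 profile and the hadoop.version value below are my assumptions from the build docs, not something I have verified):

  ./make-distribution.sh --name hadoop-2.7.2 --tgz \
    -Phadoop-2.6 -Dhadoop.version=2.7.2 \
    -Phive -Phive-thriftserver -Pyarn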

Sent from my Huawei phone


-------- Original Message --------
Subject: Re: spark-1.6.1-bin-without-hadoop can not use spark-sql
From: Ted Yu
To: 喜之郎 <251922...@qq.com>
Cc: user


I wonder if the tarball was built with:

-Phive -Phive-thriftserver

Maybe rebuild it yourself with the above?
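
For example, something along these lines (just a sketch; add profiles for your Hadoop version as needed):

  build/mvn -Phive -Phive-thriftserver -DskipTests clean package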

FYI

On Wed, Jun 22, 2016 at 4:38 AM, 喜之郎 <251922...@qq.com> wrote:
Hi all.
I downloaded spark-1.6.1-bin-without-hadoop.tgz from the website
and configured SPARK_DIST_CLASSPATH in spark-env.sh.
Now spark-shell runs well, but spark-sql cannot run.
My Hadoop version is 2.7.2.
Here is the error output:

bin/spark-sql 
java.lang.ClassNotFoundException: org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Failed to load main class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.
You need to build Spark with -Phive and -Phive-thriftserver.

Do I need to configure something else in spark-env.sh or spark-defaults.conf?
Suggestions are appreciated, thanks.
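
For reference, the relevant line in my spark-env.sh looks roughly like this (the Hadoop install path below is just illustrative):

  export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.7.2/bin/hadoop classpath)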



