Raymond:

Did "namenode" appear in any of the Spark config files ?

BTW Scala 2.11 is used by the default build.

On Tue, Apr 5, 2016 at 6:22 AM, Raymond Honderdors <
raymond.honderd...@sizmek.com> wrote:

> I can see that the build is successful
>
> (-Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver
> -Dscala-2.11 -DskipTests clean package)
>
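> (As a quick sanity check that the thrift server actually made it into the
> build; the assembly jar path and version below are illustrative:)
>
> jar tf assembly/target/scala-2.11/spark-assembly-*-hadoop2.6.0.jar \
>     | grep HiveThriftServer2
>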
> But the documentation page still says:
>
> “
>
> Building With Hive and JDBC Support
>
> To enable Hive integration for Spark SQL along with its JDBC server and
> CLI, add the -Phive and -Phive-thriftserver profiles to your existing build
> options. By default Spark will build with Hive 0.13.1 bindings.
>
> # Apache Hadoop 2.4.X with Hive 13 support
>
> mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver
> -DskipTests clean package
>
> Building for Scala 2.11
>
> To produce a Spark package compiled with Scala 2.11, use the -Dscala-2.11
> property:
>
> ./dev/change-scala-version.sh 2.11
>
> mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
>
> Spark does not yet support its JDBC component for Scala 2.11.
>
> ”
>
> Source: http://spark.apache.org/docs/latest/building-spark.html
>
> When I try to start the thrift server I get the following error:
>
> “
>
> 16/04/05 16:09:11 INFO BlockManagerMaster: Registered BlockManager
> 16/04/05 16:09:12 ERROR SparkContext: Error initializing SparkContext.
> java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode
>     at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
>     at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>     at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1667)
>     at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:67)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:517)
>     at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:57)
>     at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:77)
>     at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:726)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:122)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.net.UnknownHostException: namenode
>     ... 26 more
> 16/04/05 16:09:12 INFO SparkUI: Stopped Spark web UI at http://10.10.182.195:4040
> 16/04/05 16:09:12 INFO SparkDeploySchedulerBackend: Shutting down all executors
> ”
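>
> (For illustration, in case a config file turns out to be the culprit: the
> hypothetical spark-defaults.conf entries below, with made-up host and path,
> would produce this kind of failure; pointing the URI at a resolvable
> hostname or IP, or mapping "namenode" in /etc/hosts, should clear the
> UnknownHostException.)
>
> # hypothetical entries in conf/spark-defaults.conf
> spark.eventLog.enabled  true
> spark.eventLog.dir      hdfs://namenode:8020/spark-events
> # e.g. a resolvable address instead:
> # spark.eventLog.dir    hdfs://10.10.182.195:8020/spark-events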
>
> Raymond Honderdors
> Team Lead Analytics BI
> Business Intelligence Developer
> raymond.honderd...@sizmek.com
> T +972.7325.3569
> Herzliya
>
> From: Reynold Xin [mailto:r...@databricks.com]
> Sent: Tuesday, April 05, 2016 3:57 PM
> To: Raymond Honderdors <raymond.honderd...@sizmek.com>
> Cc: dev@spark.apache.org
> Subject: Re: Build with Thrift Server & Scala 2.11
>
> What do you mean? The Jenkins build for Spark uses 2.11 and also builds
> the thrift server.
>
> On Tuesday, April 5, 2016, Raymond Honderdors <
> raymond.honderd...@sizmek.com> wrote:
>
> Is anyone looking into this one, Build with Thrift Server & Scala 2.11?
>
> If so, when can we expect it?
>
> Raymond Honderdors
> Team Lead Analytics BI
> Business Intelligence Developer
> raymond.honderd...@sizmek.com
> T +972.7325.3569
> Herzliya