Hi All,
Any explanation for this? As Reece said, I can do operations with Hive, but

    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

gives an error.

I have already created a Spark EC2 cluster with the spark-ec2 script. How can
I rebuild it with Hive support?
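
(I assume a Hive-enabled build would look roughly like the sketch below; the
profile names are my guess, and the Hadoop profile should match the cluster's
Hadoop version:)

    # run from the Spark 1.4.1 source root (untested sketch; the profiles
    # are assumptions and the Hadoop profile must match your cluster)
    ./make-distribution.sh --name hive-enabled --tgz \
      -Phadoop-2.4 -Phive -Phive-thriftserver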

Thanks
Roni

On Tue, Jul 28, 2015 at 2:46 PM, ReeceRobinson <re...@therobinsons.gen.nz>
wrote:

> I am building an analytics environment based on Spark and want to use Hive
> in multi-user mode, i.e. not the embedded Derby database, but Postgres for
> the metastore and HDFS for the warehouse instead. I am using the included
> Spark Thrift Server to process queries using Spark SQL. (A sketch of the
> configuration I mean follows below.)
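>
> For context, here is a minimal sketch of the kind of hive-site.xml I mean;
> the connection URL, driver, credentials, and warehouse path below are
> placeholders for my environment, not values to copy verbatim:
>
>   <configuration>
>     <!-- Postgres-backed metastore (URL and credentials are placeholders) -->
>     <property>
>       <name>javax.jdo.option.ConnectionURL</name>
>       <value>jdbc:postgresql://metastore-host:5432/metastore</value>
>     </property>
>     <property>
>       <name>javax.jdo.option.ConnectionDriverName</name>
>       <value>org.postgresql.Driver</value>
>     </property>
>     <property>
>       <name>javax.jdo.option.ConnectionUserName</name>
>       <value>hive</value>
>     </property>
>     <property>
>       <name>javax.jdo.option.ConnectionPassword</name>
>       <value>hive-password</value>
>     </property>
>     <!-- warehouse directory on HDFS (path is a placeholder) -->
>     <property>
>       <name>hive.metastore.warehouse.dir</name>
>       <value>hdfs://namenode:8020/user/hive/warehouse</value>
>     </property>
>   </configuration>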
>
> The documentation gives me the impression that I need to create a custom
> build of Spark 1.4.1. However, I suspect this is either no longer accurate
> or applies to some different context I'm not aware of.
>
> I used the pre-built Spark 1.4.1 distribution today with my hive-site.xml
> configured for Postgres and HDFS, and it worked! I saw the warehouse files
> turn up in HDFS, and I saw the metadata inserted into Postgres when I
> created a test table.
>
> I can connect to the Thrift Server using beeline and perform queries on my
> data. I also verified using the Spark UI that the SQL is being processed by
> Spark SQL.
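>
> For anyone following along, connecting looked roughly like the sketch
> below; the host and port reflect my setup (10000 is the Thrift Server
> default port), and the table name is just an example:
>
>   beeline -u jdbc:hive2://localhost:10000 -n spark
>   -- then, at the beeline prompt:
>   SHOW TABLES;
>   SELECT count(*) FROM my_test_table;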
>
> So I guess I'm asking: is the documentation out of date, or am I missing
> something?
>
> Cheers,
> Reece
>
