Re: HiveThriftServer2.startWithContext no more showing tables in 1.6.2

2016-07-21 Thread Todd Nist
This is due to a change in 1.6: by default, the Thrift server now runs in
multi-session mode. You want to set the following property to true in your
Spark config.

In spark-defaults.conf, set spark.sql.hive.thriftServer.singleSession=true
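
A minimal sketch of the two usual ways to set it (the app name below is only
illustrative):

  # conf/spark-defaults.conf
  spark.sql.hive.thriftServer.singleSession  true

or programmatically, before the HiveContext is created:

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .setAppName("spark-sql-dataexample")
    // restore the pre-1.6 behaviour: temp tables and SQL config shared by all JDBC sessions
    .set("spark.sql.hive.thriftServer.singleSession", "true")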

Good write up here:
https://community.hortonworks.com/questions/29090/i-cant-find-my-tables-in-spark-sql-using-beeline.html

HTH.

-Todd

On Thu, Jul 21, 2016 at 10:30 AM, Marco Colombo  wrote:

> Thanks.
>
> That is just a typo. I'm using 'spark://10.0.2.15:7077' (standalone).
> It's the same URL used with --master in spark-submit.
>
>
>
> 2016-07-21 16:08 GMT+02:00 Mich Talebzadeh :
>
>> Hi Marco
>>
>> In your code
>>
>> val conf = new SparkConf()
>>   .setMaster("spark://10.0.2.15:7077")
>>   .setMaster("local")
>>   .set("spark.cassandra.connection.host", "10.0.2.15")
>>   .setAppName("spark-sql-dataexample");
>>
>> As I understand it, the first .setMaster("spark://10.0.2.15:7077") indicates
>> that you are using Spark in standalone mode, and then .setMaster("local")
>> means you are using it in local mode?
>>
>> Any reason for it?
>>
>> Basically you are overriding standalone with local.
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn:
>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 21 July 2016 at 14:55, Marco Colombo 
>> wrote:
>>
>>> Hi all, I have a spark application that was working in 1.5.2, but now
>>> has a problem in 1.6.2.
>>>
>>> Here is an example:
>>>
>>> val conf = new SparkConf()
>>>   .setMaster("spark://10.0.2.15:7077")
>>>   .setMaster("local")
>>>   .set("spark.cassandra.connection.host", "10.0.2.15")
>>>   .setAppName("spark-sql-dataexample");
>>>
>>> val hiveSqlContext = new HiveContext(SparkContext.getOrCreate(conf));
>>>
>>> //Registering tables
>>> var query = """OBJ_TAB""".stripMargin;
>>>
>>> val options = Map(
>>>   "driver" -> "org.postgresql.Driver",
>>>   "url" -> "jdbc:postgresql://127.0.0.1:5432/DB",
>>>   "user" -> "postgres",
>>>   "password" -> "postgres",
>>>   "dbtable" -> query);
>>>
>>> import hiveSqlContext.implicits._;
>>> val df: DataFrame =
>>> hiveSqlContext.read.format("jdbc").options(options).load();
>>> df.registerTempTable("V_OBJECTS");
>>>
>>>  val optionsC = Map("table"->"data_tab", "keyspace"->"data");
>>> val stats : DataFrame =
>>> hiveSqlContext.read.format("org.apache.spark.sql.cassandra").options(optionsC).load();
>>> //stats.foreach { x => println(x) }
>>> stats.registerTempTable("V_DATA");
>>>
>>> //START HIVE SERVER
>>> HiveThriftServer2.startWithContext(hiveSqlContext);
>>>
>>> Now, from the app I can perform queries and joins over the 2 registered
>>> tables, but if I connect to port 1 via beeline, I see no registered
>>> tables.
>>> show tables is empty.
>>>
>>> I'm using embedded DERBY DB, but this was working in 1.5.2.
>>>
>>> Any suggestion?
>>>
>>> Thanks
>>>
>>>
>>
>
>
> --
> Ing. Marco Colombo
>


Re: HiveThriftServer2.startWithContext no more showing tables in 1.6.2

2016-07-21 Thread Marco Colombo
Thanks.

That is just a typo. I'm using 'spark://10.0.2.15:7077' (standalone).
It's the same URL used with --master in spark-submit.
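
For reference, a minimal sketch of that submit command (the class and jar
names are made up for illustration):

  spark-submit \
    --master spark://10.0.2.15:7077 \
    --class com.example.DataExample \
    spark-sql-dataexample.jar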



2016-07-21 16:08 GMT+02:00 Mich Talebzadeh :

> Hi Marco
>
> In your code
>
> val conf = new SparkConf()
>   .setMaster("spark://10.0.2.15:7077")
>   .setMaster("local")
>   .set("spark.cassandra.connection.host", "10.0.2.15")
>   .setAppName("spark-sql-dataexample");
>
> As I understand it, the first .setMaster("spark://10.0.2.15:7077") indicates
> that you are using Spark in standalone mode, and then .setMaster("local")
> means you are using it in local mode?
>
> Any reason for it?
>
> Basically you are overriding standalone with local.
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 21 July 2016 at 14:55, Marco Colombo 
> wrote:
>
>> Hi all, I have a spark application that was working in 1.5.2, but now has
>> a problem in 1.6.2.
>>
>> Here is an example:
>>
>> val conf = new SparkConf()
>>   .setMaster("spark://10.0.2.15:7077")
>>   .setMaster("local")
>>   .set("spark.cassandra.connection.host", "10.0.2.15")
>>   .setAppName("spark-sql-dataexample");
>>
>> val hiveSqlContext = new HiveContext(SparkContext.getOrCreate(conf));
>>
>> //Registering tables
>> var query = """OBJ_TAB""".stripMargin;
>>
>> val options = Map(
>>   "driver" -> "org.postgresql.Driver",
>>   "url" -> "jdbc:postgresql://127.0.0.1:5432/DB",
>>   "user" -> "postgres",
>>   "password" -> "postgres",
>>   "dbtable" -> query);
>>
>> import hiveSqlContext.implicits._;
>> val df: DataFrame =
>> hiveSqlContext.read.format("jdbc").options(options).load();
>> df.registerTempTable("V_OBJECTS");
>>
>>  val optionsC = Map("table"->"data_tab", "keyspace"->"data");
>> val stats : DataFrame =
>> hiveSqlContext.read.format("org.apache.spark.sql.cassandra").options(optionsC).load();
>> //stats.foreach { x => println(x) }
>> stats.registerTempTable("V_DATA");
>>
>> //START HIVE SERVER
>> HiveThriftServer2.startWithContext(hiveSqlContext);
>>
>> Now, from the app I can perform queries and joins over the 2 registered
>> tables, but if I connect to port 1 via beeline, I see no registered
>> tables.
>> show tables is empty.
>>
>> I'm using embedded DERBY DB, but this was working in 1.5.2.
>>
>> Any suggestion?
>>
>> Thanks
>>
>>
>


-- 
Ing. Marco Colombo


Re: HiveThriftServer2.startWithContext no more showing tables in 1.6.2

2016-07-21 Thread Mich Talebzadeh
Hi Marco

In your code

val conf = new SparkConf()
  .setMaster("spark://10.0.2.15:7077")
  .setMaster("local")
  .set("spark.cassandra.connection.host", "10.0.2.15")
  .setAppName("spark-sql-dataexample");

As I understand it, the first .setMaster("spark://10.0.2.15:7077") indicates
that you are using Spark in standalone mode, and then .setMaster("local")
means you are using it in local mode?

Any reason for it?

Basically you are overriding standalone with local.
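
To illustrate (a small sketch - SparkConf is just a key/value map, so the
last call to setMaster wins):

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .setMaster("spark://10.0.2.15:7077")
    .setMaster("local")              // overwrites the previous spark.master value
  println(conf.get("spark.master"))  // prints "local"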

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 21 July 2016 at 14:55, Marco Colombo  wrote:

> Hi all, I have a spark application that was working in 1.5.2, but now has
> a problem in 1.6.2.
>
> Here is an example:
>
> val conf = new SparkConf()
>   .setMaster("spark://10.0.2.15:7077")
>   .setMaster("local")
>   .set("spark.cassandra.connection.host", "10.0.2.15")
>   .setAppName("spark-sql-dataexample");
>
> val hiveSqlContext = new HiveContext(SparkContext.getOrCreate(conf));
>
> //Registering tables
> var query = """OBJ_TAB""".stripMargin;
>
> val options = Map(
>   "driver" -> "org.postgresql.Driver",
>   "url" -> "jdbc:postgresql://127.0.0.1:5432/DB",
>   "user" -> "postgres",
>   "password" -> "postgres",
>   "dbtable" -> query);
>
> import hiveSqlContext.implicits._;
> val df: DataFrame =
> hiveSqlContext.read.format("jdbc").options(options).load();
> df.registerTempTable("V_OBJECTS");
>
>  val optionsC = Map("table"->"data_tab", "keyspace"->"data");
> val stats : DataFrame =
> hiveSqlContext.read.format("org.apache.spark.sql.cassandra").options(optionsC).load();
> //stats.foreach { x => println(x) }
> stats.registerTempTable("V_DATA");
>
> //START HIVE SERVER
> HiveThriftServer2.startWithContext(hiveSqlContext);
>
> Now, from the app I can perform queries and joins over the 2 registered tables,
> but if I connect to port 1 via beeline, I see no registered tables.
> show tables is empty.
>
> I'm using embedded DERBY DB, but this was working in 1.5.2.
>
> Any suggestion?
>
> Thanks
>
>


Re: hivethriftserver2 problems on upgrade to 1.6.0

2016-01-27 Thread Deenar Toraskar
James

The problem you are facing is due to a feature introduced in Spark 1.6 -
multi-session mode. If you want to see temporary tables across sessions,
set spark.sql.hive.thriftServer.singleSession=true


   - From Spark 1.6, by default the Thrift server runs in multi-session
   mode. Which means each JDBC/ODBC connection owns a copy of their own SQL
   configuration and temporary function registry. Cached tables are still
   shared though. If you prefer to run the Thrift server in the old
   single-session mode, please set option
   spark.sql.hive.thriftServer.singleSession to true. You may either add
   this option to spark-defaults.conf, or pass it to start-thriftserver.sh
via --conf:

./sbin/start-thriftserver.sh \
 --conf spark.sql.hive.thriftServer.singleSession=true \
 ...
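
Since the server here is started with startWithContext rather than
start-thriftserver.sh, here is a hedged sketch of the same setting applied
programmatically (assuming the 1.6 Thrift server picks this flag up from the
SparkConf it is given):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext
  import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

  val conf = new SparkConf()
    .setAppName("thrift-with-temp-tables")   // illustrative name
    .set("spark.sql.hive.thriftServer.singleSession", "true")

  val hiveContext = new HiveContext(SparkContext.getOrCreate(conf))
  // temp tables registered on hiveContext should now be visible to every JDBC/ODBC session
  HiveThriftServer2.startWithContext(hiveContext)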


On 25 January 2016 at 15:06, james.gre...@baesystems.com <
james.gre...@baesystems.com> wrote:

> On upgrade from 1.5.0 to 1.6.0 I have a problem with the
> hivethriftserver2, I have this code:
>
>
>
> val hiveContext = new HiveContext(SparkContext.getOrCreate(conf));
>
> val thing = hiveContext.read.parquet("hdfs://dkclusterm1.imp.net:8020/user/jegreen1/ex208")
>
> thing.registerTempTable("thing")
>
>
>
> HiveThriftServer2.startWithContext(hiveContext)
>
>
>
>
>
> When I start things up on the cluster my hive-site.xml is found – I can
> see that the metastore connects:
>
>
>
>
>
> INFO  metastore - Trying to connect to metastore with URI thrift://
> dkclusterm2.imp.net:9083
>
> INFO  metastore - Connected to metastore.
>
>
>
>
>
> But then later on the thrift server seems not to connect to the remote
> hive metastore but to start a derby instance instead:
>
>
>
> INFO  AbstractService - Service:CLIService is started.
>
> INFO  ObjectStore - ObjectStore, initialize called
>
> INFO  Query - Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
>
> INFO  MetaStoreDirectSql - Using direct SQL, underlying DB is DERBY
>
> INFO  ObjectStore - Initialized ObjectStore
>
> INFO  HiveMetaStore - 0: get_databases: default
>
> INFO  audit - ugi=jegreen1  ip=unknown-ip-addr  cmd=get_databases:
> default
>
> INFO  HiveMetaStore - 0: Shutting down the object store...
>
> INFO  audit - ugi=jegreen1  ip=unknown-ip-addr  cmd=Shutting down
> the object store...
>
> INFO  HiveMetaStore - 0: Metastore shutdown complete.
>
> INFO  audit - ugi=jegreen1  ip=unknown-ip-addr  cmd=Metastore
> shutdown complete.
>
> INFO  AbstractService - Service:ThriftBinaryCLIService is started.
>
> INFO  AbstractService - Service:HiveServer2 is started.
>
>
>
>
>
> So if I connect to this with JDBC I can see all the tables on the hive
> server – but not anything temporary – I guess they are going to derby.
>
>
>
> I see someone on the databricks website is also having this problem.
>
>
>
>
>
> Thanks
>
>
>
> James
>
>
>
>
>
>
>
>
>
>
>
>
>
> *From:* patcharee [mailto:patcharee.thong...@uni.no]
> *Sent:* 25 January 2016 14:31
> *To:* user@spark.apache.org
> *Cc:* Eirik Thorsnes
> *Subject:* streaming textFileStream problem - got only ONE line
>
>
>
> Hi,
>
> My streaming application is receiving data from file system and just
> prints the input count every 1 sec interval, as the code below:
>
> val sparkConf = new SparkConf()
> val ssc = new StreamingContext(sparkConf, Milliseconds(interval_ms))
> val lines = ssc.textFileStream(args(0))
> lines.count().print()
>
> The problem is that sometimes the data received from ssc.textFileStream is ONLY
> ONE line. But in fact there are multiple lines in the new file found in
> that interval. See log below which shows three intervals. In the 2nd
> interval, the new file is:
> hdfs://helmhdfs/user/patcharee/cerdata/datetime_19617.txt. This file
> contains 6288 lines. The ssc.textFileStream returns ONLY ONE line (the
> header).
>
> Any ideas/suggestions what the problem is?
>
>
> -----------------------------------------------------------
> SPARK LOG
> -----------------------------------------------------------
>
> 16/01/25 15:11:11 INFO FileInputDStream: Cleared 1 old files that were
> older than 1453731011000 ms: 145373101 ms
> 16/01/25 15:11:11 INFO FileInputDStream: Cleared 0 old files that were
> older than 1453731011000 ms:
> 16/01/25 15:11:12 INFO FileInputDStream: Finding new files took 4 ms
> 16/01/25 15:11:12 INFO FileInputDStream: New files at time 1453731072000
> ms:
> hdfs://helmhdfs/user/patcharee/cerdata/datetime_19616.txt
> ---
> Time: 1453731072000 ms
> ---
> 6288
>
> 16/01/25 15:11:12 INFO FileInputDStream: Cleared 1 old files that were
> older than 1453731012000 ms: 1453731011000 ms
> 16/01/25 15:11:12 INFO FileInputDStream: Cleared 0 old files that were
> older than 1453731012000 ms:
> 
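
For reference, the Spark Streaming docs require that files be moved (renamed)
atomically into the monitored directory once they are fully written; a file
that is still being written when the batch fires can show up incomplete. A
minimal sketch of that pattern, with illustrative HDFS paths:

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.{FileSystem, Path}

  val fs = FileSystem.get(new Configuration())
  // write the file under a staging directory first, then rename it into the
  // watched directory so textFileStream sees one complete new file
  fs.rename(new Path("/user/patcharee/staging/datetime_19617.txt"),
            new Path("/user/patcharee/cerdata/datetime_19617.txt"))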

Re: HiveThriftServer2.startWithContext error with registerTempTable

2015-07-16 Thread Srikanth
Cheng,

Yes, "select * from temp_table" was working. I was able to perform some
transformation+action on the dataframe and print it on the console.
HiveThriftServer2.startWithContext was being run in the same session.

When you say try the --jars option, are you asking me to pass the spark-csv jar?
I'm already doing this with --packages com.databricks:spark-csv_2.10:1.0.3.
Not sure if I'm missing your point here.

Anyways, I gave it a shot. I downloaded spark-csv_2.10-0.1.jar and started
spark-shell with --jars.
I still get the same exception. I'm pasting the exception below.

scala 15/07/16 11:29:22 ERROR SparkExecuteStatementOperation: Error
executing query:
java.lang.ClassNotFoundException:
com.databricks.spark.csv.CsvRelation$$anonfun$buildScan$1$$anonfun$1
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)

15/07/16 11:29:22 WARN ThriftCLIService: Error executing statement:
org.apache.hive.service.cli.HiveSQLException:
java.lang.ClassNotFoundException:
com.databricks.spark.csv.CsvRelation$$anonfun$buildScan$1$$anonfun$1
at
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:206)
at
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
at
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Srikanth




On Thu, Jul 16, 2015 at 12:44 AM, Cheng, Hao hao.ch...@intel.com wrote:

  Have you ever tried querying “select * from temp_table” from the spark
 shell? Or can you try the --jars option while starting the spark shell?



 *From:* Srikanth [mailto:srikanth...@gmail.com]
 *Sent:* Thursday, July 16, 2015 9:36 AM
 *To:* user
 *Subject:* Re: HiveThriftServer2.startWithContext error with
 registerTempTable



 Hello,



 Re-sending this to see if I'm second time lucky!

 I've not managed to move past this error.



 Srikanth



 On Mon, Jul 13, 2015 at 9:14 PM, Srikanth srikanth...@gmail.com wrote:

  Hello,



 I want to expose result of Spark computation to external tools. I plan to
 do this with Thrift server JDBC interface by registering result Dataframe
 as temp table.

 I wrote a sample program in spark-shell to test this.



 val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
 import hiveContext.implicits._
 HiveThriftServer2.startWithContext(hiveContext)
 val myDF =
 hiveContext.read.format("com.databricks.spark.csv").option("header",
 "true").load("/datafolder/weblog/pages.csv")
 myDF.registerTempTable("temp_table")



 I'm able to see the temp table in Beeline



 +-------------+--------------+
 |  tableName  | isTemporary  |
 +-------------+--------------+
 | temp_table  | true         |
 | my_table    | false        |
 +-------------+--------------+



 Now when I issue "select * from temp_table" from Beeline, I see the below
 exception in spark-shell



 15/07/13 17:18:27 WARN ThriftCLIService: Error executing statement:

 org.apache.hive.service.cli.HiveSQLException: java.lang.ClassNotFoundException:
 com.databricks.spark.csv.CsvRelation$$anonfun$buildScan$1$$anonfun$1

 at
 org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:206)

 at
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)

 at
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)



 I'm able to read the other table (my_table) from Beeline though.

 Any suggestions on how to overcome this?



 This is with the Spark 1.4 pre-built version. Spark-shell was started with
 --packages to pass spark-csv.



 Srikanth





RE: HiveThriftServer2.startWithContext error with registerTempTable

2015-07-15 Thread Cheng, Hao
Have you ever tried querying “select * from temp_table” from the spark shell? Or
can you try the --jars option while starting the spark shell?
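
A sketch of what that --jars invocation might look like (the jar path is
hypothetical; it would be the same spark-csv artifact that --packages pulls in):

  ./bin/spark-shell --jars /path/to/spark-csv_2.10-1.0.3.jar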

From: Srikanth [mailto:srikanth...@gmail.com]
Sent: Thursday, July 16, 2015 9:36 AM
To: user
Subject: Re: HiveThriftServer2.startWithContext error with registerTempTable

Hello,

Re-sending this to see if I'm second time lucky!
I've not managed to move past this error.

Srikanth

On Mon, Jul 13, 2015 at 9:14 PM, Srikanth 
srikanth...@gmail.com wrote:
Hello,

I want to expose result of Spark computation to external tools. I plan to do 
this with Thrift server JDBC interface by registering result Dataframe as temp 
table.
I wrote a sample program in spark-shell to test this.

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext.implicits._
HiveThriftServer2.startWithContext(hiveContext)
val myDF = hiveContext.read.format("com.databricks.spark.csv").option("header",
"true").load("/datafolder/weblog/pages.csv")
myDF.registerTempTable("temp_table")

I'm able to see the temp table in Beeline

+-------------+--------------+
|  tableName  | isTemporary  |
+-------------+--------------+
| temp_table  | true         |
| my_table    | false        |
+-------------+--------------+

Now when I issue "select * from temp_table" from Beeline, I see the below exception
in spark-shell

15/07/13 17:18:27 WARN ThriftCLIService: Error executing statement:
org.apache.hive.service.cli.HiveSQLException: java.lang.ClassNotFoundException: 
com.databricks.spark.csv.CsvRelation$$anonfun$buildScan$1$$anonfun$1
at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:206)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

I'm able to read the other table (my_table) from Beeline though.
Any suggestions on how to overcome this?

This is with the Spark 1.4 pre-built version. Spark-shell was started with
--packages to pass spark-csv.

Srikanth



Re: HiveThriftServer2.startWithContext error with registerTempTable

2015-07-15 Thread Srikanth
Hello,

Re-sending this to see if I'm second time lucky!
I've not managed to move past this error.

Srikanth

On Mon, Jul 13, 2015 at 9:14 PM, Srikanth srikanth...@gmail.com wrote:

 Hello,

 I want to expose result of Spark computation to external tools. I plan to
 do this with Thrift server JDBC interface by registering result Dataframe
 as temp table.
 I wrote a sample program in spark-shell to test this.

 val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
 import hiveContext.implicits._
 HiveThriftServer2.startWithContext(hiveContext)
 val myDF =
 hiveContext.read.format("com.databricks.spark.csv").option("header",
 "true").load("/datafolder/weblog/pages.csv")
 myDF.registerTempTable("temp_table")


 I'm able to see the temp table in Beeline

 +-------------+--------------+
 |  tableName  | isTemporary  |
 +-------------+--------------+
 | temp_table  | true         |
 | my_table    | false        |
 +-------------+--------------+


 Now when I issue "select * from temp_table" from Beeline, I see the below
 exception in spark-shell

 15/07/13 17:18:27 WARN ThriftCLIService: Error executing statement:
 org.apache.hive.service.cli.HiveSQLException: java.lang.ClassNotFoundException:
 com.databricks.spark.csv.CsvRelation$$anonfun$buildScan$1$$anonfun$1
 at
 org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:206)
 at
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
 at
 org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 I'm able to read the other table (my_table) from Beeline though.
 Any suggestions on how to overcome this?

 This is with the Spark 1.4 pre-built version. Spark-shell was started with
 --packages to pass spark-csv.

 Srikanth



Re: HiveThriftServer2

2015-04-11 Thread Cheng Lian
Unfortunately the spark-hive-thriftserver artifact hasn't been published yet; you
may either publish it locally or use it as an unmanaged SBT dependency.
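
A hedged sketch of the publish-it-locally route, assuming a Spark 1.3 source
checkout (the profile set may need adjusting for your build):

  # from the Spark source tree: install the Hive / Thrift-server modules
  # into the local Maven repository
  mvn -Phive -Phive-thriftserver -DskipTests clean install

  // then in build.sbt
  resolvers += Resolver.mavenLocal
  libraryDependencies += "org.apache.spark" %% "spark-hive-thriftserver" % "1.3.0"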


On 4/8/15 8:58 AM, Mohammed Guller wrote:


Hi –

I want to create an instance of HiveThriftServer2 in my Scala 
application, so  I imported the following line:


import org.apache.spark.sql.hive.thriftserver._

However, when I compile the code, I get the following error:

object thriftserver is not a member of package org.apache.spark.sql.hive

I tried to include the following in build.sbt, but it looks like it is 
not published:


"org.apache.spark" %% "spark-hive-thriftserver" % "1.3.0",

What library dependency do I need to include in my build.sbt to use 
the ThriftServer2 object?


Thanks,

Mohammed





RE: HiveThriftServer2

2015-04-11 Thread Mohammed Guller
Thanks, Cheng.

BTW, there is another thread on the same topic. It looks like the thrift-server 
will be published for 1.3.1.
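
Once it is published, the dependency should presumably take the usual managed
form (a sketch; the version depends on that release):

  libraryDependencies += "org.apache.spark" %% "spark-hive-thriftserver" % "1.3.1"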

Mohammed

From: Cheng Lian [mailto:lian.cs@gmail.com]
Sent: Saturday, April 11, 2015 5:37 AM
To: Mohammed Guller; user@spark.apache.org
Subject: Re: HiveThriftServer2

Unfortunately the spark-hive-thriftserver artifact hasn't been published yet; you may
either publish it locally or use it as an unmanaged SBT dependency.
On 4/8/15 8:58 AM, Mohammed Guller wrote:
Hi -

I want to create an instance of HiveThriftServer2 in my Scala application, so  
I imported the following line:

import org.apache.spark.sql.hive.thriftserver._

However, when I compile the code, I get the following error:

object thriftserver is not a member of package org.apache.spark.sql.hive

I tried to include the following in build.sbt, but it looks like it is not 
published:

"org.apache.spark" %% "spark-hive-thriftserver" % "1.3.0",

What library dependency do I need to include in my build.sbt to use the 
ThriftServer2 object?

Thanks,
Mohammed