This is due to a change in 1.6: by default the Thrift server now runs in
multi-session mode. You would want to set the following property to true in
your Spark config.

In spark-defaults.conf, set spark.sql.hive.thriftServer.singleSession true
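
For example (a sketch; adjust to however you launch the Thrift server), either
in spark-defaults.conf:

    spark.sql.hive.thriftServer.singleSession  true

or on the SparkConf before creating the context and starting the server:

    // assumes the startWithContext setup shown further down in this thread
    val conf = new SparkConf()
      .set("spark.sql.hive.thriftServer.singleSession", "true")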

Good write-up here:
https://community.hortonworks.com/questions/29090/i-cant-find-my-tables-in-spark-sql-using-beeline.html

HTH.

-Todd

On Thu, Jul 21, 2016 at 10:30 AM, Marco Colombo <ing.marco.colo...@gmail.com> wrote:

> Thanks.
>
> That is just a typo. I'm actually using 'spark://10.0.2.15:7077' (standalone).
> It is the same URL used for --master in spark-submit.
>
>
>
> 2016-07-21 16:08 GMT+02:00 Mich Talebzadeh <mich.talebza...@gmail.com>:
>
>> Hi Marco
>>
>> In your code
>>
>> val conf = new SparkConf()
>>       .setMaster("spark://10.0.2.15:7077")
>>       .setMaster("local")
>>       .set("spark.cassandra.connection.host", "10.0.2.15")
>>       .setAppName("spark-sql-dataexample");
>>
>> As I understand it, the first .setMaster("spark://<IP_ADDRESS>:7077") indicates
>> that you are using Spark in standalone mode, and then .setMaster("local")
>> means you are using it in local mode?
>>
>> Any reason for it?
>>
>> Basically the second call wins, so you are overriding standalone with local.
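>>
>> For example (a minimal sketch), SparkConf keeps only the last value set for
>> a key, so "spark.master" ends up as "local" here:
>>
>>     import org.apache.spark.SparkConf
>>
>>     val c = new SparkConf()
>>       .setMaster("spark://10.0.2.15:7077")
>>       .setMaster("local")              // overwrites the previous master
>>     println(c.get("spark.master"))     // prints "local"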
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn:
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>> On 21 July 2016 at 14:55, Marco Colombo <ing.marco.colo...@gmail.com>
>> wrote:
>>
>>> Hi all, I have a Spark application that was working in 1.5.2, but now
>>> has a problem in 1.6.2.
>>>
>>> Here is an example:
>>>
>>>     val conf = new SparkConf()
>>>       .setMaster("spark://10.0.2.15:7077")
>>>       .setMaster("local")
>>>       .set("spark.cassandra.connection.host", "10.0.2.15")
>>>       .setAppName("spark-sql-dataexample");
>>>
>>>     val hiveSqlContext = new HiveContext(SparkContext.getOrCreate(conf));
>>>
>>>     //Registering tables....
>>>     var query = """OBJ_TAB""".stripMargin;
>>>
>>>     val options = Map(
>>>       "driver" -> "org.postgresql.Driver",
>>>       "url" -> "jdbc:postgresql://127.0.0.1:5432/DB",
>>>       "user" -> "postgres",
>>>       "password" -> "postgres",
>>>       "dbtable" -> query);
>>>
>>>     import hiveSqlContext.implicits._;
>>>     val df: DataFrame =
>>>       hiveSqlContext.read.format("jdbc").options(options).load();
>>>     df.registerTempTable("V_OBJECTS");
>>>
>>>      val optionsC = Map("table"->"data_tab", "keyspace"->"data");
>>>     val stats : DataFrame =
>>>       hiveSqlContext.read.format("org.apache.spark.sql.cassandra").options(optionsC).load();
>>>     //stats.foreach { x => println(x) }
>>>     stats.registerTempTable("V_DATA");
>>>
>>>     //START HIVE SERVER
>>>     HiveThriftServer2.startWithContext(hiveSqlContext);
>>>
>>> Now, from the app I can perform queries and joins over the 2 registered
>>> tables, but if I connect to port 10000 via beeline, I see no registered
>>> tables: show tables comes back empty.
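>>>
>>> For example, connecting with something like this (default host/port assumed
>>> for the embedded Thrift server):
>>>
>>>     beeline -u jdbc:hive2://localhost:10000
>>>     show tables;    -- comes back empty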
>>>
>>> I'm using the embedded Derby metastore DB, but this was working in 1.5.2.
>>>
>>> Any suggestion?
>>>
>>> Thanks!!!!
>>>
>>>
>>
>
>
> --
> Ing. Marco Colombo
>
