Hi,

I am using Zeppelin 0.6.0. Please find below the issues along with their 
detailed explanation.

Zeppelin 0.6.0 issues:

a. Not able to execute DDL statements (CREATE/DROP TABLE) on temp tables 
derived from a Hive table.
   Error log: "java.lang.RuntimeException: [1.1] failure: ``with'' expected 
but identifier drop found" (when using the %sql interpreter to drop)

b. Not able to use HiveContext to read a Hive table.
   Error log: "error: object HiveContext in package hive cannot be accessed 
in package org.apache.spark.sql.hive"

Detailed Explanation:
I upgraded to 0.6.0 from Zeppelin 0.5.6 last week. I am facing some issues 
while using notebooks on 0.6.0. I am using Ambari 2.4.2 as my Cluster Manager 
and Spark version is 1.6.

The notebook workflow is as follows:

1. Create a Spark Scala DataFrame by reading a Hive table in Parquet/text 
format using sqlContext: sqlContext.read.parquet("/tablelocation/tablename")

2. Import the sqlContext implicits (import sqlContext.implicits._)

3. Register the DataFrame as a temp table

4. Write queries using the %sql interpreter or sqlContext.sql
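Concretely, the notebook paragraphs look roughly like this (the temp-table name here is illustrative; sc and sqlContext are the instances Zeppelin injects into the %spark interpreter):

```scala
// Runs inside a Zeppelin %spark paragraph; sc and sqlContext are provided
// by the interpreter, so no SparkContext setup is needed here.

// 1. Create a DataFrame by reading the Hive table's Parquet files
val df = sqlContext.read.parquet("/tablelocation/tablename")

// 2. Import the implicits tied to this sqlContext (toDF, $"col" syntax, etc.)
import sqlContext.implicits._

// 3. Register the DataFrame as a temp table (Spark 1.6 API)
df.registerTempTable("tablename_temp")

// 4. Query it via sqlContext.sql, or equivalently from a %sql paragraph
sqlContext.sql("SELECT * FROM tablename_temp LIMIT 10").show()
```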

The issue I am facing right now: even though I am able to execute SELECT 
queries on the temp tables, I am not able to execute DDL statements such as 
CREATE/DROP TABLE on temp tables derived from the Hive table.
Following is my code:
1st case:  sqlContext.sql("drop if exists tablename")
2nd case: %sql
                  drop if exists tablename

I get the same error in both cases: java.lang.RuntimeException: 
[1.1] failure: ``with'' expected but identifier drop found (when using the 
sql interpreter to drop)

Note that the same code used to work in Zeppelin 0.5.6.

After researching a bit, I found that I need to use a HiveContext to query 
Hive tables.

The second issue I am facing: I was able to import HiveContext with 
"import org.apache.spark.sql.hive.HiveContext", but I was not able to use 
HiveContext to read the Hive table.

This is the code I wrote:
HiveContext.read.parquet("/tablelocation/tablename")

I got the following error:
error: object HiveContext in package hive cannot be accessed in package 
org.apache.spark.sql.hive
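My understanding (which may be wrong) is that HiveContext is a class, so calling .read directly on the name HiveContext resolves to its package-private companion object rather than an instance, which would explain the access error. What I believe is the intended usage is to instantiate it from the SparkContext first, roughly:

```scala
// Sketch only, not verified: instantiate HiveContext from the SparkContext
// (sc is the instance Zeppelin provides), then read through the instance.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
val df = hiveContext.read.parquet("/tablelocation/tablename")
```

I have not confirmed whether creating a second context alongside Zeppelin's injected sqlContext is safe, so corrections are welcome.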

I am not able to dig deeper into this error, as there is not much support 
online.

Could anyone please suggest a fix for these errors?

Thanks and Regards,
................................................................................................................................................................................................

Valluri Naga Sravanthi | On assignment to PfizerWorks
Cell: +91 9008412366
Email: pfizerwor...@pfizer.com; valluri.nagasravan...@pfizer.com
Website: http://pfizerWorks.pfizer.com


