Hi,
We tried “hiveContext” instead of “HiveContext” but it didn’t work out.
Please find below the code and the error log:
import org.apache.spark.sql.hive.HiveContext
val df_test_hive = hiveContext.read.parquet("/location/hivefile")
import org.apache.spark.sql.hive.HiveContext
<console>:67: error: not found: value hiveContext
val df_test_hive = hiveContext.read.parquet("/location/hivefile")
It seems hiveContext is not a value exposed by the HiveContext import.
Could you please suggest another way to do this?
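If no value named hiveContext turns out to be bound in the interpreter, one fallback we could try in Spark 1.6 is to build a HiveContext directly from the SparkContext that Zeppelin already exposes as sc. A minimal sketch, assuming sc is available and reusing the placeholder path from the snippet above:

import org.apache.spark.sql.hive.HiveContext

// Construct a HiveContext from the interpreter-provided SparkContext (sc)
val hiveCtx = new HiveContext(sc)

// Read the Parquet data the same way as in the failing snippet
val df_test_hive = hiveCtx.read.parquet("/location/hivefile")
df_test_hive.printSchema()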
Regards,
Pranav Sundararajan| On assignment to PfizerWorks
Cell: +91 9008412366
Email: [email protected]; [email protected]
Website: http://pfizerWorks.pfizer.com
From: Jeff Zhang [mailto:[email protected]]
Sent: Friday, August 19, 2016 3:35 AM
To: Sundararajan, Pranav
Cc: [email protected]; [email protected];
[email protected]; Jaisankar, Saurabh
Subject: Re: Issues in Zeppelin 0.6.0
It should be "hiveContext" instead of "HiveContext"
On Fri, Aug 19, 2016 at 3:33 PM, Sundararajan, Pranav
<[email protected]> wrote:
Hi,
Please find below the code.
import org.apache.spark.sql.hive.HiveContext
val df_test_hive = HiveContext.read.parquet("/location/hivefile ")
We are getting the following error in the log:
import org.apache.spark.sql.hive.HiveContext
<console>:57: error: object HiveContext in package hive cannot be accessed in
package org.apache.spark.sql.hive
val df_test_hive = HiveContext.read.parquet("/location/hivefile")
Please find below the embedded image of the Hive interpreter settings:
[embedded image: Hive interpreter settings]
Regards,
Pranav Sundararajan| On assignment to PfizerWorks
Cell: +91 9008412366
Email: [email protected]; [email protected]
Website: http://pfizerWorks.pfizer.com
From: Jeff Zhang [mailto:[email protected]]
Sent: Thursday, August 18, 2016 8:31 PM
To: [email protected]
Cc: [email protected]; [email protected];
Sundararajan, Pranav; Jaisankar, Saurabh
Subject: Re: Issues in Zeppelin 0.6.0
Hi
Since you have many issues, let's focus on one issue first.
>>> not able to use the HiveContext to read the Hive table
Can you paste the code showing how you use HiveContext? Do you create it
yourself? It should be created by Zeppelin, so you don't need to create it.
What's in the interpreter log?
On Thu, Aug 18, 2016 at 7:35 PM, Nagasravanthi, Valluri
<[email protected]> wrote:
Hi,
I am using Zeppelin 0.6.0. Please find below the issues along with their
detailed explanation.
Zeppelin 0.6.0 Issues:
a. Not able to execute DDL statements like CREATE/DROP TABLE using temp
tables derived from the Hive table
• Error log: “java.lang.RuntimeException: [1.1] failure: ``with''
expected but identifier drop found” (when using the sql interpreter to drop)
b. Not able to use the HiveContext to read the Hive table
• Error log: “error: object HiveContext in package hive cannot be
accessed in package org.apache.spark.sql.hive”
Detailed Explanation:
I upgraded to 0.6.0 from Zeppelin 0.5.6 last week. I am facing some issues
while using notebooks on 0.6.0. I am using Ambari 2.4.2 as my cluster manager,
and the Spark version is 1.6.
The workflow of the notebook is as follows (a sketch of these steps follows the list):
1. Create a Spark Scala DataFrame by reading a Hive table in Parquet/text
format using sqlContext, e.g. sqlContext.read.parquet("/tablelocation/tablename")
2. Import the sqlContext implicits (import sqlContext.implicits._)
3. Register the DataFrame as a temp table
4. Write queries using the %sql interpreter or sqlContext.sql
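Putting the four steps together, a minimal Spark 1.6 sketch (the path and the table names below are placeholders taken from the description above):

// 1. Create a DataFrame by reading the Hive table's Parquet files via sqlContext
val df = sqlContext.read.parquet("/tablelocation/tablename")

// 2. Import the implicits tied to this sqlContext instance
import sqlContext.implicits._

// 3. Register the DataFrame as a temporary table
df.registerTempTable("temptable")

// 4. Query the temp table, either here or from a %sql paragraph
val result = sqlContext.sql("SELECT COUNT(*) FROM temptable")
result.show()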
The issue I am facing right now is that even though I am able to execute
SELECT queries on the temp tables, I am not able to execute DDL statements
like CREATE/DROP TABLE using temp tables derived from the Hive table.
Following is my code:
1st case: sqlContext.sql("drop if exists tablename")
2nd case: %sql
drop if exists tablename
I am getting the same error for both cases: java.lang.RuntimeException:
[1.1] failure: ``with'' expected but identifier drop found (when using the sql
interpreter to drop)
It should be noted that the same code used to work in Zeppelin 0.5.6.
After researching a bit, I found that I need to use HiveContext to query the
Hive table.
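For reference, in Spark 1.6 DDL such as DROP TABLE is normally routed through a HiveContext-backed sql call, since the plain SQLContext parser does not accept it. A hedged sketch (the table name is a placeholder, and note the TABLE keyword):

// Build a HiveContext from the SparkContext that Zeppelin provides as sc
val hiveCtx = new org.apache.spark.sql.hive.HiveContext(sc)

// HiveQL accepts this DDL; the default SQLContext parser in 1.6 does not
hiveCtx.sql("DROP TABLE IF EXISTS tablename")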
The second issue I am facing is that I was able to import HiveContext using
"import org.apache.spark.sql.hive.HiveContext", but I was not able to use the
HiveContext to read the Hive table.
This is the code I wrote:
HiveContext.read.parquet("/tablelocation/tablename")
I got the following error:
error: object HiveContext in package hive cannot be accessed in package
org.apache.spark.sql.hive
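A likely reading of this error is that HiveContext is a class rather than an object whose methods can be called directly, so an instance is needed. A minimal sketch, assuming the SparkContext sc provided by Zeppelin and the placeholder path above:

import org.apache.spark.sql.hive.HiveContext

// Create an instance of HiveContext from the existing SparkContext
val hiveCtx = new HiveContext(sc)

// Read the Hive table's Parquet files through the instance, not the class name
val df = hiveCtx.read.parquet("/tablelocation/tablename")
df.show(5)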
I am not able to dig deeper into this error as there is not much information
available online.
Could anyone please suggest a fix for these errors?
Thanks and Regards,
…………………………………………………………………………………………………………………………………………………………………………
Valluri Naga Sravanthi| On assignment to PfizerWorks
Cell: +91 9008412366
Email: [email protected]; [email protected]
Website: http://pfizerWorks.pfizer.com
…………………………………………………………………………………………………………………………………………………………
--
Best Regards
Jeff Zhang
--
Best Regards
Jeff Zhang