It should be "hiveContext" instead of "HiveContext". `HiveContext` is the class; writing `HiveContext.read` makes the Scala compiler resolve it to the class's (non-public) companion object, which is exactly what produces the "object HiveContext in package hive cannot be accessed" error. Call `read` on the instance instead.
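
A minimal sketch of the fix, assuming the paragraph runs in Zeppelin's %spark interpreter, which pre-creates the contexts (use whichever variable name your interpreter actually exposes, hiveContext or sqlContext), and reusing the path from your snippet:

```scala
// Use the pre-created context instance, not the HiveContext class.
val df_test_hive = sqlContext.read.parquet("/location/hivefile")

// Outside Zeppelin you would construct an instance yourself first
// (Spark 1.6 API); inside Zeppelin this is unnecessary:
import org.apache.spark.sql.hive.HiveContext
val hc = new HiveContext(sc)
val df2 = hc.read.parquet("/location/hivefile")
```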

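On the related DDL failure quoted below: Spark 1.6's plain SQLContext parser does not understand Hive DDL such as DROP TABLE, which matches the "``with'' expected but identifier drop found" error in the mail; DDL needs a Hive-backed context. A hedged sketch (note the statement quoted below is also missing the TABLE keyword, and `tablename` here is a placeholder):

```scala
// With a Hive-backed context (what Zeppelin gives you when
// zeppelin.spark.useHiveContext is enabled), Hive DDL is accepted:
sqlContext.sql("DROP TABLE IF EXISTS tablename")
```

The same statement should then also work in a %sql paragraph, since %sql runs through the same context.
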
On Fri, Aug 19, 2016 at 3:33 PM, Sundararajan, Pranav <
pranav.sundarara...@pfizer.com> wrote:

> Hi,
>
>
>
> PFB the code.
>
> import org.apache.spark.sql.hive.HiveContext
>
> val df_test_hive = HiveContext.read.parquet("/location/hivefile")
>
>
>
>
>
> We are getting the following error in the log:
>
> import org.apache.spark.sql.hive.HiveContext
>
> <console>:57: error: object HiveContext in package hive cannot be accessed
> in package org.apache.spark.sql.hive
>
> val df_test_hive = HiveContext.read.parquet("/location/hivefile")
>
>
>
>
>
> PFB the embedded image of hive interpreter settings:
>
>
>
>
>
>
>
> Regards,
>
>
>
>
>
>
>
>
> Pranav Sundararajan | On assignment to PfizerWorks
>
> Cell: +91 9008412366
>
> Email: pfizerwor...@pfizer.com; pranav.sundarara...@pfizer.com
>
> Website: http://pfizerWorks.pfizer.com
>
>
>
>
>
>
>
> *From:* Jeff Zhang [mailto:zjf...@gmail.com]
> *Sent:* Thursday, August 18, 2016 8:31 PM
> *To:* dev@zeppelin.apache.org
> *Cc:* d...@zeppelin.incubator.apache.org; us...@zeppelin.apache.org;
> Sundararajan, Pranav; Jaisankar, Saurabh
> *Subject:* Re: Issues in Zeppelin 0.6.0
>
>
>
> Hi
>
>
>
> Since you have many issues, let's focus one issue first.
>
>
>
> >>> *not able to use the HiveContext to read the Hive table*
>
>
>
>
>
> Can you paste the code showing how you use HiveContext? Do you create it
> yourself? It should be created by Zeppelin, so you don't need to create it
> yourself.
>
> What's in the interpreter log?
>
>
>
>
>
>
>
> On Thu, Aug 18, 2016 at 7:35 PM, Nagasravanthi, Valluri <
> valluri.nagasravan...@pfizer.com> wrote:
>
> Hi,
>
>
>
> I am using Zeppelin 0.6.0. Please find below the issues along with their
> detailed explanation.
>
>
>
> *Zeppelin 0.6.0 Issues:*
>
> a. Not able to execute DDL statements like Create/Drop tables using temp
> tables derived from the Hive table.
>
>    Error log: "java.lang.RuntimeException: [1.1] failure: ``with''
> expected but identifier drop found : When using sql interpreter to drop"
>
>
>
> b. Not able to use the HiveContext to read the Hive table.
>
>    Error log: "error: object HiveContext in package hive cannot be
> accessed in package org.apache.spark.sql.hive"
>
>
>
> *Detailed Explanation:*
>
> I upgraded to 0.6.0 from Zeppelin 0.5.6 last week. I am facing some issues
> while using notebooks on 0.6.0. I am using Ambari 2.4.2 as my Cluster
> Manager and Spark version is 1.6.
>
>
>
> The workflow of notebook is as follows:
>
> 1. Create a Spark Scala DataFrame by reading a Hive table in parquet/text
> format using sqlContext: sqlContext.read.parquet("/tablelocation/tablename")
>
> 2. Import the implicits: import sqlContext.implicits._
>
> 3. Register the DataFrame as a temp table
>
> 4. Write queries using the %sql interpreter or sqlContext.sql
>
>
>
> The issue I am facing right now is that even though I am able to execute
> SELECT queries on the temp tables, *I am not able to execute DDL
> statements like Create/Drop tables using temp tables derived from the
> Hive table.*
>
> Following is my code:
>
> 1st case:  sqlContext.sql("drop if exists tablename")
>
> 2nd case: %sql
>
>                   drop if exists tablename
>
>
>
> I am getting the same error for both the cases:
> java.lang.RuntimeException: [1.1] failure: ``with'' expected but identifier
> drop found : When using sql interpreter to drop
>
>
>
> It is to be noted that, the same code used to work in Zeppelin 0.5.6.
>
>
>
> After researching a bit, I found that I need to use HiveContext to query
> the Hive table.
>
>
>
> The second issue I am facing: I was able to import HiveContext using
> "import org.apache.spark.sql.hive.HiveContext", *but I was not able to
> use the HiveContext to read the Hive table.*
>
>
>
> This is the code which I wrote :
>
> HiveContext.read.parquet("/tablelocation/tablename")
>
>
>
> I got the following error:
>
> error: object HiveContext in package hive cannot be accessed in package
> org.apache.spark.sql.hive
>
>
>
> I am not able to deep dive into this error as there is not much support
> online.
>
>
>
> Could anyone please suggest a fix for these errors?
>
>
>
> Thanks and Regards,
>
>
> Valluri Naga Sravanthi | On assignment to PfizerWorks
>
> Cell: +91 9008412366
>
> Email: pfizerwor...@pfizer.com; valluri.nagasravan...@pfizer.com
>
> Website: http://pfizerWorks.pfizer.com
>
>
>
>
>
>
>
>
>
>
> --
>
> Best Regards
>
> Jeff Zhang
>



-- 
Best Regards

Jeff Zhang
