The issue is that this works OK in the Spark shell:

Spark context Web UI available at http://50.140.197.217:55555
Spark context available as 'sc' (master = local, app id =
local-1471191662017).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.apache.spark.SparkContext
scala> import org.apache.spark.SparkConf
scala> import org.apache.spark.sql.Row
scala> import org.apache.spark.sql.hive.HiveContext
scala> import org.apache.spark.sql.types._
scala> import org.apache.spark.sql.SparkSession
scala> import org.apache.spark.sql.functions._

The code itself:

scala> val conf = new SparkConf().
     |   setAppName("ETL_scratchpad_dummy").
     |   set("spark.driver.allowMultipleContexts", "true").
     |   set("enableHiveSupport","true")
conf: org.apache.spark.SparkConf = org.apache.spark.SparkConf@33215ffb


scala> val sc = new SparkContext(conf)
sc: org.apache.spark.SparkContext = org.apache.spark.SparkContext@3cbfdf5c


scala> val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
HiveContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@2152fde5


scala> HiveContext.sql("use oraclehadoop")
res0: org.apache.spark.sql.DataFrame = []

I think I am missing something here, possibly a dependency.
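
For reference, in Spark 2.0 the recommended pattern seems to be a single SparkSession rather than SparkContext plus HiveContext. Below is a minimal sketch of the same program in that style (same app name and database as above). Note that set("enableHiveSupport","true") on SparkConf has no effect; Hive support is enabled via the builder method:

import org.apache.spark.sql.SparkSession

object ETL_scratchpad_dummy {
  def main(args: Array[String]): Unit = {
    // One SparkSession with Hive support replaces SparkContext + HiveContext
    val spark = SparkSession.builder()
      .appName("ETL_scratchpad_dummy")
      .enableHiveSupport()
      .getOrCreate()

    // The underlying SparkContext is still reachable if needed
    val sc = spark.sparkContext

    // Equivalent of HiveContext.sql("use oraclehadoop")
    spark.sql("use oraclehadoop")
  }
}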


Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 14 August 2016 at 17:16, Koert Kuipers <ko...@tresata.com> wrote:

> HiveContext is gone
>
> SparkSession now combines functionality of SqlContext and HiveContext (if
> hive support is available)
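>
> in a 2.0 spark-shell, for instance, the built-in "spark" session already
> carries that combined functionality (a rough sketch; "mydb" is just a
> placeholder):
>
> scala> spark.sql("use mydb")             // formerly HiveContext.sql(...)
> scala> spark.catalog.listTables().show() // catalog access on the same session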
>
> On Sun, Aug 14, 2016 at 12:12 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Thanks Koert,
>>
>> I did that before as well. Anyway, these are the dependencies:
>>
>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
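>>
>> (For what it's worth, a complete build.sbt along those lines, keeping
>> Scala and all three Spark modules on matching versions, would look
>> something like the sketch below; 2.11.8 matches the shell banner above.
>> Note the [info] line below mentions target/scala-2.10, which suggests
>> the scalaVersion setting may not actually be taking effect.)
>>
>> name := "scala"
>> version := "1.0"
>> scalaVersion := "2.11.8"
>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"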
>>
>>
>> and this is the error:
>>
>>
>> [info] Compiling 1 Scala source to /data6/hduser/scala/ETL_scratchpad_dummy/target/scala-2.10/classes...
>> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:4: object hive is not a member of package org.apache.spark.sql
>> [error] import org.apache.spark.sql.hive.HiveContext
>> [error]                             ^
>> [error] /data6/hduser/scala/ETL_scratchpad_dummy/src/main/scala/ETL_scratchpad_dummy.scala:20: object hive is not a member of package org.apache.spark.sql
>> [error]   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>>
>>
>>
>> On 14 August 2016 at 17:00, Koert Kuipers <ko...@tresata.com> wrote:
>>
>>> you cannot mix spark 1 and spark 2 jars
>>>
>>> change this
>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>>> to
>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.0.0"
>>>
>>> On Sun, Aug 14, 2016 at 11:58 AM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> In Spark 2 I am using sbt or mvn to compile my Scala program. This used
>>>> to compile and run perfectly with Spark 1.6.1, but now it is throwing an error.
>>>>
>>>>
>>>> I believe the problem is here. I have:
>>>>
>>>> name := "scala"
>>>> version := "1.0"
>>>> scalaVersion := "2.11.7"
>>>> libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
>>>> libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
>>>> libraryDependencies += "org.apache.spark" %% "spark-hive" % "1.5.1"
>>>>
>>>> However, the error I am getting is:
>>>>
>>>> [error] bad symbolic reference. A signature in HiveContext.class refers
>>>> to type Logging
>>>> [error] in package org.apache.spark which is not available.
>>>> [error] It may be completely missing from the current classpath, or the
>>>> version on
>>>> [error] the classpath might be incompatible with the version used when
>>>> compiling HiveContext.class.
>>>> [error] one error found
>>>> [error] (compile:compileIncremental) Compilation failed
>>>>
>>>>
>>>> And this is the code:
>>>>
>>>> import org.apache.spark.SparkContext
>>>> import org.apache.spark.SparkConf
>>>> import org.apache.spark.sql.Row
>>>> import org.apache.spark.sql.hive.HiveContext
>>>> import org.apache.spark.sql.types._
>>>> import org.apache.spark.sql.SparkSession
>>>> import org.apache.spark.sql.functions._
>>>> object ETL_scratchpad_dummy {
>>>>   def main(args: Array[String]) {
>>>>   val conf = new SparkConf().
>>>>                setAppName("ETL_scratchpad_dummy").
>>>>                set("spark.driver.allowMultipleContexts", "true").
>>>>                set("enableHiveSupport","true")
>>>>   val sc = new SparkContext(conf)
>>>>   //import sqlContext.implicits._
>>>>   val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
>>>>   HiveContext.sql("use oraclehadoop")
>>>>   }
>>>> }
>>>>
>>>>
>>>> Has anyone come across this?
>>>>
>>>>
>>>>
>>>
>>>
>>
>
