The fundamental problem seems to be incompatible spark-assembly-n.n.n-hadoopn.n.n.jar
libraries, which cause issues. For example, Hive does not work with the
existing Spark 1.6.1 binaries. In other words, if you set
hive.execution.engine in $HIVE_HOME/conf/hive-site.xml as follows:

    <property>
      <name>hive.execution.engine</name>
      <value>spark</value>
      <description>
        Expects one of [mr, tez, spark].
        Chooses execution engine. Options are: mr (Map reduce, default), tez, spark.
        While MR remains the default engine for historical reasons, it is itself a
        historical engine and is deprecated in the Hive 2 line. It may be removed
        without further warning.
      </description>
    </property>

Hive will crash.

In short, the only setup that currently works for me is Spark 1.3.1:
install the Spark 1.3.1 binaries, build Spark 1.3.1 from source to obtain
the assembly jar spark-assembly-1.3.1-hadoop2.4.0.jar, and copy that jar
into $HIVE_HOME/lib.
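Concretely, the jar-copy step amounts to something like the following. This
is only a sketch: the build directory and the assembly jar's location inside
the Spark source tree are assumptions, so adjust them to your own layout.
It is written as a dry run that prints the copy command rather than
executing it.

```shell
#!/bin/bash
# Sketch of the workaround above -- all paths are assumptions; adjust
# them to match your own installation layout.
SPARK_VERSION=1.3.1
HADOOP_VERSION=2.4.0
SPARK_BUILD_DIR="/usr/src/spark-${SPARK_VERSION}"   # where Spark was built from source
HIVE_HOME="${HIVE_HOME:-/usr/lib/hive}"

# The assembly jar that the Spark 1.3.1 source build produces.
ASSEMBLY_JAR="spark-assembly-${SPARK_VERSION}-hadoop${HADOOP_VERSION}.jar"

# Dry run: print the copy into Hive's lib directory instead of executing it.
echo cp "${SPARK_BUILD_DIR}/assembly/target/scala-2.10/${ASSEMBLY_JAR}" "${HIVE_HOME}/lib/"
```

Drop the `echo` once you have confirmed the paths are right for your
machine.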

Afterwards whenever you invoke Hive you will need to initialise it using
the following:

set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6;
set hive.execution.engine=spark;
set spark.master=yarn-client;
This is just a work-around, not a proper solution.
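If you would rather not type those set commands every session, the same
settings can be made permanent in hive-site.xml. This is a sketch only: the
property values are the ones from the session settings above, and the
spark.home path is an assumption for your layout.

```xml
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.home</name>
  <value>/usr/lib/spark-1.3.1-bin-hadoop2.6</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn-client</value>
</property>
```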

HTH






Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 8 April 2016 at 19:16, Szehon Ho <sze...@cloudera.com> wrote:

> Yes, that is a good goal we will have to get to eventually.  To be honest,
> I was not aware that it is not working.
>
> Can you let us know what is broken on Hive 2 on Spark 1.6.1?  Preferably
> via filing a JIRA on HIVE side?
>
> On Fri, Apr 8, 2016 at 7:47 AM, Mich Talebzadeh <mich.talebza...@gmail.com
> > wrote:
>
>> This is a different thing. The question is when Hive 2 will be able to
>> run on the installed Spark 1.6.1 binaries as its execution engine.
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 8 April 2016 at 11:30, 469564481 <469564...@qq.com> wrote:
>>
>>>   I do not have Spark installed as an engine.
>>>   I can use JDBC to connect to Hive and execute SQL (create, drop, ...),
>>> but my ODBC test case (HiveclientTest) can connect to Hive yet cannot
>>> execute SQL.
>>>
>>>
>>> ------------------ Original Message ------------------
>>> *From:* "Mich Talebzadeh";<mich.talebza...@gmail.com>;
>>> *Sent:* Friday, 8 April 2016, 5:02 PM
>>> *To:* "user"<u...@hive.apache.org>; "user @spark"<user@spark.apache.org>;
>>>
>>> *Subject:* Work on Spark engine for Hive
>>>
>>> Hi,
>>>
>>> Is there any scheduled work to enable Hive to use a recent version of
>>> the Spark engine?
>>>
>>> This is becoming an issue as some applications have to rely on the MR
>>> engine to do operations on Hive 2, which is serial and slow.
>>>
>>> Thanks
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>
>>
>
