Hi Dr. Mich,
Could you please share your London meetup presentation? I'm curious to see your
comparison of the various query engines.

Thanks,
Chandra

> On Jul 28, 2016, at 12:13 AM, Mich Talebzadeh <mich.talebza...@gmail.com> 
> wrote:
> 
> Hi,
> 
> I gave a presentation in London on 20th July on this subject. In it I
> explained how to make Spark work as an execution engine for Hive.
> 
> Query Engines for Hive, MR, Spark, Tez and LLAP – Considerations!
> 
> Let me see if I can send the presentation.
> 
> Cheers
> 
> 
> Dr Mich Talebzadeh
>  
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
> 
> Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
> damage or destruction of data or any other property which may arise from 
> relying on this email's technical content is explicitly disclaimed. The 
> author will in no case be liable for any monetary damages arising from such 
> loss, damage or destruction.
>  
> 
>> On 28 July 2016 at 04:24, Mudit Kumar <mudit.ku...@askme.in> wrote:
>> Yes Mich, exactly.
>> 
>> Thanks,
>> Mudit
>> 
>> From: Mich Talebzadeh <mich.talebza...@gmail.com>
>> Reply-To: <user@hive.apache.org>
>> Date: Thursday, July 28, 2016 at 1:08 AM
>> To: user <user@hive.apache.org>
>> Subject: Re: Hive on spark
>> 
>> You mean you want to run Hive using Spark as the execution engine, which 
>> uses YARN by default?
>> 
>> 
>> Something like below
>> 
>> hive> select max(id) from oraclehadoop.dummy_parquet;
>> Starting Spark Job = 8218859d-1d7c-419c-adc7-4de175c3ca6d
>> Query Hive on Spark job[1] stages:
>> 2
>> 3
>> Status: Running (Hive on Spark job[1])
>> Job Progress Format
>> CurrentTime StageId_StageAttemptId: 
>> SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount 
>> [StageCost]
>> 2016-07-27 20:38:17,269 Stage-2_0: 0(+8)/24     Stage-3_0: 0/1
>> 2016-07-27 20:38:20,298 Stage-2_0: 8(+4)/24     Stage-3_0: 0/1
>> 2016-07-27 20:38:22,309 Stage-2_0: 11(+1)/24    Stage-3_0: 0/1
>> 2016-07-27 20:38:23,330 Stage-2_0: 12(+8)/24    Stage-3_0: 0/1
>> 2016-07-27 20:38:26,360 Stage-2_0: 17(+7)/24    Stage-3_0: 0/1
>> 2016-07-27 20:38:27,386 Stage-2_0: 20(+4)/24    Stage-3_0: 0/1
>> 2016-07-27 20:38:28,391 Stage-2_0: 21(+3)/24    Stage-3_0: 0/1
>> 2016-07-27 20:38:29,395 Stage-2_0: 24/24 Finished       Stage-3_0: 1/1 
>> Finished
>> Status: Finished successfully in 13.14 seconds
>> OK
>> 100000000
>> Time taken: 13.426 seconds, Fetched: 1 row(s)
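>> For context, the session above relies on Hive being switched over to the 
>> Spark engine first. A minimal sketch of the standard Hive properties 
>> involved, assuming Spark is already installed on the cluster (the executor 
>> sizing values below are illustrative, not taken from this thread):
>> 
>> ```sql
>> -- Set from the hive CLI (or persist in hive-site.xml); values are examples.
>> set hive.execution.engine=spark;   -- use Spark instead of MapReduce
>> set spark.master=yarn;             -- submit the Spark job to YARN
>> set spark.executor.memory=2g;      -- illustrative executor sizing
>> set spark.executor.instances=4;    -- illustrative executor count
>> ```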
>> 
>> 
>> HTH
>> 
>> Dr Mich Talebzadeh
>> 
>>> On 27 July 2016 at 20:31, Mudit Kumar <mudit.ku...@askme.in> wrote:
>>> Hi All,
>>> 
>>> I need to configure hive cluster based on spark engine (yarn).
>>> I already have a running hadoop cluster.
>>> 
>>> Can someone point me to relevant documentation?
>>> 
>>> TIA.
>>> 
>>> Thanks,
>>> Mudit
> 
