Using the prebuilt versions from the Apache website for both Hive and Spark.


> On 26 Jan 2016, at 14:00, Sofia Panagiotidi <sofia.panagiot...@taiger.com> wrote:
> 
> I have also managed to use Hive 1.2.1 with Spark 1.4.1
> 
> 
>> On 26 Jan 2016, at 10:20, Mich Talebzadeh <m...@peridale.co.uk> wrote:
>> 
>> As far as I have worked this out, Hive 1.2.1 works on Spark 1.3.1 for 
>> now. This means Hive will use the Spark execution engine.
>>  
>>  
>> set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6;
>> set hive.execution.engine=spark;
>> set spark.master=yarn-client;
>> set hive.optimize.ppd=true;
>> Beeline version 1.2.1 by Apache Hive
>> select count(1) from smallsales;
>> INFO  :
>> Query Hive on Spark job[0] stages:
>> INFO  : 0
>> INFO  : 1
>> INFO  :
>> Status: Running (Hive on Spark job[0])
>> INFO  : Job Progress Format
>> CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
>> INFO  : 2016-01-26 09:27:30,240 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
>> INFO  : 2016-01-26 09:27:33,258 Stage-0_0: 0(+1)/1      Stage-1_0: 0/1
>> INFO  : 2016-01-26 09:27:35,265 Stage-0_0: 1/1 Finished Stage-1_0: 0(+1)/1
>> INFO  : 2016-01-26 09:27:38,281 Stage-0_0: 1/1 Finished Stage-1_0: 1/1 Finished
>> INFO  : Status: Finished successfully in 28.17 seconds
>> +----------+--+
>> |   _c0    |
>> +----------+--+
>> | 5000000  |
>> +----------+--+
>>  
>> HTH
>>  
>> Dr Mich Talebzadeh
>>  
>> LinkedIn  
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>  
>>  
>> Sybase ASE 15 Gold Medal Award 2008
>> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>>  
>> Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15", ISBN 978-0-9563693-0-7;
>> co-author of "Sybase Transact SQL Guidelines Best Practices", ISBN 978-0-9759693-0-4.
>> Publications due shortly:
>> Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8
>> Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one out shortly
>>  
>> http://talebzadehmich.wordpress.com
>>  
>> NOTE: The information in this email is proprietary and confidential. This 
>> message is for the designated recipient only; if you are not the intended 
>> recipient, you should destroy it immediately. Any information in this 
>> message shall not be understood as given or endorsed by Peridale Technology 
>> Ltd, its subsidiaries or their employees, unless expressly so stated. It is 
>> the responsibility of the recipient to ensure that this email is virus free; 
>> therefore neither Peridale Technology Ltd, its subsidiaries nor their 
>> employees accept any responsibility.
>>  
>> From: kevin [mailto:kiss.kevin...@gmail.com]
>> Sent: 26 January 2016 08:45
>> To: u...@spark.apache.org <mailto:u...@spark.apache.org>; 
>> user@hive.apache.org <mailto:user@hive.apache.org>
>> Subject: hive1.2.1 on spark 1.5.2
>>  
>> hi, all
>>    I tried Hive on Spark with Hive 1.2.1 and Spark 1.5.2. I built Spark 
>> without -Phive, and I tested the standalone Spark cluster with spark-submit; 
>> that works fine.
>>    But when I use Hive, I can see the Hive on Spark application on the 
>> Spark web UI, yet it finally fails with this error:
>>  
>> 16/01/26 16:23:42 INFO slf4j.Slf4jLogger: Slf4jLogger started
>> 16/01/26 16:23:42 INFO Remoting: Starting remoting
>> 16/01/26 16:23:42 INFO Remoting: Remoting started; listening on addresses 
>> :[akka.tcp://driverPropsFetcher@10.1.3.116:42307]
>> 16/01/26 16:23:42 INFO util.Utils: Successfully started service 'driverPropsFetcher' on port 42307.
>> Exception in thread "main" akka.actor.ActorNotFound: Actor not found for: 
>> ActorSelection[Anchor(akka.tcp://sparkDriver@10.1.3.107:34725/), Path(/user/CoarseGrainedScheduler)]
>>         at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
>>         at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
>>         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>>         at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>>         at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
>>         at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>>         at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
>>         at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>>         at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>>         at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>>         at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>>         at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:89)
>>         at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:935)
>>         at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>>         at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:411)
>>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>>         at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>>         at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>  
>>  
>> Could anyone tell me whether this is caused by the Hive and Spark versions 
>> not matching? If so, which versions work together, or is there some other reason?
> 
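[Editor's note] For reference, the session-level settings Mich quotes above can also be made permanent in hive-site.xml rather than issued per session with `set`. A minimal sketch, reusing his spark.home path (adjust for your own install):

```xml
<!-- hive-site.xml: run Hive queries on the Spark execution engine -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn-client</value>
</property>
<property>
  <name>spark.home</name>
  <value>/usr/lib/spark-1.3.1-bin-hadoop2.6</value>
</property>
```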

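[Editor's note] On kevin's build step: Hive on Spark expects a Spark assembly that does not bundle Hive classes, which for Spark 1.x means building without the -Phive profile, as he describes. A sketch of such a build command, run from the Spark 1.5.2 source root and assuming a YARN/Hadoop 2.6 target (your Hadoop profile and version may differ):

```shell
# Build a Spark distribution WITHOUT -Phive, so the assembly jar
# does not bundle Hive classes (required for Hive on Spark).
./make-distribution.sh --name hadoop2.6-no-hive --tgz \
  -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0
```

Note that whether Hive 1.2.1 can drive a Spark 1.5.2 cluster at all is a separate question; as Mich notes above, Hive 1.2.1 is known to work with Spark 1.3.1.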