And if you want to use the SQL CLI (based on Catalyst) the way it works in Shark,
you can also check out https://github.com/amplab/shark/pull/337 :)

This preview version doesn't require Hive to be set up in the cluster.
(Don't forget to also put hive-site.xml under SHARK_HOME/conf.)
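If you don't have a hive-site.xml handy, a minimal sketch along these lines
should do (assuming an embedded Derby metastore and a local warehouse path;
both values are just examples to adapt for your environment):

    <?xml version="1.0"?>
    <configuration>
      <!-- Embedded Derby metastore; fine for local testing only -->
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
      </property>
      <!-- Where managed tables get stored -->
      <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
      </property>
    </configuration>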

Cheng Hao

From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Saturday, June 07, 2014 2:22 AM
To: user@spark.apache.org
Subject: Re: Is Spark-1.0.0 not backward compatible with Shark-0.9.1 ?

There is no official updated version of Shark for Spark-1.0 (though you
might check out the untested spark-1.0 branch on GitHub).

You can also check out the preview release of Shark that runs on Spark SQL: 
https://github.com/amplab/shark/tree/sparkSql
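
For example (assuming the usual Shark build flow with the bundled sbt launcher):

    git clone https://github.com/amplab/shark.git
    cd shark
    git checkout sparkSql
    sbt/sbt package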

Michael

On Fri, Jun 6, 2014 at 6:02 AM, bijoy deb
<bijoy.comput...@gmail.com> wrote:
Hi,

I am trying to build Shark-0.9.1 from source, with Spark-1.0.0 as its
dependency, using the sbt package command. But I am getting the errors below
during the build, which makes me think that perhaps Spark-1.0.0 is not
compatible with Shark-0.9.1:

[info]   Compilation completed in 9.046 s
[error] /vol1/shark/src/main/scala/shark/api/JavaTableRDD.scala:57: org.apache.spark.api.java.function.Function[shark.api.Row,Boolean] does not take parameters
[error]     wrapRDD(rdd.filter((x => f(x).booleanValue())))
[error]                               ^
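
This first one looks like the Spark 1.0 Java API change, where Function became
a plain interface instead of extending Scala's Function1, so I guess the call
site would need something like this (untested on my side):

    // Invoke call(...) explicitly; Function no longer extends Function1:
    wrapRDD(rdd.filter(x => f.call(x).booleanValue()))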
[error] /vol1/shark/src/main/scala/shark/execution/CoGroupedRDD.scala:84: type mismatch;
[error]  found   : String
[error]  required: org.apache.spark.serializer.Serializer
[error]         new ShuffleDependency[Any, Any](rdd, part, SharkEnv.shuffleSerializerName)
[error]                                                             ^
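
This one seems to come from the new ShuffleDependency signature, which now
takes a Serializer instance rather than a class-name String, so I imagine the
fix is roughly this (KryoSerializer here is just an illustration, not what
Shark actually configures):

    import org.apache.spark.SparkEnv
    import org.apache.spark.serializer.{KryoSerializer, Serializer}

    // Spark 1.0 wants a Serializer instance, not a String class name:
    val shuffleSerializer: Serializer = new KryoSerializer(SparkEnv.get.conf)
    new ShuffleDependency[Any, Any](rdd, part, shuffleSerializer)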
[error] /vol1/shark/src/main/scala/shark/execution/CoGroupedRDD.scala:120: value serializerManager is not a member of org.apache.spark.SparkEnv
[error]     val serializer = SparkEnv.get.serializerManager.get(SharkEnv.shuffleSerializerName, SparkEnv.get.conf)
[error]                                   ^
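
And this one is apparently because SparkEnv.serializerManager went away in
Spark 1.0; maybe the serializer has to be instantiated from the class name by
reflection instead, something like this hypothetical replacement:

    import org.apache.spark.{SparkConf, SparkEnv}
    import org.apache.spark.serializer.Serializer

    // Construct the serializer directly from the configured class name,
    // passing the SparkConf to its one-argument constructor.
    val serializer = Class.forName(SharkEnv.shuffleSerializerName)
      .getConstructor(classOf[SparkConf])
      .newInstance(SparkEnv.get.conf)
      .asInstanceOf[Serializer]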
[warn] /vol1/shark/src/main/scala/shark/execution/ExtractOperator.scala:111: non-variable type argument (shark.execution.ReduceKey, Any) in type pattern org.apache.spark.rdd.RDD[(shark.execution.ReduceKey, Any)] is unchecked since it is eliminated by erasure
[warn]           case r: RDD[(ReduceKey, Any)] => RDDUtils.sortByKey(r)
[warn]                   ^
[error] /vol1/shark/src/main/scala/shark/execution/GroupByPostShuffleOperator.scala:204: type mismatch;
[error]  found   : String
[error]  required: org.apache.spark.serializer.Serializer
[error]       .setSerializer(SharkEnv.shuffleSerializerName)
[error]                               ^
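
This last one looks like the same String-vs-Serializer mismatch as in
CoGroupedRDD.scala above, so presumably .setSerializer now needs a Serializer
instance as well.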
.....
...
Can you please suggest if there is any way to use Shark with the new
Spark-1.0.0 version?
Thanks
Bijoy
