Hi Sourav,

From 0.5.5-incubating (currently in vote), it is recommended to export
SPARK_HOME so that Zeppelin uses the spark-submit command internally.
In that case, spark.home has no effect.
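
For example (just a sketch - /usr/local/spark below is a placeholder for your
actual Spark installation path), zeppelin-env.sh would contain a line like:

  export SPARK_HOME=/usr/local/spark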

However, I cannot reproduce the same error with the split function in Spark SQL,
even without SPARK_HOME set.
Could you tell me how to reproduce the problem?

Thanks,
moon

On Thu, Nov 12, 2015 at 12:46 AM Sourav Mazumder <
sourav.mazumde...@gmail.com> wrote:

> Hi,
>
> If I try to execute the split function in Spark SQL,
>
> I get the following error in some situations.
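> For instance, a paragraph along the lines of the one below (the table and
> column names here are only placeholders for my actual data):
>
> %sql
> select split(name, ',') from my_table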
>
> java.util.NoSuchElementException: key not found: split
>   at scala.collection.MapLike$class.default(MapLike.scala:228)
>   at scala.collection.AbstractMap.default(Map.scala:58)
>   at scala.collection.mutable.HashMap.apply(HashMap.scala:64)
>   at org.apache.spark.sql.catalyst.analysis.StringKeyHashMap.apply(FunctionRegistry.scala:92)
>   at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:57)
>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:465)
>   at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:463)
>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>   at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
>   at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>   at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
>   at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>   at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>   at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
>   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1(QueryPlan.scala:75)
>   at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1$$anonfun$apply$1.apply(QueryPlan.scala:90)
>
> Here are the situations in which it works and does not work:
>
> 1. Case 1: SPARK_HOME in zeppelin-env.sh is not specified, spark.home in the
> interpreter UI is not specified - this does not work.
> 2. Case 2: SPARK_HOME in zeppelin-env.sh is not specified, spark.home in the
> interpreter UI is specified - this does not work.
> 3. Case 3: SPARK_HOME in zeppelin-env.sh is specified, spark.home in the
> interpreter UI is also specified - this works.
> 4. Case 4: SPARK_HOME in zeppelin-env.sh is specified, spark.home in the
> interpreter UI is not specified - this works.
>
> Any idea what is going on?
>
> I am also wondering what the order of precedence is between SPARK_HOME in
> zeppelin-env.sh and spark.home in the interpreter UI.
>
> Regards,
> Sourav
>
