How can I do that? Could you please send me a reference link?

On Feb 17, 2017 1:17 AM, Jeff Zhang <zjf...@gmail.com> wrote:

Zeppelin creates the SparkContext implicitly for users, so it may be too late to set that property after the interpreter has been opened. You can try setting it in the interpreter settings instead.
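For concreteness, the interpreter-setting approach Jeff describes amounts to adding one property to the spark interpreter in the Zeppelin UI (a sketch, assuming a standard Zeppelin setup; the `lz4` value matches the codec from this thread):

```properties
# Zeppelin UI: Interpreter menu -> spark -> edit -> add a property,
# then save and restart the interpreter so the SparkContext is
# recreated with the new setting:
spark.io.compression.codec=lz4
```

Because the property must be present before the SparkContext is created, it only takes effect after the interpreter restarts.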




Muhammad Rezaul Karim <reza_cse_du@yahoo.com> wrote on Thursday, February 16, 2017, at 11:52 PM:
Hi Lee,

Thanks for the info; that really helped. I set the compression codec on the Spark side (i.e., inside SPARK_HOME) and the problem is now resolved. However, I was wondering whether it's possible to set the same property from the Zeppelin notebook.

I tried in the following way:

%spark
conf.set("spark.io.compression.codec", "lz4")

But I am getting an error. Please suggest a fix.



On Thursday, February 16, 2017 7:40 AM, Jongyoul Lee <jongyoul@gmail.com> wrote:


Hi, can you check whether the script runs in spark-shell or not? AFAIK, you have to add the compression codec yourself on the Spark side.
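As a sketch of the Spark-side approach Jongyoul mentions (assuming a standard Spark 2.x layout under SPARK_HOME), the codec can be set in spark-defaults.conf so it is in place before any SparkContext is created:

```properties
# SPARK_HOME/conf/spark-defaults.conf -- applied to every new
# SparkContext, including the one spark-shell creates on startup:
spark.io.compression.codec    lz4
```

For a one-off test, the same property can be passed on the command line when launching the shell, e.g. `spark-shell --conf spark.io.compression.codec=lz4`.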
On Wed, Feb 15, 2017 at 1:10 AM, Muhammad Rezaul Karim <reza_cse_du@yahoo.com> wrote:
Hi All,

I am receiving the following exception while executing SQL queries:
java.lang.NoSuchMethodException: org.apache.spark.io.LZ4CompressionCodec.<init>(org.apache.spark.SparkConf)
    at java.lang.Class.getConstructor0(Class.java:3082)
    at java.lang.Class.getConstructor(Class.java:1825)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:71)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
    at org.apache.spark.sql.execution.SparkPlan.org$apache$spark$sql$execution$SparkPlan$$decodeUnsafeRows(SparkPlan.scala:250)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeCollect$1.apply(SparkPlan.scala:276)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeCollect$1.apply(SparkPlan.scala:275)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:275)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:78)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:75)
    at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:94)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:74)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:74)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


My SQL query is:
%sql select * from land where Price >= 10000 AND CLUSTER = 2

I always get the above exception on the first run, but when I re-execute the same query a second or third time, the error does not occur.

Am I doing something wrong? Could someone please help me out?





Kind regards,
Reza



--
이종열, Jongyoul Lee, 李宗烈
