Hello All,

I'm a new Kylin user. I successfully managed to get everything working with the 
"Sample Cube" tutorial (http://kylin.apache.org/docs/tutorial/kylin_sample.html).

Now I want to build the cube with Spark 
(http://kylin.apache.org/docs/tutorial/cube_spark.html), but I'm 
struggling with one problem:

When I trigger the "Build" action, I get this exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
        at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 7 more
The command is:
export HADOOP_CONF_DIR=/etc/hadoop && /opt/spark-2.3.2-bin-without-hadoop/bin/spark-submit
  --class org.apache.kylin.common.util.SparkEntry
  --conf spark.executor.instances=40
  --conf spark.network.timeout=600
  --conf spark.yarn.queue=default
  --conf spark.history.fs.logDirectory=hdfs://namenode:8020/kylin/spark-history
  --conf spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
  --conf spark.dynamicAllocation.enabled=true
  --conf spark.master=yarn
  --conf spark.dynamicAllocation.executorIdleTimeout=300
  --conf spark.hadoop.yarn.timeline-service.enabled=false
  --conf spark.executor.memory=4G
  --conf spark.eventLog.enabled=true
  --conf spark.eventLog.dir=hdfs://namenode:8020/kylin/spark-history
  --conf spark.dynamicAllocation.minExecutors=1
  --conf spark.executor.cores=1
  --conf spark.hadoop.mapreduce.output.fileoutputformat.compress=true
  --conf spark.yarn.executor.memoryOverhead=1024
  --conf spark.hadoop.dfs.replication=2
  --conf spark.dynamicAllocation.maxExecutors=1000
  --conf spark.hadoop.mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec
  --conf spark.driver.memory=2G
  --conf spark.submit.deployMode=cluster
  --conf spark.shuffle.service.enabled=true
  --jars /opt/apache-kylin-2.6.0-bin/lib/kylin-job-2.6.0.jar
  /opt/apache-kylin-2.6.0-bin/lib/kylin-job-2.6.0.jar
  -className org.apache.kylin.engine.spark.SparkFactDistinct
  -counterOutput hdfs://namenode:8020/kylin/kylin_metadata/kylin-50b3e245-c00f-0136-1ec8-d5c5c472a311/kylin_sales_cube/counter
  -statisticssamplingpercent 100
  -cubename kylin_sales_cube
  -hiveTable default.kylin_intermediate_kylin_sales_cube_a2c3dfb4_900c_f8eb_5086_8bbee7e5c60a
  -output hdfs://namenode:8020/kylin/kylin_metadata/kylin-50b3e245-c00f-0136-1ec8-d5c5c472a311/kylin_sales_cube/fact_distinct_columns
  -input hdfs://namenode:8020/kylin/kylin_metadata/kylin-50b3e245-c00f-0136-1ec8-d5c5c472a311/kylin_intermediate_kylin_sales_cube_a2c3dfb4_900c_f8eb_5086_8bbee7e5c60a
  -segmentId a2c3dfb4-900c-f8eb-5086-8bbee7e5c60a
  -metaUrl kylin_metadata@hdfs,path=hdfs://namenode:8020/kylin/kylin_metadata/kylin-50b3e245-c00f-0136-1ec8-d5c5c472a311/kylin_sales_cube/metadata
        at org.apache.kylin.common.util.CliCommandExecutor.execute(CliCommandExecutor.java:96)
        at org.apache.kylin.engine.spark.SparkExecutable$2.call(SparkExecutable.java:281)
        at org.apache.kylin.engine.spark.SparkExecutable$2.call(SparkExecutable.java:276)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
2019-01-29 11:44:18,469 INFO  [Scheduler 1207320921 Job 50b3e245-c00f-0136-1ec8-d5c5c472a311-118] execution.ExecutableManager:453 : job id:50b3e245-c00f-0136-1ec8-d5c5c472a311-02 from RUNNING to ERROR



How can I get rid of this exception?
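
In case it is relevant: the Spark I point Kylin at is the "Hadoop free" build 
(spark-2.3.2-bin-without-hadoop). My guess, and it is only a guess, is that this 
build does not bundle the Hadoop and slf4j jars, so Spark may need to be told 
where to find them via SPARK_DIST_CLASSPATH in its conf/spark-env.sh, roughly 
like this (assuming the hadoop command is on the PATH):

    # conf/spark-env.sh of the Spark used by Kylin -- only a sketch of what I suspect is missing
    export SPARK_DIST_CLASSPATH=$(hadoop classpath)

Is that the right direction, or is something else going on?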

Thank you in advance,
Kamil
