Or you can check whether there are old Hadoop jars on your cluster; see
https://issues.apache.org/jira/browse/HADOOP-11064
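
If it helps, one quick way to narrow this down from the same spark-shell is to
print which jar the failing class was actually loaded from, and whether
libhadoop was picked up at all. This is only a sketch, not a step from
HADOOP-11064 itself; the reflection and NativeCodeLoader calls are standard
JDK/Hadoop APIs:

---------------------------------------------------------------------------
// NativeCrc32 is package-private, so look it up via Class.forName;
// an old hadoop-common jar showing up here is the HADOOP-11064 symptom.
val crcClass = Class.forName("org.apache.hadoop.util.NativeCrc32")
println(crcClass.getProtectionDomain.getCodeSource.getLocation)

// "false" here means libhadoop was not loaded, so any native
// checksum call will fail to link:
println(org.apache.hadoop.util.NativeCodeLoader.isNativeCodeLoaded())
---------------------------------------------------------------------------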


2017-06-20 9:33 GMT+08:00 skyyws <sky...@163.com>:

> No, I deployed Kylin on Linux; this is my machine info:
> --------------------------
> 3.2.0-4-amd64 #1 SMP Debian 3.2.82-1 x86_64 GNU/Linux
> -------------------------
>
> 2017-06-20
>
> skyyws
>
>
>
> From: ShaoFeng Shi <shaofeng...@apache.org>
> Sent: 2017-06-20 00:10
> Subject: Re: Build sample error with spark on kylin 2.0.0
> To: "dev" <dev@kylin.apache.org>
> Cc:
>
> Are you running Kylin on Windows? If yes, check:
> https://stackoverflow.com/questions/33211599/hadoop-error-on-windows-java-lang-unsatisfiedlinkerror
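>
> (Not from that page, just a quick sanity check you can run in spark-shell:
> on Windows this error usually means hadoop.dll is missing or stale on the
> JVM's native library path.)
>
> ---------------------------------------------------------------------------
> // Where the JVM looks for native libraries:
> println(System.getProperty("java.library.path"))
> // Where Hadoop thinks its home is (may be null if only the
> // HADOOP_HOME environment variable is set):
> println(System.getProperty("hadoop.home.dir"))
> ---------------------------------------------------------------------------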
>
> 2017-06-19 21:55 GMT+08:00 skyyws <sky...@163.com>:
>
> > Hi all,
> > I met an error when using the Spark engine to build the Kylin sample, on
> > the step "Build Cube with Spark". Here is the exception log:
> > ---------------------------------------------------------------------------
> > Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
> >         at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
> >         at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
> >         at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
> >         at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
> >         at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:124)
> >         at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:110)
> >         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
> >         at java.io.DataOutputStream.write(DataOutputStream.java:107)
> >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
> >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
> >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
> >         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
> >         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
> >         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
> >         at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:317)
> >         at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:407)
> >         at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:446)
> >         at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$5.apply(Client.scala:444)
> >         at scala.collection.immutable.List.foreach(List.scala:318)
> >         at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:444)
> >         at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:727)
> >         at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:142)
> >         at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> >         at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
> >         at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
> >         at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
> >         at org.apache.kylin.engine.spark.SparkCubingByLayer.execute(SparkCubingByLayer.java:150)
> >         at org.apache.kylin.common.util.AbstractApplication.execute(AbstractApplication.java:37)
> >         at org.apache.kylin.common.util.SparkEntry.main(SparkEntry.java:44)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:606)
> >         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> >         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> >         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> >         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> >         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> > 17/06/19 21:22:06 INFO storage.DiskBlockManager: Shutdown hook called
> > 17/06/19 21:22:06 INFO util.ShutdownHookManager: Shutdown hook called
> > 17/06/19 21:22:06 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-0d1d3709-86cd-446c-b728-5070f168de28
> > 17/06/19 21:22:06 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-0d1d3709-86cd-446c-b728-5070f168de28/httpd-9bcb9a5d-569f-4f28-ad89-038a9020eda8
> > 17/06/19 21:22:06 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-0d1d3709-86cd-446c-b728-5070f168de28/userFiles-2e9ff265-3d37-40e0-8894-6fd4d1a3ad8b
> >
> >         at org.apache.kylin.common.util.CliCommandExecutor.execute(CliCommandExecutor.java:92)
> >         at org.apache.kylin.engine.spark.SparkExecutable.doWork(SparkExecutable.java:124)
> >         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
> >         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:64)
> >         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:124)
> >         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:142)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
> > ---------------------------------------------------------------------------
> > I can use the built-in spark-shell shipped with Kylin to do some operations, like:
> > ---------------------------------------------------------------------------
> > // read a file from HDFS and run a few simple actions:
> > var textFile = sc.textFile("hdfs://xxxx/xxxx/README.md")
> > textFile.count()
> > textFile.first()
> > textFile.filter(line => line.contains("hello")).count()
> > ---------------------------------------------------------------------------
> > Here is the env info:
> > Kylin version is 2.0.0
> > Hadoop version is 2.7.*
> > Spark version is 1.6.*
> > ---------------------------------------------------------------------------
> > Can anyone help me? Thanks!
> >
> >
> > 2017-06-19
> > skyyws
>
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>



-- 
Best regards,

Shaofeng Shi 史少锋
