[ https://issues.apache.org/jira/browse/KYLIN-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17553521#comment-17553521 ]

WANG HUI  commented on KYLIN-5189:
----------------------------------

There are some path problems on Windows. You can use WSL2 to get an Ubuntu environment on Windows; then all of these problems go away:

https://www.jetbrains.com/help/idea/how-to-use-wsl-development-environment-in-product.html#wsl-general
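For anyone unfamiliar with WSL2, a rough setup sketch (the commands below assume Windows 10 2004+ or Windows 11 with a `wsl` CLI that supports `--install`; the distro name is just an example):

```shell
# Rough setup sketch -- run from an elevated PowerShell prompt on Windows.
# (Assumes a recent `wsl` CLI; older builds need the manual enable-feature steps.)
#
#   wsl --install -d Ubuntu        # enable WSL2 and install an Ubuntu distro
#   wsl --set-default-version 2    # default new distros to WSL2
#
# Then clone and open the Kylin source tree from inside the Ubuntu filesystem
# (e.g. ~/kylin rather than /mnt/c/...), so POSIX tools like /bin/ls and
# Unix-style paths resolve the way DebugTomcat and Spark expect.
```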

> DebugTomcat for Windows doesn't work when using local Spark
> ----------------------------------------------------------
>
>                 Key: KYLIN-5189
>                 URL: https://issues.apache.org/jira/browse/KYLIN-5189
>             Project: Kylin
>          Issue Type: Bug
>          Components: Tools, Build and Test
>    Affects Versions: v4.0.1
>            Reporter: lec ssmi
>            Priority: Minor
>
> When debugging on Windows with local Spark, DebugTomcat will set the 
> windows variable to false and throw an exception as follows:
>   
> {code:java}
> Caused by: java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": CreateProcess error=2, 系统找不到指定的文件。 (The system cannot find the file specified.)
>     at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
>     at org.apache.hadoop.util.Shell.runCommand(Shell.java:523)
>     at org.apache.hadoop.util.Shell.run(Shell.java:479)
>     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
>     at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
>     at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
>     at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:659)
>     at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
>     at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:49)
>     at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1733)
>     at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1713)
>     at org.apache.hadoop.fs.FileSystem$6.hasNext(FileSystem.java:1798)
>     at org.apache.kylin.common.persistence.HDFSResourceStore.visitFolderImpl(HDFSResourceStore.java:150)
>     at org.apache.kylin.common.persistence.ResourceStore.visitFolderInner(ResourceStore.java:773)
>     at org.apache.kylin.common.persistence.ResourceStore.visitFolderAndContent(ResourceStore.java:758)
>     at org.apache.kylin.common.persistence.ResourceStore.lambda$getAllResourcesMap$0(ResourceStore.java:255)
>     at org.apache.kylin.common.persistence.ExponentialBackoffRetry.doWithRetry(ExponentialBackoffRetry.java:52)
>     at org.apache.kylin.common.persistence.ResourceStore.getAllResourcesMap(ResourceStore.java:253)
>     at org.apache.kylin.metadata.cachesync.CachedCrudAssist.reloadAll(CachedCrudAssist.java:127)
>     at org.apache.kylin.cube.CubeManager.<init>(CubeManager.java:140)
>     at org.apache.kylin.cube.CubeManager.newInstance(CubeManager.java:98)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.kylin.common.KylinConfig.getManager(KylinConfig.java:498)
>     at org.apache.kylin.cube.CubeManager.getInstance(CubeManager.java:93)
>     at org.apache.kylin.engine.spark.job.ResourceDetectBeforeCubingJob.doExecute(ResourceDetectBeforeCubingJob.java:58)
>     at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:304)
>     at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:93)
>     at org.apache.kylin.engine.spark.job.ResourceDetectBeforeCubingJob.main(ResourceDetectBeforeCubingJob.java:106)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.kylin.engine.spark.job.NSparkExecutable.runLocalMode(NSparkExecutable.java:451)
>     at org.apache.kylin.engine.spark.job.NSparkExecutable.doWork(NSparkExecutable.java:161)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:206)
>     at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:94)
>     at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:206)
>     at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:113)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748) {code}
>     
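The root cause visible in the trace is Hadoop's RawLocalFileSystem, which reads local file permissions by shelling out to /bin/ls; that binary exists on Linux (and inside WSL) but not on native Windows, hence the CreateProcess error=2. A trivial sanity-check sketch for an environment, nothing Kylin-specific:

```shell
# Check whether the POSIX `ls` binary that Hadoop's RawLocalFileSystem
# shells out to is present. Inside Linux/WSL the first branch fires; on
# native Windows it does not, which is exactly the
# "Cannot run program \"/bin/ls\"" failure in the stack trace above.
if [ -x /bin/ls ]; then
  echo "ok: /bin/ls present, local-FS permission checks can run"
else
  echo "missing: /bin/ls not found, expect Hadoop Shell failures"
fi
```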



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
