[jira] [Commented] (KYLIN-3906) ExecutableManager is spelled as ExecutableManger

2019-04-08 Thread Yanwen Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/KYLIN-3906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812914#comment-16812914
 ] 

Yanwen Lin commented on KYLIN-3906:
---

Sorry for the late reply.

Sure, I will regenerate the patch following the link ASAP. Thanks!

> ExecutableManager is spelled as ExecutableManger
> 
>
> Key: KYLIN-3906
> URL: https://issues.apache.org/jira/browse/KYLIN-3906
> Project: Kylin
>  Issue Type: Improvement
>  Components: Job Engine
>Affects Versions: v3.0.0
>Reporter: Yanwen Lin
>Priority: Trivial
>  Labels: patch
> Fix For: v3.0.0
>
> Attachments: KYLIN-3906.patch
>
>
> As titled; please see the attachment for the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KYLIN-3906) ExecutableManager is spelled as ExecutableManger

2019-03-25 Thread Yanwen Lin (JIRA)
Yanwen Lin created KYLIN-3906:
-

 Summary: ExecutableManager is spelled as ExecutableManger
 Key: KYLIN-3906
 URL: https://issues.apache.org/jira/browse/KYLIN-3906
 Project: Kylin
  Issue Type: Improvement
  Components: Job Engine
Affects Versions: v3.0.0
Reporter: Yanwen Lin
 Fix For: v3.0.0
 Attachments: KYLIN-3906.patch

As titled; please see the attachment for the patch.
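(A quick sketch of locating every occurrence before the rename; demonstrated against a scratch directory, since the real run would be done from a Kylin source checkout:)

```shell
# Locate the misspelling before renaming. /tmp/kylin-typo-demo stands in
# for the Kylin source root here; a real run would grep the checkout.
mkdir -p /tmp/kylin-typo-demo
printf 'public class ExecutableManger {}\n' > /tmp/kylin-typo-demo/ExecutableManger.java
grep -rl 'ExecutableManger' /tmp/kylin-typo-demo
# A bulk rename could then be, e.g.:
#   git grep -l ExecutableManger | xargs sed -i 's/ExecutableManger/ExecutableManager/g'
```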





[jira] [Updated] (KYLIN-3894) Build buildSupportsSnappy Error When Doing Integration Testing

2019-03-19 Thread Yanwen Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanwen Lin updated KYLIN-3894:
--
Priority: Blocker  (was: Major)

> Build buildSupportsSnappy Error When Doing Integration Testing
> --
>
> Key: KYLIN-3894
> URL: https://issues.apache.org/jira/browse/KYLIN-3894
> Project: Kylin
>  Issue Type: Test
>  Components: Tools, Build and Test
>Affects Versions: v2.6.0
> Environment: Hortonworks HDP 3.0.1.0-187 Docker container.
>Reporter: Yanwen Lin
>Priority: Blocker
>  Labels: test
>
> Hi all,
> I am currently running the integration tests, but a mapper task fails with
> the buildSupportsSnappy UnsatisfiedLinkError below. Could you please share
> some suggestions on this?
> Both mvn install (with tests skipped) and mvn test pass.
>
> *1. Command*:
> mvn verify -fae -Dhdp.version=3.0.1.0-187 -P sandbox
>
> *2. Error message from Yarn Container Attempt:*
> 2019-03-18 16:43:25,583 INFO [main] org.apache.kylin.engine.mr.KylinMapper: Accepting Mapper Key with ordinal: 1
> 2019-03-18 16:43:25,583 INFO [main] org.apache.kylin.engine.mr.KylinMapper: Do map, available memory: 322m
> 2019-03-18 16:43:25,596 INFO [main] org.apache.kylin.common.KylinConfig: Creating new manager instance of class org.apache.kylin.cube.cuboid.CuboidManager
> 2019-03-18 16:43:25,599 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
> 2019-03-18 16:43:25,599 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
> 2019-03-18 16:43:25,795 ERROR [main] org.apache.kylin.engine.mr.KylinMapper:
> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>  at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
>  at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
>  at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
>  at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
>  at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168)
>  at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1304)
>  at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1192)
>  at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1552)
>  at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:289)
>  at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
>  at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
>  at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
>  at org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat$LazyRecordWriter.write(LazyOutputFormat.java:113)
>  at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:468)
>  at org.apache.kylin.engine.mr.steps.FilterRecommendCuboidDataMapper.doMap(FilterRecommendCuboidDataMapper.java:85)
>  at org.apache.kylin.engine.mr.steps.FilterRecommendCuboidDataMapper.doMap(FilterRecommendCuboidDataMapper.java:44)
>  at org.apache.kylin.engine.mr.KylinMapper.map(KylinMapper.java:77)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> 2019-03-18 16:43:25,797 INFO [main] org.apache.kylin.engine.mr.KylinMapper: Do cleanup, available memory: 318m
> 2019-03-18 16:43:25,813 INFO [main] org.apache.kylin.engine.mr.KylinMapper: Total rows: 1
> 2019-03-18 16:43:25,813 ERROR [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>  at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
>  at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
>  at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
>  at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
>  at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:168)
>  at 
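(For reference: a java.lang.UnsatisfiedLinkError on NativeCodeLoader.buildSupportsSnappy normally means the Hadoop native libraries, libhadoop.so and libsnappy.so, are not visible to the JVM inside the YARN container, rather than a Kylin bug. A sketch of the usual checks; the HDP path below is an assumption for this sandbox:)

```shell
# On the sandbox node, 'hadoop checknative -a' reports whether the native
# codecs resolve ('snappy: true' is what we want). If they do not, point the
# JVM at the native library directory; path assumed for an HDP 3.x install.
NATIVE_DIR=/usr/hdp/current/hadoop-client/lib/native
export HADOOP_OPTS="-Djava.library.path=$NATIVE_DIR"
echo "$HADOOP_OPTS"
# For the mapper containers themselves (where this trace comes from), the
# per-container equivalent is a Hadoop property such as
#   mapreduce.admin.user.env=LD_LIBRARY_PATH=$NATIVE_DIR
```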

[jira] [Updated] (KYLIN-3894) Build buildSupportsSnappy Error When Doing Integration Testing

2019-03-19 Thread Yanwen Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanwen Lin updated KYLIN-3894:
--
Description: 

[jira] [Created] (KYLIN-3894) Build buildSupportsSnappy Error When Doing Integration Testing

2019-03-19 Thread Yanwen Lin (JIRA)
Yanwen Lin created KYLIN-3894:
-

 Summary: Build buildSupportsSnappy Error When Doing Integration 
Testing
 Key: KYLIN-3894
 URL: https://issues.apache.org/jira/browse/KYLIN-3894
 Project: Kylin
  Issue Type: Test
  Components: Tools, Build and Test
Affects Versions: v2.6.0
 Environment: Hortonworks HDP 3.0.1.0-187 Docker container.
Reporter: Yanwen Lin


Hi all,
I am currently running the integration tests, but a mapper task fails with the 
buildSupportsSnappy UnsatisfiedLinkError in the subject. Could you please share 
some suggestions on this?
Both mvn install (with tests skipped) and mvn test pass.
 
*1. Command*:
mvn verify -fae -Dhdp.version=3.0.1.0-187 -P sandbox

 

[jira] [Closed] (KYLIN-3871) Kylin inside Cloudera CDH Quickstart Sandbox

2019-03-15 Thread Yanwen Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanwen Lin closed KYLIN-3871.
-
Resolution: Workaround

Use Hortonworks HDP as a workaround.

> Kylin inside Cloudera CDH Quickstart Sandbox
> 
>
> Key: KYLIN-3871
> URL: https://issues.apache.org/jira/browse/KYLIN-3871
> Project: Kylin
>  Issue Type: Test
>  Components: Job Engine, Real-time Streaming
>Affects Versions: v2.6.0
> Environment: Cloudera Quickstart Docker image:
> - OS: centos6
> - memory: 13GB
> - disk: 20GB
> - java: 1.8
> - maven: 3.5.3
>Reporter: Yanwen Lin
>Priority: Blocker
>
> When running the integration tests I hit the error below. I know this is a 
> Java version mismatch: Kylin needs Java 1.8 while the Cloudera image defaults 
> to Java 1.7. I manually installed Java 1.8 and set JAVA_HOME so that Spark 2.x 
> picks it up (I also ran spark-submit --version to check this), but the error 
> did not go away. My guess is that somewhere in the Spark job launch something 
> switches the Java version back to 1.7 (not sure). Is there any way to force it 
> to stay on Java 1.8, or any workaround?
> mvn install and the unit tests finish successfully.
> *Branch:*
> realtime-streaming
> *Executed command with problem:*
> mvn verify -fae -Dhdp.version=2.4.0.0-169 -P sandbox
> *Error stack:*
> Exception in thread "main" java.lang.UnsupportedClassVersionError: 
> org/apache/spark/network/util/ByteUnit : Unsupported major.minor version 52.0
>  at java.lang.ClassLoader.defineClass1(Native Method)
>  at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>  at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>  at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>  at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>  at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>  at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>  at org.apache.spark.deploy.history.config$.<init>(config.scala:44)
>  at org.apache.spark.deploy.history.config$.<clinit>(config.scala)
>  at org.apache.spark.SparkConf$.<init>(SparkConf.scala:635)
>  at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
>  at org.apache.spark.SparkConf.set(SparkConf.scala:94)
>  at 
> org.apache.spark.SparkConf$$anonfun$loadFromSystemProperties$3.apply(SparkConf.scala:76)
>  at 
> org.apache.spark.SparkConf$$anonfun$loadFromSystemProperties$3.apply(SparkConf.scala:75)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>  at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>  at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>  at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>  at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>  at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:75)
>  at org.apache.spark.SparkConf.<init>(SparkConf.scala:70)
>  at org.apache.spark.SparkConf.<init>(SparkConf.scala:57)
>  at org.apache.spark.deploy.yarn.ApplicationMaster.<init>(ApplicationMaster.scala:62)
>  at 
> org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:838)
>  at 
> org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:869)
>  at 
> org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
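(For reference: the "Unsupported major.minor version 52.0" decodes mechanically. The class-file major version sits in bytes 6 and 7 of the .class header, and 52 means Java 8 per the JVM specification, so a Java 7 runtime, whose maximum is 51, cannot load Spark 2.x classes. A minimal decode on a hand-built header, not a real Spark class:)

```shell
# A class file begins CA FE BA BE, then minor (2 bytes), then major (2 bytes,
# big-endian). Write just such a header with major = 0x34 = 52 and decode it.
printf '\312\376\272\276\000\000\000\064' > /tmp/demo.class
set -- $(od -An -j6 -N2 -tu1 /tmp/demo.class)   # the two major-version bytes
major=$(( $1 * 256 + $2 ))
echo "class-file major version: $major (52 = Java 8, 51 = Java 7, 50 = Java 6)"
```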





[jira] [Created] (KYLIN-3871) Kylin inside Cloudera CDH Quickstart Sandbox

2019-03-13 Thread Yanwen Lin (JIRA)
Yanwen Lin created KYLIN-3871:
-

 Summary: Kylin inside Cloudera CDH Quickstart Sandbox
 Key: KYLIN-3871
 URL: https://issues.apache.org/jira/browse/KYLIN-3871
 Project: Kylin
  Issue Type: Test
  Components: Job Engine
Affects Versions: v2.6.0
 Environment: Cloudera Quickstart Docker image:
- OS: centos6
- memory: 13GB
- disk: 20GB
- java: 1.8
- maven: 3.5.3
Reporter: Yanwen Lin


When running the integration tests I hit an UnsupportedClassVersionError 
(Unsupported major.minor version 52.0) from Spark. I know this is a Java 
version mismatch: Kylin needs Java 1.8 while the Cloudera image defaults to 
Java 1.7. I manually installed Java 1.8 and set JAVA_HOME so that Spark 2.x 
picks it up (I also ran spark-submit --version to check this), but the error 
did not go away. My guess is that somewhere in the Spark job launch something 
switches the Java version back to 1.7 (not sure). Is there any way to force it 
to stay on Java 1.8, or any workaround?

mvn install and the unit tests finish successfully.

*Branch:*

realtime-streaming

*Executed command with problem:*

mvn verify -fae -Dhdp.version=2.4.0.0-169 -P sandbox
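(A sketch of one possible fix, under the assumption that the fallback to Java 1.7 happens because YARN containers inherit the cluster's default JAVA_HOME rather than the login shell's. spark.yarn.appMasterEnv.* and spark.executorEnv.* are documented Spark options; the JDK path is made up:)

```shell
# YARN starts the Spark AM and executors with the cluster's default JAVA_HOME,
# so exporting JAVA_HOME in the login shell never reaches the containers.
# Spark's per-container env settings can pin the JDK instead.
JDK8=/usr/java/jdk1.8.0_181   # assumed install path; adjust to your node
SPARK_JAVA8_FLAGS="--conf spark.yarn.appMasterEnv.JAVA_HOME=$JDK8 --conf spark.executorEnv.JAVA_HOME=$JDK8"
echo "$SPARK_JAVA8_FLAGS"
# For jobs that Kylin submits itself, the same settings can go into
# kylin.properties via the kylin.engine.spark-conf.* prefix.
```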






[jira] [Updated] (KYLIN-3871) Kylin inside Cloudera CDH Quickstart Sandbox

2019-03-13 Thread Yanwen Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KYLIN-3871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanwen Lin updated KYLIN-3871:
--
Component/s: Real-time Streaming



