[ https://issues.apache.org/jira/browse/KYLIN-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chenwen updated KYLIN-3508:
---------------------------
    Environment: 
hadoop 2.7.2
hbase 1.2.5
hive 1.2.2
kylin-2.4.0-bin-hbase1x
kafka_2.10-0.10.2.2
centos 7

  was:
hadoop 2.7.2
hbase 1.2.5
hive 1.2.2
kylin-2.4.0-bin-hbase1x
kafka_2.11-2.0.0
centos 7

    Description: 
I have a Kafka topic that uses the lz4 compression algorithm. When I created a cube in Kylin to consume this topic, the build reported the following error. Is my configuration wrong? When I switch the topic to gzip or snappy, it works fine.

Error: java.lang.ClassNotFoundException: net.jpountz.lz4.LZ4Exception
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.kafka.common.record.MemoryRecordsBuilder$4.get(MemoryRecordsBuilder.java:82)
    at org.apache.kafka.common.record.MemoryRecordsBuilder$MemoizingConstructorSupplier.get(MemoryRecordsBuilder.java:489)
    at org.apache.kafka.common.record.MemoryRecordsBuilder.wrapForInput(MemoryRecordsBuilder.java:455)
    at org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:157)
    at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:81)
    at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:33)
    at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
    at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:787)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:482)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1062)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:996)
    at org.apache.kylin.source.kafka.hadoop.KafkaInputRecordReader.nextKeyValue(KafkaInputRecordReader.java:119)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
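For context, the net.jpountz.lz4 classes in the trace come from the lz4-java library, which Kafka loads reflectively only when a fetched batch is lz4-compressed; gzip goes through the JDK's built-in codec, which would explain why only lz4 fails. A minimal sketch of that kind of reflective lookup (the class name is taken from the trace; the helper itself is illustrative, not Kylin or Kafka code):

```java
public class Lz4ClasspathCheck {
    // Returns true if the given class can be loaded by the current
    // classloader, mimicking the Class.forName lookup that fails in the
    // stack trace when the lz4-java jar is absent from the task classpath.
    static boolean isLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The exact class the failing MapReduce task tried to load; it is
        // present only if an lz4-java jar is on the classpath.
        String cls = "net.jpountz.lz4.LZ4Exception";
        System.out.println(cls + (isLoadable(cls) ? " found" : " missing"));
    }
}
```

Running a check like this inside the MapReduce task environment would confirm whether the lz4 jar is shipped with the Kylin job, independent of the topic's compression setting.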



  was:
I have a Kafka topic that uses the lz4 compression algorithm. When I created a cube in Kylin to consume this topic, the build reported the following error. Is my configuration wrong? When I switch the topic to gzip or snappy, it works fine.

Error: org.apache.kafka.common.KafkaException: Received exception when fetching the next record from kylin.log.error-3. If needed, please seek past the record to continue consumption.
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1201)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1500(Fetcher.java:1035)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:544)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:505)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at org.apache.kylin.source.kafka.hadoop.KafkaInputRecordReader.nextKeyValue(KafkaInputRecordReader.java:119)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.kafka.common.KafkaException: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception
    at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:113)
    at org.apache.kafka.common.record.AbstractLegacyRecordBatch$DeepRecordsIterator.<init>(AbstractLegacyRecordBatch.java:330)
    at org.apache.kafka.common.record.AbstractLegacyRecordBatch$DeepRecordsIterator.<init>(AbstractLegacyRecordBatch.java:310)
    at org.apache.kafka.common.record.AbstractLegacyRecordBatch.iterator(AbstractLegacyRecordBatch.java:232)
    at org.apache.kafka.common.record.AbstractLegacyRecordBatch.streamingIterator(AbstractLegacyRecordBatch.java:263)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1144)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1181)
    ... 18 more
Caused by: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception
    at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:110)
    ... 24 more
Caused by: java.lang.ClassNotFoundException: net.jpountz.lz4.LZ4Exception
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)




> kylin cube kafka streaming lz4 exception
> ----------------------------------------
>
>                 Key: KYLIN-3508
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3508
>             Project: Kylin
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: v2.4.0
>         Environment: hadoop 2.7.2
> hbase 1.2.5
> hive 1.2.2
> kylin-2.4.0-bin-hbase1x
> kafka_2.10-0.10.2.2
> centos 7
>            Reporter: chenwen
>            Priority: Major
>
> I have a Kafka topic that uses the lz4 compression algorithm. When I created a cube in Kylin to consume this topic, the build reported the following error. Is my configuration wrong? When I switch the topic to gzip or snappy, it works fine.
> Error: java.lang.ClassNotFoundException: net.jpountz.lz4.LZ4Exception
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>     at java.lang.Class.forName0(Native Method)
>     at java.lang.Class.forName(Class.java:264)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder$4.get(MemoryRecordsBuilder.java:82)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder$MemoizingConstructorSupplier.get(MemoryRecordsBuilder.java:489)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder.wrapForInput(MemoryRecordsBuilder.java:455)
>     at org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:157)
>     at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:81)
>     at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:33)
>     at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
>     at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:787)
>     at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:482)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1062)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:996)
>     at org.apache.kylin.source.kafka.hadoop.KafkaInputRecordReader.nextKeyValue(KafkaInputRecordReader.java:119)
>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
>     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Container killed by the ApplicationMaster. Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
