I have no idea either; Kylin 1.6 is a very old version, and we haven't seen
this reported before.

BTW, since you have already increased the node memory to a very large number,
you may need to look at the cube design, especially the UHC (ultra-high-
cardinality) dimensions. Try to decrease their cardinality, or use another
encoding such as fixed-length encoding.
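
For example, in the cube descriptor's rowkey section a UHC dimension can be
switched from dictionary encoding to fixed-length encoding. The snippet below
is only a sketch: the column names are placeholders, and the exact JSON layout
can vary between Kylin versions, so please check it against your own cube
descriptor.

    "rowkey": {
      "rowkey_columns": [
        { "column": "YOUR_UHC_COLUMN", "encoding": "fixed_length:32" },
        { "column": "SITE_ID", "encoding": "dict" }
      ]
    }

With fixed_length the dimension does not need a dictionary at all, so the
Build Dimension Dictionary step no longer has to hold a huge dictionary in
memory; the trade-off is a longer row key (and a bigger cube) when 32 bytes is
more than the typical value length.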

Best regards,

Shaofeng Shi 史少锋
Apache Kylin PMC
Email: [email protected]

Apache Kylin FAQ: https://kylin.apache.org/docs/gettingstarted/faq.html
Join Kylin user mail group: [email protected]
Join Kylin dev mail group: [email protected]




Raghu Ram Reddy Medapati <[email protected]> wrote on Sat, Feb 1, 2020 at 8:06 PM:

> Hi Shi,
>
> Thanks for the quick reply.
> I haven't changed any configurations lately. We have always had
> "kylin.hbase.default.compression.codec=snappy", and it has been in place for
> 3 years.
> All the other cubes build fine, and we are able to query this cube and the
> other cubes without problems.
> The data for this cube has grown by only about 200K records, yet it suddenly
> started failing at the "Build Dimension Dictionary" step this week.
> I bumped up the EC2 instance of the job node from 64 GB to 160 GB and the
> Build Dimension Dictionary step now runs fine, but the build is failing at
> "#12 Step Name: Build N-Dimension Cuboid Data : 22-Dimension".
> Any help would be greatly appreciated.
>
> On 2020/02/01 02:06:45, ShaoFeng Shi <[email protected]> wrote:
> > I encountered the same error just a couple of days ago on Kylin 3.0, but I
> > don't think it is the same root cause, because my error happened at query
> > time and was caused by different compression settings:
> >
> > https://issues.apache.org/jira/browse/KYLIN-4363
> >
> > Did you make any code or configuration change to Kylin or Hadoop in
> > between?
> >
> > Best regards,
> >
> > Shaofeng Shi 史少锋
> > Apache Kylin PMC
> > Email: [email protected]
> >
> > Apache Kylin FAQ: https://kylin.apache.org/docs/gettingstarted/faq.html
> > Join Kylin user mail group: [email protected]
> > Join Kylin dev mail group: [email protected]
> >
> >
> >
> >
> > raghu <[email protected]> wrote on Sat, Feb 1, 2020 at 8:30 AM:
> >
> > > One of our cubes is failing at the "#12 Step Name: Build N-Dimension
> > > Cuboid Data : 22-Dimension" step, and irrespective of how much I bump up
> > > the mapper and reducer memory and other configs (the kind of overrides I
> > > have been raising is sketched below), the job fails at this step every
> > > time.
> > > I looked at the YARN application logs and they don't give any useful
> > > information there. Can someone help?
> > > The cube has 8 measures and 27 dimensions. The source data is ~42 million
> > > rows.
> > > FYI, this cube has been running fine for a few months now; it just
> > > started failing a few days back, and the data has not increased by much
> > > either.
> > > Kylin version = 1.6.0
> > > Hortonworks = 2.4.0
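> > >
> > > The memory knobs I have been bumping are the standard MapReduce ones in
> > > conf/kylin_job_conf.xml. The snippet below only illustrates the shape of
> > > those overrides; the values are made up, not the exact numbers from our
> > > cluster:
> > >
> > >   <!-- illustrative values only; tune to the cluster's container sizes -->
> > >   <property>
> > >     <name>mapreduce.map.memory.mb</name>
> > >     <value>8192</value>
> > >   </property>
> > >   <property>
> > >     <name>mapreduce.map.java.opts</name>
> > >     <value>-Xmx6g</value>
> > >   </property>
> > >   <property>
> > >     <name>mapreduce.reduce.memory.mb</name>
> > >     <value>16384</value>
> > >   </property>
> > >   <property>
> > >     <name>mapreduce.reduce.java.opts</name>
> > >     <value>-Xmx12g</value>
> > >   </property>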
> > >
> > > Yarn MR Error:
> > > java.lang.RuntimeException: java.io.IOException: I failed to find the one of the right cookies.
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:61)
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:30)
> > >         at org.apache.kylin.measure.MeasureCodec.decode(MeasureCodec.java:97)
> > >         at org.apache.kylin.measure.BufferedMeasureCodec.decode(BufferedMeasureCodec.java:79)
> > >         at org.apache.kylin.engine.mr.steps.CuboidReducer.reduce(CuboidReducer.java:93)
> > >         at org.apache.kylin.engine.mr.steps.CuboidReducer.reduce(CuboidReducer.java:42)
> > >         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
> > >         at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1688)
> > >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1637)
> > >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1489)
> > >         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:723)
> > >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
> > >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> > >         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> > >         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> > > Caused by: java.io.IOException: I failed to find the one of the right cookies.
> > >         at org.roaringbitmap.buffer.MutableRoaringArray.deserialize(MutableRoaringArray.java:218)
> > >         at org.roaringbitmap.buffer.MutableRoaringBitmap.deserialize(MutableRoaringBitmap.java:829)
> > >         at org.apache.kylin.measure.bitmap.BitmapCounter.readRegisters(BitmapCounter.java:113)
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:59)
> > >         ... 17 more
> > >
> > > 2020-01-31 21:49:16,955 INFO [IPC Server handler 24 on 38174] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1568236776824_87076_m_000900_0: Error: java.lang.RuntimeException: java.io.IOException: I failed to find the one of the right cookies.
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:61)
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:30)
> > >         at org.apache.kylin.measure.MeasureCodec.decode(MeasureCodec.java:97)
> > >         at org.apache.kylin.measure.BufferedMeasureCodec.decode(BufferedMeasureCodec.java:79)
> > >         at org.apache.kylin.engine.mr.steps.CuboidReducer.reduce(CuboidReducer.java:93)
> > >         at org.apache.kylin.engine.mr.steps.CuboidReducer.reduce(CuboidReducer.java:42)
> > >         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
> > >         at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1688)
> > >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1637)
> > >         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1489)
> > >         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:723)
> > >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
> > >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> > >         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> > >         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> > > Caused by: java.io.IOException: I failed to find the one of the right cookies.
> > >         at org.roaringbitmap.buffer.MutableRoaringArray.deserialize(MutableRoaringArray.java:218)
> > >         at org.roaringbitmap.buffer.MutableRoaringBitmap.deserialize(MutableRoaringBitmap.java:829)
> > >         at org.apache.kylin.measure.bitmap.BitmapCounter.readRegisters(BitmapCounter.java:113)
> > >         at org.apache.kylin.measure.bitmap.BitmapSerializer.deserialize(BitmapSerializer.java:59)
> > >         ... 17 more
> > >
> > >
> > > --
> > > Sent from: http://apache-kylin.74782.x6.nabble.com/
> > >
> >
>
