[ 
https://issues.apache.org/jira/browse/DRILL-7471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990775#comment-16990775
 ] 

ASF GitHub Bot commented on DRILL-7471:
---------------------------------------

vvysotskyi commented on pull request #1919: DRILL-7471: DESCRIBE TABLE command 
fails with ClassCastException when Metastore is enabled
URL: https://github.com/apache/drill/pull/1919
 
 
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> describe table command fails with ClassCastException when metastore is enabled
> ------------------------------------------------------------------------------
>
>                 Key: DRILL-7471
>                 URL: https://issues.apache.org/jira/browse/DRILL-7471
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.17.0
>            Reporter: Vova Vysotskyi
>            Assignee: Vova Vysotskyi
>            Priority: Blocker
>              Labels: ready-to-commit
>             Fix For: 1.17.0
>
>
> When the Metastore is enabled and {{ANALYZE}} has been run for a table, the 
> {{DESCRIBE TABLE}} statement fails with a ClassCastException:
> {code:sql}
> set `metastore.enabled`=true;
> analyze table lineitem refresh metadata;
> describe table lineitem;
> {code}
> {noformat}
> Error: SYSTEM ERROR: ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Double
> {noformat}
> Stack trace from the logs:
> {noformat}
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> ClassCastException: java.lang.Long cannot be cast to java.lang.Double
> Fragment 0:0
> Please, refer to logs for more information.
> [Error Id: 6b1295ee-7674-4362-a3c4-096e0688ed0b on user515050-pc:31010]
>       at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:637)
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:363)
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:219)
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:329)
>       at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>       at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Double
>       at 
> org.apache.drill.exec.store.ischema.Records$Column.<init>(Records.java:652)
>       at 
> org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.lambda$columns$4(RecordCollector.java:350)
>       at java.util.ArrayList.forEach(ArrayList.java:1257)
>       at 
> org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.columns(RecordCollector.java:333)
>       at 
> org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.lambda$columns$3(RecordCollector.java:308)
>       at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>       at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
>       at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>       at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>       at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
>       at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>       at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
>       at 
> org.apache.drill.exec.store.ischema.RecordCollector$MetastoreRecordCollector.columns(RecordCollector.java:309)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Columns.collect(InfoSchemaRecordGenerator.java:170)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.lambda$visit$0(InfoSchemaRecordGenerator.java:75)
>       at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>       at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>       at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>       at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>       at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
>       at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
>       at java.util.stream.AbstractTask.compute(AbstractTask.java:327)
>       at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>       at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>       at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
>       at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>       at 
> java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
>       at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>       at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.visit(InfoSchemaRecordGenerator.java:77)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:69)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:63)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:63)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:51)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:87)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:35)
>       at 
> org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:163)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:186)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:141)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:186)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:114)
>       at 
> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:90)
>       at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263)
>       ... 4 common frames omitted
> {noformat}
> This issue was introduced by DRILL-7273: statistics and metadata were mixed 
> there, so the {{ColumnStatisticsKind.NON_NULL_COUNT}} column statistic was 
> misused in some places.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
