xuekaiqi created KYLIN-4453:
-------------------------------

             Summary: Query on refreshed cube failed with FileNotFoundException
                 Key: KYLIN-4453
                 URL: https://issues.apache.org/jira/browse/KYLIN-4453
             Project: Kylin
          Issue Type: New Feature
          Components: Storage - Parquet
            Reporter: xuekaiqi
            Assignee: nichunen
             Fix For: v4.0.0


Steps to reproduce:
 # Build a segment of any cube
 # Refresh the segment
 # Query the cube; the query fails with an error message like:
{code:java}
java.io.FileNotFoundException: File file:/Users/kyligence/Downloads/localmeta_n/vvv/parquet/gg/20200401000000_20200403000000/4/part-00000-cdaa5f21-34dd-432d-865e-92089a7ffa03-c000.snappy.parquet does not exist
It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:177)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:101)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.scan_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at
{code}
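For context, the failure mode Spark's error message describes can be sketched with plain file I/O (this is a hypothetical illustration, not Kylin or Spark code): a query plan caches a segment's file listing, the refresh job replaces the parquet file under a new name, and a read through the stale cached path then fails. The class and file names below are invented for the sketch.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Hypothetical sketch of the stale file-listing failure mode behind the
// reported FileNotFoundException. Not Kylin/Spark code.
public class StaleListingDemo {
    public static void main(String[] args) throws IOException {
        Path segmentDir = Files.createTempDirectory("segment");
        Path oldFile = Files.createFile(segmentDir.resolve("part-00000-old.parquet"));

        // The query side caches the listing once (analogous to Spark caching
        // file listings for a table).
        Path cachedPath = oldFile;

        // Refresh: the old file is removed and a new file is written under a
        // different name, so the cached path is now stale.
        Files.delete(oldFile);
        Files.createFile(segmentDir.resolve("part-00000-new.parquet"));

        // Reading through the stale cache fails, mirroring the reported error.
        try {
            Files.readAllBytes(cachedPath);
            System.out.println("read ok");
        } catch (NoSuchFileException e) {
            System.out.println("stale cached listing: file not found");
        }

        // Re-listing the directory (the analogue of 'REFRESH TABLE' or
        // recreating the DataFrame) sees the refreshed file.
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(segmentDir)) {
            for (Path p : ds) {
                System.out.println("current file: " + p.getFileName());
            }
        }
    }
}
```

The sketch suggests why the query succeeds again after the cached listing is invalidated: the refreshed segment's data is present on disk, but only under the new file name.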



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
