[ https://issues.apache.org/jira/browse/CARBONDATA-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liang Chen reassigned CARBONDATA-3513:
--------------------------------------

    Assignee: ocean

> can not run major compaction when using hive partition table
> ------------------------------------------------------------
>
>                 Key: CARBONDATA-3513
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3513
>             Project: CarbonData
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.6.0
>            Reporter: ocean
>            Assignee: ocean
>            Priority: Major
>             Fix For: 1.6.1
>
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The major compaction command fails with the following error:
> {code:java}
> 2019-09-03 13:35:49 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on czh-yhfx-redis1:41430 (size: 26.4 KB, free: 5.2 GB)
> 2019-09-03 13:35:49 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on czh-yhfx-redis1:41430 (size: 26.4 KB, free: 5.2 GB)
> 2019-09-03 13:35:52 WARN  TaskSetManager:66 - Lost task 1.0 in stage 0.0 (TID 1, czh-yhfx-redis1, executor 1): java.lang.NumberFormatException: For input string: "32881200100001100000"
>     at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Long.parseLong(Long.java:592)
>     at java.lang.Long.parseLong(Long.java:631)
>     at org.apache.carbondata.core.util.path.CarbonTablePath$DataFileUtil.getTaskIdFromTaskNo(CarbonTablePath.java:503)
>     at org.apache.carbondata.processing.store.CarbonFactDataHandlerModel.getCarbonFactDataHandlerModel(CarbonFactDataHandlerModel.java:396)
>     at org.apache.carbondata.processing.merger.RowResultMergerProcessor.<init>(RowResultMergerProcessor.java:86)
>     at org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.<init>(CarbonMergerRDD.scala:213)
>     at org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:86)
>     at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:82)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>     at org.apache.spark.scheduler.Task.run(Task.scala:108)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Version: apache-carbondata-1.5.1-bin-spark2.2.1-hadoop2.7.2.jar
> The table is a Hive partition table.
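>  
> The task number in the failing path, "32881200100001100000", has 20 digits, which is larger than Long.MAX_VALUE (9223372036854775807), so the Long.parseLong call inside CarbonTablePath$DataFileUtil.getTaskIdFromTaskNo throws the NumberFormatException shown above. Below is a minimal sketch of the overflow and one possible workaround (parsing with BigInteger instead of Long); this is only an illustration, not necessarily the fix applied in CarbonData:
> {code:java}
> import java.math.BigInteger;
> 
> public class TaskNoOverflowDemo {
>   public static void main(String[] args) {
>     // Task number taken from the stack trace above; 20 digits,
>     // which exceeds Long.MAX_VALUE (9223372036854775807).
>     String taskNo = "32881200100001100000";
> 
>     try {
>       // Same kind of call that fails in getTaskIdFromTaskNo.
>       Long.parseLong(taskNo);
>     } catch (NumberFormatException e) {
>       System.out.println("Overflow: " + e.getMessage());
>     }
> 
>     // Hypothetical workaround: parse without the 64-bit limit.
>     BigInteger taskId = new BigInteger(taskNo);
>     System.out.println("Parsed with BigInteger: " + taskId);
>   }
> }
> {code}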
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
