[ 
https://issues.apache.org/jira/browse/CARBONDATA-967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-967:
---------------------------------------
    Fix Version/s: 1.1.0

> select * with order by and limit for join not working
> -----------------------------------------------------
>
>                 Key: CARBONDATA-967
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-967
>             Project: CarbonData
>          Issue Type: Bug
>          Components: spark-integration
>            Reporter: joobisb
>            Priority: Minor
>             Fix For: 1.1.0
>
>         Attachments: carbon1.csv, carbon2.csv
>
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> CREATE TABLE carbon1 (imei string, age int, task bigint, sale decimal(10,3), productdate timestamp, score double) STORED BY 'org.apache.carbondata.format';
> LOAD DATA INPATH 'hdfs://hacluster/data/carbon1.csv' INTO TABLE carbon1 OPTIONS ('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_LOGGER_ENABLE'='false', 'BAD_RECORDS_ACTION'='FORCE', 'FILEHEADER'='');
> CREATE TABLE carbon2 (imei string, age int, task bigint, sale decimal(10,3), productdate timestamp, score double) STORED BY 'org.apache.carbondata.format';
> LOAD DATA INPATH 'hdfs://hacluster/data/carbon2.csv' INTO TABLE carbon2 OPTIONS ('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_LOGGER_ENABLE'='false', 'BAD_RECORDS_ACTION'='FORCE', 'FILEHEADER'='');
> CREATE TABLE carbon3 (imei string, age int, task bigint, sale decimal(10,3), productdate timestamp, score double) STORED BY 'org.apache.carbondata.format';
> LOAD DATA INPATH 'hdfs://hacluster/data/carbon1.csv' INTO TABLE carbon3 OPTIONS ('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_LOGGER_ENABLE'='false', 'BAD_RECORDS_ACTION'='FORCE', 'FILEHEADER'='');
> select * from carbon1 a full outer join carbon2 b on substr(a.productdate,1,10)=substr(b.productdate,1,10) order by a.imei limit 100;
> The final select (full outer join with order by and limit) fails with the exception below:
> ERROR TaskSetManager: Task 0 in stage 12.0 failed 1 times; aborting job
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
> to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: 
> Lost task 0.0 in stage 12.0 (TID 211, localhost, executor driver): 
> java.lang.ClassCastException: org.apache.spark.unsafe.types.UTF8String cannot 
> be cast to java.lang.Integer
>       at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
>       at 
> org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1$$anonfun$3$$anon$1$$anonfun$next$1.apply$mcVI$sp(CarbonDictionaryDecoder.scala:112)
>       at 
> org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1$$anonfun$3$$anon$1$$anonfun$next$1.apply(CarbonDictionaryDecoder.scala:109)
>       at 
> org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1$$anonfun$3$$anon$1$$anonfun$next$1.apply(CarbonDictionaryDecoder.scala:109)
>       at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at 
> org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1$$anonfun$3$$anon$1.next(CarbonDictionaryDecoder.scala:109)
>       at 
> org.apache.spark.sql.CarbonDictionaryDecoder$$anonfun$doExecute$1$$anonfun$3$$anon$1.next(CarbonDictionaryDecoder.scala:99)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:232)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
>       at
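> For context, the following is an illustrative sketch only, not CarbonData code: in CarbonData, dictionary-encoded columns carry integer surrogate keys at runtime, and CarbonDictionaryDecoder later swaps each key back for its original string value. The stack trace suggests that, with order by and limit applied on top of the join, a slot the decoder still treats as a surrogate key already holds the decoded UTF8String, so unboxing it as an int fails. The class names and values below are hypothetical; only the cast failure mechanism matches the trace.
>
> ```java
> // Minimal sketch of the failure mode in the stack trace above: a row slot
> // that the decoder expects to hold an Integer surrogate key instead holds
> // an already-decoded string, so unboxing throws ClassCastException
> // (the same path as scala.runtime.BoxesRunTime.unboxToInt).
> public class CastFailureDemo {
>     public static void main(String[] args) {
>         // Hypothetical row: slot 0 should be a surrogate key (Integer),
>         // but it already contains the decoded string value.
>         Object[] row = new Object[] {"1AA100"};
>         try {
>             int surrogateKey = (Integer) row[0]; // decoder-side expectation
>             System.out.println(surrogateKey);
>         } catch (ClassCastException e) {
>             // Same failure class as in the report, with String in place of
>             // Spark's UTF8String.
>             System.out.println("ClassCastException: " + e.getMessage());
>         }
>     }
> }
> ```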



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
