[ https://issues.apache.org/jira/browse/SPARK-21337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079785#comment-16079785 ]
fengchaoge commented on SPARK-21337:
------------------------------------

Thank you very much. What should I do next? Thank you for your guidance. The table is defined as follows:

CREATE TABLE app_claim_assess_rule_granularity (
  report_no string, case_times string, id_clm_channel_process string, loss_object_no string,
  assess_times string, loss_name string, max_loss_amount string, impairment_amount string,
  rule_code string, rule_name string, application_code string, brand_name string,
  manufacturer_name string, series_name string, group_name string, model_name string,
  end_case_date string, updated_date string, assess_um string, car_mark string,
  garage_code string, garage_name string, garage_type string, privilege_group_name string,
  small_type string, is_transfer string, praepostor_type string, channel_type string,
  channel_flag string, loss_type string, loss_agree_amount string, loss_count_agree string,
  department_code string, department_code_01 string, department_code_02_v string, department_code_03 string,
  department_code_04 string, department_code_name_01 string, department_code_name_02 string, department_code_name_03 string,
  department_code_name_04 string, assess_dept_code string, verify_department_code_01 string, verify_department_code_02 string,
  verify_department_code_03 string, verify_department_code_04 string, verify_department_code_name_01 string, verify_department_code_name_02 string,
  verify_department_code_name_03 string, verify_department_code_name_04 string, assess_quote_price_um string, assess_guide_um string,
  assess_center_guide_um string, rule_type string, loss_count_assess string, loss_name_rank string,
  loss_name_rule_rank string, both_trigger string)
PARTITIONED BY (department_code_02 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'
WITH SERDEPROPERTIES ('field.delim'='\u0001', 'serialization.format'='\u0001')
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION 'hdfs://hdp-hdfs01/user/hive/warehouse/gbd_dm_pac_safe.db/app_claim_assess_rule_granularity'
TBLPROPERTIES ('transient_lastDdlTime'='1499412897')

> SQL which has large 'case when' expressions may cause code generation beyond
> 64KB
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-21337
>                 URL: https://issues.apache.org/jira/browse/SPARK-21337
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.1
>         Environment: spark-2.1.1-hadoop-2.6.0-cdh-5.4.2
>            Reporter: fengchaoge
>             Fix For: 2.1.1
>
>         Attachments: test1.JPG, test.JPG
>
> When there are large 'case when' expressions in Spark SQL, the CodeGenerator
> fails to compile them. The error message is followed by a huge dump of the
> generated source code before the job finally fails:
>
> java.util.concurrent.ExecutionException: java.lang.Exception: failed to compile:
> org.codehaus.janino.JaninoRuntimeException: Code of method
> "apply_9$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
> of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection"
> grows beyond 64 KB.
>
> It seems that SPARK-13242 solved this problem in spark-1.6.2; however, it
> appears again in spark-2.1.1.
> https://issues.apache.org/jira/browse/SPARK-13242
> Is there something wrong?

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
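For context, the failure shape can be reproduced without the original data: any query that stacks enough CASE WHEN arms into a single projection can push one generated Java method past the JVM's 64 KB-per-method bytecode limit, which is what the Janino error above reports. The sketch below is hypothetical: it only generates such a query as text against the table defined above (the branch values and the number of arms are made up for illustration), it does not run Spark.

```python
# Hypothetical reproduction sketch: build a SQL query whose projection
# contains hundreds of CASE WHEN arms -- the shape that can make a single
# generated apply_N$ method exceed the JVM's 64 KB bytecode limit.
# Table/column names come from the DDL above; branch values are invented.

def build_case_when(column, n_branches):
    """Return one CASE expression with n_branches WHEN arms."""
    whens = "\n".join(
        f"  WHEN {column} = '{i}' THEN 'bucket_{i}'" for i in range(n_branches)
    )
    return f"CASE\n{whens}\n  ELSE 'other'\nEND AS {column}_bucket"

columns = ["rule_code", "channel_type", "loss_type", "department_code"]
select_list = ",\n".join(build_case_when(c, 200) for c in columns)
query = f"SELECT\n{select_list}\nFROM app_claim_assess_rule_granularity"

# Each WHEN arm compiles to a comparison plus a branch in the generated
# Java, so 4 columns x 200 arms lands well beyond one 64 KB method.
print(query.count("WHEN"))  # 800
```

As a mitigation while the bug stands, disabling whole-stage code generation (`spark.sql.codegen.wholeStage=false`) is commonly suggested so that Spark falls back to smaller generated units or interpreted evaluation; whether that suffices for a given query is version-dependent and should be verified against your own workload.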