[ https://issues.apache.org/jira/browse/DRILL-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987902#comment-15987902 ]

ASF GitHub Bot commented on DRILL-5140:
---------------------------------------

Github user jinfengni commented on a diff in the pull request:

    https://github.com/apache/drill/pull/818#discussion_r113825753
  
    --- Diff: 
exec/java-exec/src/main/java/org/apache/drill/exec/expr/ClassGenerator.java ---
    @@ -66,7 +68,17 @@
       public static final GeneratorMapping DEFAULT_CONSTANT_MAP = 
GM("doSetup", "doSetup", null, null);
     
       static final org.slf4j.Logger logger = 
org.slf4j.LoggerFactory.getLogger(ClassGenerator.class);
    -  public static enum BlockType {SETUP, EVAL, RESET, CLEANUP};
    +
    +  /**
    +   * Field has 2 indexes within the constant pull: field item + name and 
type item.
    --- End diff ---
    
    I could not make sense of how you calculated the number 26767. According to 
the JVM spec [1], each item in the constant pool table consists of a 2-or-more-byte 
"info" section. In some cases, such as a class name, one item could use 2 
bytes, while in other cases, such as field and method references, one item could 
use 4 bytes. Integer constants use 4 bytes, and long constants use 8 bytes. 
    
    That is, even if I have a class with fewer than 26767 fields, we may still 
hit the "too many constants" complaint.    
     
    
    1. http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html#jvms-4.4
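The point above can be sketched with some back-of-envelope arithmetic. This is not code from the Drill patch; it is a hypothetical estimate assuming the constant_pool_count is an unsigned 16-bit value (JVMS section 4.1, so at most 65534 usable entries) and assuming each generated field adds roughly one CONSTANT_Fieldref, one CONSTANT_NameAndType, and one CONSTANT_Utf8 entry for its unique name, with type descriptors shared across fields. The class name `ConstantPoolEstimate` and the overhead figure are illustrative, not taken from the PR.

```java
// Hedged sketch: estimate how many generated fields fit before the
// constant pool overflows, under the stated per-field-entry assumptions.
public class ConstantPoolEstimate {

    // constant_pool_count is a u2; index 0 is reserved, so ~65534 usable entries.
    static final int MAX_POOL_ENTRIES = 65535 - 1;

    // Fields that fit, given entries consumed per field and a fixed
    // per-class overhead (class info, superclass, method refs, etc.).
    static int maxFields(int entriesPerField, int fixedOverhead) {
        return (MAX_POOL_ENTRIES - fixedOverhead) / entriesPerField;
    }

    public static void main(String[] args) {
        // With ~3 entries per field and an assumed ~100 entries of class
        // overhead, the ceiling lands near 21811 fields; with 2 entries
        // per field it jumps to ~32717. The threshold moves with the
        // per-field entry count, which is why a single fixed cutoff
        // (like 26767) is hard to justify without stating the assumptions.
        System.out.println(maxFields(3, 100));
        System.out.println(maxFields(2, 100));
    }
}
```

The takeaway matches the review comment: the "too many constants" limit is a function of how many pool entries each field actually consumes, so a class could overflow well before any single field-count threshold.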


> Fix CompileException in run-time generated code when record batch has large 
> number of fields.
> ---------------------------------------------------------------------------------------------
>
>                 Key: DRILL-5140
>                 URL: https://issues.apache.org/jira/browse/DRILL-5140
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Flow
>    Affects Versions: 1.9.0
>            Reporter: Khurram Faraaz
>            Assignee: Volodymyr Vysotskyi
>            Priority: Critical
>         Attachments: drill_5117.q, manyColumns.csv
>
>
> CTAS that does SELECT over 5003 columns fails with CompileException: File 
> 'org.apache.drill.exec.compile.DrillJavaFileObject...
> Drill 1.9.0 git commit ID : 4c1b420b
> CTAS statement and CSV data file are attached.
> I ran the test with and without setting the below system option; the test 
> failed in both cases.
> alter system set `exec.java_compiler`='JDK';
> The sqlline session just closes with the below message after the failing CTAS 
> is executed.
> Closing: org.apache.drill.jdbc.impl.DrillConnectionImpl
> Stack trace from drillbit.log
> {noformat}
> 2016-12-20 12:02:16,016 [27a6e241-99b1-1f2a-8a91-394f8166e969:frag:0:0] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: CompileException: File 
> 'org.apache.drill.exec.compile.DrillJavaFileObject[ProjectorGen45.java]', 
> Line 11, Column 8: ProjectorGen45.java:11: error: too many constants
> public class ProjectorGen45 {
>        ^ (compiler.err.limit.pool)
> Fragment 0:0
> [Error Id: ced84dce-669d-47c2-b5d2-5e0559dbd9fd on centos-01.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> CompileException: File 
> 'org.apache.drill.exec.compile.DrillJavaFileObject[ProjectorGen45.java]', 
> Line 11, Column 8: ProjectorGen45.java:11: error: too many constants
> public class ProjectorGen45 {
>        ^ (compiler.err.limit.pool)
> Fragment 0:0
> [Error Id: ced84dce-669d-47c2-b5d2-5e0559dbd9fd on centos-01.qa.lab:31010]
>         at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.9.0.jar:1.9.0]
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: org.apache.drill.exec.exception.SchemaChangeException: Failure 
> while attempting to load generated class
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:487)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:78)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:91)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.8.0_91]
>         at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_91]
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>  ~[hadoop-common-2.7.0-mapr-1607.jar:na]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         ... 4 common frames omitted
> Caused by: org.apache.drill.exec.exception.ClassTransformationException: 
> java.util.concurrent.ExecutionException: 
> org.apache.drill.exec.exception.ClassTransformationException: Failure 
> generating transformation classes for value:
> ...
> {noformat}
> Thanks,
> Khurram



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)