[ https://issues.apache.org/jira/browse/DRILL-802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005378#comment-14005378 ]

Aditya Kishore commented on DRILL-802:
--------------------------------------

What do you get if you run the following query?

{{select c_row, c_int, convert_from(convert_to(cast(c_int as int), 'INT_HADOOPV'), 'INT_HADOOPV') from data;}}
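
For context (my own illustration, not part of the original report): 'INT_HADOOPV' is Drill's name for Hadoop's variable-length integer encoding, so the query above round-trips each value through that encoding. A minimal standalone sketch of the same round trip using org.apache.hadoop.io.WritableUtils, assuming hadoop-common is on the classpath (the class name VIntRoundTrip is made up):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.WritableUtils;

// Round-trips ints through Hadoop's variable-length encoding, which is
// roughly what convert_to(..., 'INT_HADOOPV') / convert_from(...) exercise.
public class VIntRoundTrip {
  public static void main(String[] args) throws IOException {
    int[] samples = {0, 1, -1, 12, Integer.MIN_VALUE, Integer.MAX_VALUE};
    for (int v : samples) {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      WritableUtils.writeVInt(new DataOutputStream(bos), v);   // encode
      byte[] encoded = bos.toByteArray();                      // 1-5 bytes for an int
      int back = WritableUtils.readVInt(
          new DataInputStream(new ByteArrayInputStream(encoded))); // decode
      System.out.printf("%d -> %d byte(s) -> %d%n", v, encoded.length, back);
    }
  }
}
{code}

Note that the stack trace in the report shows the conversion going through ConvertUtil$HadoopWritables.writeVLong, and the capacity(9) in the exception matches the 9-byte worst case of the VLong form of this encoding.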

> convert int to int_hadoopv throws IndexOutOfBoundsException
> -----------------------------------------------------------
>
>                 Key: DRILL-802
>                 URL: https://issues.apache.org/jira/browse/DRILL-802
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Operators
>            Reporter: Chun Chang
>            Assignee: Aditya Kishore
>
> A query converting INT to INT_HADOOPV caused the exception:
> 0: jdbc:drill:schema=dfs> select c_row, c_int, convert_from(convert_to(c_int, 'INT_HADOOPV'), 'INT_HADOOPV') from data;
> Query failed: org.apache.drill.exec.rpc.RpcException: Remote failure while running query.[error_id: "2460e868-70a8-4133-9aa4-2884140cfb8b"
> endpoint {
>   address: "qa-node117.qa.lab"
>   user_port: 31010
>   control_port: 31011
>   data_port: 31012
> }
> error_type: 0
> message: "Failure while running fragment. < IndexOutOfBoundsException:[ writerIndex: 0 (expected: readerIndex(1) <= writerIndex <= capacity(9)) ]"
> ]
> Error: exception while executing query (state=,code=0)
> Column c_int holds the following int values:
> 0: jdbc:drill:schema=dfs> select c_row, c_int from data;
> +------------+------------+
> |   c_row    |   c_int    |
> +------------+------------+
> | 1          | 0          |
> | 2          | 1          |
> | 3          | -1         |
> | 4          | 12         |
> | 5          | 123        |
> | 6          | 92032039   |
> | 7          | -23395000  |
> | 8          | -99392039  |
> | 9          | -2147483648 |
> | 10         | 2147483647 |
> | 11         | 32767      |
> | 12         | -32767     |
> | 13         | 49032      |
> | 14         | -4989385   |
> | 15         | 69834830   |
> | 16         | 243        |
> | 17         | -426432    |
> | 18         | -3904      |
> | 19         | 489392758  |
> | 20         | 589032574  |
> | 21         | 340000504  |
> | 22         | 0          |
> | 23         | 1          |
> +------------+------------+
> stack trace:
> 15:33:58.248 [c8d7eb9d-21af-4e1b-bc61-6f3349915d9a:frag:0:0] DEBUG o.a.d.e.w.fragment.FragmentExecutor - Caught exception while running fragment
> java.lang.IndexOutOfBoundsException: writerIndex: 0 (expected: readerIndex(1) <= writerIndex <= capacity(9))
>   at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:87) ~[netty-buffer-4.0.7.Final.jar:na]
>   at io.netty.buffer.SwappedByteBuf.writerIndex(SwappedByteBuf.java:112) ~[netty-buffer-4.0.7.Final.jar:na]
>   at org.apache.drill.exec.util.ConvertUtil$HadoopWritables.writeVLong(ConvertUtil.java:96) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.test.generated.ProjectorGen722.doEval(ProjectorTemplate.java:40) ~[na:na]
>   at org.apache.drill.exec.test.generated.ProjectorGen722.projectRecords(ProjectorTemplate.java:65) ~[na:na]
>   at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:94) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.record.AbstractSingleRecordBatch.next(AbstractSingleRecordBatch.java:66) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.next(ProjectRecordBatch.java:83) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:111) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.next(ScreenCreator.java:80) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:104) ~[drill-java-exec-1.0.0-m2-incubating-SNAPSHOT-rebuffed.jar:1.0.0-m2-incubating-SNAPSHOT]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
>   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
> 15:33:58.249 [c8d7eb9d-21af-4e1b-bc61-6f3349915d9a:frag:0:0] ERROR o.a.d.e.w.f.AbstractStatusReporter - Error 2460e868-70a8-4133-9aa4-2884140cfb8b: Failure while running fragment.
> java.lang.IndexOutOfBoundsException: writerIndex: 0 (expected: readerIndex(1) <= writerIndex <= capacity(9))
>   (stack trace identical to the DEBUG entry above)
> physical plan:
> 0: jdbc:drill:schema=dfs> explain plan for select c_row, c_int, convert_from(convert_to(c_int, 'INT_HADOOPV'), 'INT_HADOOPV') from data;
> +------------+------------+
> |    text    |    json    |
> +------------+------------+
> | ScreenPrel
>   ProjectPrel(c_row=[$1], c_int=[$2], EXPR$2=[CONVERT_FROM(CONVERT_TO($2, 'INT_HADOOPV'), 'INT_HADOOPV')])
>     ScanPrel(groupscan=[ParquetGroupScan [entries=[ReadEntryWithPath [path=maprfs:/user/root/mondrian/data]], selectionRoot=/user/root/mondrian/data, columns=[SchemaPath [`c_row`], SchemaPath [`c_int`]]]])
>  | {
>   "head" : {
>     "version" : 1,
>     "generator" : {
>       "type" : "ExplainHandler",
>       "info" : ""
>     },
>     "type" : "APACHE_DRILL_PHYSICAL",
>     "options" : [ ],
>     "resultMode" : "EXEC"
>   },
>   "graph" : [ {
>     "pop" : "parquet-scan",
>     "@id" : 1,
>     "entries" : [ {
>       "path" : "maprfs:/user/root/mondrian/data"
>     } ],
>     "storage" : {
>       "type" : "file",
>       "connection" : "maprfs:///",
>       "workspaces" : {
>         "default" : {
>           "location" : "/user/root/mondrian/",
>           "writable" : false,
>           "storageformat" : null
>         },
>         "home" : {
>           "location" : "/",
>           "writable" : false,
>           "storageformat" : null
>         },
>         "root" : {
>           "location" : "/",
>           "writable" : false,
>           "storageformat" : null
>         },
>         "tmp" : {
>           "location" : "/tmp",
>           "writable" : true,
>           "storageformat" : "csv"
>         }
>       },
>       "formats" : {
>         "psv" : {
>           "type" : "text",
>           "extensions" : [ "tbl" ],
>           "delimiter" : "|"
>         },
>         "csv" : {
>           "type" : "text",
>           "extensions" : [ "csv" ],
>           "delimiter" : ","
>         },
>         "tsv" : {
>           "type" : "text",
>           "extensions" : [ "tsv" ],
>           "delimiter" : "\t"
>         },
>         "parquet" : {
>           "type" : "parquet"
>         },
>         "json" : {
>           "type" : "json"
>         }
>       }
>     },
>     "format" : {
>       "type" : "parquet"
>     },
>     "columns" : [ "`c_row`", "`c_int`" ],
>     "selectionRoot" : "/user/root/mondrian/data"
>   }, {
>     "pop" : "project",
>     "@id" : 2,
>     "exprs" : [ {
>       "ref" : "`c_row`",
>       "expr" : "`c_row`"
>     }, {
>       "ref" : "`c_int`",
>       "expr" : "`c_int`"
>     }, {
>       "ref" : "`EXPR$2`",
>       "expr" : "convert_from(convert_to(`c_int`, \"INT_HADOOPV\"), \"INT_HADOOPV\")"
>     } ],
>     "child" : 1,
>     "initialAllocation" : 1000000,
>     "maxAllocation" : 10000000000
>   }, {
>     "pop" : "screen",
>     "@id" : 3,
>     "child" : 2,
>     "initialAllocation" : 1000000,
>     "maxAllocation" : 10000000000
>   } ]
> } |
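
Aside (my own context, not from the report): the exception comes from Netty's buffer index invariant, readerIndex <= writerIndex <= capacity, which AbstractByteBuf.writerIndex(int) enforces. A minimal standalone sketch that trips the exact same message, assuming netty-buffer 4.x on the classpath (the class name WriterIndexRepro is made up):

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Reproduces the index check that fails inside HadoopWritables.writeVLong:
// moving writerIndex below the current readerIndex is rejected by AbstractByteBuf.
public class WriterIndexRepro {
  public static void main(String[] args) {
    ByteBuf buf = Unpooled.buffer(9); // capacity(9), matching the report
    buf.writeByte(0);                 // writerIndex -> 1
    buf.readerIndex(1);               // readerIndex -> 1 (legal while writerIndex >= 1)
    buf.writerIndex(0);               // throws IndexOutOfBoundsException:
                                      // writerIndex: 0 (expected: readerIndex(1) <= writerIndex <= capacity(9))
  }
}
{code}

Read this way, the message suggests writeVLong resets the buffer's writerIndex to 0 while a readerIndex of 1 is still set, rather than the write overrunning the buffer's capacity.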


