[ 
https://issues.apache.org/jira/browse/DRILL-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16051405#comment-16051405
 ] 

Paul Rogers commented on DRILL-5590:
------------------------------------

Thanks for the sample data! I ended up having to double the file to reproduce 
the problems (there seem to be multiple).

The problem appears to be in {{FieldVarCharOutput}}, which is used for text 
files with headers.

{code}
  public void finishBatch() {
    batchIndex++;
    ...
         this.vectors[i].getMutator().setValueCount(batchIndex);
{code}

Notice the error: the batch index counts batches, but the value count for each 
vector should be the *record count*, not the batch count. Not sure if this is 
the only problem, but it is *a* problem.

Note also that {{batchIndex}} is useless: it is set to 0 at the start of each 
batch and incremented at the end of each batch, so it just flips from 0 to 1 
and back again.
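The intended behavior, rather, is to push the per-batch record count into each vector. A minimal, self-contained sketch of that logic (the {{recordCount}} field and the {{valueCounts}} array are illustrative stand-ins, not Drill's actual members):

```java
// Sketch: finishBatch() should push the per-batch *record* count into each
// vector's value count, then reset for the next batch. Field names here are
// illustrative, not Drill's actual members.
import java.util.Arrays;

public class FinishBatchSketch {
    int recordCount;      // rows written to the current batch
    int[] valueCounts;    // stand-in for vectors[i].getMutator().setValueCount(...)

    FinishBatchSketch(int numVectors) {
        valueCounts = new int[numVectors];
    }

    void writeRecord() {
        recordCount++;
    }

    void finishBatch() {
        Arrays.fill(valueCounts, recordCount); // record count, not batch count
        recordCount = 0;                       // ready for the next batch
    }

    public static void main(String[] args) {
        FinishBatchSketch batch = new FinishBatchSketch(2);
        batch.writeRecord();
        batch.writeRecord();
        batch.writeRecord();
        batch.finishBatch();
        System.out.println(Arrays.toString(batch.valueCounts)); // [3, 3]
    }
}
```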

Interestingly, however, this case comes from a user who uses the {{columns}} 
form of CSV access, which uses a different class, {{RepeatedVarCharOutput}}. 
But, consistently enough, that class has similar code; this time, though, 
"batchIndex" does not mean "batch count", it means "record index within the 
batch." (Sigh...)

{code}
  @Override
  public void finishBatch() {
    mutator.setValueCount(batchIndex);
  }
{code}

Moreover, the above functions are doubly useless. It turns out that 
{{ScanBatch}} already does the work on line 231:

{code}
      for (VectorWrapper<?> w : container) {
        w.getValueVector().getMutator().setValueCount(recordCount);
      }
{code}

So, I simply removed the code from the two {{finishBatch}} methods.

This then surfaces another problem. After reading the first batch of 8096 rows, 
the {{ScanBatch}} code above tries to set the value count to 8096 as shown 
above. The vectors are of type {{VarCharVector}}, with this code:

{code}
    public void setValueCount(int valueCount) {
      final int currentByteCapacity = getByteCapacity();
      final int idx = offsetVector.getAccessor().get(valueCount);
{code}

{{valueCount}} = 8096
{{currentByteCapacity}} = 32768

The offset vector is of type {{UInt4Vector}} with this code:

{code}
    public int get(int index) {
      return data.getInt(index * 4);
    }
{code}

{{index * 4}} = 32384, but the actual size of the offset vector is 16384 bytes, 
so we get an {{IndexOutOfBoundsException}}.
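The failure is pure arithmetic and can be modeled outside Drill: each offset entry occupies 4 bytes, so entry 8096 lives at byte index 32384, well past the end of a 16384-byte buffer. A minimal model using {{java.nio.ByteBuffer}} (a stand-in for the vector's backing buffer, not Drill's actual code):

```java
import java.nio.ByteBuffer;

// Sketch: reading offset entry `valueCount` touches byte index valueCount * 4.
// A 16384-byte buffer holds only 4096 entries, so entry 8096 is out of range.
public class OffsetBoundsSketch {
    static boolean readFails(int capacityBytes, int valueCount) {
        ByteBuffer data = ByteBuffer.allocate(capacityBytes);
        try {
            data.getInt(valueCount * 4); // mirrors UInt4Vector.get(index)
            return false;
        } catch (IndexOutOfBoundsException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(readFails(16384, 8096)); // true: byte index 32384
        System.out.println(readFails(16384, 100));  // false: byte index 400
    }
}
```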

Now, the offset vector had to have been updated for each row, so this error 
should never occur.

Probing further, the query used in the reproduction is:

{code}
SELECT count(1) FROM `dfs.data`.`test2.txt` lc where lc.rfc like 'CUBA7706%'
{code}

Note the reference to {{lc.rfc}} (in lower case). But the CSV file has the 
header as "RFC" (in upper case). Drill is supposed to be case insensitive, but 
the code in {{FieldVarCharOutput}} is case sensitive:

{code}
  public FieldVarCharOutput(OutputMutator outputMutator, String [] fieldNames, 
Collection<SchemaPath> columns, boolean isStarQuery) throws 
SchemaChangeException {
...
    List<String> outputColumns = new ArrayList<>(Arrays.asList(fieldNames));
...
        if (pathStr.equals(COL_NAME) && path.getRootSegment().getChild() != 
null) {
...
          index = outputColumns.indexOf(pathStr);
          if (index < 0) {
            // found col that is not a part of fieldNames, add it
            // this col might be part of some another scanner
            index = totalFields++;
            outputColumns.add(pathStr);
          }
{code}

As a result of the case difference, the {{if (index < 0)}} check above finds no 
match for "rfc" and creates a "null" column to "fill in" the "missing" column. 
This would be fine if we had asked for "foo" and there were no "foo."
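The mismatch is easy to demonstrate: {{List.indexOf}} uses case-sensitive {{equals}}, so "rfc" never matches "RFC", while a case-insensitive scan finds the real column. A small sketch (the header names other than "RFC" are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: case-sensitive indexOf misses "rfc" against header "RFC";
// a case-insensitive scan finds it.
public class CaseLookupSketch {
    static int indexOfIgnoreCase(List<String> cols, String name) {
        for (int i = 0; i < cols.size(); i++) {
            if (cols.get(i).equalsIgnoreCase(name)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<String> outputColumns = Arrays.asList("RFC", "NOMBRE", "ESTATUS");
        System.out.println(outputColumns.indexOf("rfc"));            // -1: miss
        System.out.println(indexOfIgnoreCase(outputColumns, "rfc")); // 0: match
    }
}
```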

For a "real" field, the vector is updated here:

{code}
  public boolean endField() {
    ...
      currentVector.getMutator().setSafe(recordCount, fieldBytes, 0, 
currentDataPointer);
{code}

But, the null fields are never updated, so their offsets are never written, 
which causes the IOOB error found above. This is due, in part, to DRILL-5529: 
missing logic to fill in missing offsets.

The solution is to add logic to the {{finishRecord()}} method to set the null 
vectors.
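A minimal sketch of that idea, modeling a null column's offset vector as a plain int array (not Drill's actual vector API): every finished record must append a zero-length entry so the null column's offsets stay in step with the real columns.

```java
// Sketch: per-record fill for a projected-but-missing ("null") column.
public class NullColumnFillSketch {
    int recordCount;               // records completed so far
    int[] offsets = new int[4097]; // offset vector: entry i+1 ends record i

    // called from finishRecord() for each null column: append a zero-length
    // value so this column's offsets advance with the real columns
    void finishRecord() {
        offsets[recordCount + 1] = offsets[recordCount];
        recordCount++;
    }

    public static void main(String[] args) {
        NullColumnFillSketch col = new NullColumnFillSketch();
        for (int i = 0; i < 3; i++) {
            col.finishRecord();
        }
        System.out.println(col.recordCount); // 3
    }
}
```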

Also, added case-insensitive name mapping to the {{FieldVarCharOutput}} 
constructor to fix the bug that led to the null-column bug.

Note that all of this code is due for replacement as part of the fixes for 
DRILL-5211. So, the fix created for this bug is a short-term work-around until 
the more general solution from DRILL-5211 becomes available.

> Drill return IndexOutOfBoundsException when a (Text) file > 4096 rows
> ---------------------------------------------------------------------
>
>                 Key: DRILL-5590
>                 URL: https://issues.apache.org/jira/browse/DRILL-5590
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Text & CSV
>    Affects Versions: 1.10.0
>         Environment: OS: Oracle Linux Enterprise 7, OSX 10.10.1
> JVM: 1.8
> Drill Installation type: Embebed or distributed(Cluster 2 Nodes)
>            Reporter: Victor Garcia
>            Assignee: Paul Rogers
>         Attachments: xaa_19.txt
>
>
> I describe below, the storage (name lco):
> {
>   "type": "file",
>   "enabled": true,
>   "connection": "file:///",
>   "config": null,
>   "workspaces": {
>     "root": {
>       "location": "/data/source/lco",
>       "writable": false,
>       "defaultInputFormat": "psv"
>     }
>   },
>   "formats": {
>     "psv": {
>       "type": "text",
>       "extensions": [
>         "txt"
>       ],
>       "extractHeader": true,
>       "delimiter": "|"
>     }
>   }
> }
> Querying a CSV file with 3 columns: when the file has > 4096 rows (including 
> the header), Drill returns an error, but when I reduce the file to 4095 rows 
> the query works.
> Query used: (Select count(1) from lco.root.* as lc where lc.rfc like 
> 'CUBA7706%')
> The original file has 35M rows, but I tested by reducing the rows until I 
> found the number of rows that produces the error.
> The original source file is in this URL 
> (http://cfdisat.blob.core.windows.net/lco/l_RFC_2017_05_11_2.txt.gz)
> First part of error:
> at 
> org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:123)
>  [drill-java-exec-1.10.0.jar:1.10.0]
>       at 
> org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:343) 
> [drill-java-exec-1.10.0.jar:1.10.0]
>       at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:88) 
> [drill-java-exec-1.10.0.jar:1.10.0]
>       at 
> org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:274) 
> [drill-rpc-1.10.0.jar:1.10.0]
>       at 
> org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:244) 
> [drill-rpc-1.10.0.jar:1.10.0]
>       at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
>  [netty-codec-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
>  [netty-handler-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [netty-codec-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
>  [netty-codec-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) 
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) 
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) 
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
>       at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>  [netty-common-4.0.27.Final.jar:4.0.27.Final]
>       at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
> 2017-06-15 14:45:03,056 [qtp2036240117-58] ERROR 
> o.a.d.e.server.rest.QueryResources - Query from Web UI Failed
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> IndexOutOfBoundsException: index: 16384, length: 4 (expected: range(0, 16384))



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
