[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2018-07-13 Thread Vitalii Diravka (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543482#comment-16543482
 ] 

Vitalii Diravka commented on DRILL-4337:


I have reproduced the issue only with the dataset from DRILL-5495. The issue is 
resolved in the context of that Jira.

> Drill fails to read INT96 fields from hive generated parquet files
> --
>
> Key: DRILL-4337
> URL: https://issues.apache.org/jira/browse/DRILL-4337
> Project: Apache Drill
>  Issue Type: Bug
>Reporter: Rahul Challapalli
>Assignee: Vitalii Diravka
>Priority: Blocker
> Fix For: 1.14.0
>
> Attachments: hive1_fewtypes_null.parquet
>
>
> git.commit.id.abbrev=576271d
> Cluster: 2 nodes running MaprFS 4.1
> The data file queried below was generated from Hive. Below is the output 
> from running the same query multiple times. 
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: NegativeArraySizeException
> Fragment 0:0
> [Error Id: 5517e983-ccae-4c96-b09c-30f331919e56 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: IllegalArgumentException: Reading past RLE/BitPacking 
> stream.
> Fragment 0:0
> [Error Id: 94ed5996-d2ac-438d-b460-c2d2e41bdcc3 on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 0
> Fragment 0:0
> [Error Id: 41dca093-571e-49e5-a2ab-fd69210b143d on qa-node191.qa.lab:31010] 
> (state=,code=0)
> 0: jdbc:drill:zk=10.10.100.190:5181> select timestamp_col from 
> hive1_fewtypes_null;
> ++
> | timestamp_col  |
> ++
> | null   |
> | [B@7c766115|
> | [B@3fdfe989|
> | null   |
> | [B@55d4222 |
> | [B@2da0c8ee|
> | [B@16e798a9|
> | [B@3ed78afe|
> | [B@38e649ed|
> | [B@16ff83ca|
> | [B@61254e91|
> | [B@5849436a|
> | [B@31e9116e|
> | [B@3c77665b|
> | [B@42e0ff60|
> | [B@419e19ed|
> | [B@72b83842|
> | [B@1c75afe5|
> | [B@726ef1fb|
> | [B@51d0d06e|
> | [B@64240fb8|
> ++
> {code}
> Attached are the log, the Hive DDL used to generate the Parquet file, and the 
> Parquet file itself
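For context on the output above: the `[B@7c766115`-style values are Java's default `Object.toString()` for a `byte[]`, i.e. Drill handed back the raw 12-byte INT96 values instead of decoded timestamps. A minimal illustration (class name is illustrative, not from Drill):

```java
// Demonstrates why Drill printed "[B@7c766115": byte[] inherits
// Object.toString(), which yields "[B@" + identity hash code in hex.
public class ByteArrayToStringDemo {
    public static void main(String[] args) {
        byte[] int96Raw = new byte[12]; // a Parquet INT96 value is 12 bytes
        String printed = int96Raw.toString();
        System.out.println(printed);                   // e.g. [B@7c766115
        System.out.println(printed.startsWith("[B@")); // true
    }
}
```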



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2017-05-09 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003589#comment-16003589
 ] 

Rahul Challapalli commented on DRILL-4337:
--

Marked it as a blocker. This bug would prevent Drill users from consuming 
Parquet files with timestamp columns generated from Hive, Spark, etc.



[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2017-05-09 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003541#comment-16003541
 ] 

Rahul Challapalli commented on DRILL-4337:
--

[~vitalii] The error is in the ParquetScanner, so I don't think using 
CONVERT_FROM helps. In any case, I tried it and got an 
ArrayIndexOutOfBoundsException



[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2016-08-26 Thread Vitalii Diravka (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15440697#comment-15440697
 ] 

Vitalii Diravka commented on DRILL-4337:


[~rkins] Do you have the same errors while using CONVERT_FROM(timestamp_col, 
'TIMESTAMP_IMPALA')?
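For reference, `CONVERT_FROM(col, 'TIMESTAMP_IMPALA')` decodes the 12-byte INT96 layout that Hive and Impala write: 8 little-endian bytes of nanoseconds-of-day followed by a 4-byte little-endian Julian day number. A standalone sketch of that decoding (class and method names are illustrative, not Drill's):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.time.Instant;

public class Int96Timestamp {
    // Julian day number of the Unix epoch, 1970-01-01
    private static final long JULIAN_EPOCH_DAY = 2440588L;

    // Decode a 12-byte Parquet INT96 (Impala/Hive layout):
    // bytes 0-7: little-endian nanoseconds within the day,
    // bytes 8-11: little-endian Julian day number.
    static Instant decode(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();
        int julianDay = buf.getInt();
        long epochDay = julianDay - JULIAN_EPOCH_DAY;
        return Instant.ofEpochSecond(epochDay * 86400L + nanosOfDay / 1_000_000_000L,
                                     nanosOfDay % 1_000_000_000L);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(1_000_000_000L); // one second into the day
        buf.putInt(2440588);         // Julian day of 1970-01-01
        System.out.println(decode(buf.array())); // 1970-01-01T00:00:01Z
    }
}
```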



[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2016-02-02 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129409#comment-15129409
 ] 

Rahul Challapalli commented on DRILL-4337:
--

I am also seeing this error when reading the Hive-generated file through 
Drill's native Parquet reader



[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2016-02-01 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127212#comment-15127212
 ] 

Rahul Challapalli commented on DRILL-4337:
--

Failure 2:
{code}
2016-02-01 21:38:58,089 [29502f8d-bb52-bd78-51a4-bdbf14f3a498:frag:0:0] DEBUG 
o.a.d.exec.physical.impl.ScanBatch - Failed to read the batch. Stopping...
org.apache.drill.common.exceptions.DrillRuntimeException: Error in parquet 
record reader.
Message:
Hadoop path: 
/drill/testdata/hive_storage/hive1_fewtypes_null/hive1_fewtypes_null.parquet
Total records read: 0
Mock records read: 0
Records to read: 21
Row group index: 0
Records in row group: 21
Parquet Metadata: ParquetMetaData{FileMetaData{schema: message hive_schema {
  optional int32 int_col;
  optional int64 bigint_col;
  optional binary date_col (UTF8);
  optional binary time_col (UTF8);
  optional int96 timestamp_col;
  optional binary interval_col (UTF8);
  optional binary varchar_col (UTF8);
  optional float float_col;
  optional double double_col;
  optional boolean bool_col;
}
, metadata: {}}, blocks: [BlockMetaData{21, 1886 [ColumnMetaData{UNCOMPRESSED 
[int_col] INT32  [RLE, BIT_PACKED, PLAIN], 4}, ColumnMetaData{UNCOMPRESSED 
[bigint_col] INT64  [RLE, BIT_PACKED, PLAIN], 111}, ColumnMetaData{UNCOMPRESSED 
[date_col] BINARY  [RLE, BIT_PACKED, PLAIN], 298}, ColumnMetaData{UNCOMPRESSED 
[time_col] BINARY  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 563}, 
ColumnMetaData{UNCOMPRESSED [timestamp_col] INT96  [RLE, BIT_PACKED, 
PLAIN_DICTIONARY], 793}, ColumnMetaData{UNCOMPRESSED [interval_col] BINARY  
[RLE, BIT_PACKED, PLAIN_DICTIONARY], 1031}, ColumnMetaData{UNCOMPRESSED 
[varchar_col] BINARY  [RLE, BIT_PACKED, PLAIN], 1189}, 
ColumnMetaData{UNCOMPRESSED [float_col] FLOAT  [RLE, BIT_PACKED, PLAIN], 1543}, 
ColumnMetaData{UNCOMPRESSED [double_col] DOUBLE  [RLE, BIT_PACKED, PLAIN], 
1654}, ColumnMetaData{UNCOMPRESSED [bool_col] BOOLEAN  [RLE, BIT_PACKED, 
PLAIN], 1851}]}]}
at 
org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise(ParquetRecordReader.java:349)
 ~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:451)
 ~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:191) 
~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method) 
[na:1.7.0_71]
at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_71]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
 [hadoop-common-2.7.0-mapr-1506.jar:na]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.IllegalArgumentException: Reading past RLE/BitPacking 
stream.
at 
{code}

[jira] [Commented] (DRILL-4337) Drill fails to read INT96 fields from hive generated parquet files

2016-02-01 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/DRILL-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127215#comment-15127215
 ] 

Rahul Challapalli commented on DRILL-4337:
--

Failure 3:
{code}
org.apache.drill.common.exceptions.DrillRuntimeException: Error in parquet 
record reader.
Message:
Hadoop path: 
/drill/testdata/hive_storage/hive1_fewtypes_null/hive1_fewtypes_null.parquet
Total records read: 0
Mock records read: 0
Records to read: 21
Row group index: 0
Records in row group: 21
Parquet Metadata: ParquetMetaData{FileMetaData{schema: message hive_schema {
  optional int32 int_col;
  optional int64 bigint_col;
  optional binary date_col (UTF8);
  optional binary time_col (UTF8);
  optional int96 timestamp_col;
  optional binary interval_col (UTF8);
  optional binary varchar_col (UTF8);
  optional float float_col;
  optional double double_col;
  optional boolean bool_col;
}
, metadata: {}}, blocks: [BlockMetaData{21, 1886 [ColumnMetaData{UNCOMPRESSED 
[int_col] INT32  [RLE, BIT_PACKED, PLAIN], 4}, ColumnMetaData{UNCOMPRESSED 
[bigint_col] INT64  [RLE, BIT_PACKED, PLAIN], 111}, ColumnMetaData{UNCOMPRESSED 
[date_col] BINARY  [RLE, BIT_PACKED, PLAIN], 298}, ColumnMetaData{UNCOMPRESSED 
[time_col] BINARY  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 563}, 
ColumnMetaData{UNCOMPRESSED [timestamp_col] INT96  [RLE, BIT_PACKED, 
PLAIN_DICTIONARY], 793}, ColumnMetaData{UNCOMPRESSED [interval_col] BINARY  
[RLE, BIT_PACKED, PLAIN_DICTIONARY], 1031}, ColumnMetaData{UNCOMPRESSED 
[varchar_col] BINARY  [RLE, BIT_PACKED, PLAIN], 1189}, 
ColumnMetaData{UNCOMPRESSED [float_col] FLOAT  [RLE, BIT_PACKED, PLAIN], 1543}, 
ColumnMetaData{UNCOMPRESSED [double_col] DOUBLE  [RLE, BIT_PACKED, PLAIN], 
1654}, ColumnMetaData{UNCOMPRESSED [bool_col] BOOLEAN  [RLE, BIT_PACKED, 
PLAIN], 1851}]}]}
at 
org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise(ParquetRecordReader.java:349)
 ~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:451)
 ~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:191) 
~[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
[drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at java.security.AccessController.doPrivileged(Native Method) 
[na:1.7.0_71]
at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_71]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
 [hadoop-common-2.7.0-mapr-1506.jar:na]
at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250)
 [drill-java-exec-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.5.0-SNAPSHOT.jar:1.5.0-SNAPSHOT]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readInt(RunLengthBitPackingHybridDecoder.java:75)
 ~[parquet-column-1.8.1-drill-r0.jar:1.8.1-drill-r0]
at
{code}