[jira] [Updated] (DRILL-7038) Queries on partitioned columns scan the entire datasets

2019-03-07 Thread Bohdan Kazydub (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bohdan Kazydub updated DRILL-7038:
--
Labels:   (was: doc-impacting)

> Queries on partitioned columns scan the entire datasets
> ---
>
> Key: DRILL-7038
> URL: https://issues.apache.org/jira/browse/DRILL-7038
> Project: Apache Drill
>  Issue Type: Improvement
>Reporter: Bohdan Kazydub
>Assignee: Bohdan Kazydub
>Priority: Major
> Fix For: 1.16.0
>
>
> For tables with hive-style partitions like
> {code}
> /table/2018/Q1
> /table/2018/Q2
> /table/2019/Q1
> etc.
> {code}
> if any of the following queries is run:
> {code}
> select distinct dir0 from dfs.`/table`
> {code}
> {code}
> select dir0 from dfs.`/table` group by dir0
> {code}
> it will actually scan every single record in the table rather than just
> getting a list of directories at the dir0 level. This happens even when
> cached metadata is available, and the penalty grows with the size of the
> dataset.
> To avoid this, a logical prune rule can collect the partition columns
> (`dir0`), either from the metadata cache (if available) or from the group
> scan, and drop unnecessary files from the read. The rule will be applied
> when the following conditions hold:
> 1) all queried columns are partition columns, and
> 2) either {{DISTINCT}} or {{GROUP BY}} operations are performed.
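>
> For illustration only, a rough sketch of how such a rule could be shaped on
> top of Calcite's RelOptRule API (the class name and method bodies below are
> hypothetical and do not reflect Drill's actual implementation):
> {code:java}
> import org.apache.calcite.plan.RelOptRule;
> import org.apache.calcite.plan.RelOptRuleCall;
> import org.apache.calcite.rel.core.Aggregate;
> import org.apache.calcite.rel.core.TableScan;
>
> public class PartitionColumnScanPruneRule extends RelOptRule {
>
>   public PartitionColumnScanPruneRule() {
>     // Matches an aggregation (DISTINCT / GROUP BY) sitting directly on a scan.
>     super(operand(Aggregate.class, operand(TableScan.class, none())),
>         "PartitionColumnScanPruneRule");
>   }
>
>   @Override
>   public void onMatch(RelOptRuleCall call) {
>     Aggregate aggregate = call.rel(0);
>     TableScan scan = call.rel(1);
>     // 1. Verify that every column referenced by the aggregate is a
>     //    partition column (dir0, dir1, ...); bail out otherwise.
>     // 2. Collect the distinct partition values, preferring the metadata
>     //    cache when it is available and falling back to the group scan's
>     //    file listing.
>     // 3. Replace the scan with one that reads a single file per distinct
>     //    partition value, so the aggregate no longer touches every record.
>   }
> }
> {code}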



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7077) Add Function to Facilitate Time Series Analysis

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7077:

Affects Version/s: (was: 1.16.0)

> Add Function to Facilitate Time Series Analysis
> ---
>
> Key: DRILL-7077
> URL: https://issues.apache.org/jira/browse/DRILL-7077
> Project: Apache Drill
>  Issue Type: New Feature
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: doc-impacting
> Fix For: 1.16.0
>
>
> When analyzing time-based data, you will often have to aggregate by time
> grains. While some time grains are easy to calculate, others, such as
> quarter, can be quite difficult. This function enables a user to quickly and
> easily aggregate data by various units of time. Usage is as follows:
> {code:java}
> SELECT <fields>
> FROM <table>
> GROUP BY nearestDate(<date_field>, <time_interval>){code}
> So let's say that a user wanted to count the number of hits on a web server
> per 15-minute interval; the query might look like this:
> {code:java}
> SELECT nearestDate(`eventDate`, '15MINUTE' ) AS eventDate,
> COUNT(*) AS hitCount
> FROM dfs.`log.httpd`
> GROUP BY nearestDate(`eventDate`, '15MINUTE'){code}
> The function currently supports the following time units:
>  * YEAR
>  * QUARTER
>  * MONTH
>  * WEEK_SUNDAY
>  * WEEK_MONDAY
>  * DAY
>  * HOUR
>  * HALF_HOUR / 30MIN
>  * QUARTER_HOUR / 15MIN
>  * MINUTE
>  * 30SECOND
>  * 15SECOND
>  * SECOND
>  
>  
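> For instance, rolling the same web-server log up by quarter (an illustrative
> query reusing the `eventDate` column from the example above) only changes the
> time-unit argument:
> {code:java}
> SELECT nearestDate(`eventDate`, 'QUARTER') AS eventQuarter,
> COUNT(*) AS hitCount
> FROM dfs.`log.httpd`
> GROUP BY nearestDate(`eventDate`, 'QUARTER')
> {code}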



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7077) Add Function to Facilitate Time Series Analysis

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7077:

Labels: doc-impacting  (was: )

> Add Function to Facilitate Time Series Analysis
> ---
>
> Key: DRILL-7077
> URL: https://issues.apache.org/jira/browse/DRILL-7077
> Project: Apache Drill
>  Issue Type: New Feature
>Affects Versions: 1.16.0
>Reporter: Charles Givre
>Assignee: Charles Givre
>Priority: Major
>  Labels: doc-impacting
> Fix For: 1.16.0
>
>
> When analyzing time-based data, you will often have to aggregate by time
> grains. While some time grains are easy to calculate, others, such as
> quarter, can be quite difficult. This function enables a user to quickly and
> easily aggregate data by various units of time. Usage is as follows:
> {code:java}
> SELECT <fields>
> FROM <table>
> GROUP BY nearestDate(<date_field>, <time_interval>){code}
> So let's say that a user wanted to count the number of hits on a web server
> per 15-minute interval; the query might look like this:
> {code:java}
> SELECT nearestDate(`eventDate`, '15MINUTE' ) AS eventDate,
> COUNT(*) AS hitCount
> FROM dfs.`log.httpd`
> GROUP BY nearestDate(`eventDate`, '15MINUTE'){code}
> The function currently supports the following time units:
>  * YEAR
>  * QUARTER
>  * MONTH
>  * WEEK_SUNDAY
>  * WEEK_MONDAY
>  * DAY
>  * HOUR
>  * HALF_HOUR / 30MIN
>  * QUARTER_HOUR / 15MIN
>  * MINUTE
>  * 30SECOND
>  * 15SECOND
>  * SECOND
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (DRILL-7084) ResultSet getObject method throws not implemented exception if the column type is NULL

2019-03-07 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7084:
---

 Summary: ResultSet getObject method throws not implemented 
exception if the column type is NULL
 Key: DRILL-7084
 URL: https://issues.apache.org/jira/browse/DRILL-7084
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


The getObject method is used by some tools, for example DBeaver. The issue is 
not reproduced with sqlline or the Drill Web UI.

*Query:*
{code:sql}
select coalesce(n_name1, n_name2) from cp.`tpch/nation.parquet` limit 1;
{code}

*Expected result:*
null

*Actual result:*
An exception is thrown:
{noformat}
java.lang.RuntimeException: not implemented
at 
oadd.org.apache.calcite.avatica.AvaticaSite.notImplemented(AvaticaSite.java:421)
at oadd.org.apache.calcite.avatica.AvaticaSite.get(AvaticaSite.java:380)
at 
org.apache.drill.jdbc.impl.DrillResultSetImpl.getObject(DrillResultSetImpl.java:183)
at 
org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCResultSetImpl.getObject(JDBCResultSetImpl.java:628)
at 
org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCObjectValueHandler.fetchColumnValue(JDBCObjectValueHandler.java:60)
at 
org.jkiss.dbeaver.model.impl.jdbc.data.handlers.JDBCAbstractValueHandler.fetchValueObject(JDBCAbstractValueHandler.java:49)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetDataReceiver.fetchRow(ResultSetDataReceiver.java:122)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.fetchQueryData(SQLQueryJob.java:729)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.executeStatement(SQLQueryJob.java:465)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.lambda$0(SQLQueryJob.java:392)
at org.jkiss.dbeaver.model.DBUtils.tryExecuteRecover(DBUtils.java:1598)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:390)
at 
org.jkiss.dbeaver.runtime.sql.SQLQueryJob.extractData(SQLQueryJob.java:822)
at 
org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:2532)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:93)
at org.jkiss.dbeaver.model.DBUtils.tryExecuteRecover(DBUtils.java:1598)
at 
org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:91)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:101)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)

{noformat}
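
For reference, a minimal standalone JDBC sketch that exercises the same code 
path (the connection URL and class name are illustrative; point it at a 
running Drillbit):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GetObjectNullTypeRepro {
  public static void main(String[] args) throws Exception {
    // Illustrative connection URL; adjust host/port for your Drillbit.
    try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "select coalesce(n_name1, n_name2) from cp.`tpch/nation.parquet` limit 1")) {
      while (rs.next()) {
        // DBeaver fetches cell values via getObject; with a NULL-typed column
        // this call throws "not implemented" instead of returning null.
        System.out.println(rs.getObject(1));
      }
    }
  }
}
{code}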




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7073) CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7073:

Description: 
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties
Example: 
{noformat}
create schema 
(col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
'prop1' = 'val1', 'prop2' = 'val2' })
path '/path/to/schema'
{noformat}

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.

  was:
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.


> CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements
> -
>
> Key: DRILL-7073
> URL: https://issues.apache.org/jira/browse/DRILL-7073
> Project: Apache Drill
>  Issue Type: Sub-task
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Major
> Fix For: 1.16.0
>
>
> CREATE SCHEMA command improvements:
> 1. add format
> 2. add default
> 3. add column properties
> Example: 
> {noformat}
> create schema 
> (col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
> 'prop1' = 'val1', 'prop2' = 'val2' })
> path '/path/to/schema'
> {noformat}
> TupleSchema / ColumnMetadata improvements:
> 1. add properties map;
> 2. add format;
> 3. add default from string literal;
> 4. add ser / de methods.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7073) CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7073:

Description: 
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.

Example: 
{noformat}
create schema 
(col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
'prop1' = 'val1', 'prop2' = 'val2' })
path '/path/to/schema'
{noformat}

The following schema will be created:
{noformat}
{
  "schema" : {
    "columns" : [
      {
        "name" : "col",
        "type" : "DATE",
        "mode" : "REQUIRED",
        "format" : "yyyy-MM-dd",
        "default" : "2018-12-31",
        "properties" : {
          "prop2" : "val2",
          "prop1" : "val1"
        }
      }
    ]
  },
  "version" : 1
}
{noformat}


  was:
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties
Example: 
{noformat}
create schema 
(col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
'prop1' = 'val1', 'prop2' = 'val2' })
path '/path/to/schema'
{noformat}

The following schema will be created:
{noformat}
{
  "schema" : {
    "columns" : [
      {
        "name" : "col",
        "type" : "DATE",
        "mode" : "REQUIRED",
        "format" : "yyyy-MM-dd",
        "default" : "2018-12-31",
        "properties" : {
          "prop2" : "val2",
          "prop1" : "val1"
        }
      }
    ]
  },
  "version" : 1
}
{noformat}

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.


> CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements
> -
>
> Key: DRILL-7073
> URL: https://issues.apache.org/jira/browse/DRILL-7073
> Project: Apache Drill
>  Issue Type: Sub-task
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Major
> Fix For: 1.16.0
>
>
> CREATE SCHEMA command improvements:
> 1. add format
> 2. add default
> 3. add column properties
> TupleSchema / ColumnMetadata improvements:
> 1. add properties map;
> 2. add format;
> 3. add default from string literal;
> 4. add ser / de methods.
> Example: 
> {noformat}
> create schema 
> (col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
> 'prop1' = 'val1', 'prop2' = 'val2' })
> path '/path/to/schema'
> {noformat}
> The following schema will be created:
> {noformat}
> {
>   "schema" : {
>     "columns" : [
>       {
>         "name" : "col",
>         "type" : "DATE",
>         "mode" : "REQUIRED",
>         "format" : "yyyy-MM-dd",
>         "default" : "2018-12-31",
>         "properties" : {
>           "prop2" : "val2",
>           "prop1" : "val1"
>         }
>       }
>     ]
>   },
>   "version" : 1
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7073) CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7073:

Description: 
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties
Example: 
{noformat}
create schema 
(col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
'prop1' = 'val1', 'prop2' = 'val2' })
path '/path/to/schema'
{noformat}

The following schema will be created:
{noformat}
{
  "schema" : {
    "columns" : [
      {
        "name" : "col",
        "type" : "DATE",
        "mode" : "REQUIRED",
        "format" : "yyyy-MM-dd",
        "default" : "2018-12-31",
        "properties" : {
          "prop2" : "val2",
          "prop1" : "val1"
        }
      }
    ]
  },
  "version" : 1
}
{noformat}

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.

  was:
CREATE SCHEMA command improvements:
1. add format
2. add default
3. add column properties
Example: 
{noformat}
create schema 
(col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
'prop1' = 'val1', 'prop2' = 'val2' })
path '/path/to/schema'
{noformat}

TupleSchema / ColumnMetadata improvements:
1. add properties map;
2. add format;
3. add default from string literal;
4. add ser / de methods.


> CREATE SCHEMA command / TupleSchema / ColumnMetadata improvements
> -
>
> Key: DRILL-7073
> URL: https://issues.apache.org/jira/browse/DRILL-7073
> Project: Apache Drill
>  Issue Type: Sub-task
>Reporter: Arina Ielchiieva
>Assignee: Arina Ielchiieva
>Priority: Major
> Fix For: 1.16.0
>
>
> CREATE SCHEMA command improvements:
> 1. add format
> 2. add default
> 3. add column properties
> Example: 
> {noformat}
> create schema 
> (col date not null format 'yyyy-MM-dd' default '2018-12-31' properties { 
> 'prop1' = 'val1', 'prop2' = 'val2' })
> path '/path/to/schema'
> {noformat}
> The following schema will be created:
> {noformat}
> {
>   "schema" : {
>     "columns" : [
>       {
>         "name" : "col",
>         "type" : "DATE",
>         "mode" : "REQUIRED",
>         "format" : "yyyy-MM-dd",
>         "default" : "2018-12-31",
>         "properties" : {
>           "prop2" : "val2",
>           "prop1" : "val1"
>         }
>       }
>     ]
>   },
>   "version" : 1
> }
> {noformat}
> TupleSchema / ColumnMetadata improvements:
> 1. add properties map;
> 2. add format;
> 3. add default from string literal;
> 4. add ser / de methods.
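>
> For illustration only, a minimal sketch (using Jackson, with hypothetical POJO
> names that mirror the JSON layout above rather than Drill's actual TupleSchema
> / ColumnMetadata classes) of what ser / de of such a schema file could look like:
> {code:java}
> import com.fasterxml.jackson.annotation.JsonProperty;
> import com.fasterxml.jackson.databind.ObjectMapper;
> import java.io.File;
> import java.util.List;
> import java.util.Map;
>
> // Illustrative POJOs; field names follow the JSON keys shown above.
> class SchemaFile {
>   public Body schema;
>   public int version;
>
>   static class Body {
>     public List<Column> columns;
>   }
>
>   static class Column {
>     public String name;
>     public String type;
>     public String mode;
>     public String format;
>     @JsonProperty("default")
>     public String defaultValue;
>     public Map<String, String> properties;
>   }
> }
>
> public class SchemaSerDeSketch {
>   public static void main(String[] args) throws Exception {
>     ObjectMapper mapper = new ObjectMapper();
>     // Deserialize a schema file produced by CREATE SCHEMA ... PATH '...'
>     SchemaFile parsed = mapper.readValue(new File("/path/to/schema"), SchemaFile.class);
>     // ... and serialize it back, e.g. after programmatic edits.
>     System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(parsed));
>   }
> }
> {code}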



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-4814) extractHeader attribute not working with the table function

2019-03-07 Thread Arina Ielchiieva (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786930#comment-16786930
 ] 

Arina Ielchiieva commented on DRILL-4814:
-

Can be reproduced when the storage format is configured with extractHeader, so 
this is not specific to the table function.
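
For illustration, an approximate dfs text-format configuration with 
extractHeader enabled (option names here mirror the table-function parameters 
in the description below and may vary slightly by Drill version):
{noformat}
"csv": {
  "type": "text",
  "extensions": ["csv"],
  "lineDelimiter": "\r\n",
  "fieldDelimiter": ",",
  "extractHeader": true
}
{noformat}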

> extractHeader attribute not working with the table function
> ---
>
> Key: DRILL-4814
> URL: https://issues.apache.org/jira/browse/DRILL-4814
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Functions - Drill
>Affects Versions: 1.8.0
>Reporter: Krystal
>Assignee: Paul Rogers
>Priority: Major
>
> I have the following table, with \r\n as the line delimiter:
> Id,col1,col2
> 1,aaa,bbb
> 2,ccc,ddd
> 3,eee
> 4,fff,ggg
> The following queries work fine:
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>','));
> +-----------------------+
> |        columns        |
> +-----------------------+
> | ["Id","col1","col2"]  |
> | ["1","aaa","bbb"]     |
> | ["2","ccc","ddd"]     |
> | ["3","eee"]           |
> | ["4","fff","ggg"]     |
> +-----------------------+
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>',',skipFirstLine=>true));
> +--------------------+
> |      columns       |
> +--------------------+
> | ["1","aaa","bbb"]  |
> | ["2","ccc","ddd"]  |
> | ["3","eee"]        |
> | ["4","fff","ggg"]  |
> +--------------------+
> The following query fails with the extractHeader attribute:
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>',',extractHeader=>true));
> {code}
> java.lang.IndexOutOfBoundsException: index: 254, length: 3 (expected: 
> range(0, 256))
>   at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1134)
>   at 
> io.netty.buffer.PooledUnsafeDirectByteBuf.getBytes(PooledUnsafeDirectByteBuf.java:136)
>   at io.netty.buffer.WrappedByteBuf.getBytes(WrappedByteBuf.java:289)
>   at 
> io.netty.buffer.UnsafeDirectLittleEndian.getBytes(UnsafeDirectLittleEndian.java:30)
>   at io.netty.buffer.DrillBuf.getBytes(DrillBuf.java:629)
>   at 
> org.apache.drill.exec.vector.VarCharVector$Accessor.get(VarCharVector.java:441)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getBytes(VarCharAccessor.java:125)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getString(VarCharAccessor.java:146)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getObject(VarCharAccessor.java:136)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getObject(VarCharAccessor.java:94)
>   at 
> org.apache.drill.exec.vector.accessor.BoundCheckingAccessor.getObject(BoundCheckingAccessor.java:148)
>   at 
> org.apache.drill.jdbc.impl.TypeConvertingSqlAccessor.getObject(TypeConvertingSqlAccessor.java:795)
>   at 
> org.apache.drill.jdbc.impl.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:179)
>   at 
> net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.getObject(DrillResultSetImpl.java:420)
>   at sqlline.Rows$Row.<init>(Rows.java:157)
>   at sqlline.IncrementalRows.hasNext(IncrementalRows.java:63)
>   at 
> sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:87)
>   at sqlline.TableOutputFormat.print(TableOutputFormat.java:118)
>   at sqlline.SqlLine.print(SqlLine.java:1593)
>   at sqlline.Commands.execute(Commands.java:852)
>   at sqlline.Commands.sql(Commands.java:751)
>   at sqlline.SqlLine.dispatch(SqlLine.java:746)
>   at sqlline.SqlLine.begin(SqlLine.java:621)
>   at sqlline.SqlLine.start(SqlLine.java:375)
>   at sqlline.SqlLine.main(SqlLine.java:268)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-4814) extractHeader attribute not working with the table function

2019-03-07 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-4814:

Component/s: (was: Functions - Drill)
 Storage - Text & CSV

> extractHeader attribute not working with the table function
> ---
>
> Key: DRILL-4814
> URL: https://issues.apache.org/jira/browse/DRILL-4814
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Storage - Text & CSV
>Affects Versions: 1.8.0
>Reporter: Krystal
>Assignee: Paul Rogers
>Priority: Major
>
> I have the following table, with \r\n as the line delimiter:
> Id,col1,col2
> 1,aaa,bbb
> 2,ccc,ddd
> 3,eee
> 4,fff,ggg
> The following queries work fine:
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>','));
> +-----------------------+
> |        columns        |
> +-----------------------+
> | ["Id","col1","col2"]  |
> | ["1","aaa","bbb"]     |
> | ["2","ccc","ddd"]     |
> | ["3","eee"]           |
> | ["4","fff","ggg"]     |
> +-----------------------+
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>',',skipFirstLine=>true));
> +--------------------+
> |      columns       |
> +--------------------+
> | ["1","aaa","bbb"]  |
> | ["2","ccc","ddd"]  |
> | ["3","eee"]        |
> | ["4","fff","ggg"]  |
> +--------------------+
> The following query fails with the extractHeader attribute:
> select * from 
> table(`drill-3149/header.csv`(type=>'text',lineDelimiter=>'\r\n',fieldDelimiter=>',',extractHeader=>true));
> {code}
> java.lang.IndexOutOfBoundsException: index: 254, length: 3 (expected: 
> range(0, 256))
>   at io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1134)
>   at 
> io.netty.buffer.PooledUnsafeDirectByteBuf.getBytes(PooledUnsafeDirectByteBuf.java:136)
>   at io.netty.buffer.WrappedByteBuf.getBytes(WrappedByteBuf.java:289)
>   at 
> io.netty.buffer.UnsafeDirectLittleEndian.getBytes(UnsafeDirectLittleEndian.java:30)
>   at io.netty.buffer.DrillBuf.getBytes(DrillBuf.java:629)
>   at 
> org.apache.drill.exec.vector.VarCharVector$Accessor.get(VarCharVector.java:441)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getBytes(VarCharAccessor.java:125)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getString(VarCharAccessor.java:146)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getObject(VarCharAccessor.java:136)
>   at 
> org.apache.drill.exec.vector.accessor.VarCharAccessor.getObject(VarCharAccessor.java:94)
>   at 
> org.apache.drill.exec.vector.accessor.BoundCheckingAccessor.getObject(BoundCheckingAccessor.java:148)
>   at 
> org.apache.drill.jdbc.impl.TypeConvertingSqlAccessor.getObject(TypeConvertingSqlAccessor.java:795)
>   at 
> org.apache.drill.jdbc.impl.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:179)
>   at 
> net.hydromatic.avatica.AvaticaResultSet.getObject(AvaticaResultSet.java:351)
>   at 
> org.apache.drill.jdbc.impl.DrillResultSetImpl.getObject(DrillResultSetImpl.java:420)
>   at sqlline.Rows$Row.<init>(Rows.java:157)
>   at sqlline.IncrementalRows.hasNext(IncrementalRows.java:63)
>   at 
> sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:87)
>   at sqlline.TableOutputFormat.print(TableOutputFormat.java:118)
>   at sqlline.SqlLine.print(SqlLine.java:1593)
>   at sqlline.Commands.execute(Commands.java:852)
>   at sqlline.Commands.sql(Commands.java:751)
>   at sqlline.SqlLine.dispatch(SqlLine.java:746)
>   at sqlline.SqlLine.begin(SqlLine.java:621)
>   at sqlline.SqlLine.start(SqlLine.java:375)
>   at sqlline.SqlLine.main(SqlLine.java:268)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)