[jira] [Updated] (DRILL-2362) Drill should manage Query Profiling archiving

2019-02-15 Thread Kunal Khatua (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-2362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Khatua updated DRILL-2362:

Fix Version/s: 1.16.0

> Drill should manage Query Profiling archiving
> -
>
> Key: DRILL-2362
> URL: https://issues.apache.org/jira/browse/DRILL-2362
> Project: Apache Drill
>  Issue Type: New Feature
>  Components: Storage - Other
>Affects Versions: 0.7.0
>Reporter: Chris Westin
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.16.0
>
>
> We collect query profile information for analysis purposes, but we keep it 
> forever. For now, with only a few queries, that isn't a problem, but as users 
> put Drill into production, automated use by other applications will make this 
> data grow quickly. We need a retention policy mechanism, with suitable 
> settings administrators can use, and we need to implement it so that this 
> data can be cleaned up.
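A minimal sketch of the selection logic such a retention policy might use, assuming an administrator-set cap on the number of retained profiles (the cap, the method name, and the file-name convention are hypothetical illustrations, not existing Drill settings):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical retention sketch: keep only the newest maxProfiles profiles.
public class ProfileRetention {
    // profiles maps a profile file name (e.g. "<queryid>.sys.drill") to its
    // last-modified time in millis; returns the names to delete, oldest first.
    public static List<String> toDelete(Map<String, Long> profiles, int maxProfiles) {
        return profiles.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())               // oldest first
                .limit(Math.max(0, profiles.size() - maxProfiles))  // excess beyond the cap
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

An age-based variant (delete profiles older than N days) would only change the selection predicate; either way, the cap or maximum age would be the administrator-facing setting.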



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7040) Update Protocol Buffers syntax to proto3

2019-02-15 Thread Vitalii Diravka (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka updated DRILL-7040:
---
Component/s: Tools, Build & Test

> Update Protocol Buffers syntax to proto3
> 
>
> Key: DRILL-7040
> URL: https://issues.apache.org/jira/browse/DRILL-7040
> Project: Apache Drill
>  Issue Type: Task
>  Components: Tools, Build & Test
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: Future
>
>
> Updating the protobuf library version is addressed by DRILL-6642, but we 
> still use proto2 syntax. To update the syntax to proto3 we need to meet 
> some requirements:
> # Proto3 doesn't support required fields, so all existing required fields 
> need to become optional. If we expect such fields to always be present in 
> the messages, we need to revisit the approach.
> # Custom default values are no longer supported, and Drill uses custom 
> defaults in some places. The impact of removing them should be further 
> investigated, but it will definitely require changes in logic.
> # There is no longer a way to determine whether a missing field was 
> omitted or was explicitly assigned the default value. Whether the code 
> relies on this needs investigation.
> # Support for nested groups is excluded from proto3. This shouldn't be a 
> problem as they are not used in Drill.
> # Protostuff and protobuf-maven-plugin should also be updated, which may 
> cause compatibility issues.
> Links to the language specs:
> [Proto2|https://developers.google.com/protocol-buffers/docs/proto]
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3]
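For illustration, the first two points would play out as below; the message and field names are hypothetical, not Drill's actual .proto definitions, and the two variants would live in separate files:

```protobuf
// --- proto2 (current syntax) ---
syntax = "proto2";
message Example {
  required int32 row_count = 1;                      // proto3 forbids 'required'
  optional string status = 2 [default = "UNKNOWN"];  // proto3 forbids custom defaults
}

// --- proto3 equivalent (separate file) ---
// Every field is implicitly optional; an unset 'status' reads back as the
// zero value "" and is indistinguishable from one explicitly set to "".
syntax = "proto3";
message Example {
  int32 row_count = 1;
  string status = 2;
}
```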





[jira] [Updated] (DRILL-7040) Update Protocol Buffers syntax to proto3

2019-02-15 Thread Vitalii Diravka (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka updated DRILL-7040:
---
Fix Version/s: Future

> Update Protocol Buffers syntax to proto3
> 
>
> Key: DRILL-7040
> URL: https://issues.apache.org/jira/browse/DRILL-7040
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Priority: Major
> Fix For: Future
>
>
> Updating the protobuf library version is addressed by DRILL-6642, but we 
> still use proto2 syntax. To update the syntax to proto3 we need to meet 
> some requirements:
> # Proto3 doesn't support required fields, so all existing required fields 
> need to become optional. If we expect such fields to always be present in 
> the messages, we need to revisit the approach.
> # Custom default values are no longer supported, and Drill uses custom 
> defaults in some places. The impact of removing them should be further 
> investigated, but it will definitely require changes in logic.
> # There is no longer a way to determine whether a missing field was 
> omitted or was explicitly assigned the default value. Whether the code 
> relies on this needs investigation.
> # Support for nested groups is excluded from proto3. This shouldn't be a 
> problem as they are not used in Drill.
> # Protostuff and protobuf-maven-plugin should also be updated, which may 
> cause compatibility issues.
> Links to the language specs:
> [Proto2|https://developers.google.com/protocol-buffers/docs/proto]
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3]





[jira] [Assigned] (DRILL-7041) CompileException happens if a nested coalesce function returns null

2019-02-15 Thread Pritesh Maker (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pritesh Maker reassigned DRILL-7041:


Assignee: Bohdan Kazydub

> CompileException happens if a nested coalesce function returns null
> ---
>
> Key: DRILL-7041
> URL: https://issues.apache.org/jira/browse/DRILL-7041
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Bohdan Kazydub
>Priority: Major
>
> *Query:*
> {code:sql}
> select coalesce(coalesce(n_name1, n_name2), n_name) from 
> cp.`tpch/nation.parquet`
> {code}
> *Expected result:*
> Values from the "n_name" column should be returned
> *Actual result:*
> An exception happens:
> {code}
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> CompileException: Line 57, Column 27: Assignment conversion not possible from 
> type "org.apache.drill.exec.expr.holders.NullableVarCharHolder" to type 
> "org.apache.drill.exec.vector.UntypedNullHolder" Fragment 0:0 Please, refer 
> to logs for more information. [Error Id: e54d5bfd-604d-4a39-b62f-33bb964e5286 
> on userf87d-pc:31010] (org.apache.drill.exec.exception.SchemaChangeException) 
> Failure while attempting to load generated class 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():573
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143
>  org.apache.drill.exec.record.AbstractRecordBatch.next():186 
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83 
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284 
> java.security.AccessController.doPrivileged():-2 
> javax.security.auth.Subject.doAs():422 
> org.apache.hadoop.security.UserGroupInformation.doAs():1746 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():284 
> org.apache.drill.common.SelfCleaningRunnable.run():38 
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
> java.lang.Thread.run():748 Caused By 
> (org.apache.drill.exec.exception.ClassTransformationException) 
> java.util.concurrent.ExecutionException: 
> org.apache.drill.exec.exception.ClassTransformationException: Failure 
> generating transformation classes for value: package 
> org.apache.drill.exec.test.generated; import 
> org.apache.drill.exec.exception.SchemaChangeException; import 
> org.apache.drill.exec.expr.holders.BigIntHolder; import 
> org.apache.drill.exec.expr.holders.BitHolder; import 
> org.apache.drill.exec.expr.holders.NullableVarBinaryHolder; import 
> org.apache.drill.exec.expr.holders.NullableVarCharHolder; import 
> org.apache.drill.exec.expr.holders.VarCharHolder; import 
> org.apache.drill.exec.ops.FragmentContext; import 
> org.apache.drill.exec.record.RecordBatch; import 
> org.apache.drill.exec.vector.UntypedNullHolder; import 
> org.apache.drill.exec.vector.UntypedNullVector; import 
> org.apache.drill.exec.vector.VarCharVector; public class ProjectorGen35 { 
> BigIntHolder const6; BitHolder constant9; UntypedNullHolder constant13; 
> VarCharVector vv14; UntypedNullVector vv19; public void doEval(int inIndex, 
> int outIndex) throws SchemaChangeException { { UntypedNullHolder out0 = new 
> UntypedNullHolder(); if (constant9 .value == 1) { if (constant13 .isSet!= 0) 
> { out0 = constant13; } } else { VarCharHolder out17 = new VarCharHolder(); { 
> out17 .buffer = vv14 .getBuffer(); long startEnd = vv14 
> .getAccessor().getStartEnd((inIndex)); out17 .start = ((int) startEnd); out17 
> .end = ((int)(startEnd >> 32)); } // start of eval portion of 
> convertToNullableVARCHAR function. // NullableVarCharHolder out18 = new 
> NullableVarCharHolder(); { final NullableVarCharHolder output = new 
> NullableVarCharHolder(); VarCharHolder input = out17; 
> GConvertToNullableVarCharHolder_eval: { output.isSet = 1; output.start = 
> input.start; output.end = input.end; output.buffer = input.buffer; } out18 = 
> output; } // end of eval portion of convertToNullableVARCHAR function. 
> // if (out18 .isSet!= 0) { out0 = out18; } } if (!(out0 .isSet == 0)) { 
> vv19 .getMutator().set((outIndex), out0 .isSet, out0); } } } public void 
> doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) 
> throws SchemaChangeException { { UntypedNullHolder out1 = new 
> UntypedNullHolder(); NullableVarBinaryHolder out2 = new 
> 
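The compile failure quoted above boils down to an ordinary Java typing problem in the generated projector: the outer coalesce is typed as untyped null (both inner columns are missing), while its fallback branch produces a nullable VARCHAR holder, and the two holder classes are unrelated. A simplified sketch, using stand-in classes rather than Drill's real holders:

```java
// Simplified stand-ins for Drill's generated-code value holders; the real
// classes live in org.apache.drill.exec.expr.holders and
// org.apache.drill.exec.vector. Field sets here are illustrative only.
class UntypedNullHolder {
    int isSet;          // always 0: the value is known to be null
}

class NullableVarCharHolder {
    int isSet;          // 1 when a value is present
    int start, end;     // offsets into the value buffer
}

public class HolderMismatch {
    public static void main(String[] args) {
        // The projector for coalesce(coalesce(n_name1, n_name2), n_name)
        // declares its output as untyped null, since both inner columns
        // do not exist in the file:
        UntypedNullHolder out0 = new UntypedNullHolder();
        // ...but the fallback branch yields a holder for the real n_name column:
        NullableVarCharHolder out18 = new NullableVarCharHolder();
        // out0 = out18;   // will not compile: the holder classes are
        //                 // unrelated, which is exactly the "Assignment
        //                 // conversion not possible" error Janino reports.
    }
}
```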

[jira] [Assigned] (DRILL-7041) CompileException happens if a nested coalesce function returns null

2019-02-15 Thread Anton Gozhiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Gozhiy reassigned DRILL-7041:
---

Assignee: (was: Anton Gozhiy)

> CompileException happens if a nested coalesce function returns null
> ---
>
> Key: DRILL-7041
> URL: https://issues.apache.org/jira/browse/DRILL-7041
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Priority: Major
>
> *Query:*
> {code:sql}
> select coalesce(coalesce(n_name1, n_name2), n_name) from 
> cp.`tpch/nation.parquet`
> {code}
> *Expected result:*
> Values from the "n_name" column should be returned
> *Actual result:*
> An exception happens:
> {code}
> org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
> CompileException: Line 57, Column 27: Assignment conversion not possible from 
> type "org.apache.drill.exec.expr.holders.NullableVarCharHolder" to type 
> "org.apache.drill.exec.vector.UntypedNullHolder" Fragment 0:0 Please, refer 
> to logs for more information. [Error Id: e54d5bfd-604d-4a39-b62f-33bb964e5286 
> on userf87d-pc:31010] (org.apache.drill.exec.exception.SchemaChangeException) 
> Failure while attempting to load generated class 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():573
>  
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
>  org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143
>  org.apache.drill.exec.record.AbstractRecordBatch.next():186 
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83 
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284 
> java.security.AccessController.doPrivileged():-2 
> javax.security.auth.Subject.doAs():422 
> org.apache.hadoop.security.UserGroupInformation.doAs():1746 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():284 
> org.apache.drill.common.SelfCleaningRunnable.run():38 
> java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
> java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
> java.lang.Thread.run():748 Caused By 
> (org.apache.drill.exec.exception.ClassTransformationException) 
> java.util.concurrent.ExecutionException: 
> org.apache.drill.exec.exception.ClassTransformationException: Failure 
> generating transformation classes for value: package 
> org.apache.drill.exec.test.generated; import 
> org.apache.drill.exec.exception.SchemaChangeException; import 
> org.apache.drill.exec.expr.holders.BigIntHolder; import 
> org.apache.drill.exec.expr.holders.BitHolder; import 
> org.apache.drill.exec.expr.holders.NullableVarBinaryHolder; import 
> org.apache.drill.exec.expr.holders.NullableVarCharHolder; import 
> org.apache.drill.exec.expr.holders.VarCharHolder; import 
> org.apache.drill.exec.ops.FragmentContext; import 
> org.apache.drill.exec.record.RecordBatch; import 
> org.apache.drill.exec.vector.UntypedNullHolder; import 
> org.apache.drill.exec.vector.UntypedNullVector; import 
> org.apache.drill.exec.vector.VarCharVector; public class ProjectorGen35 { 
> BigIntHolder const6; BitHolder constant9; UntypedNullHolder constant13; 
> VarCharVector vv14; UntypedNullVector vv19; public void doEval(int inIndex, 
> int outIndex) throws SchemaChangeException { { UntypedNullHolder out0 = new 
> UntypedNullHolder(); if (constant9 .value == 1) { if (constant13 .isSet!= 0) 
> { out0 = constant13; } } else { VarCharHolder out17 = new VarCharHolder(); { 
> out17 .buffer = vv14 .getBuffer(); long startEnd = vv14 
> .getAccessor().getStartEnd((inIndex)); out17 .start = ((int) startEnd); out17 
> .end = ((int)(startEnd >> 32)); } // start of eval portion of 
> convertToNullableVARCHAR function. // NullableVarCharHolder out18 = new 
> NullableVarCharHolder(); { final NullableVarCharHolder output = new 
> NullableVarCharHolder(); VarCharHolder input = out17; 
> GConvertToNullableVarCharHolder_eval: { output.isSet = 1; output.start = 
> input.start; output.end = input.end; output.buffer = input.buffer; } out18 = 
> output; } // end of eval portion of convertToNullableVARCHAR function. 
> // if (out18 .isSet!= 0) { out0 = out18; } } if (!(out0 .isSet == 0)) { 
> vv19 .getMutator().set((outIndex), out0 .isSet, out0); } } } public void 
> doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) 
> throws SchemaChangeException { { UntypedNullHolder out1 = new 
> UntypedNullHolder(); NullableVarBinaryHolder out2 = new 
> NullableVarBinaryHolder(); /** 

[jira] [Updated] (DRILL-6855) Query from non-existent proxy user fails with "No default schema selected" when impersonation is enabled

2019-02-15 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6855:
-
Reviewer: Sorabh Hamirwasia

> Query from non-existent proxy user fails with "No default schema selected" 
> when impersonation is enabled
> 
>
> Key: DRILL-6855
> URL: https://issues.apache.org/jira/browse/DRILL-6855
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Abhishek Ravi
>Assignee: Abhishek Ravi
>Priority: Major
> Fix For: 1.16.0
>
>
> A query from a *proxy user* fails with the following error when 
> *impersonation* is *enabled* but the user does not exist. This behaviour 
> was discovered when running Drill on MapR.
> {noformat}
> Error: VALIDATION ERROR: Schema [[dfs]] is not valid with respect to either 
> root schema or current default schema.
> Current default schema: No default schema selected
> {noformat}
> The above error is confusing and makes it hard to relate the failure to a 
> non-existent proxy user combined with impersonation. 
> The {{fs.access(wsPath, FsAction.READ)}} call in 
> {{WorkspaceSchemaFactory.accessible}} fails with an {{IOException}}, which 
> is not handled in {{accessible}} but in {{DynamicRootSchema.loadSchemaFactory}}. 
> At this point none of the schemas are registered, so the root schema is 
> registered as the default schema. 
> Query execution continues and fails much later in 
> {{DrillSqlWorker.getQueryPlan}}, where {{SqlConverter.validate}} eventually 
> throws via {{SchemaUtilites.throwSchemaNotFoundException}}.
> One possible fix could be to handle {{IOException}} the same way as 
> {{FileNotFoundException}} in {{WorkspaceSchemaFactory.accessible}}.
>  
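The proposed fix can be sketched as follows; the interface and method below are simplified stand-ins, not Drill's actual WorkspaceSchemaFactory code:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch of the suggested fix: treat a failed fs.access() check
// the same way accessible() already treats a missing workspace path.
public class AccessibleSketch {
    interface Workspace {
        void access() throws IOException;   // stands in for fs.access(wsPath, FsAction.READ)
    }

    static boolean accessible(Workspace ws) {
        try {
            ws.access();
        } catch (FileNotFoundException e) {
            return false;   // existing behavior: workspace path does not exist
        } catch (IOException e) {
            return false;   // proposed: e.g. "user does not exist" under impersonation
        }
        return true;
    }
}
```

With the workspace reported as inaccessible, it would simply not be registered, instead of the failure surfacing much later as a misleading "No default schema selected" validation error.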





[jira] [Updated] (DRILL-7041) CompileException happens if a nested coalesce function returns null

2019-02-15 Thread Anton Gozhiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Gozhiy updated DRILL-7041:

Description: 
*Query:*
{code:sql}
select coalesce(coalesce(n_name1, n_name2), n_name) from 
cp.`tpch/nation.parquet`
{code}

*Expected result:*
Values from the "n_name" column should be returned

*Actual result:*
An exception happens:
{code}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
CompileException: Line 57, Column 27: Assignment conversion not possible from 
type "org.apache.drill.exec.expr.holders.NullableVarCharHolder" to type 
"org.apache.drill.exec.vector.UntypedNullHolder" Fragment 0:0 Please, refer to 
logs for more information. [Error Id: e54d5bfd-604d-4a39-b62f-33bb964e5286 on 
userf87d-pc:31010] (org.apache.drill.exec.exception.SchemaChangeException) 
Failure while attempting to load generated class 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():573
 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
 org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143 
org.apache.drill.exec.record.AbstractRecordBatch.next():186 
org.apache.drill.exec.physical.impl.BaseRootExec.next():104 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83 
org.apache.drill.exec.physical.impl.BaseRootExec.next():94 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284 
java.security.AccessController.doPrivileged():-2 
javax.security.auth.Subject.doAs():422 
org.apache.hadoop.security.UserGroupInformation.doAs():1746 
org.apache.drill.exec.work.fragment.FragmentExecutor.run():284 
org.apache.drill.common.SelfCleaningRunnable.run():38 
java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
java.lang.Thread.run():748 Caused By 
(org.apache.drill.exec.exception.ClassTransformationException) 
java.util.concurrent.ExecutionException: 
org.apache.drill.exec.exception.ClassTransformationException: Failure 
generating transformation classes for value: package 
org.apache.drill.exec.test.generated; import 
org.apache.drill.exec.exception.SchemaChangeException; import 
org.apache.drill.exec.expr.holders.BigIntHolder; import 
org.apache.drill.exec.expr.holders.BitHolder; import 
org.apache.drill.exec.expr.holders.NullableVarBinaryHolder; import 
org.apache.drill.exec.expr.holders.NullableVarCharHolder; import 
org.apache.drill.exec.expr.holders.VarCharHolder; import 
org.apache.drill.exec.ops.FragmentContext; import 
org.apache.drill.exec.record.RecordBatch; import 
org.apache.drill.exec.vector.UntypedNullHolder; import 
org.apache.drill.exec.vector.UntypedNullVector; import 
org.apache.drill.exec.vector.VarCharVector; public class ProjectorGen35 { 
BigIntHolder const6; BitHolder constant9; UntypedNullHolder constant13; 
VarCharVector vv14; UntypedNullVector vv19; public void doEval(int inIndex, int 
outIndex) throws SchemaChangeException { { UntypedNullHolder out0 = new 
UntypedNullHolder(); if (constant9 .value == 1) { if (constant13 .isSet!= 0) { 
out0 = constant13; } } else { VarCharHolder out17 = new VarCharHolder(); { 
out17 .buffer = vv14 .getBuffer(); long startEnd = vv14 
.getAccessor().getStartEnd((inIndex)); out17 .start = ((int) startEnd); out17 
.end = ((int)(startEnd >> 32)); } // start of eval portion of 
convertToNullableVARCHAR function. // NullableVarCharHolder out18 = new 
NullableVarCharHolder(); { final NullableVarCharHolder output = new 
NullableVarCharHolder(); VarCharHolder input = out17; 
GConvertToNullableVarCharHolder_eval: { output.isSet = 1; output.start = 
input.start; output.end = input.end; output.buffer = input.buffer; } out18 = 
output; } // end of eval portion of convertToNullableVARCHAR function. 
// if (out18 .isSet!= 0) { out0 = out18; } } if (!(out0 .isSet == 0)) { 
vv19 .getMutator().set((outIndex), out0 .isSet, out0); } } } public void 
doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) 
throws SchemaChangeException { { UntypedNullHolder out1 = new 
UntypedNullHolder(); NullableVarBinaryHolder out2 = new 
NullableVarBinaryHolder(); /** start SETUP for function isnotnull **/ { 
NullableVarBinaryHolder input = out2; 
GNullOpNullableVarBinaryHolder$IsNotNull_setup: {} } /** end SETUP for function 
isnotnull **/ // start of eval portion of isnotnull function. // 
BitHolder out3 = new BitHolder(); { final BitHolder out = new BitHolder(); 
NullableVarBinaryHolder input = out2; 
GNullOpNullableVarBinaryHolder$IsNotNull_eval: { out.value = (input.isSet == 0 
? 0 : 1); } out3 = out; } // end of eval portion of isnotnull function. 
// if (out3 .value == 1) { UntypedNullHolder out4 = new 

[jira] [Created] (DRILL-7041) CompileException happens if a nested coalesce function returns null

2019-02-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7041:
---

 Summary: CompileException happens if a nested coalesce function 
returns null
 Key: DRILL-7041
 URL: https://issues.apache.org/jira/browse/DRILL-7041
 Project: Apache Drill
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Anton Gozhiy
Assignee: Anton Gozhiy


*Query:*
{code:sql}
select coalesce(coalesce(n_name1, n_name2), n_name) from 
cp.`tpch/nation.parquet`
{code}

*Expected result:*
Values from the "n_name" column should be returned

*Actual result:*
An exception happens:
{noformat}
org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: 
CompileException: Line 57, Column 27: Assignment conversion not possible from 
type "org.apache.drill.exec.expr.holders.NullableVarCharHolder" to type 
"org.apache.drill.exec.vector.UntypedNullHolder" Fragment 0:0 Please, refer to 
logs for more information. [Error Id: e54d5bfd-604d-4a39-b62f-33bb964e5286 on 
userf87d-pc:31010] (org.apache.drill.exec.exception.SchemaChangeException) 
Failure while attempting to load generated class 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput():573
 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema():583
 org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():101 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():143 
org.apache.drill.exec.record.AbstractRecordBatch.next():186 
org.apache.drill.exec.physical.impl.BaseRootExec.next():104 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83 
org.apache.drill.exec.physical.impl.BaseRootExec.next():94 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():297 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():284 
java.security.AccessController.doPrivileged():-2 
javax.security.auth.Subject.doAs():422 
org.apache.hadoop.security.UserGroupInformation.doAs():1746 
org.apache.drill.exec.work.fragment.FragmentExecutor.run():284 
org.apache.drill.common.SelfCleaningRunnable.run():38 
java.util.concurrent.ThreadPoolExecutor.runWorker():1149 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 
java.lang.Thread.run():748 Caused By 
(org.apache.drill.exec.exception.ClassTransformationException) 
java.util.concurrent.ExecutionException: 
org.apache.drill.exec.exception.ClassTransformationException: Failure 
generating transformation classes for value: package 
org.apache.drill.exec.test.generated; import 
org.apache.drill.exec.exception.SchemaChangeException; import 
org.apache.drill.exec.expr.holders.BigIntHolder; import 
org.apache.drill.exec.expr.holders.BitHolder; import 
org.apache.drill.exec.expr.holders.NullableVarBinaryHolder; import 
org.apache.drill.exec.expr.holders.NullableVarCharHolder; import 
org.apache.drill.exec.expr.holders.VarCharHolder; import 
org.apache.drill.exec.ops.FragmentContext; import 
org.apache.drill.exec.record.RecordBatch; import 
org.apache.drill.exec.vector.UntypedNullHolder; import 
org.apache.drill.exec.vector.UntypedNullVector; import 
org.apache.drill.exec.vector.VarCharVector; public class ProjectorGen35 { 
BigIntHolder const6; BitHolder constant9; UntypedNullHolder constant13; 
VarCharVector vv14; UntypedNullVector vv19; public void doEval(int inIndex, int 
outIndex) throws SchemaChangeException { { UntypedNullHolder out0 = new 
UntypedNullHolder(); if (constant9 .value == 1) { if (constant13 .isSet!= 0) { 
out0 = constant13; } } else { VarCharHolder out17 = new VarCharHolder(); { 
out17 .buffer = vv14 .getBuffer(); long startEnd = vv14 
.getAccessor().getStartEnd((inIndex)); out17 .start = ((int) startEnd); out17 
.end = ((int)(startEnd >> 32)); } // start of eval portion of 
convertToNullableVARCHAR function. // NullableVarCharHolder out18 = new 
NullableVarCharHolder(); { final NullableVarCharHolder output = new 
NullableVarCharHolder(); VarCharHolder input = out17; 
GConvertToNullableVarCharHolder_eval: { output.isSet = 1; output.start = 
input.start; output.end = input.end; output.buffer = input.buffer; } out18 = 
output; } // end of eval portion of convertToNullableVARCHAR function. 
// if (out18 .isSet!= 0) { out0 = out18; } } if (!(out0 .isSet == 0)) { 
vv19 .getMutator().set((outIndex), out0 .isSet, out0); } } } public void 
doSetup(FragmentContext context, RecordBatch incoming, RecordBatch outgoing) 
throws SchemaChangeException { { UntypedNullHolder out1 = new 
UntypedNullHolder(); NullableVarBinaryHolder out2 = new 
NullableVarBinaryHolder(); /** start SETUP for function isnotnull **/ { 
NullableVarBinaryHolder input = out2; 
GNullOpNullableVarBinaryHolder$IsNotNull_setup: {} } /** end SETUP for function 
isnotnull **/ // start of eval portion of isnotnull function. // 
BitHolder out3 = new BitHolder(); { final BitHolder out = new BitHolder(); 
NullableVarBinaryHolder input = out2; 

[jira] [Updated] (DRILL-6855) Query from non-existent proxy user fails with "No default schema selected" when impersonation is enabled

2019-02-15 Thread Sorabh Hamirwasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-6855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sorabh Hamirwasia updated DRILL-6855:
-
Labels: ready-to-commit  (was: )

> Query from non-existent proxy user fails with "No default schema selected" 
> when impersonation is enabled
> 
>
> Key: DRILL-6855
> URL: https://issues.apache.org/jira/browse/DRILL-6855
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Abhishek Ravi
>Assignee: Abhishek Ravi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> A query from a *proxy user* fails with the following error when 
> *impersonation* is *enabled* but the user does not exist. This behaviour 
> was discovered when running Drill on MapR.
> {noformat}
> Error: VALIDATION ERROR: Schema [[dfs]] is not valid with respect to either 
> root schema or current default schema.
> Current default schema: No default schema selected
> {noformat}
> The above error is confusing and makes it hard to relate the failure to a 
> non-existent proxy user combined with impersonation. 
> The {{fs.access(wsPath, FsAction.READ)}} call in 
> {{WorkspaceSchemaFactory.accessible}} fails with an {{IOException}}, which 
> is not handled in {{accessible}} but in {{DynamicRootSchema.loadSchemaFactory}}. 
> At this point none of the schemas are registered, so the root schema is 
> registered as the default schema. 
> Query execution continues and fails much later in 
> {{DrillSqlWorker.getQueryPlan}}, where {{SqlConverter.validate}} eventually 
> throws via {{SchemaUtilites.throwSchemaNotFoundException}}.
> One possible fix could be to handle {{IOException}} the same way as 
> {{FileNotFoundException}} in {{WorkspaceSchemaFactory.accessible}}.
>  





[jira] [Updated] (DRILL-7022) Partition pruning is not happening the first time after the metadata auto refresh

2019-02-15 Thread Vitalii Diravka (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vitalii Diravka updated DRILL-7022:
---
Labels: ready-to-commit  (was: )

> Partition pruning is not happening the first time after the metadata auto 
> refresh
> -
>
> Key: DRILL-7022
> URL: https://issues.apache.org/jira/browse/DRILL-7022
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Metadata, Storage - Parquet
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.16.0
>
>
> *Data creation:*
> # Create table:
> {code:sql}
> create table dfs.tmp.`orders` 
> partition by (o_orderstatus)
> as select * from cp.`tpch/orders.parquet`
> {code}
> # Create table metadata:
> {code:sql}
> refresh table metadata dfs.tmp.`orders`
> {code}
> *Steps:*
> # Modify the table to trigger metadata auto refresh:
> {noformat}
> hadoop fs -mkdir /tmp/orders/111
> {noformat}
> # Run the query:
> {code:sql}
> explain plan for 
> select * from dfs.tmp.`orders` 
> where o_orderstatus = 'O' and o_orderdate < '1995-03-10'
> {code}
> *Expected result:*
> Partition pruning happens:
> {noformat}
> ... numFiles=1, numRowGroups=1, usedMetadataFile=true ...
> {noformat}
> *Actual result:*
> Partition pruning doesn't happen:
> {noformat}
> ... numFiles=1, numRowGroups=3, usedMetadataFile=true
> {noformat}
> *Note:* This reproduces only the first time after the auto refresh; 
> repeating the query works as expected.





[jira] [Updated] (DRILL-7040) Update Protocol Buffers syntax to proto3

2019-02-15 Thread Anton Gozhiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Gozhiy updated DRILL-7040:

Description: 
Updating the protobuf library version is addressed by DRILL-6642, but we still 
use proto2 syntax. To update the syntax to proto3 we need to meet some 
requirements:
# Proto3 doesn't support required fields, so all existing required fields need 
to become optional. If we expect such fields to always be present in the 
messages, we need to revisit the approach.
# Custom default values are no longer supported, and Drill uses custom 
defaults in some places. The impact of removing them should be further 
investigated, but it will definitely require changes in logic.
# There is no longer a way to determine whether a missing field was omitted 
or was explicitly assigned the default value. Whether the code relies on this 
needs investigation.
# Support for nested groups is excluded from proto3. This shouldn't be a 
problem as they are not used in Drill.
# Protostuff and protobuf-maven-plugin should also be updated, which may cause 
compatibility issues.

Links to the language specs:
[Proto2|https://developers.google.com/protocol-buffers/docs/proto]
[Proto3|https://developers.google.com/protocol-buffers/docs/proto3]

  was:
Updating the protobuf library version is addressed by DRILL-6642, but we still 
use proto2 syntax. To update the syntax to proto3 we need to meet some 
requirements:
# Proto3 doesn't support required fields, so all existing required fields need 
to become optional. If we expect such fields to always be present in the 
messages, we need to revisit the approach.
# Custom default values are no longer supported, and Drill uses custom 
defaults in some places. The impact of removing them should be further 
investigated, but it will definitely require changes in logic.
# There is no longer a way to determine whether a missing field was omitted 
or was explicitly assigned the default value. Whether the code relies on this 
needs investigation.
# Support for nested groups is excluded from proto3. This shouldn't be a 
problem as they are not used in Drill.
# Protostuff and protobuf-maven-plugin should also be updated, which may cause 
compatibility issues.




> Update Protocol Buffers syntax to proto3
> 
>
> Key: DRILL-7040
> URL: https://issues.apache.org/jira/browse/DRILL-7040
> Project: Apache Drill
>  Issue Type: Task
>Affects Versions: 1.15.0
>Reporter: Anton Gozhiy
>Priority: Major
>
> Updating the protobuf library version is addressed by DRILL-6642, but we 
> still use the proto2 syntax. To update the syntax to proto3 we need to meet 
> some requirements:
> # Proto3 doesn't support required fields, so all existing required fields 
> need to be changed to optional. If we expect such fields to always be 
> present in the messages, we need to revisit the approach.
> # Custom default values are no longer supported, and Drill uses custom 
> defaults in some places. The impact of removing them should be further 
> investigated, but it would definitely require changes in logic.
> # There is no longer a way to determine whether a missing field was omitted 
> or was assigned the default value. We need to investigate whether this is 
> used in the code.
> # Support for nested groups is excluded from proto3. This shouldn't be a 
> problem, as they are not used in Drill.
> # Protostuff and protobuf-maven-plugin should also be updated, which may 
> cause some compatibility issues.
> Links to the language specs:
> [Proto2|https://developers.google.com/protocol-buffers/docs/proto]
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3]





[jira] [Created] (DRILL-7040) Update Protocol Buffers syntax to proto3

2019-02-15 Thread Anton Gozhiy (JIRA)
Anton Gozhiy created DRILL-7040:
---

 Summary: Update Protocol Buffers syntax to proto3
 Key: DRILL-7040
 URL: https://issues.apache.org/jira/browse/DRILL-7040
 Project: Apache Drill
  Issue Type: Task
Affects Versions: 1.15.0
Reporter: Anton Gozhiy


Updating the protobuf library version is addressed by DRILL-6642, but we still 
use the proto2 syntax. To update the syntax to proto3 we need to meet some 
requirements:
# Proto3 doesn't support required fields, so all existing required fields need 
to be changed to optional. If we expect such fields to always be present in the 
messages, we need to revisit the approach.
# Custom default values are no longer supported, and Drill uses custom defaults 
in some places. The impact of removing them should be further investigated, but 
it would definitely require changes in logic.
# There is no longer a way to determine whether a missing field was omitted or 
was assigned the default value. We need to investigate whether this is used in 
the code.
# Support for nested groups is excluded from proto3. This shouldn't be a 
problem, as they are not used in Drill.
# Protostuff and protobuf-maven-plugin should also be updated, which may cause 
some compatibility issues.
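
Because proto3 drops required fields and custom defaults (points 1 and 2 
above), those semantics move into application code. A minimal sketch of what 
such post-parse handling could look like, using plain Python dicts to stand in 
for parsed messages and purely hypothetical field names:

```python
# Hypothetical post-parse validation: with proto3, "required" semantics
# must be enforced by the application after deserialization.
REQUIRED_FIELDS = {"query_id", "fragment_id"}  # illustrative names only


def validate_required(message: dict) -> None:
    """Raise if any field the application treats as mandatory is absent."""
    missing = REQUIRED_FIELDS - message.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")


def apply_custom_defaults(message: dict) -> dict:
    """proto3 has no custom defaults, so apply them in code instead."""
    defaults = {"max_rows": 100}  # illustrative custom default
    return {**defaults, **message}


msg = apply_custom_defaults({"query_id": "abc", "fragment_id": 7})
validate_required(msg)
print(msg["max_rows"])  # 100, supplied by the code-level default
```

The same pattern bears on point 3: if the application must distinguish "field 
absent" from "field set to the default", it has to track that itself, for 
example by checking key presence before defaults are applied.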







[jira] [Commented] (DRILL-507) Merge drill-common with java-exec module

2019-02-15 Thread Vitalii Diravka (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769160#comment-16769160
 ] 

Vitalii Diravka commented on DRILL-507:
---

Now the {{common}} module is a good place for utility classes and methods, 
custom Drill Collections, Exceptions, Metrics and so on.
I think this issue can be closed once DRILL-508 is solved as well.

> Merge drill-common with java-exec module
> 
>
> Key: DRILL-507
> URL: https://issues.apache.org/jira/browse/DRILL-507
> Project: Apache Drill
>  Issue Type: Task
> Environment: Current
>Reporter: Aditya Kishore
>Priority: Major
>  Labels: refactoring
> Fix For: Future
>
>
> The need to keep these two modules separate no longer exists, while having 
> the interfaces in one module (common) and the core implementations and 
> registries in the other (java-exec) leads to less-than-ideal patterns.
> We should do this once the TPCH work is merged into the master branch.


