[jira] [Commented] (DRILL-7181) [Text V3 Reader] Exception with inadequate message is thrown if select columns as array with extractHeader set to true

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843925#comment-16843925
 ] 

ASF GitHub Bot commented on DRILL-7181:
---

arina-ielchiieva commented on issue #1789: DRILL-7181: Improve V3 text reader 
(row set) error messages
URL: https://github.com/apache/drill/pull/1789#issuecomment-493966729
 
 
   @paul-rogers, new changes look really good. +1
   Please squash the commits and fix build errors.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Text V3 Reader] Exception with inadequate message is thrown if select 
> columns as array with extractHeader set to true
> --
>
> Key: DRILL-7181
> URL: https://issues.apache.org/jira/browse/DRILL-7181
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Anton Gozhiy
>Assignee: Paul Rogers
>Priority: Major
>
> *Prerequisites:*
>  # Create a simple .csv file with header, like this:
> {noformat}
> col1,col2,col3
> 1,2,3
> 4,5,6
> 7,8,9
> {noformat}
>  # Set exec.storage.enable_v3_text_reader=true
>  # Set "extractHeader": true for csv format in dfs storage plugin.
> *Query:*
> {code:sql}
> select columns[0] from dfs.tmp.`/test.csv`
> {code}
> *Expected result:* An exception should be thrown; here is the message from 
> the V2 reader:
> {noformat}
> UNSUPPORTED_OPERATION ERROR: Drill Remote Exception
>   (java.lang.Exception) UNSUPPORTED_OPERATION ERROR: With extractHeader 
> enabled, only header names are supported
> column name columns
> column index
> Fragment 0:0
> [Error Id: 5affa696-1dbd-43d7-ac14-72d235c00f43 on userf87d-pc:31010]
> org.apache.drill.common.exceptions.UserException$Builder.build():630
> 
> org.apache.drill.exec.store.easy.text.compliant.FieldVarCharOutput.<init>():106
> 
> org.apache.drill.exec.store.easy.text.compliant.CompliantTextRecordReader.setup():139
> org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():321
> org.apache.drill.exec.physical.impl.ScanBatch.internalNext():216
> org.apache.drill.exec.physical.impl.ScanBatch.next():271
> org.apache.drill.exec.record.AbstractRecordBatch.next():126
> org.apache.drill.exec.record.AbstractRecordBatch.next():116
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
> org.apache.drill.exec.record.AbstractRecordBatch.next():186
> org.apache.drill.exec.record.AbstractRecordBatch.next():126
> org.apache.drill.exec.record.AbstractRecordBatch.next():116
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
> org.apache.drill.exec.record.AbstractRecordBatch.next():186
> org.apache.drill.exec.record.AbstractRecordBatch.next():126
> org.apache.drill.exec.record.AbstractRecordBatch.next():116
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> org.apache.drill.exec.record.AbstractRecordBatch.next():186
> org.apache.drill.exec.record.AbstractRecordBatch.next():126
> org.apache.drill.exec.record.AbstractRecordBatch.next():116
> org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
> 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
> org.apache.drill.exec.record.AbstractRecordBatch.next():186
> org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
> org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
> ...():0
> org.apache.hadoop.security.UserGroupInformation.doAs():1746
> org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
> org.apache.drill.common.SelfCleaningRunnable.run():38
> ...():0
> {noformat}
> *Actual result:* The exception message is inadequate:
> {noformat}
> org.apache.drill.common.exceptions.UserRemoteException: EXECUTION_ERROR 
> ERROR: Table schema must have exactly one column.
> Exception thrown from 
> org.apache.drill.exec.physical.impl.scan.ScanOperatorExec
> Fragment 0:0
> [Error Id: a76a1576-419a-413f-840f-088157167a6d on userf87d-pc:31010]
>   (java.lang.IllegalStateException) Table schema must have exactly one column.
> 
> 
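For context, the V2 message quoted above points at the supported alternative: with extractHeader enabled, fields must be selected by their header names rather than through the {{columns}} array. A hedged sketch against the sample file from the prerequisites (same path and plugin):

{code:sql}
-- Supported with extractHeader = true: project by header name
select col1, col2 from dfs.tmp.`/test.csv`;

-- The columns[] array form applies only when headers are not extracted
-- (extractHeader = false); then each row arrives as a single array:
-- select columns[0] from dfs.tmp.`/test.csv`;
{code}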

[jira] [Updated] (DRILL-7181) [Text V3 Reader] Exception with inadequate message is thrown if select columns as array with extractHeader set to true

2019-05-20 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7181:

Labels: ready-to-commit  (was: )


[jira] [Updated] (DRILL-7181) [Text V3 Reader] Exception with inadequate message is thrown if select columns as array with extractHeader set to true

2019-05-20 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7181:

Fix Version/s: 1.17.0


[jira] [Assigned] (DRILL-7268) Read Hive array with parquet native reader

2019-05-20 Thread Igor Guzenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Guzenko reassigned DRILL-7268:
---

Assignee: Igor Guzenko

> Read Hive array with parquet native reader
> --
>
> Key: DRILL-7268
> URL: https://issues.apache.org/jira/browse/DRILL-7268
> Project: Apache Drill
>  Issue Type: Sub-task
>Reporter: Igor Guzenko
>Assignee: Igor Guzenko
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7250:

Reviewer: Arina Ielchiieva

> Query with CTE fails when its name matches to the table name without access
> ---
>
> Key: DRILL-7250
> URL: https://issues.apache.org/jira/browse/DRILL-7250
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
>  Labels: ready-to-commit
> Fix For: 1.17.0
>
>
> When impersonation is enabled and we have, for example, a {{lineitem}} table 
> with permissions {{750}} owned by {{user0_1:group0_1}}, then {{user2_1}} does 
> not have access to it.
> The following query:
> {code:sql}
> use mini_dfs_plugin.user0_1;
> with lineitem as (SELECT 1 as a) select * from lineitem
> {code}
> submitted from {{user2_1}} fails with the following error:
> {noformat}
> java.lang.Exception: org.apache.hadoop.security.AccessControlException: 
> Permission denied: user=user2_1, access=READ_EXECUTE, 
> inode="/user/user0_1/lineitem":user0_1:group0_1:drwxr-x---
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:317)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:229)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:70)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4432)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:646)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>   at ...(:0) ~[na:na]
>   at 
> org.apache.drill.exec.util.FileSystemUtil.listRecursive(FileSystemUtil.java:253)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.util.FileSystemUtil.list(FileSystemUtil.java:208) 
> ~[classes/:na]
>   at 
> org.apache.drill.exec.util.FileSystemUtil.listFiles(FileSystemUtil.java:104) 
> ~[classes/:na]
>   at 
> org.apache.drill.exec.util.DrillFileSystemUtil.listFiles(DrillFileSystemUtil.java:86)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.store.dfs.FileSelection.minusDirectories(FileSelection.java:178)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.detectEmptySelection(WorkspaceSchemaFactory.java:669)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:633)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.create(WorkspaceSchemaFactory.java:283)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.getNewEntry(ExpandingConcurrentMap.java:96)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.planner.sql.ExpandingConcurrentMap.get(ExpandingConcurrentMap.java:90)
>  ~[classes/:na]
>   at 
> org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory$WorkspaceSchema.getTable(WorkspaceSchemaFactory.java:439)
>  ~[classes/:na]
>   at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>  ~[calcite-core-1.18.0-drill-r1.jar:1.18.0-drill-r1]
>   at 
> org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:286) 
> 

[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843870#comment-16843870
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

arina-ielchiieva commented on issue #1792: DRILL-7250: Query with CTE fails 
when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792#issuecomment-493942436
 
 
   +1, LGTM.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (DRILL-2000) Hive generated parquet files with maps show up in drill as map(key value)

2019-05-20 Thread Igor Guzenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Guzenko reassigned DRILL-2000:
---

Assignee: Bohdan Kazydub

> Hive generated parquet files with maps show up in drill as map(key value)
> -
>
> Key: DRILL-2000
> URL: https://issues.apache.org/jira/browse/DRILL-2000
> Project: Apache Drill
>  Issue Type: Improvement
>  Components: Storage - Parquet
>Affects Versions: 0.7.0
>Reporter: Ramana Inukonda Nagaraj
>Assignee: Bohdan Kazydub
>Priority: Major
> Fix For: Future
>
>
> Created a parquet file in hive having the following DDL
> hive> desc alltypesparquet; 
> OK
> c1 int 
> c2 boolean 
> c3 double 
> c4 string 
> c5 array 
> c6 map 
> c7 map 
> c8 struct
> c9 tinyint 
> c10 smallint 
> c11 float 
> c12 bigint 
> c13 array>  
> c15 struct>
> c16 array,n:int>> 
> Time taken: 0.076 seconds, Fetched: 15 row(s)
> Columns which are maps, such as c6, show up as:
> 0: jdbc:drill:> select c6 from `/user/hive/warehouse/alltypesparquet`;
> ++
> | c6 |
> ++
> | {"map":[]} |
> | {"map":[]} |
> | {"map":[{"key":1,"value":"eA=="},{"key":2,"value":"eQ=="}]} |
> ++
> 3 rows selected (0.078 seconds)
> hive> select c6 from alltypesparquet;   
> NULL
> NULL
> {1:"x",2:"y"}
> Ignore the wrong values, I have raised DRILL-1997 for the same. 
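For readers hitting this representation, the key/value rendering shown above can usually be unnested with Drill's flatten function. This is a hedged sketch only: the inner field name `map` and the table path are inferred from the output above, and the exact structure may differ by Drill version:

{code:sql}
select kv.entry.`key` as k, kv.entry.`value` as v
from (
  -- flatten() produces one row per {"key": ..., "value": ...} entry
  select flatten(t.c6.`map`) as entry
  from `/user/hive/warehouse/alltypesparquet` t
) kv;
{code}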



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread Arina Ielchiieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arina Ielchiieva updated DRILL-7250:

Labels: ready-to-commit  (was: )


[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843868#comment-16843868
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

vvysotskyi commented on pull request #1792: DRILL-7250: Query with CTE fails 
when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792#discussion_r285537110
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/SqlConverter.java
 ##
 @@ -280,12 +280,10 @@ protected void validateFrom(
 changeNamesIfTableIsTemporary(tempNode);
 
 // Check the schema and throw a valid SchemaNotFound exception 
instead of TableNotFound exception.
-if (catalogReader.getTable(tempNode.names) == null) {
-  catalogReader.isValidSchema(tempNode.names);
-}
+catalogReader.isValidSchema(tempNode.names);
 
 Review comment:
   Good question. When it is set in the constructor or via a setter of the 
`SqlIdentifier` class, it cannot be null, since `ImmutableList.copyOf()` is 
used for incoming lists. But the field is public and may be changed elsewhere 
(I didn't find any place where it is set to null in the current Drill and 
Calcite code).
   
   Since Calcite's `SqlIdentifier` code does not assume that `names` may be 
null (there is a lot of code where the field is used without checks), I think 
we should not check it either.
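   For context, a minimal sketch of why a list built this way can never be null or contain nulls. It uses the JDK's `List.copyOf`, which mirrors the null handling of Guava's `ImmutableList.copyOf` referenced above; the class name is just for illustration:

```java
import java.util.List;

public class CopyOfNullCheck {
    public static void main(String[] args) {
        // Defensive copy: the returned list is immutable and non-null.
        List<String> names = List.copyOf(List.of("schema", "table"));
        System.out.println(names.size()); // 2

        // Passing a null collection throws NullPointerException immediately,
        // so downstream code never observes a null `names` from this path.
        boolean rejectedNull;
        try {
            List.copyOf((List<String>) null);
            rejectedNull = false;
        } catch (NullPointerException e) {
            rejectedNull = true;
        }
        System.out.println(rejectedNull); // true
    }
}
```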
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Query with CTE fails when its name matches to the table name without access
> ---
>
> Key: DRILL-7250
> URL: https://issues.apache.org/jira/browse/DRILL-7250
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Volodymyr Vysotskyi
>Priority: Major
> Fix For: 1.17.0
>
>
> When impersonation is enabled and we have, for example, a {{lineitem}} table 
> with permissions {{750}}, owned by {{user0_1:group0_1}}, to which 
> {{user2_1}} has no access.
> The following query:
> {code:sql}
> use mini_dfs_plugin.user0_1;
> with lineitem as (SELECT 1 as a) select * from lineitem
> {code}
> submitted from {{user2_1}} fails with the following error:
> {noformat}
> java.lang.Exception: org.apache.hadoop.security.AccessControlException: 
> Permission denied: user=user2_1, access=READ_EXECUTE, 
> inode="/user/user0_1/lineitem":user0_1:group0_1:drwxr-x---
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:317)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:229)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1736)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:70)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4432)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:646)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>   at ...(:0) ~[na:na]
>   at 
> 

[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843867#comment-16843867
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

vvysotskyi commented on pull request #1792: DRILL-7250: Query with CTE fails 
when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792#discussion_r285534294
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java
 ##
 @@ -61,6 +61,8 @@ public StoragePluginRegistry getSchemaFactories() {
   @Override
   protected CalciteSchema getImplicitSubSchema(String schemaName,
boolean caseSensitive) {
+// Drill registers schemas in lower case, see AbstractSchema constructor
+schemaName = schemaName != null ? schemaName.toLowerCase() : null;
 
 Review comment:
   Thanks, done.
 


[jira] [Created] (DRILL-7268) Read Hive array with parquet native reader

2019-05-20 Thread Igor Guzenko (JIRA)
Igor Guzenko created DRILL-7268:
---

 Summary: Read Hive array with parquet native reader
 Key: DRILL-7268
 URL: https://issues.apache.org/jira/browse/DRILL-7268
 Project: Apache Drill
  Issue Type: Sub-task
Reporter: Igor Guzenko






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843850#comment-16843850
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

arina-ielchiieva commented on pull request #1792: DRILL-7250: Query with CTE 
fails when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792#discussion_r285529985
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/SqlConverter.java
 ##
 @@ -280,12 +280,10 @@ protected void validateFrom(
 changeNamesIfTableIsTemporary(tempNode);
 
 // Check the schema and throw a valid SchemaNotFound exception 
instead of TableNotFound exception.
-if (catalogReader.getTable(tempNode.names) == null) {
-  catalogReader.isValidSchema(tempNode.names);
-}
+catalogReader.isValidSchema(tempNode.names);
 
 Review comment:
   Names cannot be null?
 


[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843851#comment-16843851
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

arina-ielchiieva commented on pull request #1792: DRILL-7250: Query with CTE 
fails when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792#discussion_r285529572
 
 

 ##
 File path: 
exec/java-exec/src/main/java/org/apache/calcite/jdbc/DynamicRootSchema.java
 ##
 @@ -61,6 +61,8 @@ public StoragePluginRegistry getSchemaFactories() {
   @Override
   protected CalciteSchema getImplicitSubSchema(String schemaName,
boolean caseSensitive) {
+// Drill registers schemas in lower case, see AbstractSchema constructor
+schemaName = schemaName != null ? schemaName.toLowerCase() : null;
 
 Review comment:
   Not critical, but maybe it's better to invert: `schemaName == null ? ...`
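   The two null-safe forms are equivalent; a tiny sketch (with a hypothetical `normalize` helper, not from the patch) of the lower-casing the diff applies to schema names:

```java
import java.util.Locale;

public class SchemaNameNormalize {
    // Hypothetical helper mirroring the line under review: Drill registers
    // schemas in lower case, so lookups normalize the incoming name first.
    static String normalize(String schemaName) {
        // Inverted form as suggested in the review; behaves identically to
        // `schemaName != null ? schemaName.toLowerCase() : null`.
        return schemaName == null ? null : schemaName.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(normalize("MyWorkspace")); // myworkspace
        System.out.println(normalize(null));          // null
    }
}
```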
 


[jira] [Commented] (DRILL-7192) Drill limits rows when autoLimit is disabled

2019-05-20 Thread Volodymyr Vysotskyi (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843804#comment-16843804
 ] 

Volodymyr Vysotskyi commented on DRILL-7192:


[~kkhatua], thanks for looking into this issue; your last proposal makes 
sense to me.

> Drill limits rows when autoLimit is disabled
> 
>
> Key: DRILL-7192
> URL: https://issues.apache.org/jira/browse/DRILL-7192
> Project: Apache Drill
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Kunal Khatua
>Priority: Major
> Fix For: 1.17.0
>
>
> In DRILL-7048 was implemented autoLimit for JDBC and rest clients.
> *Steps to reproduce the issue:*
>  1. Check that autoLimit is disabled; if not, disable it and restart Drill.
>  2. Submit any query and verify that the row count is correct, for example,
> {code:sql}
> SELECT * FROM cp.`employee.json`;
> {code}
> returns 1,155 rows
>  3. Enable autoLimit for the sqlLine client:
> {code:sql}
> !set rowLimit 10
> {code}
> 4. Submit the same query and verify that the result has 10 rows.
>  5. Disable autoLimit:
> {code:sql}
> !set rowLimit 0
> {code}
> 6. Submit the same query; this time *it returns 10 rows instead of 
> 1,155*.
> The correct row count is returned only after creating a new connection.
> The same issue is also observed for SQuirreL SQL client, but for example, for 
> Postgres, it works correctly.
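The expected semantics of steps 4–6 can be sketched with a hypothetical helper, following the JDBC `Statement.setMaxRows` convention that a limit of 0 means "no limit" (the helper name and list-based model are illustrative, not Drill's implementation):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AutoLimitSketch {
    // Hypothetical helper: rowLimit <= 0 disables the limit, matching the
    // JDBC Statement.setMaxRows contract that `!set rowLimit` maps to.
    static List<Integer> applyAutoLimit(List<Integer> rows, int rowLimit) {
        return rowLimit <= 0
                ? rows
                : rows.stream().limit(rowLimit).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> rows = IntStream.range(0, 1155).boxed().collect(Collectors.toList());
        System.out.println(applyAutoLimit(rows, 10).size()); // 10
        // The reported bug: after resetting rowLimit to 0, Drill kept
        // returning 10 rows; the expected result is all 1,155.
        System.out.println(applyAutoLimit(rows, 0).size());  // 1155
    }
}
```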





[jira] [Commented] (DRILL-7250) Query with CTE fails when its name matches to the table name without access

2019-05-20 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DRILL-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843763#comment-16843763
 ] 

ASF GitHub Bot commented on DRILL-7250:
---

vvysotskyi commented on pull request #1792: DRILL-7250: Query with CTE fails 
when its name matches to the table name without access
URL: https://github.com/apache/drill/pull/1792
 
 
   - Removed the `catalogReader.getTable(tempNode.names)` call since it does not 
resolve table names correctly using the provided scope.
   - Reworked the `catalogReader.isValidSchema()` method to follow the logic 
Calcite uses to determine whether a table is present: it takes schemas from 
`CatalogReader` and resolves the specified schema among them.
   - Updated Calcite version to use the fix for `CALCITE-3061`.
   - Added unit test.
   
   For problem description please see 
[DRILL-7250](https://issues.apache.org/jira/browse/DRILL-7250).
 
