[ https://issues.apache.org/jira/browse/DRILL-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843607#comment-16843607 ]

ASF GitHub Bot commented on DRILL-7181:
---------------------------------------

paul-rogers commented on pull request #1789: DRILL-7181: Improve V3 text reader (row set) error messages
URL: https://github.com/apache/drill/pull/1789#discussion_r285419313
 
 

 ##########
 File path: exec/java-exec/src/main/java/org/apache/drill/exec/physical/rowSet/ResultSetLoader.java
 ##########
 @@ -41,6 +42,12 @@
 
   public static final int DEFAULT_ROW_COUNT = BaseValueVector.INITIAL_VALUE_ALLOCATION;
 
+  /**
+   * Context for error messages.
+   */
+
+  RowSetContext context();
 
 Review comment:
   Renamed the class, then restructured it to be more generic and moved it into common, alongside UserException. Doing so simplified usage of the context.
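
   For context, here is a minimal sketch of the kind of error-context hook described above, assuming a callback that enriches a UserException under construction (the name and signature are illustrative, not necessarily the PR's actual API):

{code:java}
package org.apache.drill.common.exceptions;

/**
 * Hypothetical sketch of a generic error context that lives in common,
 * alongside UserException. A reader or operator registers one of these
 * so that any UserException built during execution carries component-specific
 * details (file name, column name, and so on).
 */
public interface CustomErrorContext {

  /** Adds component-specific context to an exception under construction. */
  void addContext(UserException.Builder builder);
}
{code}

   Placing the hook in common, next to UserException, lets any layer attach context without depending on exec-level classes, which is what simplifies its usage.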
 

> [Text V3 Reader] Exception with an inadequate message is thrown when selecting columns as an array with extractHeader set to true
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: DRILL-7181
>                 URL: https://issues.apache.org/jira/browse/DRILL-7181
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.16.0
>            Reporter: Anton Gozhiy
>            Assignee: Paul Rogers
>            Priority: Major
>
> *Prerequisites:*
>  # Create a simple .csv file with a header, like this:
> {noformat}
> col1,col2,col3
> 1,2,3
> 4,5,6
> 7,8,9
> {noformat}
>  # Set exec.storage.enable_v3_text_reader=true (a SQL sketch follows this list)
>  # Set "extractHeader": true for the csv format in the dfs storage plugin.
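> As a sketch of step 2 (assuming session scope; the option can also be set cluster-wide with ALTER SYSTEM):
> {code:sql}
> ALTER SESSION SET `exec.storage.enable_v3_text_reader` = true;
> {code}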
> *Query:*
> {code:sql}
> select columns[0] from dfs.tmp.`/test.csv`
> {code}
> *Expected result:* An exception should occur; here is the message from the V2 reader:
> {noformat}
> UNSUPPORTED_OPERATION ERROR: Drill Remote Exception
>   (java.lang.Exception) UNSUPPORTED_OPERATION ERROR: With extractHeader enabled, only header names are supported
> column name columns
> column index
> Fragment 0:0
> [Error Id: 5affa696-1dbd-43d7-ac14-72d235c00f43 on userf87d-pc:31010]
>     org.apache.drill.common.exceptions.UserException$Builder.build():630
>     org.apache.drill.exec.store.easy.text.compliant.FieldVarCharOutput.<init>():106
>     org.apache.drill.exec.store.easy.text.compliant.CompliantTextRecordReader.setup():139
>     org.apache.drill.exec.physical.impl.ScanBatch.getNextReaderIfHas():321
>     org.apache.drill.exec.physical.impl.ScanBatch.internalNext():216
>     org.apache.drill.exec.physical.impl.ScanBatch.next():271
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():104
>     org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():94
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
>     .......():0
>     org.apache.hadoop.security.UserGroupInformation.doAs():1746
>     org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
>     org.apache.drill.common.SelfCleaningRunnable.run():38
>     .......():0
> {noformat}
> *Actual result:* The exception message is inadequate:
> {noformat}
> org.apache.drill.common.exceptions.UserRemoteException: EXECUTION_ERROR ERROR: Table schema must have exactly one column.
> Exception thrown from org.apache.drill.exec.physical.impl.scan.ScanOperatorExec
> Fragment 0:0
> [Error Id: a76a1576-419a-413f-840f-088157167a6d on userf87d-pc:31010]
>   (java.lang.IllegalStateException) Table schema must have exactly one column.
>     org.apache.drill.exec.physical.impl.scan.columns.ColumnsArrayManager.resolveColumn():108
>     org.apache.drill.exec.physical.impl.scan.project.ReaderLevelProjection.resolveSpecial():91
>     org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.resolveRootTuple():62
>     org.apache.drill.exec.physical.impl.scan.project.ExplicitSchemaProjection.<init>():52
>     org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.doExplicitProjection():223
>     org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.reviseOutputProjection():155
>     org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.endBatch():117
>     org.apache.drill.exec.physical.impl.scan.project.ReaderSchemaOrchestrator.defineSchema():94
>     org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.defineSchema():105
>     org.apache.drill.exec.physical.impl.scan.ReaderState.buildSchema():300
>     org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction():182
>     org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.buildSchema():122
>     org.apache.drill.exec.physical.impl.protocol.OperatorDriver.start():160
>     org.apache.drill.exec.physical.impl.protocol.OperatorDriver.next():112
>     org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch.next():147
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():101
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.record.AbstractRecordBatch.next():126
>     org.apache.drill.exec.record.AbstractRecordBatch.next():116
>     org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext():63
>     org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():141
>     org.apache.drill.exec.record.AbstractRecordBatch.next():186
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():104
>     org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():83
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():94
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():296
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():283
>     java.security.AccessController.doPrivileged():-2
>     javax.security.auth.Subject.doAs():422
>     org.apache.hadoop.security.UserGroupInformation.doAs():1746
>     org.apache.drill.exec.work.fragment.FragmentExecutor.run():283
>     org.apache.drill.common.SelfCleaningRunnable.run():38
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1149
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():624
>     java.lang.Thread.run():748
> {noformat}


