[jira] [Assigned] (DRILL-5874) NPE in AnonWebUserConnection.cleanupSession()
[ https://issues.apache.org/jira/browse/DRILL-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sorabh Hamirwasia reassigned DRILL-5874: Assignee: Sorabh Hamirwasia > NPE in AnonWebUserConnection.cleanupSession() > - > > Key: DRILL-5874 > URL: https://issues.apache.org/jira/browse/DRILL-5874 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Paul Rogers >Assignee: Sorabh Hamirwasia > > When debugging another issue, I tried to use the Web UI to run the example > query: > {code} > SELECT * FROM cp.`employee.json` LIMIT 20 > {code} > The query failed with this error: > {noformat} > Query Failed: An Error Occurred > java.lang.NullPointerException > {noformat} > No stack trace was provided in the log, even at DEBUG level. > Debugging, the problem appears to be deep inside > {{AnonWebUserConnection.cleanupSession()}}: > {code} > package io.netty.channel; > public class DefaultChannelPromise ... > protected EventExecutor executor() { > EventExecutor e = super.executor(); > if (e == null) { > return channel().eventLoop(); > } else { > return e; > } > } > {code} > In the above, {{channel()}} returns null; the {{channel}} field is also null. > This may indicate that some part of the Web UI was not set up correctly. This > is a recent change, as this code worked several days ago. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
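The failure mode described above can be sketched with hypothetical stand-in classes (these are not the real Netty types; all names here are illustrative): when the promise is created without a channel attached, {{channel()}} returns null and the unguarded {{channel().eventLoop()}} call throws the NPE.

```java
// Hypothetical stand-ins for the Netty types named above; only the control
// flow of DefaultChannelPromise.executor() is mirrored, not its real API.
class EventLoopStub {}

class ChannelStub {
    EventLoopStub eventLoop() { return new EventLoopStub(); }
}

class ChannelPromiseSketch {
    private final ChannelStub channel; // null if setup never attached a channel

    ChannelPromiseSketch(ChannelStub channel) { this.channel = channel; }

    ChannelStub channel() { return channel; }

    EventLoopStub executor() {
        // The point is the missing null guard: when channel() returns null,
        // the .eventLoop() dereference throws NullPointerException.
        return channel().eventLoop();
    }
}
```

With a channel attached the call succeeds; with a null channel it reproduces the bare NPE the Web UI reported.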
[jira] [Commented] (DRILL-5872) Deserialization of profile JSON fails due to totalCost being reported as "NaN"
[ https://issues.apache.org/jira/browse/DRILL-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202818#comment-16202818 ] ASF GitHub Bot commented on DRILL-5872: --- GitHub user paul-rogers opened a pull request: https://github.com/apache/drill/pull/990 DRILL-5872: Workaround for invalid cost in physical plans See DRILL-5872 for details. Works around a bug in a storage plugin that produces NaN for a cost estimate, which then leads to profiles that can't be deserialized. This fix simply replaces the NaN in the profile with the maximum double value. The real fix is that the storage plugin concerned should not produce NaN estimates. You can merge this pull request into a Git repository by running: $ git pull https://github.com/paul-rogers/drill DRILL-5872 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/990.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #990 commit 416bc527c4528d93ea23169292bd42578a4c647a Author: Paul Rogers Date: 2017-10-12T22:27:31Z DRILL-5872: Workaround for invalid cost in physical plans See DRILL-5872 for details. > Deserialization of profile JSON fails due to totalCost being reported as "NaN" > -- > > Key: DRILL-5872 > URL: https://issues.apache.org/jira/browse/DRILL-5872 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Kunal Khatua >Assignee: Paul Rogers >Priority: Blocker > Fix For: 1.12.0 > > > With DRILL-5716, there is a change in the protobuf that introduces a new > attribute in the JSON document that Drill uses to interpret and render the > profile's details. > The totalCost attribute, used as a part of showing the query cost (to > understand how it was assigned to the small/large queue), sometimes returns a > non-numeric text value {{"NaN"}}. 
> This breaks the UI with the message: > {code} > Failed to get profiles: > unable to deserialize value at key 2620698f-295e-f8d3-3ab7-01792b0f2669 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
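The workaround described in the pull request above (replace a NaN cost with the maximum double value before the profile is serialized) can be sketched as follows; {{CostSanitizer}} and {{sanitizeCost}} are hypothetical names for illustration, not Drill's actual code:

```java
// Sketch of the workaround: a NaN cost estimate from a misbehaving storage
// plugin is clamped to Double.MAX_VALUE, so the serialized profile contains
// a valid number that deserialization can read back.
final class CostSanitizer {
    static double sanitizeCost(double totalCost) {
        // Double.isNaN is the only reliable NaN check: NaN != NaN in Java,
        // so a plain equality comparison would never detect it.
        return Double.isNaN(totalCost) ? Double.MAX_VALUE : totalCost;
    }
}
```

As the PR notes, this only masks the symptom; the real fix is for the storage plugin not to produce NaN estimates in the first place.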
[jira] [Updated] (DRILL-5874) NPE in AnonWebUserConnection.cleanupSession()
[ https://issues.apache.org/jira/browse/DRILL-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Rogers updated DRILL-5874: --- Description: When debugging another issue, I tried to use the Web UI to run the example query: {code} SELECT * FROM cp.`employee.json` LIMIT 20 {code} The query failed with this error: {noformat} Query Failed: An Error Occurred java.lang.NullPointerException {noformat} No stack trace was provided in the log, even at DEBUG level. Debugging, the problem appears to be deep inside {{AnonWebUserConnection.cleanupSession()}}: {code} package io.netty.channel; public class DefaultChannelPromise ... protected EventExecutor executor() { EventExecutor e = super.executor(); if (e == null) { return channel().eventLoop(); } else { return e; } } {code} In the above, {{channel()}} returns null; the {{channel}} field is also null. This may indicate that some part of the Web UI was not set up correctly. This is a recent change, as this code worked several days ago. was: When debugging another issue, I tried to use the Web UI to run the example query: The query failed with this error: Debugging, the problem appears to be deep inside {{AnonWebUserConnection.cleanupSession()}}: {code} package io.netty.channel; public class DefaultChannelPromise ... protected EventExecutor executor() { EventExecutor e = super.executor(); if (e == null) { return channel().eventLoop(); } else { return e; } } {code} In the above, {{channel()}} returns null; the {{channel}} field is also null. This may indicate that some part of the Web UI was not set up correctly. This is a recent change, as this code worked several days ago. 
> NPE in AnonWebUserConnection.cleanupSession() > - > > Key: DRILL-5874 > URL: https://issues.apache.org/jira/browse/DRILL-5874 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Paul Rogers > > When debugging another issue, I tried to use the Web UI to run the example > query: > {code} > SELECT * FROM cp.`employee.json` LIMIT 20 > {code} > The query failed with this error: > {noformat} > Query Failed: An Error Occurred > java.lang.NullPointerException > {noformat} > No stack trace was provided in the log, even at DEBUG level. > Debugging, the problem appears to be deep inside > {{AnonWebUserConnection.cleanupSession()}}: > {code} > package io.netty.channel; > public class DefaultChannelPromise ... > protected EventExecutor executor() { > EventExecutor e = super.executor(); > if (e == null) { > return channel().eventLoop(); > } else { > return e; > } > } > {code} > In the above, {{channel()}} returns null; the {{channel}} field is also null. > This may indicate that some part of the Web UI was not set up correctly. This > is a recent change, as this code worked several days ago. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5874) NPE in AnonWebUserConnection.cleanupSession()
Paul Rogers created DRILL-5874: -- Summary: NPE in AnonWebUserConnection.cleanupSession() Key: DRILL-5874 URL: https://issues.apache.org/jira/browse/DRILL-5874 Project: Apache Drill Issue Type: Bug Affects Versions: 1.12.0 Reporter: Paul Rogers When debugging another issue, I tried to use the Web UI to run the example query: The query failed with this error: Debugging, the problem appears to be deep inside {{AnonWebUserConnection.cleanupSession()}}: {code} package io.netty.channel; public class DefaultChannelPromise ... protected EventExecutor executor() { EventExecutor e = super.executor(); if (e == null) { return channel().eventLoop(); } else { return e; } } {code} In the above, {{channel()}} returns null; the {{channel}} field is also null. This may indicate that some part of the Web UI was not set up correctly. This is a recent change, as this code worked several days ago. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5873) Drill C++ Client should throw proper/complete error message for the ODBC driver to consume
Krystal created DRILL-5873: -- Summary: Drill C++ Client should throw proper/complete error message for the ODBC driver to consume Key: DRILL-5873 URL: https://issues.apache.org/jira/browse/DRILL-5873 Project: Apache Drill Issue Type: Bug Components: Client - C++ Reporter: Krystal Assignee: Parth Chandra The Drill C++ Client should throw a proper/complete error message for the driver to utilize. The ODBC driver is directly outputting the exception message thrown by the client by calling the getError() API after the connect() API has failed with an error status. For the Java client, similar logic is hard coded at https://github.com/apache/drill/blob/1.11.0/exec/java-exec/src/main/java/org/apache/drill/exec/rpc/user/UserClient.java#L247. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (DRILL-5872) Deserialization of profile JSON fails due to totalCost being reported as "NaN"
[ https://issues.apache.org/jira/browse/DRILL-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Rogers reassigned DRILL-5872: -- Assignee: Paul Rogers (was: Paul Rogers) > Deserialization of profile JSON fails due to totalCost being reported as "NaN" > -- > > Key: DRILL-5872 > URL: https://issues.apache.org/jira/browse/DRILL-5872 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Kunal Khatua >Assignee: Paul Rogers >Priority: Blocker > Fix For: 1.12.0 > > > With DRILL-5716, there is a change in the protobuf that introduces a new > attribute in the JSON document that Drill uses to interpret and render the > profile's details. > The totalCost attribute, used as a part of showing the query cost (to > understand how it was assigned to the small/large queue), sometimes returns a > non-numeric text value {{"NaN"}}. > This breaks the UI with the message: > {code} > Failed to get profiles: > unable to deserialize value at key 2620698f-295e-f8d3-3ab7-01792b0f2669 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5872) Deserialization of profile JSON fails due to totalCost being reported as "NaN"
Kunal Khatua created DRILL-5872: --- Summary: Deserialization of profile JSON fails due to totalCost being reported as "NaN" Key: DRILL-5872 URL: https://issues.apache.org/jira/browse/DRILL-5872 Project: Apache Drill Issue Type: Bug Affects Versions: 1.12.0 Reporter: Kunal Khatua Assignee: Paul Rogers Priority: Blocker Fix For: 1.12.0 With DRILL-5716, there is a change in the protobuf that introduces a new attribute in the JSON document that Drill uses to interpret and render the profile's details. The totalCost attribute, used as a part of showing the query cost (to understand how it was assigned to the small/large queue), sometimes returns a non-numeric text value {{"NaN"}}. This breaks the UI with the message: {code} Failed to get profiles: unable to deserialize value at key 2620698f-295e-f8d3-3ab7-01792b0f2669 {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5871) Large files fail to write to s3 datastore using hdfs s3a.
[ https://issues.apache.org/jira/browse/DRILL-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Jacobs updated DRILL-5871: Description: When storing CSV files to an S3a storage driver using a CTAS, if the files are large enough to trigger the multi-part upload functionality, the CTAS fails with the following stack trace (we can write smaller CSVs and Parquet files without problems): Error: SYSTEM ERROR: UnsupportedOperationException Fragment 0:0 [Error Id: dbb018ea-29eb-4e1a-bf97-4c2c9cfbdf3c on den-certdrill-1.ci.neoninternal.org:31010] (java.lang.UnsupportedOperationException) null java.util.Collections$UnmodifiableList.sort():1331 java.util.Collections.sort():175 com.amazonaws.services.s3.model.transform.RequestXmlFactory.convertToXmlByteArray():42 com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload():2513 org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.complete():384 org.apache.hadoop.fs.s3a.S3AFastOutputStream.close():253 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close():72 org.apache.hadoop.fs.FSDataOutputStream.close():106 java.io.PrintStream.close():360 org.apache.drill.exec.store.text.DrillTextRecordWriter.cleanup():170 org.apache.drill.exec.physical.impl.WriterRecordBatch.closeWriter():184 org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext():128 org.apache.drill.exec.record.AbstractRecordBatch.next():162 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133 org.apache.drill.exec.record.AbstractRecordBatch.next():162 org.apache.drill.exec.physical.impl.BaseRootExec.next():105 org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 org.apache.drill.exec.physical.impl.BaseRootExec.next():95 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():422 org.apache.hadoop.security.UserGroupInformation.doAs():1657 org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 org.apache.drill.common.SelfCleaningRunnable.run():38 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():748 (state=,code=0) This looks suspiciously like: https://issues.apache.org/jira/browse/HADOOP-14204 So the fix may be as 'simple' as just syncing the upstream version when Hadoop 2.8.2 releases later this month. Although I am ignorant of the implications of upgrading hadoop-hdfs to this version. We are able to store smaller files just fine. Things I've tried: Setting fs.s3a.multipart.threshold to a ridiculously large value like 10T (these files are just over 1GB). Does not work. Setting fs.s3a.fast.upload: false. Also does not change the behavior. The s3a driver does not appear to have an option to disable multi-part uploads altogether. 
For completeness' sake, here are my current S3a options for the driver: "fs.s3a.endpoint": "**", "fs.s3a.access.key": "*", "fs.s3a.secret.key": "*", "fs.s3a.connection.maximum": "200", "fs.s3a.paging.maximum": "1000", "fs.s3a.fast.upload": "true", "fs.s3a.multipart.purge": "true", "fs.s3a.fast.upload.buffer": "bytebuffer", "fs.s3a.fast.upload.active.blocks": "8", "fs.s3a.buffer.dir": "/opt/apache-airflow/buffer", "fs.s3a.multipart.size": "134217728", "fs.s3a.multipart.threshold": "671088640", "fs.s3a.experimental.input.fadvise": "sequential", "fs.s3a.acl.default": "PublicRead", "fs.s3a.multiobjectdelete.enable": "true" was: When storing CSV files to a S3a storage driver using a CTAS, if the files are large enough to implicate the multi-part upload functionality, the CTAS fails with the following stack trace: Error: SYSTEM ERROR: UnsupportedOperationException Fragment 0:0 [Error Id: dbb018ea-29eb-4e1a-bf97-4c2c9cfbdf3c on den-certdrill-1.ci.neoninternal.org:31010] (java.lang.UnsupportedOperationException) null java.util.Collections$UnmodifiableList.sort():1331 java.util.Collections.sort():175 com.amazonaws.services.s3.model.transform.RequestXmlFactory.convertToXmlByteArray():42 com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload():2513 org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.complete():384 org.apache.hadoop.fs.s3a.S3AFastOutputStream.close():253 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close():72 org.apache.hadoop.fs.FSDataOutputStream.close():106 java.io.PrintStream.close():360
[jira] [Created] (DRILL-5871) Large files fail to write to s3 datastore using hdfs s3a.
Steve Jacobs created DRILL-5871: --- Summary: Large files fail to write to s3 datastore using hdfs s3a. Key: DRILL-5871 URL: https://issues.apache.org/jira/browse/DRILL-5871 Project: Apache Drill Issue Type: Bug Components: Server Affects Versions: 1.11.0 Environment: Centos 7.4, Oracle Java SE 1.80.0_131-b11, x86_64, vmware. Zookeeper cluster, two drillbits, 3 zookeepers. Reporter: Steve Jacobs When storing CSV files to an S3a storage driver using a CTAS, if the files are large enough to trigger the multi-part upload functionality, the CTAS fails with the following stack trace: Error: SYSTEM ERROR: UnsupportedOperationException Fragment 0:0 [Error Id: dbb018ea-29eb-4e1a-bf97-4c2c9cfbdf3c on den-certdrill-1.ci.neoninternal.org:31010] (java.lang.UnsupportedOperationException) null java.util.Collections$UnmodifiableList.sort():1331 java.util.Collections.sort():175 com.amazonaws.services.s3.model.transform.RequestXmlFactory.convertToXmlByteArray():42 com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload():2513 org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.complete():384 org.apache.hadoop.fs.s3a.S3AFastOutputStream.close():253 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close():72 org.apache.hadoop.fs.FSDataOutputStream.close():106 java.io.PrintStream.close():360 org.apache.drill.exec.store.text.DrillTextRecordWriter.cleanup():170 org.apache.drill.exec.physical.impl.WriterRecordBatch.closeWriter():184 org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext():128 org.apache.drill.exec.record.AbstractRecordBatch.next():162 org.apache.drill.exec.record.AbstractRecordBatch.next():119 org.apache.drill.exec.record.AbstractRecordBatch.next():109 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():133 org.apache.drill.exec.record.AbstractRecordBatch.next():162 org.apache.drill.exec.physical.impl.BaseRootExec.next():105 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 org.apache.drill.exec.physical.impl.BaseRootExec.next():95 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 java.security.AccessController.doPrivileged():-2 javax.security.auth.Subject.doAs():422 org.apache.hadoop.security.UserGroupInformation.doAs():1657 org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 org.apache.drill.common.SelfCleaningRunnable.run():38 java.util.concurrent.ThreadPoolExecutor.runWorker():1142 java.util.concurrent.ThreadPoolExecutor$Worker.run():617 java.lang.Thread.run():748 (state=,code=0) This looks suspiciously like: https://issues.apache.org/jira/browse/HADOOP-14204 So the fix may be as 'simple' as just syncing the upstream version when Hadoop 2.8.2 releases later this month. Although I am ignorant of the implications of upgrading hadoop-hdfs to this version. We are able to store smaller files just fine. Things I've tried: Setting fs.s3a.multipart.threshold to a ridiculously large value like 10T (these files are just over 1GB). Does not work. Setting fs.s3a.fast.upload: false. Also does not change the behavior. The s3a driver does not appear to have an option to disable multi-part uploads altogether. For completeness' sake, here are my current S3a options for the driver: "fs.s3a.endpoint": "**", "fs.s3a.access.key": "*", "fs.s3a.secret.key": "*", "fs.s3a.connection.maximum": "200", "fs.s3a.paging.maximum": "1000", "fs.s3a.fast.upload": "true", "fs.s3a.multipart.purge": "true", "fs.s3a.fast.upload.buffer": "bytebuffer", "fs.s3a.fast.upload.active.blocks": "8", "fs.s3a.buffer.dir": "/opt/apache-airflow/buffer", "fs.s3a.multipart.size": "134217728", "fs.s3a.multipart.threshold": "671088640", "fs.s3a.experimental.input.fadvise": "sequential", "fs.s3a.acl.default": "PublicRead", "fs.s3a.multiobjectdelete.enable": "true" -- This message was sent by Atlassian JIRA (v6.4.14#64029)
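The top two frames of the stack trace above ({{Collections$UnmodifiableList.sort()}} and {{Collections.sort()}}) can be reproduced in isolation: since Java 8, {{Collections.sort()}} delegates to {{List.sort()}}, which the {{Collections.unmodifiableList()}} wrapper rejects, matching the HADOOP-14204 diagnosis that the AWS SDK sorts a part list the S3A output stream hands it as unmodifiable. A minimal sketch (the strings here are placeholders, not real part ETags):

```java
import java.util.Collections;
import java.util.List;

// Minimal reproduction of the UnsupportedOperationException frame above:
// sorting fails on an unmodifiable list even though sorting adds or removes
// nothing, because List.sort() reorders the list in place.
class UnmodifiableSortRepro {
    static boolean sortThrows(List<String> parts) {
        try {
            Collections.sort(parts); // delegates to List.sort() since Java 8
            return false;
        } catch (UnsupportedOperationException e) {
            return true; // the unmodifiable wrapper rejects in-place mutation
        }
    }
}
```

Sorting a plain {{ArrayList}} with the same contents succeeds, which is why only the multi-part (SDK-sorted) upload path fails while small single-part uploads work.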
[jira] [Resolved] (DRILL-5682) Apache Drill should support network encryption
[ https://issues.apache.org/jira/browse/DRILL-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sorabh Hamirwasia resolved DRILL-5682. -- Resolution: Fixed Fix Version/s: 1.10.0 1.11.0 1.12.0 > Apache Drill should support network encryption > -- > > Key: DRILL-5682 > URL: https://issues.apache.org/jira/browse/DRILL-5682 > Project: Apache Drill > Issue Type: New Feature >Reporter: Sorabh Hamirwasia >Assignee: Sorabh Hamirwasia > Labels: doc-impacting, security > Fix For: 1.12.0, 1.11.0, 1.10.0 > > > Creating this one to repurpose DRILL-4335 for SASL encryption between Drill > Client to Drillbit. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (DRILL-2496) Add SSL support to C++ client
[ https://issues.apache.org/jira/browse/DRILL-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Parth Chandra resolved DRILL-2496. -- Resolution: Fixed Done in DRILL-5431 > Add SSL support to C++ client > - > > Key: DRILL-2496 > URL: https://issues.apache.org/jira/browse/DRILL-2496 > Project: Apache Drill > Issue Type: Improvement > Components: Client - C++ >Reporter: Parth Chandra >Assignee: Parth Chandra > Labels: security > Fix For: Future > > > Needed for impersonation where username and password are sent over the wire > to the user. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5864) Selecting a non-existing field from a MapR-DB JSON table fails with NPE
[ https://issues.apache.org/jira/browse/DRILL-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202430#comment-16202430 ] ASF GitHub Bot commented on DRILL-5864: --- Github user prasadns14 closed the pull request at: https://github.com/apache/drill/pull/988 > Selecting a non-existing field from a MapR-DB JSON table fails with NPE > --- > > Key: DRILL-5864 > URL: https://issues.apache.org/jira/browse/DRILL-5864 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators, Storage - MapRDB >Affects Versions: 1.12.0 >Reporter: Abhishek Girish >Assignee: Hanumath Rao Maduri > Attachments: OrderByNPE.log, OrderByNPE2.log > > > Query 1 > {code} > > select C_FIRST_NAME,C_BIRTH_COUNTRY,C_BIRTH_YEAR,C_BIRTH_MONTH,C_BIRTH_DAY > > from customer ORDER BY C_BIRTH_COUNTRY ASC, C_FIRST_NAME ASC LIMIT 10; > Error: SYSTEM ERROR: NullPointerException > (java.lang.NullPointerException) null > org.apache.drill.exec.record.SchemaUtil.coerceContainer():176 > > org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.convertBatch():124 > org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add():90 > org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():265 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():421 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():357 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():302 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > 
org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 > org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227 > java.security.AccessController.doPrivileged():-2 > javax.security.auth.Subject.doAs():422 > org.apache.hadoop.security.UserGroupInformation.doAs():1595 > org.apache.drill.exec.work.fragment.FragmentExecutor.run():227 > org.apache.drill.common.SelfCleaningRunnable.run():38 > java.util.concurrent.ThreadPoolExecutor.runWorker():1149 > 
java.util.concurrent.ThreadPoolExecutor$Worker.run():624 > java.lang.Thread.run():748 (state=,code=0) > {code} > Plan > {code} > 00-00Screen > 00-01 Project(C_FIRST_NAME=[$0], C_BIRTH_COUNTRY=[$1], > C_BIRTH_YEAR=[$2], C_BIRTH_MONTH=[$3], C_BIRTH_DAY=[$4]) > 00-02SelectionVectorRemover > 00-03 Limit(fetch=[10]) > 00-04Limit(fetch=[10]) > 00-05 SelectionVectorRemover > 00-06Sort(sort0=[$1], sort1=[$0], dir0=[ASC], dir1=[ASC]) > 00-07 Scan(groupscan=[JsonTableGroupScan > [ScanSpec=JsonScanSpec >
[jira] [Commented] (DRILL-5431) Support SSL
[ https://issues.apache.org/jira/browse/DRILL-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202402#comment-16202402 ] ASF GitHub Bot commented on DRILL-5431: --- Github user asfgit closed the pull request at: https://github.com/apache/drill/pull/950 > Support SSL > --- > > Key: DRILL-5431 > URL: https://issues.apache.org/jira/browse/DRILL-5431 > Project: Apache Drill > Issue Type: New Feature > Components: Client - Java, Client - ODBC >Reporter: Sudheesh Katkam >Assignee: Parth Chandra > Labels: doc-impacting > Fix For: 1.12.0 > > > Support SSL between Drillbit and JDBC/ODBC drivers. Drill already supports > HTTPS for web traffic. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5431) Support SSL
[ https://issues.apache.org/jira/browse/DRILL-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202308#comment-16202308 ] ASF GitHub Bot commented on DRILL-5431: --- Github user paul-rogers commented on the issue: https://github.com/apache/drill/pull/950 Go for it. I won't do any more commits until you give the all-clear. > Support SSL > --- > > Key: DRILL-5431 > URL: https://issues.apache.org/jira/browse/DRILL-5431 > Project: Apache Drill > Issue Type: New Feature > Components: Client - Java, Client - ODBC >Reporter: Sudheesh Katkam >Assignee: Parth Chandra > Labels: doc-impacting > Fix For: 1.12.0 > > > Support SSL between Drillbit and JDBC/ODBC drivers. Drill already supports > HTTPS for web traffic. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5815: Description: Drill provides a parameter to set the memory per query as a static number which defaults to 2 GB. This number is a wonderful setting for the default Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, as Drillbit memory increases, the default becomes a bit constraining. While users can change the setting, they seldom do. In addition, provide an option that sets memory as a percent of total memory. If the allocation is 10%, say, and total memory is 128 GB, then each query gets ~13GB, which is a big improvement. The existing option acts as a floor: the query must receive at least that much memory. *DOCUMENTATION* New option should be documented - planner.memory.percent_per_query Default - 0.05 (which is equivalent to 5%) To disable the feature, set it to 0. More information can be found in the pull request description. was: Drill provides a parameter to set the memory per query as a static number which defaults to 2 GB. This number is a wonderful setting for the default Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, as Drillbit memory increases, the default becomes a bit constraining. While users can change the setting, they seldom do. In addition, provide an option that sets memory as a percent of total memory. If the allocation is 10%, say, and total memory is 128 GB, then each query gets ~13GB, which is a big improvement. The existing option acts as a floor: the query must receive at least that much memory. 
> Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Labels: doc-impacting, ready-to-commit > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. > *DOCUMENTATION* > New option should be documented - planner.memory.percent_per_query > Default - 0.05 (which is equivalent to 5%) > To disable the feature, set it to 0. > More information can be found in the pull request description. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
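The sizing rule described above can be sketched as a small calculation (illustrative names, not Drill's actual implementation): the per-query allocation is the larger of the static floor and the configured percentage of total memory, so a 10% allocation of 128 GB gives roughly 13 GB while the 2 GB static limit still applies on small Drillbits.

```java
// Sketch of the option's behavior under the stated assumptions: the static
// limit (default 2 GB) acts as a floor, and percent_per_query (default 0.05,
// i.e. 5%) scales the allocation with total memory. A percent of 0 disables
// the percentage path, leaving only the static limit.
class QueryMemorySketch {
    static long queryMemoryBytes(long totalMemoryBytes, long staticLimitBytes, double percentPerQuery) {
        long percentShare = (long) (totalMemoryBytes * percentPerQuery);
        return Math.max(staticLimitBytes, percentShare); // floor: at least the static limit
    }
}
```

For a 128 GB Drillbit at 10%, the percentage share (~12.8 GB, i.e. the "~13GB" above) wins over the 2 GB floor; for an 8 GB Drillbit at 5%, the floor wins.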
[jira] [Updated] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5815: Labels: doc-impacting ready-to-commit (was: ready-to-commit) > Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Labels: doc-impacting, ready-to-commit > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5862: Labels: ready-to-commit (was: ) > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Affects Versions: 1.11.0 >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > Labels: ready-to-commit > Fix For: 1.12.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5862: Reviewer: Arina Ielchiieva > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Affects Versions: 1.11.0 >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > Labels: ready-to-commit > Fix For: 1.12.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5862: Affects Version/s: 1.11.0 > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Affects Versions: 1.11.0 >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > Labels: ready-to-commit > Fix For: 1.12.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5862: Fix Version/s: 1.12.0 > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Affects Versions: 1.11.0 >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > Labels: ready-to-commit > Fix For: 1.12.0 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202292#comment-16202292 ] ASF GitHub Bot commented on DRILL-5862: --- Github user arina-ielchiieva commented on a diff in the pull request: https://github.com/apache/drill/pull/985#discussion_r144352384 --- Diff: pom.xml --- @@ -15,7 +15,8 @@ org.apache apache -14 +18 + --- End diff -- I have tried, it does fail ) Thanks for explanation. > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202293#comment-16202293 ] ASF GitHub Bot commented on DRILL-5862: --- Github user arina-ielchiieva commented on the issue: https://github.com/apache/drill/pull/985 +1, LGTM. > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202281#comment-16202281 ] ASF GitHub Bot commented on DRILL-5862: --- Github user vrozov commented on a diff in the pull request: https://github.com/apache/drill/pull/985#discussion_r144351062 --- Diff: pom.xml --- @@ -15,7 +15,8 @@ org.apache apache -14 +18 + --- End diff -- Try to add an empty pom to the parent of your drill repo :). I guess most of the projects assume that there will be no pom.xml file in a parent directory and it is not always the case, so it is better to follow maven guidelines (http://maven.apache.org/ref/3.0/maven-model/maven.html) > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
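The change the reviewers are discussing — the garbled `-14 +18 +` diff above — appears to bump the ASF parent version and add an empty `relativePath` element. A reconstructed excerpt of the parent declaration might look like the following (the surrounding pom structure is assumed, not quoted from the PR):

```xml
<!-- Reconstructed sketch of the pom.xml change under review in #985:
     bump the ASF parent from 14 to 18 and declare an empty
     <relativePath/>, so Maven resolves the parent from the repository
     instead of searching for a pom.xml in the parent directory. -->
<parent>
  <groupId>org.apache</groupId>
  <artifactId>apache</artifactId>
  <version>18</version>
  <relativePath/>
</parent>
```

As vrozov notes, without the empty `relativePath` Maven first looks for a parent pom in `../pom.xml`, which breaks when an unrelated uber pom happens to live there.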
[jira] [Updated] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5815: Reviewer: Boaz Ben-Zvi (was: Arina Ielchiieva) > Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Labels: ready-to-commit > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5815: Reviewer: Arina Ielchiieva > Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Labels: ready-to-commit > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5815: Labels: ready-to-commit (was: ) > Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Labels: ready-to-commit > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5815) Provide option to set query memory as percent of total
[ https://issues.apache.org/jira/browse/DRILL-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202263#comment-16202263 ] ASF GitHub Bot commented on DRILL-5815: --- Github user arina-ielchiieva commented on the issue: https://github.com/apache/drill/pull/960 +1, please resolve the conflicts. > Provide option to set query memory as percent of total > -- > > Key: DRILL-5815 > URL: https://issues.apache.org/jira/browse/DRILL-5815 > Project: Apache Drill > Issue Type: Improvement >Affects Versions: 1.11.0 >Reporter: Paul Rogers >Assignee: Paul Rogers > Fix For: 1.12.0 > > > Drill provides a parameter to set the memory per query as a static number > which defaults to 2 GB. This number is a wonderful setting for the default > Drillbit configuration of 8 GB heap; it allows 2-3 concurrent queries. But, > as Drillbit memory increases, the default becomes a bit constraining. While > users can change the setting, they seldom do. > In addition, provide an option that sets memory as a percent of total memory. > If the allocation is 10%, say, and total memory is 128 GB, then each query > gets ~13GB, which is a big improvement. > The existing option acts as a floor: the query must receive at least that > much memory. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5431) Support SSL
[ https://issues.apache.org/jira/browse/DRILL-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202258#comment-16202258 ] ASF GitHub Bot commented on DRILL-5431: --- Github user parthchandra commented on the issue: https://github.com/apache/drill/pull/950 Rebased again. Also updated protobuf files. If there are no further comments, I'll be merging this PR in. > Support SSL > --- > > Key: DRILL-5431 > URL: https://issues.apache.org/jira/browse/DRILL-5431 > Project: Apache Drill > Issue Type: New Feature > Components: Client - Java, Client - ODBC >Reporter: Sudheesh Katkam >Assignee: Parth Chandra > Labels: doc-impacting > Fix For: 1.12.0 > > > Support SSL between Drillbit and JDBC/ODBC drivers. Drill already supports > HTTPS for web traffic. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202213#comment-16202213 ] ASF GitHub Bot commented on DRILL-5862: --- Github user arina-ielchiieva commented on a diff in the pull request: https://github.com/apache/drill/pull/985#discussion_r144340840 --- Diff: pom.xml --- @@ -15,7 +15,8 @@ org.apache apache -14 +18 + --- End diff -- But it was happy before :) plus I don't see this element present in pom.xml of other Apache projects (Calcite, Camel). Does Drill structure differ? > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202204#comment-16202204 ] ASF GitHub Bot commented on DRILL-5862: --- Github user vrozov commented on a diff in the pull request: https://github.com/apache/drill/pull/985#discussion_r144339857 --- Diff: pom.xml --- @@ -15,7 +15,8 @@ org.apache apache -14 +18 + --- End diff -- Yes, to make maven happy. Otherwise, it expects to find parent pom in a parent directory and if finds one (for example if there is an uber pom that builds multiple projects), maven will complain. > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5862) Update project parent pom xml to the latest ASF version
[ https://issues.apache.org/jira/browse/DRILL-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202149#comment-16202149 ] ASF GitHub Bot commented on DRILL-5862: --- Github user arina-ielchiieva commented on a diff in the pull request: https://github.com/apache/drill/pull/985#discussion_r144333156 --- Diff: pom.xml --- @@ -15,7 +15,8 @@ org.apache apache -14 +18 + --- End diff -- Do we need this element present in pom.xml? > Update project parent pom xml to the latest ASF version > --- > > Key: DRILL-5862 > URL: https://issues.apache.org/jira/browse/DRILL-5862 > Project: Apache Drill > Issue Type: Improvement > Components: Tools, Build & Test >Reporter: Vlad Rozov >Assignee: Vlad Rozov >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5849) Add freemarker lib to dependencyManagement to ensure proper version is used when resolving dependency version conflicts
[ https://issues.apache.org/jira/browse/DRILL-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202048#comment-16202048 ] Arina Ielchiieva commented on DRILL-5849: - Merged into master with commit id 27b6605fab5a52526c2abb5a90c649febe725905 > Add freemarker lib to dependencyManagement to ensure proper version is used > when resolving dependency version conflicts > --- > > Key: DRILL-5849 > URL: https://issues.apache.org/jira/browse/DRILL-5849 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.12.0 >Reporter: Arina Ielchiieva >Assignee: Arina Ielchiieva > Labels: ready-to-commit > Fix For: 1.12.0 > > > After DRILL-5766 we started using a newer freemarker library in Drill. There are > several libs in Drill that also use the freemarker library, and sometimes an older > version is picked up. In this case we receive the following error: > {noformat} > 0: jdbc:drill:zk=local> Exception in thread "main" > java.lang.NoSuchFieldError: VERSION_2_3_26 > at > org.apache.drill.exec.server.rest.DrillRestServer.getFreemarkerConfiguration(DrillRestServer.java:140) > at > org.apache.drill.exec.server.rest.DrillRestServer.(DrillRestServer.java:83) > at > org.apache.drill.exec.server.rest.WebServer.start(WebServer.java:174) > at > org.apache.drill.exec.server.Drillbit.run(Drillbit.java:141) > at > org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:123) > at > org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:72) > at > org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69) > at > org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143) > at org.apache.drill.jdbc.Driver.connect(Driver.java:72) > at > sqlline.DatabaseConnection.connect(DatabaseConnection.java:167) > at > sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213) > at sqlline.Commands.close(Commands.java:925) > at sqlline.Commands.closeall(Commands.java:899) > at 
sqlline.SqlLine.begin(SqlLine.java:649) > at sqlline.SqlLine.start(SqlLine.java:375) > at sqlline.SqlLine.main(SqlLine.java:268) > {noformat} > To fix this issue we should not rely on Maven nearest win strategy and define > allowed freemarker version under {{dependencyManagement}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
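The fix described above — pinning freemarker under `dependencyManagement` rather than relying on Maven's nearest-wins resolution — would look roughly like the following pom fragment. The version shown is inferred from the `VERSION_2_3_26` error in the stack trace; the exact artifact version pinned by the actual commit may differ.

```xml
<!-- Sketch of the DRILL-5849 fix: declare the freemarker version under
     dependencyManagement so transitive dependencies cannot pull in an
     older copy via Maven's nearest-wins strategy. The version is
     inferred from the VERSION_2_3_26 error and may not match the
     committed change exactly. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.freemarker</groupId>
      <artifactId>freemarker</artifactId>
      <version>2.3.26-incubating</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```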
[jira] [Assigned] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva reassigned DRILL-5790: --- Assignee: (was: Arina Ielchiieva) > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Labels: ready-to-commit > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5790: Labels: ready-to-commit (was: ) > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning >Assignee: Arina Ielchiieva > Labels: ready-to-commit > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated DRILL-5790: Reviewer: Arina Ielchiieva > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning >Assignee: Arina Ielchiieva > Labels: ready-to-commit > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202023#comment-16202023 ] ASF GitHub Bot commented on DRILL-5790: --- Github user arina-ielchiieva commented on the issue: https://github.com/apache/drill/pull/989 +1, LGTM. > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning >Assignee: Arina Ielchiieva > Labels: ready-to-commit > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva reassigned DRILL-5790: --- Assignee: Arina Ielchiieva > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning >Assignee: Arina Ielchiieva > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201925#comment-16201925 ] ASF GitHub Bot commented on DRILL-5790: --- Github user Vlad-Storona commented on a diff in the pull request: https://github.com/apache/drill/pull/989#discussion_r144286736 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/pcap/PcapRecordReader.java --- @@ -125,8 +129,8 @@ public int next() { @Override public void close() throws Exception { -//buffer = null; -//in.close(); +in.close(); +fs.close(); --- End diff -- Sure, my mistake, I will remove it. > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201924#comment-16201924 ] ASF GitHub Bot commented on DRILL-5790: --- Github user Vlad-Storona commented on a diff in the pull request: https://github.com/apache/drill/pull/989#discussion_r144286706 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/pcap/PcapRecordReader.java --- @@ -100,14 +104,14 @@ public void setup(final OperatorContext context, final OutputMutator output) thr this.output = output; this.buffer = new byte[10]; - this.in = new FileInputStream(inputPath); + this.in = fs.open(pathToFile); this.decoder = new PacketDecoder(in); this.validBytes = in.read(buffer); this.projectedCols = getProjectedColsIfItNull(); setColumns(projectedColumns); } catch (IOException io) { throw UserException.dataReadError(io) - .addContext("File name:", inputPath) + .addContext("File name:", pathToFile.toString()) --- End diff -- Thanks, I will replace it. > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201888#comment-16201888 ] ASF GitHub Bot commented on DRILL-5790: --- Github user arina-ielchiieva commented on a diff in the pull request: https://github.com/apache/drill/pull/989#discussion_r144269017 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/pcap/PcapRecordReader.java --- @@ -125,8 +129,8 @@ public int next() { @Override public void close() throws Exception { -//buffer = null; -//in.close(); +in.close(); +fs.close(); --- End diff -- Since you did not open fs, I guess it's not your responsibility to close it. > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201887#comment-16201887 ] ASF GitHub Bot commented on DRILL-5790: --- Github user arina-ielchiieva commented on a diff in the pull request: https://github.com/apache/drill/pull/989#discussion_r144269528 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/store/pcap/PcapRecordReader.java --- @@ -100,14 +104,14 @@ public void setup(final OperatorContext context, final OutputMutator output) thr this.output = output; this.buffer = new byte[10]; - this.in = new FileInputStream(inputPath); + this.in = fs.open(pathToFile); this.decoder = new PacketDecoder(in); this.validBytes = in.read(buffer); this.projectedCols = getProjectedColsIfItNull(); setColumns(projectedColumns); } catch (IOException io) { throw UserException.dataReadError(io) - .addContext("File name:", inputPath) + .addContext("File name:", pathToFile.toString()) --- End diff -- It's better you use `pathToFile.toUri().getPath()`. > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5783) Make code generation in the TopN operator more modular and test it
[ https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201840#comment-16201840 ] ASF GitHub Bot commented on DRILL-5783: --- Github user ilooner commented on the issue: https://github.com/apache/drill/pull/984 @paul-rogers Applied / responding to comments. Also removed RecordBatchBuilder and used RowSetBuilder. > Make code generation in the TopN operator more modular and test it > -- > > Key: DRILL-5783 > URL: https://issues.apache.org/jira/browse/DRILL-5783 > Project: Apache Drill > Issue Type: Improvement >Reporter: Timothy Farkas >Assignee: Timothy Farkas > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5790) PCAP format explicitly opens local file
[ https://issues.apache.org/jira/browse/DRILL-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201773#comment-16201773 ] ASF GitHub Bot commented on DRILL-5790: --- GitHub user Vlad-Storona opened a pull request: https://github.com/apache/drill/pull/989 DRILL-5790: PCAP format explicitly opens local file See DRILL-5790 for details. The current implementation in master uses the local FS; it works on MapR-FS but does not work on HDFS. Files are now read through `org.apache.hadoop.fs.FileSystem`. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mapr-demos/drill DRILL-5790 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/989.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #989 commit f968708c0a428ab91fdff8f5e52303b500089b88 Author: Vlad Storona Date: 2017-10-12T10:16:19Z Fixed problem with explicit opening a local file > PCAP format explicitly opens local file > --- > > Key: DRILL-5790 > URL: https://issues.apache.org/jira/browse/DRILL-5790 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.11.0 >Reporter: Ted Dunning > Fix For: 1.12.0 > > > Note the new FileInputStream line > {code} > @Override > public void setup(final OperatorContext context, final OutputMutator output) > throws ExecutionSetupException { > try { > this.output = output; > this.buffer = new byte[10]; > this.in = new FileInputStream(inputPath); > this.decoder = new PacketDecoder(in); > this.validBytes = in.read(buffer); > this.projectedCols = getProjectedColsIfItNull(); > setColumns(projectedColumns); > } catch (IOException io) { > throw UserException.dataReadError(io) > .addContext("File name:", inputPath) > .build(logger); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
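The core idea of the fix — the reader accepts a stream-opening abstraction instead of constructing `FileInputStream` itself, so the backing store (local disk, HDFS, MapR-FS) is decided by the caller — can be shown without a Hadoop dependency. This is a pure-Java analogue, not the actual `PcapRecordReader` code: `StreamSource` stands in for `org.apache.hadoop.fs.FileSystem`, and the class and field names are hypothetical.

```java
import java.io.IOException;
import java.io.InputStream;

// Pure-Java analogue of the DRILL-5790 fix: the reader takes a
// stream-opening abstraction (here StreamSource, standing in for
// Hadoop's FileSystem) instead of calling new FileInputStream(path),
// so any backing store can be plugged in by the caller.
public class ReaderSketch {

    /** Stand-in for org.apache.hadoop.fs.FileSystem#open. */
    interface StreamSource {
        InputStream open(String path) throws IOException;
    }

    private final StreamSource source;
    private final byte[] buffer = new byte[10];  // mirrors the quoted setup()

    ReaderSketch(StreamSource source) {
        // The source is supplied by the caller; as noted in the review,
        // the reader should not close a shared FileSystem it did not open.
        this.source = source;
    }

    /** Opens the path via the abstraction and returns bytes read into the header buffer. */
    int setup(String path) throws IOException {
        try (InputStream in = source.open(path)) {
            return in.read(buffer);
        }
    }
}
```

In Drill the `FileSystem` instance comes from the operator context, so only the `open` call changes in the real patch.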
[jira] [Updated] (DRILL-5870) Simplify creating list and map values for the row set builder
[ https://issues.apache.org/jira/browse/DRILL-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Farkas updated DRILL-5870: -- Priority: Minor (was: Major) > Simplify creating list and map values for the row set builder > - > > Key: DRILL-5870 > URL: https://issues.apache.org/jira/browse/DRILL-5870 > Project: Apache Drill > Issue Type: Improvement >Reporter: Timothy Farkas >Assignee: Timothy Farkas >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5870) Simplify creating list and map values for the row set builder
Timothy Farkas created DRILL-5870: - Summary: Simplify creating list and map values for the row set builder Key: DRILL-5870 URL: https://issues.apache.org/jira/browse/DRILL-5870 Project: Apache Drill Issue Type: Improvement Reporter: Timothy Farkas Assignee: Timothy Farkas -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (DRILL-5869) Empty maps not handled
Prasad Nagaraj Subramanya created DRILL-5869: Summary: Empty maps not handled Key: DRILL-5869 URL: https://issues.apache.org/jira/browse/DRILL-5869 Project: Apache Drill Issue Type: Bug Components: Storage - JSON Affects Versions: 1.11.0 Reporter: Prasad Nagaraj Subramanya Consider the below json - {code} {a:{}} {code} A query on the column 'a' throws NPE - {code} select a from temp.json; {code} Stack trace - {code} org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: NullPointerException Fragment 0:0 [Error Id: 7f81fa02-4b20-4401-9d18-bd901653d11d on pns182.qa.lab:31010] at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586) ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:298) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:267) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144] Caused by: java.lang.NullPointerException: null at org.apache.drill.exec.test.generated.ProjectorGen0.setup(ProjectorTemplate.java:91) ~[na:na] at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchemaFromInput(ProjectRecordBatch.java:497) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.setupNewSchema(ProjectRecordBatch.java:505) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:82) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:141) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:105) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:95) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:234) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:227) ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_144] at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_144] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) ~[hadoop-common-2.7.0-mapr-1607.jar:na] at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:227) [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT] ... 4 common frames omitted {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5783) Make code generation in the TopN operator more modular and test it
[ https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201535#comment-16201535 ] ASF GitHub Bot commented on DRILL-5783: --- Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/984#discussion_r144207089 --- Diff: common/src/test/java/org/apache/drill/testutils/SubDirTestWatcher.java --- @@ -0,0 +1,108 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.drill.testutils; --- End diff -- done > Make code generation in the TopN operator more modular and test it > -- > > Key: DRILL-5783 > URL: https://issues.apache.org/jira/browse/DRILL-5783 > Project: Apache Drill > Issue Type: Improvement >Reporter: Timothy Farkas >Assignee: Timothy Farkas > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (DRILL-5783) Make code generation in the TopN operator more modular and test it
[ https://issues.apache.org/jira/browse/DRILL-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201524#comment-16201524 ] ASF GitHub Bot commented on DRILL-5783: --- Github user ilooner commented on a diff in the pull request: https://github.com/apache/drill/pull/984#discussion_r144205146 --- Diff: exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/TopN/TopNBatch.java --- @@ -335,20 +333,32 @@ private void purge() throws SchemaChangeException { logger.debug("Took {} us to purge", watch.elapsed(TimeUnit.MICROSECONDS)); } - public PriorityQueue createNewPriorityQueue(FragmentContext context, List orderings, - VectorAccessible batch, MappingSet mainMapping, MappingSet leftMapping, MappingSet rightMapping) - throws ClassTransformationException, IOException, SchemaChangeException{ -CodeGenerator cg = CodeGenerator.get(PriorityQueue.TEMPLATE_DEFINITION, context.getFunctionRegistry(), context.getOptions()); + private PriorityQueue createNewPriorityQueue(VectorAccessible batch, int limit) +throws SchemaChangeException, ClassTransformationException, IOException { +return createNewPriorityQueue(context.getOptionSet(), context.getFunctionRegistry(), context.getDrillbitContext().getCompiler(), + config.getOrderings(), batch, unionTypeEnabled, codegenDump, limit, oContext.getAllocator(), schema.getSelectionVectorMode()); + } + + public static PriorityQueue createNewPriorityQueue( +OptionSet optionSet, FunctionLookupContext functionLookupContext, CodeCompiler codeCompiler, +List orderings, VectorAccessible batch, boolean unionTypeEnabled, boolean codegenDump, +int limit, BufferAllocator allocator, SelectionVectorMode mode) + throws ClassTransformationException, IOException, SchemaChangeException { +final MappingSet mainMapping = new MappingSet((String) null, null, ClassGenerator.DEFAULT_SCALAR_MAP, ClassGenerator.DEFAULT_SCALAR_MAP); +final MappingSet leftMapping = new MappingSet("leftIndex", null, ClassGenerator.DEFAULT_SCALAR_MAP, 
ClassGenerator.DEFAULT_SCALAR_MAP); +final MappingSet rightMapping = new MappingSet("rightIndex", null, ClassGenerator.DEFAULT_SCALAR_MAP, ClassGenerator.DEFAULT_SCALAR_MAP); --- End diff -- Reverted it back to the way it was > Make code generation in the TopN operator more modular and test it > -- > > Key: DRILL-5783 > URL: https://issues.apache.org/jira/browse/DRILL-5783 > Project: Apache Drill > Issue Type: Improvement >Reporter: Timothy Farkas >Assignee: Timothy Farkas > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
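The refactoring shown in the diff above follows a general pattern: an instance method that pulled every collaborator out of a context object (`context.getFunctionRegistry()`, `context.getOptions()`, ...) becomes a static factory whose signature names each dependency explicitly, so it can be unit-tested without standing up a full fragment context. The sketch below illustrates that pattern with hypothetical stand-in types (`OptionSet`, `FunctionRegistry`, `PriorityQueueSketch`); it is not Drill's actual code.

```java
public class Main {
    // Hypothetical stand-ins for the context-provided collaborators.
    interface OptionSet { int limit(); }
    interface FunctionRegistry { String name(); }

    // Minimal result type for the sketch.
    static class PriorityQueueSketch {
        final int limit;
        final String registryName;
        PriorityQueueSketch(int limit, String registryName) {
            this.limit = limit;
            this.registryName = registryName;
        }
    }

    // The testable shape: a static factory that receives every dependency
    // through its parameter list instead of digging them out of a
    // FragmentContext-like object.
    static PriorityQueueSketch createNewPriorityQueue(OptionSet options, FunctionRegistry registry) {
        return new PriorityQueueSketch(options.limit(), registry.name());
    }

    public static void main(String[] args) {
        // In a unit test each collaborator can be a one-line fake.
        PriorityQueueSketch q = createNewPriorityQueue(() -> 10, () -> "stub-registry");
        System.out.println(q.limit + " " + q.registryName); // prints 10 stub-registry
    }
}
```

A thin instance-method overload can still delegate to the static factory for production callers, as the diff does, so call sites outside tests stay unchanged.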
[jira] [Commented] (DRILL-5864) Selecting a non-existing field from a MapR-DB JSON table fails with NPE
[ https://issues.apache.org/jira/browse/DRILL-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201521#comment-16201521 ] ASF GitHub Bot commented on DRILL-5864: --- GitHub user prasadns14 opened a pull request: https://github.com/apache/drill/pull/988 DRILL-5864: Handled projection of non-existent field in MapRDB JSON When at least one of the projected fields exists in the MapRDB JSON table, handling the non-existing fields causes no problem. Taking a cue from this observation, I included the _id field in the projected columns list if it is not already present. This way we don't have to worry about an NPE. @paul-rogers Please review You can merge this pull request into a Git repository by running: $ git pull https://github.com/prasadns14/drill DRILL-5864 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/drill/pull/988.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #988 commit ff89dcf76ff5017c33d486d6db56b16268a8abaa Author: Prasad Nagaraj Subramanya Date: 2017-10-12T06:25:28Z DRILL-5864: Handled projection of non-existent field in MapRDB JSON > Selecting a non-existing field from a MapR-DB JSON table fails with NPE > --- > > Key: DRILL-5864 > URL: https://issues.apache.org/jira/browse/DRILL-5864 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Relational Operators, Storage - MapRDB >Affects Versions: 1.12.0 >Reporter: Abhishek Girish >Assignee: Hanumath Rao Maduri > Attachments: OrderByNPE.log, OrderByNPE2.log > > > Query 1 > {code} > > select C_FIRST_NAME,C_BIRTH_COUNTRY,C_BIRTH_YEAR,C_BIRTH_MONTH,C_BIRTH_DAY > > from customer ORDER BY C_BIRTH_COUNTRY ASC, C_FIRST_NAME ASC LIMIT 10; > Error: SYSTEM ERROR: NullPointerException > (java.lang.NullPointerException) null > org.apache.drill.exec.record.SchemaUtil.coerceContainer():176 > > 
org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.convertBatch():124 > org.apache.drill.exec.physical.impl.xsort.managed.BufferedBatches.add():90 > org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():265 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():421 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():357 > > org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():302 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext():115 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.record.AbstractRecordBatch.next():119 > 
org.apache.drill.exec.record.AbstractRecordBatch.next():109 > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51 > > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134 > org.apache.drill.exec.record.AbstractRecordBatch.next():164 > org.apache.drill.exec.physical.impl.BaseRootExec.next():105 > > org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81 > org.apache.drill.exec.physical.impl.BaseRootExec.next():95 > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234 >