[jira] [Assigned] (DRILL-8442) NPE on DeltaRowGroupScan
[ https://issues.apache.org/jira/browse/DRILL-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi reassigned DRILL-8442:
-------------------------------------
    Assignee: Vova Vysotskyi

> NPE on DeltaRowGroupScan
>
>                 Key: DRILL-8442
>                 URL: https://issues.apache.org/jira/browse/DRILL-8442
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Other
>    Affects Versions: 1.21.1
>         Environment: pyspark 3.4.0
>             delta-spark 2.4.0
>             Ubuntu 22.04.2 LTS
>            Reporter: Matt Keranen
>            Assignee: Vova Vysotskyi
>            Priority: Minor
>
> SELECT * on Delta table (Parquet) throws null pointer exception:
>
> {noformat}
> 2023-06-20 18:58:19,058 [1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id 1b6e0933-dd1c-f16b-f6af-dd466d5d94f2 issued by mattk: ALTER SESSION SET `exec.query.max_rows`=1000
> 2023-06-20 18:58:19,068 [1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:0:0: State change requested AWAITING_ALLOCATION --> RUNNING
> 2023-06-20 18:58:19,068 [1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:0:0: State to report: RUNNING
> 2023-06-20 18:58:19,118 [1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:0:0: State change requested RUNNING --> FINISHED
> 2023-06-20 18:58:19,118 [1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 1b6e0933-dd1c-f16b-f6af-dd466d5d94f2:0:0: State to report: FINISHED
> 2023-06-20 18:58:19,137 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id 1b6e0933-c599-8d17-8971-5b0c2ecefac7 issued by mattk: select * from table(delta.root.`Warehouse/dbo/DeltaTestTable` (type => 'delta')) limit 5
> 2023-06-20 18:58:23,037 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-c599-8d17-8971-5b0c2ecefac7:1:1: State change requested AWAITING_ALLOCATION --> FAILED
> 2023-06-20 18:58:23,037 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-c599-8d17-8971-5b0c2ecefac7:1:0: State change requested AWAITING_ALLOCATION --> FAILED
> 2023-06-20 18:58:23,037 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-c599-8d17-8971-5b0c2ecefac7:1:1: State change requested FAILED --> FINISHED
> 2023-06-20 18:58:23,037 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-c599-8d17-8971-5b0c2ecefac7:1:0: State change requested FAILED --> FINISHED
> 2023-06-20 18:58:23,038 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:3] INFO o.a.d.e.w.fragment.FragmentExecutor - 1b6e0933-c599-8d17-8971-5b0c2ecefac7:1:3: State change requested AWAITING_ALLOCATION --> FAILED
> 2023-06-20 18:58:23,037 [1b6e0933-c599-8d17-8971-5b0c2ecefac7:frag:1:1] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: NullPointerException
> Fragment: 1:1
> Please, refer to logs for more information.
> [Error Id: c6b09027-199a-46e1-abb8-f37576c50382 on vm-etl-01:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: NullPointerException
> Fragment: 1:1
> Please, refer to logs for more information.
> [Error Id: c6b09027-199a-46e1-abb8-f37576c50382 on vm-etl-01:31010]
>     at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:688)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:392)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:244)
>     at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:359)
>     at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>     at java.base/java.lang.Thread.run(Thread.java:833)
> Caused by: com.fasterxml.jackson.databind.exc.ValueInstantiationException: Cannot construct instance of `org.apache.drill.exec.store.delta.DeltaRowGroupScan`, problem: `java.lang.NullPointerException`
>  at [Source: (String)"{
>   "pop" : "single-sender",
>   "@id" : 0,
>   "receiver-major-fragment" : 0,
>   "receiver-minor-fragment" : 0,
>   "child" : {
>     "pop" : "selection-vector-remover",
>     "@id" : 1,
>     "child" : {
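The Caused-by above shows Jackson failing while constructing DeltaRowGroupScan from the serialized plan, with a NullPointerException as the root problem. A minimal, hypothetical sketch of that failure mode (my own class names, not Drill's actual code): a deserialization-time constructor dereferences a dependency that was never supplied, so construction itself throws NPE, which Jackson would wrap in a ValueInstantiationException.

```java
// Hypothetical sketch of the failure mode in the stack trace above: a
// physical-operator-style class whose constructor dereferences a dependency
// that the deserializer never injected, so construction throws NPE.
public class RowGroupScanNpeSketch {

    // Stand-in for a format-plugin config the constructor expects to be injected.
    static class FormatConfig {
        String readerType() { return "parquet"; }
    }

    static class RowGroupScan {
        final String readerType;

        // If 'config' is null (e.g. it was not registered for injection),
        // this dereference is exactly where the NPE surfaces.
        RowGroupScan(FormatConfig config) {
            this.readerType = config.readerType();
        }
    }

    public static boolean constructionFails(FormatConfig config) {
        try {
            new RowGroupScan(config);
            return false;
        } catch (NullPointerException e) {
            return true; // mirrors the NPE wrapped by ValueInstantiationException
        }
    }

    public static void main(String[] args) {
        System.out.println(constructionFails(null));               // true
        System.out.println(constructionFails(new FormatConfig())); // false
    }
}
```

With Jackson in the picture, the constructor runs during plan deserialization on the executing fragment, which is why the error appears only at run time and not while planning.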
[jira] [Updated] (DRILL-8440) ANALYZE is not supported for group scan with format Delta
[ https://issues.apache.org/jira/browse/DRILL-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi updated DRILL-8440:
----------------------------------
    Issue Type: Improvement  (was: Bug)

> ANALYZE is not supported for group scan with format Delta
>
>                 Key: DRILL-8440
>                 URL: https://issues.apache.org/jira/browse/DRILL-8440
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Storage - Other
>    Affects Versions: 1.21.1
>         Environment: Ubuntu 22.04.2 LTS
>             openjdk version "17.0.7" 2023-04-18
>            Reporter: Matt Keranen
>            Priority: Minor
>
> *Describe the bug*
> Attempting to store table metadata in the Iceberg metastore: Delta tables that are queryable with SQL throw an error when attempting ANALYZE TABLE.
> *To Reproduce*
> {{apache drill (delta.root)> ANALYZE TABLE "path/to/table" REFRESH METADATA 'TABLE' LEVEL;}}
> *Error detail, log output or screenshots*
> {{Error: VALIDATION ERROR: ANALYZE is not supported for group scan [DeltaGroupScan [path="/path/to/table", entries=[ReadEntryWithPath [path=/path/to/table/partition/file.snappy.parquet], ...}}
> *Drill version*
> Apache Drill 1.21.1
> *Additional context*
> On the same Drillbit, ANALYZE is successful on plain Parquet tables.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (DRILL-8400) Fix pruning partitions with pushed transitive predicates
Vova Vysotskyi created DRILL-8400:
-------------------------------------
             Summary: Fix pruning partitions with pushed transitive predicates
                 Key: DRILL-8400
                 URL: https://issues.apache.org/jira/browse/DRILL-8400
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

See the {{TestHivePartitionPruning.prunePartitionsBasedOnTransitivePredicates()}} test for details.

The issue occurs for queries like this:
{code:sql}
SELECT *
FROM hive.partition_pruning_test t1
JOIN hive.partition_with_few_schemas t2
  ON t1.`d` = t2.`d` AND t1.`e` = t2.`e`
WHERE t2.`e` IS NOT NULL AND t1.`d` = 1
{code}
The expected behavior is to create additional filters based on the existing filters and join conditions. We have a {{TRANSITIVE_CLOSURE}} planning phase, which is responsible for such query transformations, but Drill pushes down filters from the WHERE condition before that phase, so the optimization is not performed.

Ideally, we should move the rules from the {{TRANSITIVE_CLOSURE}} phase to the {{LOGICAL}} phase so that the planner chooses the best plan, but that won't help until CALCITE-1048 is fixed (it is required to pull predicates when the tree has {{RelSubset}} nodes).
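The transformation the {{TRANSITIVE_CLOSURE}} phase is expected to perform can be pictured outside the planner. A toy sketch (my own names, not Drill's or Calcite's API): given the join equalities and a constant filter on one side, derive the equivalent filter for the other side, which is what would let partitions of t2 be pruned.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of transitive predicate inference: from t1.d = t2.d and
// the filter t1.d = 1, derive the additional filter t2.d = 1. Names and the
// string-keyed representation are illustrative only.
public class TransitiveClosureSketch {

    /**
     * @param joinEqualities  column -> equivalent column (e.g. t1.d -> t2.d)
     * @param constantFilters column -> constant from WHERE (e.g. t1.d -> 1)
     * @return derived constant filters on the equivalent columns
     */
    public static Map<String, Integer> derive(Map<String, String> joinEqualities,
                                              Map<String, Integer> constantFilters) {
        Map<String, Integer> derived = new HashMap<>();
        for (Map.Entry<String, String> eq : joinEqualities.entrySet()) {
            Integer constant = constantFilters.get(eq.getKey());
            if (constant != null) {
                // t2.d = 1 follows from t1.d = t2.d AND t1.d = 1
                derived.put(eq.getValue(), constant);
            }
        }
        return derived;
    }

    public static void main(String[] args) {
        Map<String, String> equalities = Map.of("t1.d", "t2.d", "t1.e", "t2.e");
        Map<String, Integer> filters = Map.of("t1.d", 1);
        System.out.println(derive(equalities, filters)); // {t2.d=1}
    }
}
```

The bug is ordering, not capability: because the WHERE filters are pushed below the join before the closure phase runs, the inputs this derivation needs are no longer visible at that point in planning.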
[jira] [Created] (DRILL-8398) Fix GitHub Actions to use proper JDK version
Vova Vysotskyi created DRILL-8398:
-------------------------------------
             Summary: Fix GitHub Actions to use proper JDK version
                 Key: DRILL-8398
                 URL: https://issues.apache.org/jira/browse/DRILL-8398
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

In one of the recent changes, the GitHub Actions `actions/setup-java@v2` step was removed; as a result, the runner's pre-installed Java version (11) was used instead of the one specified in `matrix.java`.
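A sketch of the kind of workflow step that restores the described behavior (the exact contents of Drill's workflow file are an assumption, not copied from the repository):

```yaml
jobs:
  build:
    strategy:
      matrix:
        java: [8, 11, 17]
    steps:
      - uses: actions/checkout@v3
      # Without an explicit setup-java step, the runner's pre-installed JDK
      # (11) is used regardless of matrix.java.
      - uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: ${{ matrix.java }}
```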
[jira] [Created] (DRILL-8397) Drill prints warnings to console when starting it
Vova Vysotskyi created DRILL-8397:
-------------------------------------
             Summary: Drill prints warnings to console when starting it
                 Key: DRILL-8397
                 URL: https://issues.apache.org/jira/browse/DRILL-8397
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

When starting Drill in embedded mode, it prints the following warnings:
{noformat}
11:19:55,482 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.4.5
11:19:55,499 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
11:19:55,503 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/tmp/drill/distribution/target/apache-drill-1.21.0-SNAPSHOT/apache-drill-1.21.0-SNAPSHOT/conf/logback.xml]
11:19:55,607 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - <level> element is deprecated. Near [level] on line 73
11:19:55,607 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - Please use "level" attribute within <logger> or <root> elements instead.
11:19:55,607 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - <level> element is deprecated. Near [level] on line 78
11:19:55,607 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - Please use "level" attribute within <logger> or <root> elements instead.
11:19:55,608 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - <level> element is deprecated. Near [level] on line 91
11:19:55,608 |-WARN in ch.qos.logback.classic.joran.action.LevelAction - Please use "level" attribute within <logger> or <root> elements instead.
11:19:55,652 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [STDOUT]
11:19:55,652 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
11:19:55,658 |-INFO in ch.qos.logback.core.model.processor.ImplicitModelHandler - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:19:55,689 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [QUERY]
11:19:55,689 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
11:19:55,697 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@66ac5762 - No compression will be used
11:19:55,699 |-INFO in ch.qos.logback.core.model.processor.ImplicitModelHandler - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:19:55,699 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[QUERY] - Active log file name: /tmp/drill/distribution/target/apache-drill-1.21.0-SNAPSHOT/apache-drill-1.21.0-SNAPSHOT/log/sqlline_queries.json
11:19:55,699 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[QUERY] - File property is set to [/tmp/drill/distribution/target/apache-drill-1.21.0-SNAPSHOT/apache-drill-1.21.0-SNAPSHOT/log/sqlline_queries.json]
11:19:55,700 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [FILE]
11:19:55,700 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
11:19:55,700 |-INFO in ch.qos.logback.core.rolling.FixedWindowRollingPolicy@797cf65c - No compression will be used
11:19:55,701 |-INFO in ch.qos.logback.core.model.processor.ImplicitModelHandler - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
11:19:55,701 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: /tmp/drill/distribution/target/apache-drill-1.21.0-SNAPSHOT/apache-drill-1.21.0-SNAPSHOT/log/sqlline.log
11:19:55,701 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [/tmp/drill/distribution/target/apache-drill-1.21.0-SNAPSHOT/apache-drill-1.21.0-SNAPSHOT/log/sqlline.log]
11:19:55,702 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting additivity of logger [org.apache.drill] to false
11:19:55,702 |-INFO in ch.qos.logback.classic.model.processor.LevelModelHandler - org.apache.drill level set to INFO
11:19:55,702 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [FILE] to Logger[org.apache.drill]
11:19:55,703 |-INFO in ch.qos.logback.classic.model.processor.LoggerModelHandler - Setting additivity of logger [query.logger] to false
11:19:55,703 |-INFO in ch.qos.logback.classic.model.processor.LevelModelHandler - query.logger level set to INFO
11:19:55,703 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [QUERY] to Logger[query.logger]
11:19:55,703 |-IN
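The warnings come from logback 1.4 deprecating the nested element form of setting a logger level. A sketch of the corresponding logback.xml change (the logger names and line numbers in Drill's actual file may differ):

```xml
<!-- Deprecated form that triggers the WARN messages above: -->
<logger name="org.apache.drill" additivity="false">
  <level value="info"/>
  <appender-ref ref="FILE"/>
</logger>

<!-- Form logback 1.4 expects: "level" as an attribute of the logger/root element: -->
<logger name="org.apache.drill" additivity="false" level="info">
  <appender-ref ref="FILE"/>
</logger>
```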
[jira] [Created] (DRILL-8396) Update checkstyle version
Vova Vysotskyi created DRILL-8396:
-------------------------------------
             Summary: Update checkstyle version
                 Key: DRILL-8396
                 URL: https://issues.apache.org/jira/browse/DRILL-8396
             Project: Apache Drill
          Issue Type: Task
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

Update the com.puppycrawl.tools:checkstyle version to the latest one.
[jira] [Updated] (DRILL-8381) Add support for filtered aggregate calls
[ https://issues.apache.org/jira/browse/DRILL-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi updated DRILL-8381:
----------------------------------
    Description:
Currently, Drill ignores filters for filtered aggregate calls and returns incorrect results. Here is the example query for which Drill will return incorrect results:
{code:sql}
SELECT count(n_name) FILTER(WHERE n_regionkey = 1) AS nations_count_in_1_region,
       count(n_name) FILTER(WHERE n_regionkey = 2) AS nations_count_in_2_region,
       count(n_name) FILTER(WHERE n_regionkey = 3) AS nations_count_in_3_region,
       count(n_name) FILTER(WHERE n_regionkey = 4) AS nations_count_in_4_region,
       count(n_name) FILTER(WHERE n_regionkey = 0) AS nations_count_in_0_region
FROM cp.`tpch/nation.parquet`
{code}
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 25                        | 25                        | 25                        | 25                        | 25                        |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}
But the correct result is
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 5                         | 5                         | 5                         | 5                         | 5                         |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}
Side note: The query above could be rewritten using PIVOT:
{code:sql}
SELECT `1` nations_count_in_1_region,
       `2` nations_count_in_2_region,
       `3` nations_count_in_3_region,
       `4` nations_count_in_4_region,
       `0` nations_count_in_0_region
FROM (SELECT n_name, n_regionkey FROM cp.`tpch/nation.parquet`)
PIVOT(count(n_name) FOR n_regionkey IN (0, 1, 2, 3, 4))
{code}
And will return correct results when this issue is fixed and Calcite is updated to 1.33.0

    was:
Currently, Drill ignores filters for filtered aggregate calls and returns incorrect results. Here is the example query for which Drill will return incorrect results:
{code:sql}
SELECT count(n_name) FILTER(WHERE n_regionkey = 1) AS nations_count_in_1_region,
       count(n_name) FILTER(WHERE n_regionkey = 2) AS nations_count_in_2_region,
       count(n_name) FILTER(WHERE n_regionkey = 3) AS nations_count_in_3_region,
       count(n_name) FILTER(WHERE n_regionkey = 4) AS nations_count_in_4_region,
       count(n_name) FILTER(WHERE n_regionkey = 0) AS nations_count_in_0_region
FROM cp.`tpch/nation.parquet`
{code}
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 25                        | 25                        | 25                        | 25                        | 25                        |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}
But correct result is
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 5                         | 5                         | 5                         | 5                         | 5                         |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}

> Add support for filtered aggregate calls
>
>                 Key: DRILL-8381
>                 URL: htt
[jira] [Created] (DRILL-8381) Add support for filtered aggregate calls
Vova Vysotskyi created DRILL-8381:
-------------------------------------
             Summary: Add support for filtered aggregate calls
                 Key: DRILL-8381
                 URL: https://issues.apache.org/jira/browse/DRILL-8381
             Project: Apache Drill
          Issue Type: New Feature
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

Currently, Drill ignores filters for filtered aggregate calls and returns incorrect results. Here is the example query for which Drill will return incorrect results:
{code:sql}
SELECT count(n_name) FILTER(WHERE n_regionkey = 1) AS nations_count_in_1_region,
       count(n_name) FILTER(WHERE n_regionkey = 2) AS nations_count_in_2_region,
       count(n_name) FILTER(WHERE n_regionkey = 3) AS nations_count_in_3_region,
       count(n_name) FILTER(WHERE n_regionkey = 4) AS nations_count_in_4_region,
       count(n_name) FILTER(WHERE n_regionkey = 0) AS nations_count_in_0_region
FROM cp.`tpch/nation.parquet`
{code}
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 25                        | 25                        | 25                        | 25                        | 25                        |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}
But correct result is
{noformat}
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| nations_count_in_1_region | nations_count_in_2_region | nations_count_in_3_region | nations_count_in_4_region | nations_count_in_0_region |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
| 5                         | 5                         | 5                         | 5                         | 5                         |
+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+
{noformat}
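The expected semantics of COUNT(x) FILTER (WHERE pred), where each aggregate call counts only the rows passing its own predicate, can be checked with a small sketch (toy data standing in for tpch/nation.parquet, which has 25 nations spread over 5 regions):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Toy check of COUNT(x) FILTER (WHERE pred) semantics: only rows satisfying
// the per-call predicate are counted, so with 25 rows spread evenly over
// 5 region keys, each filtered count is 5 - not 25 as Drill returns.
public class FilteredAggregateSketch {

    public static long countFilter(List<Integer> regionKeys, int region) {
        return regionKeys.stream().filter(k -> k == region).count();
    }

    public static void main(String[] args) {
        // 25 "nations" with region keys 0..4 repeating, mimicking tpch nation.
        List<Integer> keys = IntStream.range(0, 25)
                .map(i -> i % 5)
                .boxed()
                .collect(Collectors.toList());
        for (int r = 0; r < 5; r++) {
            System.out.println(countFilter(keys, r)); // 5 for every region key
        }
    }
}
```

The reported behavior is equivalent to dropping the FILTER clause entirely, which collapses all five calls into a plain COUNT over all 25 rows.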
[jira] [Created] (DRILL-8380) Remove customised SqlValidatorImpl.deriveAlias
Vova Vysotskyi created DRILL-8380:
-------------------------------------
             Summary: Remove customised SqlValidatorImpl.deriveAlias
                 Key: DRILL-8380
                 URL: https://issues.apache.org/jira/browse/DRILL-8380
             Project: Apache Drill
          Issue Type: Sub-task
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
[jira] [Created] (DRILL-8379) Update Calcite to 1.33.0
Vova Vysotskyi created DRILL-8379:
-------------------------------------
             Summary: Update Calcite to 1.33.0
                 Key: DRILL-8379
                 URL: https://issues.apache.org/jira/browse/DRILL-8379
             Project: Apache Drill
          Issue Type: Task
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
[jira] [Created] (DRILL-8369) Add support for querying DeltaLake snapshots by version
Vova Vysotskyi created DRILL-8369:
-------------------------------------
             Summary: Add support for querying DeltaLake snapshots by version
                 Key: DRILL-8369
                 URL: https://issues.apache.org/jira/browse/DRILL-8369
             Project: Apache Drill
          Issue Type: Improvement
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
[jira] [Created] (DRILL-8358) Storage plugin for querying other Apache Drill clusters
Vova Vysotskyi created DRILL-8358:
-------------------------------------
             Summary: Storage plugin for querying other Apache Drill clusters
                 Key: DRILL-8358
                 URL: https://issues.apache.org/jira/browse/DRILL-8358
             Project: Apache Drill
          Issue Type: New Feature
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
[jira] [Created] (DRILL-8353) Format plugin for Delta Lake
Vova Vysotskyi created DRILL-8353:
-------------------------------------
             Summary: Format plugin for Delta Lake
                 Key: DRILL-8353
                 URL: https://issues.apache.org/jira/browse/DRILL-8353
             Project: Apache Drill
          Issue Type: New Feature
    Affects Versions: 1.20.2
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
             Fix For: Future

Implement a format plugin for Delta Lake.
[jira] [Assigned] (DRILL-8190) Mongo query: "Schema change not currently supported for schemas with complex types"
[ https://issues.apache.org/jira/browse/DRILL-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi reassigned DRILL-8190:
-------------------------------------
    Assignee: Vova Vysotskyi  (was: James Turton)

> Mongo query: "Schema change not currently supported for schemas with complex types"
>
>                 Key: DRILL-8190
>                 URL: https://issues.apache.org/jira/browse/DRILL-8190
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Server
>    Affects Versions: 1.20.0
>         Environment: RHEL 7: Linux 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 16 12:17:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
>            Reporter: Daniel Clark
>            Assignee: Vova Vysotskyi
>            Priority: Major
>             Fix For: Future
>
>         Attachments: customGrounds.gz, log_4.txt, profile_4.json
>
> I'm attempting to run this mongo query, which ran successfully in Drill 1.19, with the 1.21.0-SNAPSHOT build:
>
> SELECT `Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
>        `Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
>        `Elements`.`ElementTypeName` AS `ElementTypeName`,
>        `Elements`.`PlanID` AS `PlanID`
> FROM `mongo.grounds`.`Elements` `Elements`
>   INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts`
>     ON (`Elements`.`_id` = `Elements_Efforts`.`_id`)
> WHERE (`Elements`.`PlanID` = '1623263140')
> GROUP BY `Elements_Efforts`.`EffortTypeName`,
>          `Elements`.`ElementSubTypeName`,
>          `Elements`.`ElementTypeName`,
>          `Elements`.`PlanID`
>
> I'm getting this error message: UserRemoteException : SYSTEM ERROR: RuntimeException: Schema change not currently supported for schemas with complex types. I've attached the log, profile, and a mongodb dump containing the relevant datasets.
[jira] [Created] (DRILL-8304) Update Calcite to 1.32
Vova Vysotskyi created DRILL-8304:
-------------------------------------
             Summary: Update Calcite to 1.32
                 Key: DRILL-8304
                 URL: https://issues.apache.org/jira/browse/DRILL-8304
             Project: Apache Drill
          Issue Type: Task
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi
[jira] [Created] (DRILL-8303) Add support for inserts into JDBC storage
Vova Vysotskyi created DRILL-8303:
-------------------------------------
             Summary: Add support for inserts into JDBC storage
                 Key: DRILL-8303
                 URL: https://issues.apache.org/jira/browse/DRILL-8303
             Project: Apache Drill
          Issue Type: Sub-task
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

Allow inserting into JDBC tables and pushing down the complete insert statement where possible.
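One way to picture "pushing down the complete insert statement": rather than materializing rows in Drill and writing them batch by batch, generate a single parameterized INSERT for the target RDBMS. A hypothetical helper (my own code, not Drill's JDBC writer):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of generating the parameterized INSERT statement that
// could be handed to the JDBC source, instead of writing rows through Drill.
public class JdbcInsertSketch {

    public static String buildInsert(String table, List<String> columns) {
        String cols = String.join(", ", columns);
        // One "?" placeholder per column, to be bound via PreparedStatement.
        String params = columns.stream()
                .map(c -> "?")
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildInsert("dbo.people", List.of("id", "name")));
        // INSERT INTO dbo.people (id, name) VALUES (?, ?)
    }
}
```

When the source rows themselves come from the same RDBMS, the "complete pushdown" case collapses further, to a single INSERT ... SELECT executed entirely on the remote side.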
[jira] [Updated] (DRILL-8279) Use thick Phoenix driver
[ https://issues.apache.org/jira/browse/DRILL-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi updated DRILL-8279:
----------------------------------
    Priority: Blocker  (was: Major)

> Use thick Phoenix driver
>
>                 Key: DRILL-8279
>                 URL: https://issues.apache.org/jira/browse/DRILL-8279
>             Project: Apache Drill
>          Issue Type: Bug
>            Reporter: Vova Vysotskyi
>            Assignee: Vova Vysotskyi
>            Priority: Blocker
>
> phoenix-queryserver-client shades Avatica classes, which causes issues when starting Drill: the shaded class from the Phoenix jars is loaded first, so Drill cannot start correctly.
> To avoid that, the Phoenix thick client can be used; it will also improve query performance.
[jira] [Created] (DRILL-8279) Use thick Phoenix driver
Vova Vysotskyi created DRILL-8279:
-------------------------------------
             Summary: Use thick Phoenix driver
                 Key: DRILL-8279
                 URL: https://issues.apache.org/jira/browse/DRILL-8279
             Project: Apache Drill
          Issue Type: Bug
            Reporter: Vova Vysotskyi
            Assignee: Vova Vysotskyi

phoenix-queryserver-client shades Avatica classes, which causes issues when starting Drill: the shaded class from the Phoenix jars is loaded first, so Drill cannot start correctly.

To avoid that, the Phoenix thick client can be used; it will also improve query performance.
[jira] [Resolved] (DRILL-6371) Use FilterSetOpTransposeRule, DrillProjectSetOpTransposeRule in main logical stage
[ https://issues.apache.org/jira/browse/DRILL-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi resolved DRILL-6371.
-----------------------------------
    Resolution: Fixed

Fixed in DRILL-7523

> Use FilterSetOpTransposeRule, DrillProjectSetOpTransposeRule in main logical stage
>
>                 Key: DRILL-6371
>                 URL: https://issues.apache.org/jira/browse/DRILL-6371
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Query Planning & Optimization
>    Affects Versions: 1.13.0
>            Reporter: Vitalii Diravka
>            Assignee: Vova Vysotskyi
>            Priority: Minor
>             Fix For: Future
>
> FilterSetOpTransposeRule and ProjectSetOpTransposeRule are leveraged in DRILL-3855.
> They are used in the HepPlanner, but if they are additionally enabled in the main logical planning stage for the Volcano planner, more cases will be covered by these rules.
> For example:
> {code:java}
> WITH year_total_1
>      AS (SELECT c.r_regionkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/region.parquet` c
>          UNION ALL
>          SELECT c.n_nationkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/nation.parquet` c),
>      year_total_2
>      AS (SELECT c.r_regionkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/region.parquet` c
>          UNION ALL
>          SELECT c.n_nationkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/nation.parquet` c)
> SELECT count(t_w_firstyear.customer_id) as ct
> FROM year_total_1 t_w_firstyear,
>      year_total_2 t_w_secyear
> WHERE t_w_firstyear.year_total = t_w_secyear.year_total
>   AND t_w_firstyear.year_total > 0 and t_w_secyear.year_total > 0
> {code}
> Current plan after performing rules:
> {code:java}
> LogicalAggregate(group=[{}], ct=[COUNT($0)])
>   LogicalProject(customer_id=[$0])
>     LogicalFilter(condition=[AND(=($1, $3), >($1, 0), >($3, 0))])
>       LogicalJoin(condition=[true], joinType=[inner])
>         LogicalUnion(all=[true])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/region.parquet]])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/nation.parquet]])
>         LogicalUnion(all=[true])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/region.parquet]])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/nation.parquet]])
> {code}
> Since the LogicalFilter isn't under the LogicalUnion, FilterSetOpTransposeRule is not performed. FilterJoinRule from the main Drill logical stage pushes the LogicalFilter below, but the stage with FilterSetOpTransposeRule has already finished.
> That's why FilterSetOpTransposeRule and ProjectSetOpTransposeRule should be used in Drill's main logical stage with the Volcano planner.
> Currently, using them in the Volcano planner can cause infinite loops - CALCITE-1271 (can be resolved after solving CALCITE-2223)
[jira] [Resolved] (DRILL-4086) Query hangs in planning
[ https://issues.apache.org/jira/browse/DRILL-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi resolved DRILL-4086.
-----------------------------------
    Resolution: Fixed

Fixed in DRILL-7523

> Query hangs in planning
>
>                 Key: DRILL-4086
>                 URL: https://issues.apache.org/jira/browse/DRILL-4086
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Query Planning & Optimization
>    Affects Versions: 1.2.0
>            Reporter: boris chmiel
>            Assignee: Vova Vysotskyi
>            Priority: Major
>
> The query is stuck: it seems blocked on planning (pending).
> View:
> {noformat}
> create or replace view View1 AS (
>   SELECT
>     B1.columns[0] c0,
>     B1.columns[1] c1
>   FROM dfs.tmp.`TEST\B1.csv` B1
>   LEFT OUTER JOIN dfs.tmp.`TEST\BK.csv` BK
>     ON B1.columns[1] = BK.columns[0]
>   WHERE BK.columns[0] is null AND trim(B1.columns[1]) <> ''
> );
> {noformat}
> {noformat}
> create or replace view View2 AS (
>   SELECT
>     View1.c0,
>     View1.c1
>   FROM View1
>   LEFT OUTER JOIN dfs.tmp.`TEST\BK.csv` BK
>     ON View1.c1 = BK.columns[0]
>   WHERE BK.columns[0] is null AND trim(View1.c1) <> ''
> );
> {noformat}
> Query:
> {noformat}
> select * FROM dfs.tmp.View2
> {noformat}
> => Infinite pending
> Data set:
> {panel:title=B1}
> A;
> B;F
> C;A
> D;E
> E;
> F;C
> {panel}
> {panel:title=BK}
> A;1
> B;2
> F;4
> {panel}
[jira] [Assigned] (DRILL-6371) Use FilterSetOpTransposeRule, DrillProjectSetOpTransposeRule in main logical stage
[ https://issues.apache.org/jira/browse/DRILL-6371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vova Vysotskyi reassigned DRILL-6371:
-------------------------------------
    Assignee: Vova Vysotskyi

> Use FilterSetOpTransposeRule, DrillProjectSetOpTransposeRule in main logical stage
>
>                 Key: DRILL-6371
>                 URL: https://issues.apache.org/jira/browse/DRILL-6371
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Query Planning & Optimization
>    Affects Versions: 1.13.0
>            Reporter: Vitalii Diravka
>            Assignee: Vova Vysotskyi
>            Priority: Minor
>             Fix For: Future
>
> FilterSetOpTransposeRule and ProjectSetOpTransposeRule are leveraged in DRILL-3855.
> They are used in the HepPlanner, but if they are additionally enabled in the main logical planning stage for the Volcano planner, more cases will be covered by these rules.
> For example:
> {code:java}
> WITH year_total_1
>      AS (SELECT c.r_regionkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/region.parquet` c
>          UNION ALL
>          SELECT c.n_nationkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/nation.parquet` c),
>      year_total_2
>      AS (SELECT c.r_regionkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/region.parquet` c
>          UNION ALL
>          SELECT c.n_nationkey customer_id,
>                 1 year_total
>          FROM cp.`tpch/nation.parquet` c)
> SELECT count(t_w_firstyear.customer_id) as ct
> FROM year_total_1 t_w_firstyear,
>      year_total_2 t_w_secyear
> WHERE t_w_firstyear.year_total = t_w_secyear.year_total
>   AND t_w_firstyear.year_total > 0 and t_w_secyear.year_total > 0
> {code}
> Current plan after performing rules:
> {code:java}
> LogicalAggregate(group=[{}], ct=[COUNT($0)])
>   LogicalProject(customer_id=[$0])
>     LogicalFilter(condition=[AND(=($1, $3), >($1, 0), >($3, 0))])
>       LogicalJoin(condition=[true], joinType=[inner])
>         LogicalUnion(all=[true])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/region.parquet]])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/nation.parquet]])
>         LogicalUnion(all=[true])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/region.parquet]])
>           LogicalProject(customer_id=[$1], year_total=[1])
>             EnumerableTableScan(table=[[cp, tpch/nation.parquet]])
> {code}
> Since the LogicalFilter isn't under the LogicalUnion, FilterSetOpTransposeRule is not performed. FilterJoinRule from the main Drill logical stage pushes the LogicalFilter below, but the stage with FilterSetOpTransposeRule has already finished.
> That's why FilterSetOpTransposeRule and ProjectSetOpTransposeRule should be used in Drill's main logical stage with the Volcano planner.
> Currently, using them in the Volcano planner can cause infinite loops - CALCITE-1271 (can be resolved after solving CALCITE-2223)
[jira] [Resolved] (DRILL-8063) OOM planning a certain query
[ https://issues.apache.org/jira/browse/DRILL-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-8063. --- Resolution: Fixed Fixed in DRILL-7523 > OOM planning a certain query > > > Key: DRILL-8063 > URL: https://issues.apache.org/jira/browse/DRILL-8063 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Critical > > This looks like an infinite planning bug in Calcite. To reproduce, copy the > two referenced TPCH Parquet files from contrib/data/tpch-sample-data/parquet/ > to dfs.tmp then run the following. Uncommenting the `magic_fix` column is > just one of the changes that can be made to make the query planning succeed. > > {code:java} > select > p_brand, > -- 'foobar' as magic_fix, > case > when f1 then v1 > else null > end as `m_1`, > case > when f1 then v2 > else null > end as `m_2` > from > (select > part.`p_brand`, >sum(t.l_extendedprice) as v1, >avg(t.l_extendedprice) as v2, > true as f1 > from > dfs.tmp.`lineitem.parquet` `t` > inner join dfs.tmp.`part.parquet` part on `t`.`l_partkey` = > part.`p_partkey` > group by part.`p_brand`) as `t2`; {code} > > > Stack trace snippet: > > {code:java} > 2021-12-01 13:12:15,172 [1e58a77f-0a5d-22b5-47f6-4c51bc31dbe6:foreman] ERROR > o.a.drill.common.CatastrophicFailure - Cat > astrophic Failure Occurred, exiting. Information message: Unable to handle > out of memory condition in Foreman. 
> java.lang.OutOfMemoryError: Java heap space > at java.base/java.util.Arrays.copyOf(Arrays.java:3745) > at > java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172) > at > java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:538) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:174) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > at java.base/java.lang.String.valueOf(String.java:2951) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > ...{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
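[Editor's note] The repeating RexCall.computeDigest / appendOperands frames in the stack trace above show operand digests being re-rendered at every nesting level. A toy model of that failure mode (illustrative only, not Calcite's classes): when an expression repeats a subexpression, as the two CASE WHEN f1 ... branches in the query do, the digest length doubles per nesting level.

```java
// Toy model of the OOM in the stack trace: a digest that re-renders each
// operand at every level. "digest" is a stand-in, not Calcite's computeDigest.
public class DigestBlowup {

    // Digest of a depth-d expression whose single operand appears twice,
    // mirroring how appendOperands concatenates child digests.
    static String digest(int depth) {
        if (depth == 0) {
            return "x";
        }
        String child = digest(depth - 1);
        // The child digest is rendered twice: length ~ doubles per level.
        return "CASE(" + child + ", " + child + ")";
    }

    public static void main(String[] args) {
        // Length grows as 9 * 2^depth - 8; a few dozen levels of this
        // can exhaust a default JVM heap on their own.
        for (int d = 0; d <= 5; d++) {
            System.out.println(d + " -> " + digest(d).length());
        }
    }
}
```

This is consistent with the report that adding an unrelated `magic_fix` column changes planning enough to avoid the blowup: the problem is the shape of the nested expression, not the data.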
[jira] [Commented] (DRILL-7526) Assertion Error when only type is used with schema in table function
[ https://issues.apache.org/jira/browse/DRILL-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17576427#comment-17576427 ] Vova Vysotskyi commented on DRILL-7526: --- Fixed in DRILL-7523 > Assertion Error when only type is used with schema in table function > > > Key: DRILL-7526 > URL: https://issues.apache.org/jira/browse/DRILL-7526 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: Arina Ielchiieva >Assignee: Vova Vysotskyi >Priority: Major > > {{org.apache.drill.TestSchemaWithTableFunction}} > {noformat} > @Test > public void testWithTypeAndSchema() { > String query = "select Year from > table(dfs.`store/text/data/cars.csvh`(type=> 'text', " + > "schema=>'inline=(`Year` int)')) where Make = 'Ford'"; > queryBuilder().sql(query).print(); > } > {noformat} > {noformat} > Caused by: java.lang.AssertionError: BOOLEAN > at > org.apache.calcite.sql.type.SqlTypeExplicitPrecedenceList.compareTypePrecedence(SqlTypeExplicitPrecedenceList.java:140) > at org.apache.calcite.sql.SqlUtil.bestMatch(SqlUtil.java:687) > at > org.apache.calcite.sql.SqlUtil.filterRoutinesByTypePrecedence(SqlUtil.java:656) > at > org.apache.calcite.sql.SqlUtil.lookupSubjectRoutines(SqlUtil.java:515) > at org.apache.calcite.sql.SqlUtil.lookupRoutine(SqlUtil.java:435) > at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:240) > at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218) > at > org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5640) > at > org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5627) > at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1692) > at > org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:53) > at > 
org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1009) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:969) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3129) > at > org.apache.drill.exec.planner.sql.conversion.DrillValidator.validateFrom(DrillValidator.java:63) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3111) > at > org.apache.drill.exec.planner.sql.conversion.DrillValidator.validateFrom(DrillValidator.java:63) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3383) > at > org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) > at > org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1009) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:969) > at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:216) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:944) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:651) > at > org.apache.drill.exec.planner.sql.conversion.SqlConverter.validate(SqlConverter.java:189) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode(DefaultSqlHandler.java:648) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:196) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:170) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:283) > at > 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:163) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:128) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:93) > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:590) > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:275) > ... 1 more > {noformat} > Note: when other format options are used or schema is used alone, everything > works fine. > See test examples: > org.apache.drill.TestSchemaWithTableFunc
[jira] [Resolved] (DRILL-7526) Assertion Error when only type is used with schema in table function
[ https://issues.apache.org/jira/browse/DRILL-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-7526. --- Resolution: Fixed > Assertion Error when only type is used with schema in table function > > > Key: DRILL-7526 > URL: https://issues.apache.org/jira/browse/DRILL-7526 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.16.0 >Reporter: Arina Ielchiieva >Assignee: Vova Vysotskyi >Priority: Major > > {{org.apache.drill.TestSchemaWithTableFunction}} > {noformat} > @Test > public void testWithTypeAndSchema() { > String query = "select Year from > table(dfs.`store/text/data/cars.csvh`(type=> 'text', " + > "schema=>'inline=(`Year` int)')) where Make = 'Ford'"; > queryBuilder().sql(query).print(); > } > {noformat} > {noformat} > Caused by: java.lang.AssertionError: BOOLEAN > at > org.apache.calcite.sql.type.SqlTypeExplicitPrecedenceList.compareTypePrecedence(SqlTypeExplicitPrecedenceList.java:140) > at org.apache.calcite.sql.SqlUtil.bestMatch(SqlUtil.java:687) > at > org.apache.calcite.sql.SqlUtil.filterRoutinesByTypePrecedence(SqlUtil.java:656) > at > org.apache.calcite.sql.SqlUtil.lookupSubjectRoutines(SqlUtil.java:515) > at org.apache.calcite.sql.SqlUtil.lookupRoutine(SqlUtil.java:435) > at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:240) > at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218) > at > org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5640) > at > org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5627) > at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1692) > at > org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:53) > at > org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1009) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:969) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3129) > at > org.apache.drill.exec.planner.sql.conversion.DrillValidator.validateFrom(DrillValidator.java:63) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3111) > at > org.apache.drill.exec.planner.sql.conversion.DrillValidator.validateFrom(DrillValidator.java:63) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3383) > at > org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60) > at > org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1009) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:969) > at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:216) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:944) > at > org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:651) > at > org.apache.drill.exec.planner.sql.conversion.SqlConverter.validate(SqlConverter.java:189) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateNode(DefaultSqlHandler.java:648) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.validateAndConvert(DefaultSqlHandler.java:196) > at > org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:170) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:283) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:163) > at > 
org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:128) > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:93) > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:590) > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:275) > ... 1 more > {noformat} > Note: when other format options are used or schema is used alone, everything > works fine. > See test examples: > org.apache.drill.TestSchemaWithTableFunction#testSchemaInlineWithTableProperties, > org.apach
[jira] [Resolved] (DRILL-7722) CREATE VIEW with LATERAL UNNEST creates an invalid view
[ https://issues.apache.org/jira/browse/DRILL-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-7722. --- Resolution: Fixed Fixed in DRILL-7523 > CREATE VIEW with LATERAL UNNEST creates an invalid view > --- > > Key: DRILL-7722 > URL: https://issues.apache.org/jira/browse/DRILL-7722 > Project: Apache Drill > Issue Type: Bug > Components: SQL Parser >Affects Versions: 1.17.0 >Reporter: Matevž Bradač >Assignee: Vova Vysotskyi >Priority: Blocker > > Creating a view from a query containing LATERAL UNNEST results in a view that > cannot be parsed by the engine. The generated view contains superfluous > parentheses, thus the failed parsing. > {code:bash|title=a simple JSON database} > $ cat /tmp/t.json > [{"name": "item_1", "related": ["id1"]}, {"name": "item_2", "related": > ["id1", "id2"]}, {"name": "item_3", "related": ["id2"]}] > {code} > {code:SQL|title=drill query, working} > SELECT > item.name, > relations.* > FROM dfs.tmp.`t.json` item > JOIN LATERAL( > SELECT * FROM UNNEST(item.related) i(rels) > ) relations > ON TRUE > name rels > 0 item_1 id1 > 1 item_2 id1 > 2 item_2 id2 > 3 item_3 id2 > {code} > {code:SQL|title=create a drill view from the above query} > CREATE VIEW dfs.tmp.unnested_view AS > SELECT > item.name, > relations.* > FROM dfs.tmp.`t.json` item > JOIN LATERAL( > SELECT * FROM UNNEST(item.related) i(rels) > ) relations > ON TRUE > {code} > {code:bash|title=contents of view file} > # note the extra parentheses near LATERAL and FROM > $ cat /tmp/unnested_view.view.drill > { > "name" : "unnested_view", > "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS > `item`\nINNER JOIN LATERAL((SELECT *\nFROM (UNNEST(`item`.`related`)) AS `i` > (`rels`))) AS `relations` ON TRUE", > "fields" : [ { > "name" : "name", > "type" : "ANY", > "isNullable" : true > }, { > "name" : "rels", > "type" : "ANY", > "isNullable" : true > } ], > "workspaceSchemaPath" : [ ] > } > {code} > 
{code:SQL|title=query the view} > SELECT * FROM dfs.tmp.unnested_view > PARSE ERROR: Failure parsing a view your query is dependent upon. > SQL Query: SELECT `item`.`name`, `relations`.* > FROM `dfs`.`tmp`.`t.json` AS `item` > INNER JOIN LATERAL((SELECT * > FROM (UNNEST(`item`.`related`)) AS `i` (`rels`))) AS `relations` ON TRUE > ^ > [Error Id: fd816a27-c2c5-4c2a-b6bf-173ab37eb693 ] > {code} > If the view is "fixed" by editing the generated JSON and removing the extra > parentheses, e.g. > {code:bash|title=fixed view} > $ cat /tmp/fixed_unnested_view.view.drill > { > "name" : "fixed_unnested_view", > "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS > `item`\nINNER JOIN LATERAL(SELECT *\nFROM UNNEST(`item`.`related`) AS `i` > (`rels`)) AS `relations` ON TRUE", > "fields" : [ { > "name" : "name", > "type" : "ANY", > "isNullable" : true > }, { > "name" : "rels", > "type" : "ANY", > "isNullable" : true > } ], > "workspaceSchemaPath" : [ ] > } > {code} > then querying works as expected: > {code:sql|title=fixed view query} > SELECT * FROM dfs.tmp.fixed_unnested_view > name rels > 0 item_1 id1 > 1 item_2 id1 > 2 item_2 id2 > 3 item_3 id2 > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
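[Editor's note] The resolution above points to DRILL-7523 (the Calcite upgrade) as the actual fix. As an illustration of the symptom only, a hypothetical helper that collapses the doubled parentheses ("LATERAL((SELECT ...))") which made the serialized view unparseable; this is not the real fix, which happens in the SQL unparser rather than by post-processing strings.

```java
// Illustrative only: collapse immediately doubled parentheses "((expr))" into
// "(expr)" in a serialized SQL string. The real fix lives in Calcite's
// unparser (DRILL-7523); this just demonstrates the malformed pattern.
public class ParenCollapse {

    static String collapseDoubled(String sql) {
        StringBuilder out = new StringBuilder(sql);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i + 1 < out.length(); i++) {
                if (out.charAt(i) == '(' && out.charAt(i + 1) == '(') {
                    int close = matching(out, i);
                    // Doubled only if the inner pair closes right before the outer.
                    if (close > 0 && matching(out, i + 1) == close - 1) {
                        out.deleteCharAt(close); // drop outer ')'
                        out.deleteCharAt(i);     // drop outer '('
                        changed = true;
                        break;                   // restart scan on shorter string
                    }
                }
            }
        }
        return out.toString();
    }

    // Index of the ')' matching the '(' at position open, or -1 if unbalanced.
    static int matching(CharSequence s, int open) {
        int depth = 0;
        for (int i = open; i < s.length(); i++) {
            if (s.charAt(i) == '(') depth++;
            else if (s.charAt(i) == ')' && --depth == 0) return i;
        }
        return -1;
    }
}
```

Note this handles only the doubled-paren pattern; the "fixed view" in the report also drops the single pair around UNNEST, which is a separate unparsing change.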
[jira] [Commented] (DRILL-8190) Mongo query: "Schema change not currently supported for schemas with complex types"
[ https://issues.apache.org/jira/browse/DRILL-8190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17574871#comment-17574871 ] Vova Vysotskyi commented on DRILL-8190:
---
[~clarkddc], what is the value of the option {{store.mongo.bson.record.reader}}? Could you please try running the query with this option's value changed?
> Mongo query: "Schema change not currently supported for schemas with complex types"
> -----------------------------------------------------------------------------------
>
> Key: DRILL-8190
> URL: https://issues.apache.org/jira/browse/DRILL-8190
> Project: Apache Drill
> Issue Type: Bug
> Components: Server
> Affects Versions: 1.20.0
> Environment: RHEL 7: Linux 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 16 12:17:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> Reporter: Daniel Clark
> Assignee: James Turton
> Priority: Major
> Fix For: Future
>
> Attachments: customGrounds.gz, log_4.txt, profile_4.json
>
>
> I'm attempting to run this mongo query, which ran successfully in Drill 1.19, with the 1.21.0-SNAPSHOT build.
>
> SELECT `Elements_Efforts`.`EffortTypeName` AS `EffortTypeName`,
> `Elements`.`ElementSubTypeName` AS `ElementSubTypeName`,
> `Elements`.`ElementTypeName` AS `ElementTypeName`,
> `Elements`.`PlanID` AS `PlanID`
> FROM `mongo.grounds`.`Elements` `Elements`
> INNER JOIN `mongo.grounds`.`Elements_Efforts` `Elements_Efforts` ON (`Elements`.`_id` = `Elements_Efforts`.`_id`)
> WHERE (`Elements`.`PlanID` = '1623263140')
> GROUP BY `Elements_Efforts`.`EffortTypeName`,
> `Elements`.`ElementSubTypeName`,
> `Elements`.`ElementTypeName`,
> `Elements`.`PlanID`
>
> I'm getting this error message: UserRemoteException : SYSTEM ERROR: RuntimeException: Schema change not currently supported for schemas with complex types. I've attached the log, profile, and a mongodb dump containing the relevant datasets.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (DRILL-8272) Skip MAP column without children when creating parquet tables
Vova Vysotskyi created DRILL-8272: - Summary: Skip MAP column without children when creating parquet tables Key: DRILL-8272 URL: https://issues.apache.org/jira/browse/DRILL-8272 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (DRILL-3671) UNION infinite PENDING status
[ https://issues.apache.org/jira/browse/DRILL-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568652#comment-17568652 ] Vova Vysotskyi commented on DRILL-3671:
---
Ok, so it means that updating Calcite didn't break it :) On my version, planning takes a fraction of a second, but perhaps my PC is not as heavily loaded as yours.
> UNION infinite PENDING status
> -----------------------------
>
> Key: DRILL-3671
> URL: https://issues.apache.org/jira/browse/DRILL-3671
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization
> Affects Versions: 1.1.0
> Environment: Drill embedded Win7 x64
> Reporter: boris chmiel
> Priority: Major
> Fix For: Future
>
>
> Querying a view containing more than 7 UNION clauses on the same table leads the query to remain infinitely in PENDING status. The Physical Plan is not created.
> data_balance_sheet.csv :
> {noformat}
> account|m1|m2|m3|m4|m5|m6|m7|m8|m9|m10|m11|m12
> A|3058.77|450.12|257390.92|58104.74|9376.08|109.28|13.24|2149.25|1962.30|1076.59|530.98|44918.63
> {noformat}
> {code:sql}
> SELECT columns[0] FROM dfs.tmp.`data_balance_sheet.csv`
> {code}
> =>
> {noformat}
> +--------+
> | EXPR$0 |
> +--------+
> | A      |
> +--------+
> {noformat}
> View:
> {code:sql}
> CREATE OR REPLACE VIEW dfs.tmp.view_balance_sheet AS (
> SELECT CAST(columns[0] AS Varchar(20)) account, '01' Period, CAST(columns[1] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '02' Period, CAST(columns[2] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '03' Period, CAST(columns[3] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '04' Period, CAST(columns[4] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '05' Period, CAST(columns[5] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '06' Period, CAST(columns[6] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '07' Period, CAST(columns[7] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '08' Period, CAST(columns[8] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '09' Period, CAST(columns[9] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '10' Period, CAST(columns[10] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '11' Period, CAST(columns[11] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '12' Period, CAST(columns[12] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> );
> {code}
> =>
> {noformat}
> View 'view_balance_sheet' replaced successfully in 'dfs.tmp' schema
> {noformat}
> {code:sql}
> SELECT * FROM dfs.tmp.view_balance_sheet;
> {code}
> => Nothing happens, status remains PENDING, no Physical Plan is created
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (DRILL-4086) Query hangs in planning
[ https://issues.apache.org/jira/browse/DRILL-4086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568651#comment-17568651 ] Vova Vysotskyi commented on DRILL-4086:
---
It works with the updated Calcite version and returns the following results:
{noformat}
+----+----+
| c0 | c1 |
+----+----+
| F  | C  |
| D  | E  |
+----+----+
{noformat}
> Query hangs in planning
> -----------------------
>
> Key: DRILL-4086
> URL: https://issues.apache.org/jira/browse/DRILL-4086
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization
> Affects Versions: 1.2.0
> Reporter: boris chmiel
> Priority: Major
>
> The query is stuck, seemingly blocked in planning (pending)
> View :
> {noformat}
> create or replace view View1 AS (
> SELECT
> B1.columns[0] c0,
> B1.columns[1] c1
> FROM dfs.tmp.`TEST\B1.csv` B1
> LEFT OUTER JOIN dfs.tmp.`TEST\BK.csv` BK
> ON B1.columns[1] = BK.columns[0]
> WHERE BK.columns[0] is null AND trim(B1.columns[1]) <> ''
> );
> {noformat}
> {noformat}
> create or replace view View2 AS (
> SELECT
> View1.c0,
> View1.c1
> FROM View1
> LEFT OUTER JOIN dfs.tmp.`TEST\BK.csv` BK
> ON View1.c1 = BK.columns[0]
> WHERE BK.columns[0] is null AND trim(View1.c1) <> ''
> );
> {noformat}
> Query :
> {noformat}
> select * FROM dfs.tmp.View2
> {noformat}
> => Infinite Pending
> data set :
> {panel:title=B1}
> A;
> B;F
> C;A
> D;E
> E;
> F;C
> {panel}
> {panel:title=BK}
> A;1
> B;2
> F;4
> {panel}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (DRILL-3671) UNION infinite PENDING status
[ https://issues.apache.org/jira/browse/DRILL-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568645#comment-17568645 ] Vova Vysotskyi commented on DRILL-3671:
---
Hi [~dzamo], I have checked it with the updated Calcite version, and it works fine there.
> UNION infinite PENDING status
> -----------------------------
>
> Key: DRILL-3671
> URL: https://issues.apache.org/jira/browse/DRILL-3671
> Project: Apache Drill
> Issue Type: Bug
> Components: Query Planning & Optimization
> Affects Versions: 1.1.0
> Environment: Drill embedded Win7 x64
> Reporter: boris chmiel
> Priority: Major
> Fix For: Future
>
>
> Querying a view containing more than 7 UNION clauses on the same table leads the query to remain infinitely in PENDING status. The Physical Plan is not created.
> data_balance_sheet.csv :
> {noformat}
> account|m1|m2|m3|m4|m5|m6|m7|m8|m9|m10|m11|m12
> A|3058.77|450.12|257390.92|58104.74|9376.08|109.28|13.24|2149.25|1962.30|1076.59|530.98|44918.63
> {noformat}
> {code:sql}
> SELECT columns[0] FROM dfs.tmp.`data_balance_sheet.csv`
> {code}
> =>
> {noformat}
> +--------+
> | EXPR$0 |
> +--------+
> | A      |
> +--------+
> {noformat}
> View:
> {code:sql}
> CREATE OR REPLACE VIEW dfs.tmp.view_balance_sheet AS (
> SELECT CAST(columns[0] AS Varchar(20)) account, '01' Period, CAST(columns[1] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '02' Period, CAST(columns[2] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '03' Period, CAST(columns[3] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '04' Period, CAST(columns[4] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '05' Period, CAST(columns[5] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '06' Period, CAST(columns[6] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '07' Period, CAST(columns[7] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '08' Period, CAST(columns[8] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '09' Period, CAST(columns[9] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '10' Period, CAST(columns[10] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '11' Period, CAST(columns[11] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> UNION
> SELECT CAST(columns[0] AS Varchar(20)) account, '12' Period, CAST(columns[12] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv`
> );
> {code}
> =>
> {noformat}
> View 'view_balance_sheet' replaced successfully in 'dfs.tmp' schema
> {noformat}
> {code:sql}
> SELECT * FROM dfs.tmp.view_balance_sheet;
> {code}
> => Nothing happens, status remains PENDING, no Physical Plan is created
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Updated] (DRILL-3671) UNION infinite PENDING status
[ https://issues.apache.org/jira/browse/DRILL-3671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-3671: -- Description: Querying a View containing more than 7 UNION clause on the same table, leads the query to remain infinitely in PENDING status. The Physical Plan is not created. data_balance_sheet.csv : {noformat} account|m1|m2|m3|m4|m5|m6|m7|m8|m9|m10|m11|m12 A|3058.77|450.12|257390.92|58104.74|9376.08|109.28|13.24|2149.25|1962.30|1076.59|530.98|44918.63 {noformat} {code:sql} SELECT columns[0] FROM dfs.tmp.`data_balance_sheet.csv` {code} => {noformat} +-+ | EXPR$0 | +-+ | A | +-+ {noformat} View: {code:sql} CREATE OR REPLACE VIEW dfs.tmp.view_balance_sheet AS ( SELECT CAST(columns[0] AS Varchar(20)) account, '01' Period, CAST(columns[1] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '02' Period, CAST(columns[2] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '03' Period, CAST(columns[3] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '04' Period, CAST(columns[4] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '05' Period, CAST(columns[5] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '06' Period, CAST(columns[6] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '07' Period, CAST(columns[7] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '08' Period, CAST(columns[8] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '09' Period, CAST(columns[9] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT 
CAST(columns[0] AS Varchar(20)) account, '10' Period, CAST(columns[10] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '11' Period, CAST(columns[11] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '12' Period, CAST(columns[12] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` ); {code} => {noformat} View 'view_balance_sheet' replaced successfully in 'dfs.tmp' schema {noformat} {code:sql} SELECT * FROM dfs.tmp.view_balance_sheet; {code} => Nothing appends, status remains PENDING, no Physical Plan is created was: Querying a View containing more than 7 UNION clause on the same table, leads the query to remain infinitely in PENDING status. The Physical Plan is not created. data_balance_sheet.csv : account|m1|m2|m3|m4|m5|m6|m7|m8|m9|m10|m11|m12 A|3058.77|450.12|257390.92|58104.74|9376.08|109.28|13.24|2149.25|1962.30|1076.59|530.98|44918.63 SELECT columns[0] FROM dfs.tmp.`data_balance_sheet.csv` => +-+ | EXPR$0 | +-+ | A | +-+ View: CREATE OR REPLACE VIEW dfs.tmp.view_balance_sheet AS ( SELECT CAST(columns[0] AS Varchar(20)) account, '01' Period, CAST(columns[1] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '02' Period, CAST(columns[2] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '03' Period, CAST(columns[3] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '04' Period, CAST(columns[4] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '05' Period, CAST(columns[5] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '06' Period, CAST(columns[6] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT 
CAST(columns[0] AS Varchar(20)) account, '07' Period, CAST(columns[7] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '08' Period, CAST(columns[8] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '09' Period, CAST(columns[9] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '10' Period, CAST(columns[10] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '11' Period, CAST(columns[11] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` UNION SELECT CAST(columns[0] AS Varchar(20)) account, '12' Period, CAST(columns[12] AS Varchar(20)) Val FROM dfs.tmp.`data_balance_sheet.csv` );
[jira] [Commented] (DRILL-7722) CREATE VIEW with LATERAL UNNEST creates an invalid view
[ https://issues.apache.org/jira/browse/DRILL-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17568373#comment-17568373 ] Vova Vysotskyi commented on DRILL-7722: --- [~dzamo], I have checked, and it works fine with the updated Calcite version. > CREATE VIEW with LATERAL UNNEST creates an invalid view > --- > > Key: DRILL-7722 > URL: https://issues.apache.org/jira/browse/DRILL-7722 > Project: Apache Drill > Issue Type: Bug > Components: SQL Parser >Affects Versions: 1.17.0 >Reporter: Matevž Bradač >Priority: Blocker > > Creating a view from a query containing LATERAL UNNEST results in a view that > cannot be parsed by the engine. The generated view contains superfluous > parentheses, thus the failed parsing. > {code:bash|title=a simple JSON database} > $ cat /tmp/t.json > [{"name": "item_1", "related": ["id1"]}, {"name": "item_2", "related": > ["id1", "id2"]}, {"name": "item_3", "related": ["id2"]}] > {code} > {code:SQL|title=drill query, working} > SELECT > item.name, > relations.* > FROM dfs.tmp.`t.json` item > JOIN LATERAL( > SELECT * FROM UNNEST(item.related) i(rels) > ) relations > ON TRUE > name rels > 0 item_1 id1 > 1 item_2 id1 > 2 item_2 id2 > 3 item_3 id2 > {code} > {code:SQL|title=create a drill view from the above query} > CREATE VIEW dfs.tmp.unnested_view AS > SELECT > item.name, > relations.* > FROM dfs.tmp.`t.json` item > JOIN LATERAL( > SELECT * FROM UNNEST(item.related) i(rels) > ) relations > ON TRUE > {code} > {code:bash|title=contents of view file} > # note the extra parentheses near LATERAL and FROM > $ cat /tmp/unnested_view.view.drill > { > "name" : "unnested_view", > "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS > `item`\nINNER JOIN LATERAL((SELECT *\nFROM (UNNEST(`item`.`related`)) AS `i` > (`rels`))) AS `relations` ON TRUE", > "fields" : [ { > "name" : "name", > "type" : "ANY", > "isNullable" : true > }, { > "name" : "rels", > "type" : "ANY", > "isNullable" : true > } 
], > "workspaceSchemaPath" : [ ] > } > {code} > {code:SQL|title=query the view} > SELECT * FROM dfs.tmp.unnested_view > PARSE ERROR: Failure parsing a view your query is dependent upon. > SQL Query: SELECT `item`.`name`, `relations`.* > FROM `dfs`.`tmp`.`t.json` AS `item` > INNER JOIN LATERAL((SELECT * > FROM (UNNEST(`item`.`related`)) AS `i` (`rels`))) AS `relations` ON TRUE > ^ > [Error Id: fd816a27-c2c5-4c2a-b6bf-173ab37eb693 ] > {code} > If the view is "fixed" by editing the generated JSON and removing the extra > parentheses, e.g. > {code:bash|title=fixed view} > $ cat /tmp/fixed_unnested_view.view.drill > { > "name" : "fixed_unnested_view", > "sql" : "SELECT `item`.`name`, `relations`.*\nFROM `dfs`.`tmp`.`t.json` AS > `item`\nINNER JOIN LATERAL(SELECT *\nFROM UNNEST(`item`.`related`) AS `i` > (`rels`)) AS `relations` ON TRUE", > "fields" : [ { > "name" : "name", > "type" : "ANY", > "isNullable" : true > }, { > "name" : "rels", > "type" : "ANY", > "isNullable" : true > } ], > "workspaceSchemaPath" : [ ] > } > {code} > then querying works as expected: > {code:sql|title=fixed view query} > SELECT * FROM dfs.tmp.fixed_unnested_view > name rels > 0 item_1 id1 > 1 item_2 id1 > 2 item_2 id2 > 3 item_3 id2 > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (DRILL-512) Tpch2 fails to decorrelate
[ https://issues.apache.org/jira/browse/DRILL-512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-512. -- Fix Version/s: (was: Future) Resolution: Fixed Looks like it works for the current master version > Tpch2 fails to decorrelate > -- > > Key: DRILL-512 > URL: https://issues.apache.org/jira/browse/DRILL-512 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Reporter: Jacques Nadeau >Priority: Minor > > On top of Optiq 0.6, TPCH2 fails to remove the CorrelatorRel from the logical > plan. Line 1091 of RelDecorrelator is supposed to be populating the map > between correlation variables and output positions. It doesn't look like > that is happening. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (DRILL-8063) OOM planning a certain query
[ https://issues.apache.org/jira/browse/DRILL-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8063: - Assignee: Vova Vysotskyi (was: Vitalii Diravka) > OOM planning a certain query > > > Key: DRILL-8063 > URL: https://issues.apache.org/jira/browse/DRILL-8063 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Critical > > This looks like an infinite planning bug in Calcite. To reproduce, copy the > two referenced TPCH Parquet files from contrib/data/tpch-sample-data/parquet/ > to dfs.tmp then run the following. Uncommenting the `magic_fix` column is > just one of the changes that can be made to make the query planning succeed. > > {code:java} > select > p_brand, > -- 'foobar' as magic_fix, > case > when f1 then v1 > else null > end as `m_1`, > case > when f1 then v2 > else null > end as `m_2` > from > (select > part.`p_brand`, >sum(t.l_extendedprice) as v1, >avg(t.l_extendedprice) as v2, > true as f1 > from > dfs.tmp.`lineitem.parquet` `t` > inner join dfs.tmp.`part.parquet` part on `t`.`l_partkey` = > part.`p_partkey` > group by part.`p_brand`) as `t2`; {code} > > > Stack trace snippet: > > {code:java} > 2021-12-01 13:12:15,172 [1e58a77f-0a5d-22b5-47f6-4c51bc31dbe6:foreman] ERROR > o.a.drill.common.CatastrophicFailure - Cat > astrophic Failure Occurred, exiting. Information message: Unable to handle > out of memory condition in Foreman. 
> java.lang.OutOfMemoryError: Java heap space > at java.base/java.util.Arrays.copyOf(Arrays.java:3745) > at > java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172) > at > java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:538) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:174) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > at java.base/java.lang.String.valueOf(String.java:2951) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > ...{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
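The stack trace above cycles through RexCall.computeDigest and StringBuilder.append: Calcite builds an expression's digest string by embedding the digest of every operand. A toy model (a sketch only, not Calcite code; the class and method names below are made up for illustration) shows how a digest blows up exponentially when each rewrite step duplicates a subexpression into both branches of a nested call, which is what makes heap exhaustion plausible during planning alone:

```java
// Toy model of the DRILL-8063 failure mode (not Calcite's actual code).
// If every rewrite level embeds the previous expression's digest twice
// (e.g. CASE WHEN f THEN <d> ELSE <d> END), the digest length follows
// len(n) = 2 * len(n-1) + 8, i.e. roughly doubles per nesting level, so a
// few dozen levels exhaust the heap while StringBuilder copies arrays.
public class DigestGrowth {
    public static String digest(String operand, int levels) {
        String d = operand;
        for (int i = 0; i < levels; i++) {
            // CASE WHEN f THEN <d> ELSE <d> END, abbreviated as CASE(d, d)
            d = "CASE(" + d + ", " + d + ")";
        }
        return d;
    }

    public static void main(String[] args) {
        int prev = digest("x", 0).length(); // 1 char
        for (int n = 1; n <= 20; n++) {
            int len = digest("x", n).length();
            if (len <= 2 * prev) throw new AssertionError("expected doubling");
            prev = len;
        }
        // len(20) = 9 * 2^20 - 8: already megabytes of digest for one column
        System.out.println("digest length at depth 20: " + prev);
    }
}
```

Duplicating the aggregate expressions into each CASE branch, as the query above does for v1 and v2, is exactly the kind of copying this models; the `magic_fix` column presumably perturbs planning enough to avoid the duplication.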
[jira] [Updated] (DRILL-7523) Update Calcite to 1.31.0
[ https://issues.apache.org/jira/browse/DRILL-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-7523: -- Description: Upgrade to Calcite 1.31 version. Also, there are some fixes that may help to avoid specific commit for CALCITE-3121. As discussed in CALCITE-2223, there were made additional changes for fixing this issue, so we should check whether we can remove the workaround made in DRILL-6212. Incorporate with changes made in https://issues.apache.org/jira/browse/CALCITE-3774 - remove overriding {{shouldMergeProject}} and set {{withBloat(int bloat)}} in custom rel builder and investigate whether it would help to remove {{Hook.REL_BUILDER_SIMPLIFY}} hook. Additionally, please check whether specific commit with ViewExpander may be removed due to changes made in CALCITE-2441. was: Upgrade to Calcite 1.22 version. Also, there are some fixes that may help to avoid specific commit for CALCITE-3121. As discussed in CALCITE-2223, there were made additional changes for fixing this issue, so we should check whether we can remove the workaround made in DRILL-6212. Incorporate with changes made in https://issues.apache.org/jira/browse/CALCITE-3774 - remove overriding {{shouldMergeProject}} and set {{withBloat(int bloat)}} in custom rel builder and investigate whether it would help to remove {{Hook.REL_BUILDER_SIMPLIFY}} hook. Additionally, please check whether specific commit with ViewExpander may be removed due to changes made in CALCITE-2441. > Update Calcite to 1.31.0 > > > Key: DRILL-7523 > URL: https://issues.apache.org/jira/browse/DRILL-7523 > Project: Apache Drill > Issue Type: Task >Affects Versions: 1.17.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: Future > > > Upgrade to Calcite 1.31 version. > Also, there are some fixes that may help to avoid specific commit for > CALCITE-3121. 
> As discussed in CALCITE-2223, there were made additional changes for fixing > this issue, so we should check whether we can remove the workaround made in > DRILL-6212. > Incorporate with changes made in > https://issues.apache.org/jira/browse/CALCITE-3774 - remove overriding > {{shouldMergeProject}} and set {{withBloat(int bloat)}} in custom rel builder > and investigate whether it would help to remove {{Hook.REL_BUILDER_SIMPLIFY}} > hook. > Additionally, please check whether specific commit with ViewExpander may be > removed due to changes made in CALCITE-2441. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (DRILL-7523) Update Calcite to 1.31.0
[ https://issues.apache.org/jira/browse/DRILL-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-7523: -- Summary: Update Calcite to 1.31.0 (was: Update Calcite to 1.30.0) > Update Calcite to 1.31.0 > > > Key: DRILL-7523 > URL: https://issues.apache.org/jira/browse/DRILL-7523 > Project: Apache Drill > Issue Type: Task >Affects Versions: 1.17.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: Future > > > Upgrade to Calcite 1.22 version. > Also, there are some fixes that may help to avoid specific commit for > CALCITE-3121. > As discussed in CALCITE-2223, there were made additional changes for fixing > this issue, so we should check whether we can remove the workaround made in > DRILL-6212. > Incorporate with changes made in > https://issues.apache.org/jira/browse/CALCITE-3774 - remove overriding > {{shouldMergeProject}} and set {{withBloat(int bloat)}} in custom rel builder > and investigate whether it would help to remove {{Hook.REL_BUILDER_SIMPLIFY}} > hook. > Additionally, please check whether specific commit with ViewExpander may be > removed due to changes made in CALCITE-2441. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (DRILL-8063) OOM planning a certain query
[ https://issues.apache.org/jira/browse/DRILL-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17566923#comment-17566923 ] Vova Vysotskyi commented on DRILL-8063: --- Hi [~dzamo], yes, this query works fine with the latest Calcite version. > OOM planning a certain query > > > Key: DRILL-8063 > URL: https://issues.apache.org/jira/browse/DRILL-8063 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vitalii Diravka >Priority: Critical > > This looks like an infinite planning bug in Calcite. To reproduce, copy the > two referenced TPCH Parquet files from contrib/data/tpch-sample-data/parquet/ > to dfs.tmp then run the following. Uncommenting the `magic_fix` column is > just one of the changes that can be made to make the query planning succeed. > > {code:java} > select > p_brand, > -- 'foobar' as magic_fix, > case > when f1 then v1 > else null > end as `m_1`, > case > when f1 then v2 > else null > end as `m_2` > from > (select > part.`p_brand`, >sum(t.l_extendedprice) as v1, >avg(t.l_extendedprice) as v2, > true as f1 > from > dfs.tmp.`lineitem.parquet` `t` > inner join dfs.tmp.`part.parquet` part on `t`.`l_partkey` = > part.`p_partkey` > group by part.`p_brand`) as `t2`; {code} > > > Stack trace snippet: > > {code:java} > 2021-12-01 13:12:15,172 [1e58a77f-0a5d-22b5-47f6-4c51bc31dbe6:foreman] ERROR > o.a.drill.common.CatastrophicFailure - Cat > astrophic Failure Occurred, exiting. Information message: Unable to handle > out of memory condition in Foreman. 
> java.lang.OutOfMemoryError: Java heap space > at java.base/java.util.Arrays.copyOf(Arrays.java:3745) > at > java.base/java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:172) > at > java.base/java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:538) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:174) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > at java.base/java.lang.String.valueOf(String.java:2951) > at java.base/java.lang.StringBuilder.append(StringBuilder.java:168) > at org.apache.calcite.rex.RexCall.appendOperands(RexCall.java:109) > at org.apache.calcite.rex.RexCall.computeDigest(RexCall.java:166) > at org.apache.calcite.rex.RexCall.toString(RexCall.java:183) > ...{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (DRILL-8255) Update Drill-Calcite version to include fix for CALCITE-4992
Vova Vysotskyi created DRILL-8255: - Summary: Update Drill-Calcite version to include fix for CALCITE-4992 Key: DRILL-8255 URL: https://issues.apache.org/jira/browse/DRILL-8255 Project: Apache Drill Issue Type: Task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Deleted] (DRILL-8252) لوله پلیکا چیست؟
[ https://issues.apache.org/jira/browse/DRILL-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi deleted DRILL-8252: -- > لوله پلیکا چیست؟ > > > Key: DRILL-8252 > URL: https://issues.apache.org/jira/browse/DRILL-8252 > Project: Apache Drill > Issue Type: Bug >Reporter: helenaa >Priority: Major > > [The deleted issue body was Persian-language advertising spam for uPVC piping, unrelated to Drill; it is omitted here.]
[jira] [Assigned] (DRILL-1162) 25 way join ended up with OOM
[ https://issues.apache.org/jira/browse/DRILL-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-1162: - Assignee: (was: Vova Vysotskyi) > 25 way join ended up with OOM > - > > Key: DRILL-1162 > URL: https://issues.apache.org/jira/browse/DRILL-1162 > Project: Apache Drill > Issue Type: Bug > Components: Execution - Flow, Query Planning & Optimization >Reporter: Rahul Kumar Challapalli >Priority: Critical > Fix For: Future > > Attachments: error.log, oom_error.log > > > git.commit.id.abbrev=e5c2da0 > The below query results in 0 results being returned > {code:sql} > select count(*) from `lineitem1.parquet` a > inner join `part.parquet` j on a.l_partkey = j.p_partkey > inner join `orders.parquet` k on a.l_orderkey = k.o_orderkey > inner join `supplier.parquet` l on a.l_suppkey = l.s_suppkey > inner join `partsupp.parquet` m on j.p_partkey = m.ps_partkey and l.s_suppkey > = m.ps_suppkey > inner join `customer.parquet` n on k.o_custkey = n.c_custkey > inner join `lineitem2.parquet` b on a.l_orderkey = b.l_orderkey > inner join `lineitem2.parquet` c on a.l_partkey = c.l_partkey > inner join `lineitem2.parquet` d on a.l_suppkey = d.l_suppkey > inner join `lineitem2.parquet` e on a.l_extendedprice = e.l_extendedprice > inner join `lineitem2.parquet` f on a.l_comment = f.l_comment > inner join `lineitem2.parquet` g on a.l_shipdate = g.l_shipdate > inner join `lineitem2.parquet` h on a.l_commitdate = h.l_commitdate > inner join `lineitem2.parquet` i on a.l_receiptdate = i.l_receiptdate > inner join `lineitem2.parquet` o on a.l_receiptdate = o.l_receiptdate > inner join `lineitem2.parquet` p on a.l_receiptdate = p.l_receiptdate > inner join `lineitem2.parquet` q on a.l_receiptdate = q.l_receiptdate > inner join `lineitem2.parquet` r on a.l_receiptdate = r.l_receiptdate > inner join `lineitem2.parquet` s on a.l_receiptdate = s.l_receiptdate > inner join `lineitem2.parquet` t on a.l_receiptdate = t.l_receiptdate > inner 
join `lineitem2.parquet` u on a.l_receiptdate = u.l_receiptdate > inner join `lineitem2.parquet` v on a.l_receiptdate = v.l_receiptdate > inner join `lineitem2.parquet` w on a.l_receiptdate = w.l_receiptdate > inner join `lineitem2.parquet` x on a.l_receiptdate = x.l_receiptdate; > {code} > However, when we remove the last 'inner join' and run the query it returns > '716372534'. Since the last inner join is similar to the ones before it, it > should match some records and return the data appropriately. > The logs indicated that it actually returned 0 results. Attached the log file. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-7523) Update Calcite to 1.30.0
[ https://issues.apache.org/jira/browse/DRILL-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-7523: -- Summary: Update Calcite to 1.30.0 (was: Update Calcite to 1.23.0) > Update Calcite to 1.30.0 > > > Key: DRILL-7523 > URL: https://issues.apache.org/jira/browse/DRILL-7523 > Project: Apache Drill > Issue Type: Task >Affects Versions: 1.17.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: Future > > > Upgrade to Calcite 1.22 version. > Also, there are some fixes that may help to avoid specific commit for > CALCITE-3121. > As discussed in CALCITE-2223, there were made additional changes for fixing > this issue, so we should check whether we can remove the workaround made in > DRILL-6212. > Incorporate with changes made in > https://issues.apache.org/jira/browse/CALCITE-3774 - remove overriding > {{shouldMergeProject}} and set {{withBloat(int bloat)}} in custom rel builder > and investigate whether it would help to remove {{Hook.REL_BUILDER_SIMPLIFY}} > hook. > Additionally, please check whether specific commit with ViewExpander may be > removed due to changes made in CALCITE-2441. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8245) Project pushdown depends on rules order and might not happen
[ https://issues.apache.org/jira/browse/DRILL-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8245: -- Parent: DRILL-7523 Issue Type: Sub-task (was: Bug) > Project pushdown depends on rules order and might not happen > > > Key: DRILL-8245 > URL: https://issues.apache.org/jira/browse/DRILL-8245 > Project: Apache Drill > Issue Type: Sub-task >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > When ProjectRemoveRule deletes the project before the rule that pushes the > project into the scan is applied, Drill converts the scan to DrillScanRel and > explicitly adds a star column, so all columns will be read. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8245) Project pushdown depends on rules order and might not happen
Vova Vysotskyi created DRILL-8245: - Summary: Project pushdown depends on rules order and might not happen Key: DRILL-8245 URL: https://issues.apache.org/jira/browse/DRILL-8245 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi When ProjectRemoveRule deletes the project before the rule that pushes the project into the scan is applied, Drill converts the scan to DrillScanRel and explicitly adds a star column, so all columns will be read. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8237) Limit is not pushed down to scan for MSSQL
[ https://issues.apache.org/jira/browse/DRILL-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8237: -- Summary: Limit is not pushed down to scan for MSSQL (was: Limit is not pushed for MSSQL) > Limit is not pushed down to scan for MSSQL > -- > > Key: DRILL-8237 > URL: https://issues.apache.org/jira/browse/DRILL-8237 > Project: Apache Drill > Issue Type: Bug >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > [~dzamo] has noticed that the following test case will fail > {code:java} > @Test > public void testLimitPushDownWithOffset() throws Exception { > String query = "select person_id, first_name from mssql.dbo.person limit > 100 offset 10"; > queryBuilder() > .sql(query) > .planMatcher() > .include("Jdbc\\(.*SELECT TOP \\(110\\)") > .include("Limit\\(") > .match(); > } > {code} > because the limit wasn't pushed down. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8237) Limit is not pushed for MSSQL
Vova Vysotskyi created DRILL-8237: - Summary: Limit is not pushed for MSSQL Key: DRILL-8237 URL: https://issues.apache.org/jira/browse/DRILL-8237 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi [~dzamo] has noticed that the following test case will fail {code:java} @Test public void testLimitPushDownWithOffset() throws Exception { String query = "select person_id, first_name from mssql.dbo.person limit 100 offset 10"; queryBuilder() .sql(query) .planMatcher() .include("Jdbc\\(.*SELECT TOP \\(110\\)") .include("Limit\\(") .match(); } {code} because the limit wasn't pushed down. -- This message was sent by Atlassian Jira (v8.20.7#820007)
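The expected plan fragment `SELECT TOP (110)` in the test above encodes the pushdown arithmetic: a dialect like MSSQL that caps rows with TOP can't express the offset, so the remote scan must produce offset + limit rows (110 for limit 100 offset 10) and Drill's own Limit operator applies the offset. A minimal sketch of that arithmetic (the helper below is hypothetical, not Drill's planner API, and rewrites SQL text only for illustration):

```java
// Hypothetical helper (not Drill's real code) showing why limit 100 offset 10
// becomes "SELECT TOP (110)": the scan is capped at offset + limit rows, and
// the offset is applied above the pushed-down scan.
public class LimitOffsetPushdown {
    public static String pushToMssql(String select, int limit, int offset) {
        int fetch = offset + limit; // rows the remote scan must return
        return select.replaceFirst("SELECT", "SELECT TOP (" + fetch + ")");
    }

    public static void main(String[] args) {
        String sql = pushToMssql(
            "SELECT person_id, first_name FROM dbo.person", 100, 10);
        System.out.println(sql); // SELECT TOP (110) person_id, first_name ...
    }
}
```

The test's second `.include("Limit\\(")` pattern matches the Drill-side Limit operator that trims the first 10 of those 110 rows.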
[jira] [Created] (DRILL-8234) Register rules only from plugins used in the query
Vova Vysotskyi created DRILL-8234: - Summary: Register rules only from plugins used in the query Key: DRILL-8234 URL: https://issues.apache.org/jira/browse/DRILL-8234 Project: Apache Drill Issue Type: Improvement Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Currently, rules from all enabled plugins are collected for query planning. This can cause issues when one of the plugins becomes invalid. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8216) Use EVF-based JSON reader for Values operator
Vova Vysotskyi created DRILL-8216: - Summary: Use EVF-based JSON reader for Values operator Key: DRILL-8216 URL: https://issues.apache.org/jira/browse/DRILL-8216 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi The newer Calcite version simplifies and removes casts for literals. It causes wrong results for drillTypeOf and sqlTypeOf functions, since the Values operator uses an old JSON reader which reads integers with bigInt type. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8214) Replace EnumerableTableScan usage with LogicalTableScan
Vova Vysotskyi created DRILL-8214: - Summary: Replace EnumerableTableScan usage with LogicalTableScan Key: DRILL-8214 URL: https://issues.apache.org/jira/browse/DRILL-8214 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi The newer Calcite version returns LogicalTableScan instead of EnumerableTableScan in RelOptTableImpl, so Drill shouldn't rely on this class either and should use LogicalTableScan where possible to avoid planning issues. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8213) Replace deprecated RelNode.getRows with RelNode.estimateRowCount
Vova Vysotskyi created DRILL-8213: - Summary: Replace deprecated RelNode.getRows with RelNode.estimateRowCount Key: DRILL-8213 URL: https://issues.apache.org/jira/browse/DRILL-8213 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi In the newer Calcite version RelNode.getRows was removed, so replacing its usage with RelNode.estimateRowCount -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8212) Join queries fail with StackOverflowError
Vova Vysotskyi created DRILL-8212: - Summary: Join queries fail with StackOverflowError Key: DRILL-8212 URL: https://issues.apache.org/jira/browse/DRILL-8212 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi With the newer Calcite version, some join queries fail with StackOverflowError. In the new version, Calcite uses selectivity code for computing cost for some conditions. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8211) Replace deprecated RelNode.getChildExps with Project.getProjects
Vova Vysotskyi created DRILL-8211: - Summary: Replace deprecated RelNode.getChildExps with Project.getProjects Key: DRILL-8211 URL: https://issues.apache.org/jira/browse/DRILL-8211 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi In the newer Calcite version RelNode.getChildExps was removed, so replacing it with Project.getProjects -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8210) Add substring convertlet
Vova Vysotskyi created DRILL-8210: - Summary: Add substring convertlet Key: DRILL-8210 URL: https://issues.apache.org/jira/browse/DRILL-8210 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Newer Calcite requires adding a convertlet for substring to avoid using ReflectiveConvertletTable; otherwise, queries that use substring will fail with {noformat} Caused by: java.lang.IllegalArgumentException: argument type mismatch at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.calcite.sql2rel.ReflectiveConvertletTable.lambda$registerOpTypeMethod$3(ReflectiveConvertletTable.java:140) at org.apache.calcite.sql2rel.SqlNodeToRexConverterImpl.convertCall(SqlNodeToRexConverterImpl.java:62) at org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:5352) at org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:4547) at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161) at org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.convertExpression(SqlToRelConverter.java:5180) {noformat} -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8209) Introduce rule for converting join with distinct input to semi-join
Vova Vysotskyi created DRILL-8209: - Summary: Introduce rule for converting join with distinct input to semi-join Key: DRILL-8209 URL: https://issues.apache.org/jira/browse/DRILL-8209 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Newer Calcite changed the order of applying rules. AggregateRemoveRule is applied before SemiJoinRule, so SemiJoinRule cannot be applied later, since aggregate is pruned from planning. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8208) Create builder for SqlSelect
[ https://issues.apache.org/jira/browse/DRILL-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8208: -- Description: Newer Calcite version adds more fields to the constructor of SqlSelect. In most cases, nulls are passed for these new arguments. Using builder will reduce the number of places where it should be added. (was: Never Calcite version adds more fields to the constructor of SqlSelect. In most cases, nulls are passed for these new arguments. Using builder will reduce the number of places where it should be added.) > Create builder for SqlSelect > > > Key: DRILL-8208 > URL: https://issues.apache.org/jira/browse/DRILL-8208 > Project: Apache Drill > Issue Type: Sub-task >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > Newer Calcite version adds more fields to the constructor of SqlSelect. In > most cases, nulls are passed for these new arguments. Using builder will > reduce the number of places where it should be added. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (DRILL-8208) Create builder for SqlSelect
Vova Vysotskyi created DRILL-8208: - Summary: Create builder for SqlSelect Key: DRILL-8208 URL: https://issues.apache.org/jira/browse/DRILL-8208 Project: Apache Drill Issue Type: Sub-task Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Newer Calcite version adds more fields to the constructor of SqlSelect. In most cases, nulls are passed for these new arguments. Using builder will reduce the number of places where it should be added. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8035) Update Janino to 3.1.7 version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8035: -- Description: Drill uses 3.0.11 Janino version. The latest one is [3.1.7|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.7] (was: Drill uses 3.0.11 Janino version. The latest one is [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6]) > Update Janino to 3.1.7 version > -- > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Wish > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > > Drill uses 3.0.11 Janino version. The latest one is > [3.1.7|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.7] -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (DRILL-8035) Update Janino to 3.1.7 version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8035: -- Summary: Update Janino to 3.1.7 version (was: Update Janino to 3.1.6 version) > Update Janino to 3.1.7 version > -- > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Wish > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > > Drill uses 3.0.11 Janino version. The latest one is > [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6] -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (DRILL-8035) Update Janino to 3.1.6 version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8035: - Assignee: Vova Vysotskyi (was: Vitalii Diravka) > Update Janino to 3.1.6 version > -- > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Wish > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > > Drill uses 3.0.11 Janino version. The latest one is > [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6] -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Assigned] (DRILL-8013) Drill attempts to push "$SUM0" to JDBC storage plugin for AVG
[ https://issues.apache.org/jira/browse/DRILL-8013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8013: - Assignee: Vova Vysotskyi > Drill attempts to push "$SUM0" to JDBC storage plugin for AVG > - > > Key: DRILL-8013 > URL: https://issues.apache.org/jira/browse/DRILL-8013 > Project: Apache Drill > Issue Type: Bug > Components: Storage - JDBC >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Major > > When running a query that includes the AVG aggregate against a JDBC data > source, Drill transforms AVG into SUM0 / COUNT. COUNT is pushed down to the > source correctly but SUM0 is pushed down as "$SUM0" instead of SUM, causing a > syntax error. More info is visible at the link below. > > http://mail-archives.apache.org/mod_mbox/drill-user/202110.mbox/%3ccags_q5dguiisdhk7lzxeytup322+ttxlayskdxhimj0g-qs...@mail.gmail.com%3E -- This message was sent by Atlassian Jira (v8.20.1#820001)
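The described rewrite can be sketched as follows; the table and column names are hypothetical, and the exact SQL Drill emits is an assumption based on the report, not the actual generated text:
{code:sql}
-- Query submitted to Drill (hypothetical schema):
SELECT AVG(o_totalprice) FROM jdbc.db.orders;

-- Drill/Calcite plans AVG as $SUM0 / COUNT internally. When that plan is
-- pushed down, the internal "$SUM0" name leaks into the generated SQL:
SELECT `$SUM0`(`o_totalprice`), COUNT(`o_totalprice`) FROM `db`.`orders`;  -- syntax error at the source

-- Since $SUM0 differs from SUM only in returning 0 instead of NULL for an
-- empty input, the pushed-down SQL should use plain SUM instead:
SELECT SUM(`o_totalprice`), COUNT(`o_totalprice`) FROM `db`.`orders`;
{code}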
[jira] [Created] (DRILL-8192) Cassandra queries fail when enabled Mongo plugin
Vova Vysotskyi created DRILL-8192: - Summary: Cassandra queries fail when enabled Mongo plugin Key: DRILL-8192 URL: https://issues.apache.org/jira/browse/DRILL-8192 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8187) Dialect factory returns ANSI SQL dialect for BigQuery
Vova Vysotskyi created DRILL-8187: - Summary: Dialect factory returns ANSI SQL dialect for BigQuery Key: DRILL-8187 URL: https://issues.apache.org/jira/browse/DRILL-8187 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi See CALCITE-5064 for details. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8151) Add support for more ElasticSearch and Cassandra data types
Vova Vysotskyi created DRILL-8151: - Summary: Add support for more ElasticSearch and Cassandra data types Key: DRILL-8151 URL: https://issues.apache.org/jira/browse/DRILL-8151 Project: Apache Drill Issue Type: Improvement Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Deleted] (DRILL-8141) Godrej Park Retreat is a luxury apartment project coming up in the heart of Sarjapur Road, near RGA Tech Park, Bangalore East.
[ https://issues.apache.org/jira/browse/DRILL-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi deleted DRILL-8141: -- -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8137) Prevent reading union inputs after cancellation request
[ https://issues.apache.org/jira/browse/DRILL-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8137: -- Description: When running a union all query that has right-side operators like a join or aggregate, and a limit on top of the union, the query will fail when the right input should not be read because the left one has already produced the number of records required by the limit. Example of such a failing query (thanks to [~dzamo] for helping to minimize it): {code:sql} WITH foo AS (SELECT 1 AS a FROM cp.`/tpch/nation.parquet` UNION ALL SELECT 1 AS a FROM cp.`/tpch/nation.parquet` WHERE n_nationkey > (SELECT 1) ) SELECT * FROM foo LIMIT 1 {code} > Prevent reading union inputs after cancellation request > --- > > Key: DRILL-8137 > URL: https://issues.apache.org/jira/browse/DRILL-8137 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Critical > > When running a union all query that has right-side operators like a join or > aggregate, and a limit on top of the union, the query will fail when the > right input should not be read because the left one has already produced the > number of records required by the limit. > Example of such a failing query (thanks to [~dzamo] for helping to minimize it): > {code:sql} > WITH foo AS > (SELECT 1 AS a >FROM cp.`/tpch/nation.parquet` >UNION ALL SELECT 1 AS a >FROM cp.`/tpch/nation.parquet` >WHERE n_nationkey > >(SELECT 1) ) > SELECT * > FROM foo > LIMIT 1 > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8137) Prevent reading union inputs after cancellation request
[ https://issues.apache.org/jira/browse/DRILL-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8137: -- Summary: Prevent reading union inputs after cancellation request (was: Prevent reading union inputs after query cancellation) > Prevent reading union inputs after cancellation request > --- > > Key: DRILL-8137 > URL: https://issues.apache.org/jira/browse/DRILL-8137 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Critical > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8137) Prevent reading union inputs after query cancellation
Vova Vysotskyi created DRILL-8137: - Summary: Prevent reading union inputs after query cancellation Key: DRILL-8137 URL: https://issues.apache.org/jira/browse/DRILL-8137 Project: Apache Drill Issue Type: Bug Affects Versions: 1.19.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-8128) ORDER BY DESC is not working for JDBC mariadb storage plugin
[ https://issues.apache.org/jira/browse/DRILL-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17492091#comment-17492091 ] Vova Vysotskyi commented on DRILL-8128: --- DRILL-8090 is a different one. The issue there was related to MSSQL only, not to MariaDB. [~matthros], please share the full stack trace and logs with the exception, which have more info about the root cause of the issue. I've tried a similar query on MariaDB, and it worked fine. > ORDER BY DESC is not working for JDBC mariadb storage plugin > > > Key: DRILL-8128 > URL: https://issues.apache.org/jira/browse/DRILL-8128 > Project: Apache Drill > Issue Type: Bug > Components: Storage - JDBC >Affects Versions: 1.19.0 >Reporter: Matthias Rosenthaler >Priority: Major > Fix For: 1.20.0 > > Attachments: drill_error.png > > > If I try to use an ORDER BY DESC clause with my jdbc mariadb storage plugin, it > always fails with the attached error message. > ORDER BY ASC is working perfectly. > > I am using mariadb-java-client-3.0.3.jar and mariadb v{{10.5.4}} > edit: If I add a where clause, the query also works, although the result > set contains all results as if the clause were absent: > {code:java} > SELECT * FROM `sql.medat`.`measurement` as m WHERE TO_DATE( m.`begin`) >= > TO_DATE( '08.02.1970', 'dd.MM.') AND TO_DATE( m.`begin`) <= TO_DATE( > '08.02.2030', 'dd.MM.') ORDER BY `begin` DESC{code} > while this one produces the same error: > {code:java} > SELECT * FROM `sql.medat`.`measurement` as m WHERE `begin` > DATE_SUB(NOW(), > interval '1' month) ORDER BY `begin` DESC{code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-8131) Infinite planning when storage-phoenix is enabled
[ https://issues.apache.org/jira/browse/DRILL-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8131: - Assignee: Vova Vysotskyi (was: James Turton) > Infinite planning when storage-phoenix is enabled > - > > Key: DRILL-8131 > URL: https://issues.apache.org/jira/browse/DRILL-8131 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Phoenix >Affects Versions: 1.20.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Blocker > Fix For: 1.20.0 > > Attachments: phoenix-table-profile.json, phoenix-table-profile.log, > profiles-query-profile.json, profiles-query-profile.log, profiles.view.drill, > pulsar_e2e_test.view.drill > > > With a connection to Phoenix Query Server using either storage-jdbc or > storage-phoenix, two queries fail after an infinite planning loop. One query > is against the Phoenix QS (c.f. phoenix-table-* attachments), the other does > not involve Phoenix at all and queries Parquet in HDFS (c.f. profiles-query-* > attachments). Both queries go through Drill views, the definitions of which > are attached to this issue. They are both only projections. > Software versions in the environment where the bug exists: Hadoop 2, Phoenix > 4.15.0 with hbase 1.5.0 and phoenix-queryserver 1.0.0. Downgrading Drill's > phoenix-queryserver-client jar from 6.0.0 to 1.0.0 to accommodate this PQS > version does not remediate the problem. > Storage-jdbc config. > {code:java} > { > "type": "jdbc", > "driver": "org.apache.phoenix.queryserver.client.Driver", > "url": > "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF;authentication=SPNEGO;principal=drill/bit@FOO.CLUSTER;keytab=/etc/hadoop/conf/drill.keytab";, > "writerBatchSize": 1, > "enabled": true > }{code} > The same storage-jdbc config is deployed in Drill 1.16 environments which do > not exhibit this infinite planning bug. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-8126) Ignore OAuth Parameter in Storage Plugin
[ https://issues.apache.org/jira/browse/DRILL-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17488411#comment-17488411 ] Vova Vysotskyi commented on DRILL-8126: --- [~cgivre], if this issue is a blocker, please send an email about it in the release vote thread. cc [~dzamo] > Ignore OAuth Parameter in Storage Plugin > > > Key: DRILL-8126 > URL: https://issues.apache.org/jira/browse/DRILL-8126 > Project: Apache Drill > Issue Type: Bug > Components: Web Server >Affects Versions: 1.19.0 >Reporter: Charles Givre >Assignee: Charles Givre >Priority: Blocker > Fix For: 1.20.0 > > > During certain REST calls, the REST interface was throwing a 400 error due to > the `oauth` parameter. This minor fix, makes that parameter ignorable. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8058: -- Fix Version/s: 1.20.0 (was: Future) > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Labels: iceberg, storage > Fix For: 1.20.0 > > > Checked in Drill embedded the query form > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-8058. --- Resolution: Fixed Fixed in https://github.com/apache/drill/commit/493ac43af92f165f31e6f6ca3182bd1f324823e3 > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8114) Prevent applying Iceberg project on non-Iceberg tables
Vova Vysotskyi created DRILL-8114: - Summary: Prevent applying Iceberg project on non-Iceberg tables Key: DRILL-8114 URL: https://issues.apache.org/jira/browse/DRILL-8114 Project: Apache Drill Issue Type: Bug Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 When running complex queries on non-Iceberg tables, some Iceberg rules are still applied, so it might cause the whole query to fail -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8111) Remove lombok usage
Vova Vysotskyi created DRILL-8111: - Summary: Remove lombok usage Key: DRILL-8111 URL: https://issues.apache.org/jira/browse/DRILL-8111 Project: Apache Drill Issue Type: Task Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 See https://lists.apache.org/thread/gdv21h4m9omoqojhbpc4pcxtmyoqwrm6 for details. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8103) Unable to use view permissions for non-pam auth or when the view is stored on s3
Vova Vysotskyi created DRILL-8103: - Summary: Unable to use view permissions for non-pam auth or when the view is stored on s3 Key: DRILL-8103 URL: https://issues.apache.org/jira/browse/DRILL-8103 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: Future -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-6193) Latest Calcite optimized out join condition and cause "This query cannot be planned possibly due to either a cartesian join or an inequality join"
[ https://issues.apache.org/jira/browse/DRILL-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17469941#comment-17469941 ] Vova Vysotskyi commented on DRILL-6193: --- [~dzamo], it was a bug, but I'm not sure whether it is still reproducible. A nested loop join is used when the query has specific join conditions that cannot be handled by a hash or merge join; one such condition is a {{true}} literal. But, similar to this case, Drill could choose a nested loop join during planning instead of a highly performant hash join, and users would observe bad performance because of that. So disabling it by default helps to discover such issues and warns users that the query they are about to submit will use NLJ, with all its consequences. But please note that NLJ is prohibited only when no join input has a single record (which would avoid result multiplication) or when the planner does not have enough info to detect that. 
> Latest Calcite optimized out join condition and cause "This query cannot be > planned possibly due to either a cartesian join or an inequality join" > -- > > Key: DRILL-6193 > URL: https://issues.apache.org/jira/browse/DRILL-6193 > Project: Apache Drill > Issue Type: Bug > Components: Query Planning & Optimization >Affects Versions: 1.13.0 >Reporter: Chunhui Shi >Assignee: Hanumath Rao Maduri >Priority: Critical > > I got the same error on apache master's MapR profile on the tip(before Hive > upgrade) and on changeset 9e944c97ee6f6c0d1705f09d531af35deed2e310, the last > commit of Calcite upgrade with the failed query reported in functional test > but now it is on parquet file: > > {quote}SELECT L.L_QUANTITY, L.L_DISCOUNT, L.L_EXTENDEDPRICE, L.L_TAX > > FROM cp.`tpch/lineitem.parquet` L, cp.`tpch/orders.parquet` O > WHERE cast(L.L_ORDERKEY as int) = cast(O.O_ORDERKEY as int) AND > cast(L.L_LINENUMBER as int) = 7 AND cast(L.L_ORDERKEY as int) = 10208 AND > cast(O.O_ORDERKEY as int) = 10208; > {quote} > However, built Drill on commit ef0fafea214e866556fa39c902685d48a56001e1, the > commit right before Calcite upgrade commits, the same query worked. > This was caused by latest Calcite simplified the predicates and during this > process, "cast(L.L_ORDERKEY as int) = cast(O.O_ORDERKEY as int) " was > considered redundant and was removed, so the logical plan of this query is > getting an always true condition for Join: > {quote}DrillJoinRel(condition=[true], joinType=[inner]) > {quote} > While in previous version we have > {quote}DrillJoinRel(condition=[=($5, $0)], joinType=[inner]) > {quote} > > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8010) Build fails with unresolved incubator-iceberg dependency
[ https://issues.apache.org/jira/browse/DRILL-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8010: -- Fix Version/s: 1.20.0 > Build fails with unresolved incubator-iceberg dependency > > > Key: DRILL-8010 > URL: https://issues.apache.org/jira/browse/DRILL-8010 > Project: Apache Drill > Issue Type: Bug >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: 1.20.0 > > > As [~mrymar] noticed, build fails with the following error: > {noformat} > [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not > resolve dependencies for project > org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The > following artifacts could not be resolved: > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in > https://conjars.org/repo was cached in the local repository, resolution will > not be reattempted until the update interval of conjars has elapsed or > updates are forced > {noformat} > Iceberg was moved out from the incubator, so it is likely that the old > JitPack dependency was revoked, but it wasn't rebuilt because the repo URL > was changed. > The solution is to use a newer non-incubator version. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-8090) LIMIT clause is pushed down to an invalid OFFSET-FETCH clause for MS SQL Server
[ https://issues.apache.org/jira/browse/DRILL-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17465763#comment-17465763 ] Vova Vysotskyi commented on DRILL-8090: --- I've checked, and the dialect is chosen correctly. The issue here is that MS SQL Server allows FETCH and OFFSET only together with an ORDER BY clause. Though we could replace FETCH with TOP N, we still cannot handle OFFSET without ORDER BY... Ideally, we should prevent pushing down FETCH and OFFSET without ORDER BY based on the specific dialect, but we would need to extend the dialect API to be able to determine that (requires changes in Calcite). > LIMIT clause is pushed down to an invalid OFFSET-FETCH clause for MS SQL > Server > > > Key: DRILL-8090 > URL: https://issues.apache.org/jira/browse/DRILL-8090 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Major > > In MS SQL Server, ORDER BY is mandatory for using OFFSET and FETCH clauses. > Drill (or Calcite) does not include an ORDER BY clause in the SQL it > generates for a LIMIT clause. -- This message was sent by Atlassian Jira (v8.20.1#820001)
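For reference, the T-SQL constraint the comment describes can be shown directly (the table and column names below are hypothetical):
{code:sql}
-- Invalid T-SQL: OFFSET/FETCH is syntactically part of the ORDER BY clause,
-- so it cannot appear without one:
SELECT * FROM dbo.orders
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

-- Valid T-SQL: the same limit with the mandatory ORDER BY:
SELECT * FROM dbo.orders
ORDER BY order_id
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

-- TOP can express a limit without ORDER BY, but it has no OFFSET counterpart:
SELECT TOP 10 * FROM dbo.orders;
{code}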
[jira] [Assigned] (DRILL-8090) LIMIT clause is pushed down to an invalid OFFSET-FETCH clause for MS SQL Server
[ https://issues.apache.org/jira/browse/DRILL-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8090: - Assignee: Vova Vysotskyi > LIMIT clause is pushed down to an invalid OFFSET-FETCH clause for MS SQL > Server > > > Key: DRILL-8090 > URL: https://issues.apache.org/jira/browse/DRILL-8090 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.19.0 >Reporter: James Turton >Assignee: Vova Vysotskyi >Priority: Major > > In MS SQL Server, ORDER BY is mandatory for using OFFSET and FETCH clauses. > Drill (or Calcite) does not include an ORDER BY clause in the SQL it > generates for a LIMIT clause. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17455942#comment-17455942 ] Vova Vysotskyi commented on DRILL-8058: --- Nope, different one, error on the next line: org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:102) vs org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) But will be fixed in the scope of DRILL-8060 > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > 
org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8073) Add support for persistent table and storage aliases
Vova Vysotskyi created DRILL-8073: - Summary: Add support for persistent table and storage aliases Key: DRILL-8073 URL: https://issues.apache.org/jira/browse/DRILL-8073 Project: Apache Drill Issue Type: New Feature Reporter: Vova Vysotskyi -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8073) Add support for persistent table and storage aliases
[ https://issues.apache.org/jira/browse/DRILL-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8073: -- Fix Version/s: 1.20.0 > Add support for persistent table and storage aliases > > > Key: DRILL-8073 > URL: https://issues.apache.org/jira/browse/DRILL-8073 > Project: Apache Drill > Issue Type: New Feature >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: 1.20.0 > > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-8073) Add support for persistent table and storage aliases
[ https://issues.apache.org/jira/browse/DRILL-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8073: - Assignee: Vova Vysotskyi > Add support for persistent table and storage aliases > > > Key: DRILL-8073 > URL: https://issues.apache.org/jira/browse/DRILL-8073 > Project: Apache Drill > Issue Type: New Feature >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-8058) NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null
[ https://issues.apache.org/jira/browse/DRILL-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8058: - Assignee: Vova Vysotskyi > NPE: Cannot invoke "org.apache.calcite.rel.core.TableScan.getTable()" because > "scan" is null > > > Key: DRILL-8058 > URL: https://issues.apache.org/jira/browse/DRILL-8058 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Iceberg >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Labels: iceberg, storage > Fix For: Future > > > Checked in Drill embedded the query form > _TestE2EUnnestAndLateral#testMultipleBatchesLateral_WithLimitInParent_ test > case: > {code:java} > SELECT customer.c_name, avg(orders.o_totalprice) AS avgPrice FROM > dfs.`/{custom_path}/drill/exec/java-exec/target/org.apache.drill.exec.physical.impl.lateraljoin.TestE2EUnnestAndLateral/root/lateraljoin/multipleFiles` > > customer, LATERAL (SELECT t.ord.o_totalprice as o_totalprice FROM > UNNEST(customer.c_orders) t(ord) > WHERE t.ord.o_totalprice > 10 LIMIT 2) orders GROUP BY customer.c_name; > {code} > But it gives the following error: > {code:java} > Caused by: java.lang.NullPointerException: Cannot invoke > "org.apache.calcite.rel.core.TableScan.getTable()" because "scan" is null > at > org.apache.drill.exec.planner.common.DrillRelOptUtil.getDrillTable(DrillRelOptUtil.java:691) > at > org.apache.drill.exec.store.iceberg.plan.IcebergPluginImplementor.canImplement(IcebergPluginImplementor.java:101) > at > org.apache.drill.exec.store.plan.rule.PluginConverterRule.matches(PluginConverterRule.java:64) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:263) > at > org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:247) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:1566) > at > 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1840) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:848) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:864) > at > org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:92) > at > org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:329) > {code} > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8053) Reduce Docker image size
Vova Vysotskyi created DRILL-8053: - Summary: Reduce Docker image size Key: DRILL-8053 URL: https://issues.apache.org/jira/browse/DRILL-8053 Project: Apache Drill Issue Type: Bug Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 Exclude unwanted dependencies and optimize docker image -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8050) Fix license-maven-plugin unknown file extension warnings
Vova Vysotskyi created DRILL-8050: - Summary: Fix license-maven-plugin unknown file extension warnings Key: DRILL-8050 URL: https://issues.apache.org/jira/browse/DRILL-8050 Project: Apache Drill Issue Type: Bug Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 When running check for license, `license-maven-plugin` prints the following warnings: {noformat} [INFO] --- license-maven-plugin:3.0:check (default) @ drill-root --- [INFO] Checking licenses... Warning: Unknown file extension: /home/runner/work/drill/drill/.dockerignore Warning: Unknown file extension: /home/runner/work/drill/drill/contrib/storage-splunk/src/test/resources/logback-test.xml.bak Warning: Unknown file extension: /home/runner/work/drill/drill/contrib/format-httpd/src/test/resources/httpd/multiformat.access_log Warning: Unknown file extension: /home/runner/work/drill/drill/contrib/storage-jdbc/src/test/resources/mysql_config_override/mysql_override.cnf Warning: Unknown file extension: /home/runner/work/drill/drill/contrib/storage-cassandra/src/test/resources/queries.cql Warning: Unknown file extension: /home/runner/work/drill/drill/contrib/storage-druid/src/test/resources/druid/environment Warning: Unknown file extension: /home/runner/work/drill/drill/lombok.config Warning: Unknown file extension: /home/runner/work/drill/drill/hooks/push Warning: Unknown file extension: /home/runner/work/drill/drill/hooks/build Warning: Unable to find a comment style definition for some files. You may want to add a custom mapping for the relevant file extensions. {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
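A minimal sketch of one possible fix, assuming the com.mycila license-maven-plugin is the plugin in use: its {{<mapping>}} configuration associates file extensions with header comment styles. The extension keys and style names below are illustrative assumptions, not the committed change:
{code:xml}
<!-- Hypothetical mapping sketch; extension keys and styles are assumptions -->
<plugin>
  <groupId>com.mycila</groupId>
  <artifactId>license-maven-plugin</artifactId>
  <configuration>
    <mapping>
      <cnf>SCRIPT_STYLE</cnf>            <!-- mysql_override.cnf -->
      <cql>DOUBLEDASHES_STYLE</cql>      <!-- queries.cql -->
      <bak>XML_STYLE</bak>               <!-- logback-test.xml.bak -->
      <access_log>SCRIPT_STYLE</access_log>
    </mapping>
  </configuration>
</plugin>
{code}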
[jira] [Created] (DRILL-8049) Make rule names for Iceberg plugin instances unique
Vova Vysotskyi created DRILL-8049: - Summary: Make rule names for Iceberg plugin instances unique Key: DRILL-8049 URL: https://issues.apache.org/jira/browse/DRILL-8049 Project: Apache Drill Issue Type: Bug Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 As [~dzamo] noticed, when defining multiple storage plugins with Iceberg format, queries fail with the following error: {noformat} Caused by: java.lang.AssertionError: Rule's description should be unique; existing rule=VertexDrelConverterRuleICEBERG.iceberg(in:ICEBERG.iceberg,out:LOGICAL); new rule=VertexDrelConverterRuleICEBERG.iceberg(in:ICEBERG.iceberg,out:LOGICAL) at org.apache.calcite.plan.AbstractRelOptPlanner.mapRuleDescription(AbstractRelOptPlanner.java:152) at org.apache.calcite.plan.volcano.VolcanoPlanner.addRule(VolcanoPlanner.java:459) at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:315) at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:405) at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:351) at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:245) at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:308) at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:173) at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:283) at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPhysicalPlan(DrillSqlWorker.java:163) at org.apache.drill.exec.planner.sql.DrillSqlWorker.convertPlan(DrillSqlWorker.java:128) at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:93) at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:593) at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:274) ... 
1 common frames omitted {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
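The AssertionError above comes from Calcite keying planner rules by their description string, so two Iceberg plugin instances that generate identically described rules collide. A minimal, self-contained sketch of that constraint and the likely shape of the fix (folding a per-instance name into the description); the class and method names here are illustrative, not Drill's or Calcite's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of Calcite's planner behavior: AbstractRelOptPlanner
// rejects two rules whose description strings are equal.
public class RuleRegistry {
  private final Map<String, String> rulesByDescription = new HashMap<>();

  // Mimics mapRuleDescription(): registering a second rule with the same
  // description throws, which is the error reported above.
  public void addRule(String description) {
    if (rulesByDescription.putIfAbsent(description, description) != null) {
      throw new AssertionError(
          "Rule's description should be unique; existing rule=" + description);
    }
  }

  // Hypothetical fix: include the storage plugin instance name so each
  // plugin instance produces a distinct rule description.
  public static String describe(String ruleName, String pluginInstanceName) {
    return ruleName + "(" + pluginInstanceName + ")";
  }

  public static void main(String[] args) {
    RuleRegistry planner = new RuleRegistry();
    // Two plugin instances now register the "same" rule without colliding.
    planner.addRule(describe("VertexDrelConverterRule", "iceberg1"));
    planner.addRule(describe("VertexDrelConverterRule", "iceberg2"));
  }
}
```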
[jira] [Assigned] (DRILL-8026) Sqlline 1.12 upgrade
[ https://issues.apache.org/jira/browse/DRILL-8026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8026: - Assignee: Vova Vysotskyi > Sqlline 1.12 upgrade > > > Key: DRILL-8026 > URL: https://issues.apache.org/jira/browse/DRILL-8026 > Project: Apache Drill > Issue Type: Bug >Reporter: Cong Luo >Assignee: Vova Vysotskyi >Priority: Blocker > Fix For: 1.20.0 > > > Upgrade to SqlLine 1.12 once it is released > (https://github.com/julianhyde/sqlline/issues/451). > The goal of the update is to better support Apple M1. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8034) Support Java17
[ https://issues.apache.org/jira/browse/DRILL-8034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8034: -- Fix Version/s: 1.20.0 (was: Future) > Support Java17 > -- > > Key: DRILL-8034 > URL: https://issues.apache.org/jira/browse/DRILL-8034 > Project: Apache Drill > Issue Type: Wish > Components: Execution - Codegen >Affects Versions: 1.19.0 >Reporter: Vitalii Diravka >Assignee: Vova Vysotskyi >Priority: Major > Fix For: 1.20.0 > > > Drill officially supports Java 14 and can be updated to Java 15 with minimal > changes, but the latest LTS Java version is 17. We need to add support for > building Drill with JDK 17 and running it on that JVM. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8035) Update Janino to 3.1.6 version
[ https://issues.apache.org/jira/browse/DRILL-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8035: -- Parent: (was: DRILL-8034) Issue Type: Wish (was: Sub-task) > Update Janino to 3.1.6 version > -- > > Key: DRILL-8035 > URL: https://issues.apache.org/jira/browse/DRILL-8035 > Project: Apache Drill > Issue Type: Wish > Components: 1.19 >Affects Versions: Future >Reporter: Vitalii Diravka >Assignee: Vitalii Diravka >Priority: Major > > Drill currently uses Janino 3.0.11; the latest release is > [3.1.6|https://mvnrepository.com/artifact/org.codehaus.janino/janino/3.1.6]. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Assigned] (DRILL-8007) Problems with datetime in parquet files
[ https://issues.apache.org/jira/browse/DRILL-8007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi reassigned DRILL-8007: - Assignee: Vova Vysotskyi > Problems with datetime in parquet files > --- > > Key: DRILL-8007 > URL: https://issues.apache.org/jira/browse/DRILL-8007 > Project: Apache Drill > Issue Type: Bug > Components: Storage - Parquet >Affects Versions: 1.19.0 >Reporter: Bjørn Jørgensen >Assignee: Vova Vysotskyi >Priority: Major > > Hi, I filed a bug in Apache Spark about problems with datetime columns. > It looks like Apache Drill only implements TIMESTAMP_MILLIS for Parquet. > TIMESTAMP_MICROS is also part of the Parquet standard, but the read path for > this type appears to be missing in Drill. > > The bug report: > https://issues.apache.org/jira/browse/SPARK-36934 -- This message was sent by Atlassian Jira (v8.20.1#820001)
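For context on the precision gap: TIMESTAMP_MICROS is an int64 count of microseconds since the Unix epoch, so a millis-only reader needs a divide-by-1000 step somewhere on the read path. A standalone sketch of that conversion (illustrative only, not Drill's actual reader code):

```java
// Standalone illustration: Parquet TIMESTAMP_MICROS stores int64
// microseconds since the Unix epoch; TIMESTAMP_MILLIS stores milliseconds.
public class MicrosToMillis {
  // floorDiv keeps pre-epoch (negative) timestamps correct, unlike plain
  // integer division, which truncates toward zero.
  public static long microsToMillis(long micros) {
    return Math.floorDiv(micros, 1000L);
  }

  public static void main(String[] args) {
    System.out.println(microsToMillis(1_000_123L)); // prints 1000
    System.out.println(microsToMillis(-1L));        // prints -1 (pre-epoch)
  }
}
```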
[jira] [Resolved] (DRILL-8027) Format plugin for Apache Iceberg
[ https://issues.apache.org/jira/browse/DRILL-8027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-8027. --- Resolution: Fixed Fixed in https://github.com/apache/drill/commit/ec76ad05680612b84147993d66312f040430cac0 > Format plugin for Apache Iceberg > > > Key: DRILL-8027 > URL: https://issues.apache.org/jira/browse/DRILL-8027 > Project: Apache Drill > Issue Type: New Feature >Affects Versions: 1.20.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Labels: plugin > Fix For: 1.20.0 > > > Implement a format plugin for Apache Iceberg. > The plugin should be able to: > - support reading data from Iceberg tables in Parquet, Avro, and ORC formats > - push down fields used in the project > - push down supported filter expressions > - split and parallelize reading tasks > - provide a way for specifying Iceberg-specific configurations > - read specific snapshot versions if configured > - read table metadata (entries, files, history, snapshots, manifests, > partitions, etc.) > - support schema provisioning -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8042) Select star from MongoDB with aggregated pipeline fails with empty $project error
Vova Vysotskyi created DRILL-8042: - Summary: Select star from MongoDB with aggregated pipeline fails with empty $project error Key: DRILL-8042 URL: https://issues.apache.org/jira/browse/DRILL-8042 Project: Apache Drill Issue Type: Bug Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 See https://github.com/apache/drill/issues/2367 for details -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8041) Fix mongo scan spec deserialization
[ https://issues.apache.org/jira/browse/DRILL-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8041: -- Fix Version/s: 1.20.0 > Fix mongo scan spec deserialization > --- > > Key: DRILL-8041 > URL: https://issues.apache.org/jira/browse/DRILL-8041 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.20.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > Fix For: 1.20.0 > > > See https://github.com/apache/drill/issues/2355 for details. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (DRILL-8041) Fix mongo scan spec deserialization
[ https://issues.apache.org/jira/browse/DRILL-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8041: -- Affects Version/s: 1.20.0 > Fix mongo scan spec deserialization > --- > > Key: DRILL-8041 > URL: https://issues.apache.org/jira/browse/DRILL-8041 > Project: Apache Drill > Issue Type: Bug >Affects Versions: 1.20.0 >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > See https://github.com/apache/drill/issues/2355 for details. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8041) Fix mongo scan spec deserialization
Vova Vysotskyi created DRILL-8041: - Summary: Fix mongo scan spec deserialization Key: DRILL-8041 URL: https://issues.apache.org/jira/browse/DRILL-8041 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi See https://github.com/apache/drill/issues/2355 for details. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (DRILL-8027) Format plugin for Apache Iceberg
Vova Vysotskyi created DRILL-8027: - Summary: Format plugin for Apache Iceberg Key: DRILL-8027 URL: https://issues.apache.org/jira/browse/DRILL-8027 Project: Apache Drill Issue Type: New Feature Affects Versions: 1.20.0 Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Fix For: 1.20.0 Implement a format plugin for Apache Iceberg. The plugin should be able to: - support reading data from Iceberg tables in Parquet, Avro, and ORC formats - push down fields used in the project - push down supported filter expressions - split and parallelize reading tasks - provide a way for specifying Iceberg-specific configurations - read specific snapshot versions if configured - read table metadata (entries, files, history, snapshots, manifests, partitions, etc.) - support schema provisioning -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (DRILL-8010) Build fails with unresolved incubator-iceberg dependency
[ https://issues.apache.org/jira/browse/DRILL-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-8010. --- Resolution: Fixed Fixed in https://github.com/apache/drill/commit/0c9451e6720e5028e1187067cc6d1957ff998bef > Build fails with unresolved incubator-iceberg dependency > > > Key: DRILL-8010 > URL: https://issues.apache.org/jira/browse/DRILL-8010 > Project: Apache Drill > Issue Type: Bug >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > As [~mrymar] noticed, build fails with the following error: > {noformat} > [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not > resolve dependencies for project > org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The > following artifacts could not be resolved: > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in > https://conjars.org/repo was cached in the local repository, resolution will > not be reattempted until the update interval of conjars has elapsed or > updates are forced > {noformat} > Iceberg was moved out from the incubator, so it is likely that the old > JitPack dependency was revoked, but it wasn't rebuilt because the repo URL > was changed. > The solution is to use a newer non-incubator version. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (DRILL-8010) Build fails with unresolved incubator-iceberg dependency
[ https://issues.apache.org/jira/browse/DRILL-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi updated DRILL-8010: -- Description: As [~mrymar] noticed, build fails with the following error: {noformat} [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not resolve dependencies for project org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The following artifacts could not be resolved: com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in https://conjars.org/repo was cached in the local repository, resolution will not be reattempted until the update interval of conjars has elapsed or updates are forced {noformat} Iceberg was moved out from the incubator, so it is likely that the old JitPack dependency was revoked, but it wasn't rebuilt because the repo URL was changed. The solution is to use a newer non-incubator version. 
was: Error: {noformat} [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not resolve dependencies for project org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The following artifacts could not be resolved: com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in https://conjars.org/repo was cached in the local repository, resolution will not be reattempted until the update interval of conjars has elapsed or updates are forced {noformat} Iceberg was moved out from the incubator, so it is likely that the old JitPack dependency was revoked, but it wasn't rebuilt because the repo URL was changed. The solution is to use a newer non-incubator version. 
> Build fails with unresolved incubator-iceberg dependency > > > Key: DRILL-8010 > URL: https://issues.apache.org/jira/browse/DRILL-8010 > Project: Apache Drill > Issue Type: Bug >Reporter: Vova Vysotskyi >Assignee: Vova Vysotskyi >Priority: Major > > As [~mrymar] noticed, build fails with the following error: > {noformat} > [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not > resolve dependencies for project > org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The > following artifacts could not be resolved: > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, > com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find > com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in > https://conjars.org/repo was cached in the local repository, resolution will > not be reattempted until the update interval of conjars has elapsed or > updates are forced > {noformat} > Iceberg was moved out from the incubator, so it is likely that the old > JitPack dependency was revoked, but it wasn't rebuilt because the repo URL > was changed. > The solution is to use a newer non-incubator version. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (DRILL-8010) Build fails with unresolved incubator-iceberg dependency
Vova Vysotskyi created DRILL-8010: - Summary: Build fails with unresolved incubator-iceberg dependency Key: DRILL-8010 URL: https://issues.apache.org/jira/browse/DRILL-8010 Project: Apache Drill Issue Type: Bug Reporter: Vova Vysotskyi Assignee: Vova Vysotskyi Error: {noformat} [ERROR] Failed to execute goal on project drill-iceberg-metastore: Could not resolve dependencies for project org.apache.drill.metastore:drill-iceberg-metastore:jar:1.20.0-SNAPSHOT: The following artifacts could not be resolved: com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-data:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-core:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-common:jar:93d51b9, com.github.apache.incubator-iceberg:iceberg-api:jar:93d51b9: Failure to find com.github.apache.incubator-iceberg:iceberg-parquet:jar:93d51b9 in https://conjars.org/repo was cached in the local repository, resolution will not be reattempted until the update interval of conjars has elapsed or updates are forced {noformat} Iceberg was moved out from the incubator, so it is likely that the old JitPack dependency was revoked, but it wasn't rebuilt because the repo URL was changed. The solution is to use a newer non-incubator version. -- This message was sent by Atlassian Jira (v8.3.4#803005)
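The fix direction described above (moving to a non-incubator release) amounts to replacing the JitPack `com.github.apache.incubator-iceberg` coordinates with official `org.apache.iceberg` artifacts. An illustrative pom fragment only; the version shown is a placeholder, not necessarily the one chosen in the actual commit:
{code:xml}
<!-- Before (JitPack build of the incubator repo, no longer resolvable):
     com.github.apache.incubator-iceberg:iceberg-parquet:93d51b9 -->
<dependency>
  <groupId>org.apache.iceberg</groupId>
  <artifactId>iceberg-parquet</artifactId>
  <!-- placeholder release version -->
  <version>0.12.0</version>
</dependency>
{code}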
[jira] [Resolved] (DRILL-7843) Fix NPE due running TestZookeeperClient
[ https://issues.apache.org/jira/browse/DRILL-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vova Vysotskyi resolved DRILL-7843. --- Resolution: Duplicate > Fix NPE due running TestZookeeperClient > --- > > Key: DRILL-7843 > URL: https://issues.apache.org/jira/browse/DRILL-7843 > Project: Apache Drill > Issue Type: Test > Components: Tools, Build & Test >Affects Versions: 1.18.0 >Reporter: Vitalii Diravka >Priority: Minor > Fix For: Future > > > There is a recurring NPE (13 occurrences) when running the > _TestZookeeperClient_ test case:
> {code:java}
> [INFO] Running org.apache.drill.exec.coord.zk.TestZookeeperClient
> java.lang.NullPointerException
> at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:269)
> at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:251)
> at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:583)
> at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:546)
> at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:929)
> at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:178)
> at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:118)
> at org.apache.curator.test.TestingZooKeeperServer.close(TestingZooKeeperServer.java:130)
> at org.apache.curator.test.TestingServer.close(TestingServer.java:178)
> at org.apache.drill.exec.coord.zk.TestZookeeperClient.tearDown(TestZookeeperClient.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at mockit.integration.junit4.JUnit4TestRunnerDecorator.invokeExplosively(JUnit4TestRunnerDecorator.java:49)
> at mockit.integration.junit4.FakeFrameworkMethod.invokeExplosively(FakeFrameworkMethod.java:29)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeLazy(JUnitCoreWrapper.java:119)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:87)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> at org.apache.maven.s