Jinfeng Ni created DRILL-5378:
-
Summary: Put more information into SchemaChangeException when
HashJoin hit SchemaChangeException
Key: DRILL-5378
URL: https://issues.apache.org/jira/browse/DRILL-5378
Proje
Github user jinfengni commented on a diff in the pull request:
https://github.com/apache/drill/pull/792#discussion_r107808909
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/expr/EvaluationVisitor.java
---
@@ -671,8 +674,9 @@ private HoldingContainer
visitBooleanAnd(
I am working on pushing down joins to the Druid storage plugin. In my
experience, you first need to write a rule that checks, against your storage
plugin's metadata, whether the join can be pushed down; if so, you transform
the join node into a scan node that carries the query-relevant information
from the join. T
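The steps above can be sketched roughly as follows. This is a toy illustration, not Drill's actual planner code: all names here (PluginMetadata, JoinNode, ScanNode, JoinPushdownRule) are invented for the example, and Drill's real rules are Calcite planner rules.

```java
// Hypothetical sketch: consult the plugin's metadata to decide whether the
// join can be pushed down; if so, replace the join node with a scan node
// that carries the join condition down to the storage plugin.
class PluginMetadata {
  private final boolean supportsJoinPushdown;
  PluginMetadata(boolean supportsJoinPushdown) { this.supportsJoinPushdown = supportsJoinPushdown; }
  boolean supportsJoinPushdown() { return supportsJoinPushdown; }
}

class JoinNode {
  final String condition;
  JoinNode(String condition) { this.condition = condition; }
}

class ScanNode {
  final String pushedCondition;
  ScanNode(String pushedCondition) { this.pushedCondition = pushedCondition; }
}

class JoinPushdownRule {
  // Returns a scan carrying the join condition when the plugin can evaluate
  // the join itself; returns null to leave the join for Drill to execute.
  static ScanNode tryPushdown(JoinNode join, PluginMetadata meta) {
    return meta.supportsJoinPushdown() ? new ScanNode(join.condition) : null;
  }
}
```

The key design point is that the rule only fires when the metadata says the plugin can evaluate the join; otherwise the original join node is left in the plan.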
Rahul Challapalli created DRILL-5377:
Summary: Drill returns weird characters when parquet date
auto-correction is turned off
Key: DRILL-5377
URL: https://issues.apache.org/jira/browse/DRILL-5377
This seems like a reasonable feature request. It could also be expanded to
detect the underlying block size for the location being written to.
Could you file a JIRA for this?
Thanks
Kunal
From: François Méthot
Sent: Thursday, March 23, 2017 9:08:51 AM
To: de
The JDBC storage plugin does attempt to push down joins. However, the
Drill optimizer evaluates different query plans and, in doing so, may
choose an alternative that does not do a full pushdown if it estimates
that plan to be less costly. There are a number of
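The behavior described above comes down to cost-based plan selection: the planner keeps whichever alternative has the lowest estimated cost, so a full pushdown can lose to a partial plan. The sketch below is a toy illustration with made-up costs, not Drill's or Calcite's actual cost model.

```java
import java.util.Comparator;
import java.util.List;

// A candidate plan with an estimated cost (arbitrary units for illustration).
class Plan {
  final String name;
  final double estimatedCost;
  Plan(String name, double estimatedCost) { this.name = name; this.estimatedCost = estimatedCost; }
}

class Planner {
  // Pick the alternative with the lowest estimated cost, as a cost-based
  // optimizer does; a full pushdown is not chosen just because it exists.
  static Plan cheapest(List<Plan> alternatives) {
    return alternatives.stream()
        .min(Comparator.comparingDouble(p -> p.estimatedCost))
        .orElseThrow(IllegalArgumentException::new);
  }
}
```

For example, if the planner estimates a full pushdown at 120 cost units and a partial pushdown at 80, the partial plan wins even though it pushes less work to the external system.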
Hi Muhammad,
It seems the goal should be achievable for filters; I'm not familiar enough
with the code to know whether joins are currently supported, or whether this
is where you'd have to make some contributions to Drill.
The storage plugin is called at various places in the planning process, and can
Hi Bob
Thanks for the positive acknowledgement of our efforts.
We're also interested in understanding what reformatting (I'm guessing you mean
code refactoring?) you did as workarounds. As an open source project, we
look forward to contributions from the community in solving problems that
Hi Bob,
Thanks for sharing such a positive experience of the Drill 1.10 release with
the community! I'm sure that will give contributors in this community
motivation to make Drill better and faster with each new release.
If possible, please share more detail about your benchmark and
experience with 1.10.
After further investigation, Drill uses the hadoop ParquetFileWriter (
https://github.com/Parquet/parquet-mr/blob/master/parquet-hadoop/src/main/java/parquet/hadoop/ParquetFileWriter.java
).
This is where the file creation occurs, so it might be tricky after all.
However ParquetRecordWriter.java (
GitHub user arina-ielchiieva opened a pull request:
https://github.com/apache/drill/pull/794
DRILL-5375: Nested loop join: return correct result for left join
With this fix, the nested loop join correctly processes INNER and LEFT joins
with non-equality conditions.
You can merge this
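To make the fixed behavior concrete, here is a self-contained sketch (not Drill's implementation) of a nested loop LEFT join with an arbitrary, non-equality predicate. Each left row is compared against every right row; when nothing matches, the left row is still emitted, padded with null on the right side, which is exactly the LEFT-join behavior the fix addresses.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

class NestedLoopLeftJoin {
  // Illustrative nested loop LEFT join over integer columns. The predicate
  // may be any condition (e.g. l < r), not just equality.
  static List<Integer[]> join(List<Integer> left, List<Integer> right,
                              BiPredicate<Integer, Integer> condition) {
    List<Integer[]> out = new ArrayList<>();
    for (Integer l : left) {
      boolean matched = false;
      for (Integer r : right) {
        if (condition.test(l, r)) {      // non-equality conditions allowed
          out.add(new Integer[]{l, r});
          matched = true;
        }
      }
      if (!matched) {
        out.add(new Integer[]{l, null}); // null-padded row: the LEFT part
      }
    }
    return out;
  }
}
```

For example, joining left {1, 5} with right {2, 3} on the condition l < r yields (1, 2), (1, 3), and the null-padded row (5, null).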
Github user Serhii-Harnyk commented on a diff in the pull request:
https://github.com/apache/drill/pull/793#discussion_r107647154
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/planner/cost/DrillRelMdRowCount.java
---
@@ -14,35 +14,71 @@
* WITHOUT WARRANTIES OR