Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3260
Thanks @twalthr!
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3260
Hello, I updated the PR for Calcite 1.12.
Github user ex00 closed the pull request at:
https://github.com/apache/flink/pull/3033
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3012
Hi @tonycox, thanks for your reply.
>Do you have any negative cases that break this down?
Yes, I have. What is the expected result if any field in the source file is
empty, null, or defa
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3205#discussion_r100252102
--- Diff: flink-contrib/docker-flink/docker-compose.yml ---
@@ -16,21 +16,22 @@
# limitations under the License
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247845
--- Diff:
flink-java/src/main/java/org/apache/flink/api/java/io/CsvReader.java ---
@@ -351,6 +356,32 @@ public CsvReader ignoreInvalidLines
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247856
--- Diff:
flink-java/src/main/java/org/apache/flink/api/java/io/CsvReader.java ---
@@ -351,6 +356,32 @@ public CsvReader ignoreInvalidLines
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247899
--- Diff:
flink-tests/src/test/java/org/apache/flink/test/io/CsvReaderITCase.java ---
@@ -122,6 +127,80 @@ public void testValueTypes() throws Exception
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247879
--- Diff:
flink-scala/src/main/scala/org/apache/flink/api/scala/ExecutionEnvironment.scala
---
@@ -348,6 +349,47 @@ class ExecutionEnvironment(javaEnv
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247870
--- Diff:
flink-scala/src/main/scala/org/apache/flink/api/scala/ExecutionEnvironment.scala
---
@@ -348,6 +349,47 @@ class ExecutionEnvironment(javaEnv
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3012#discussion_r100247839
--- Diff:
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java
---
@@ -192,6 +193,28 @@ public void getFlatFields(String
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3012
Thanks for the work, @tonycox.
The PR looks good to me. I left a few minor comments on the PR.
What do you mean about the negative test case? If typeMap does not match the
field types in the file for
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r99778650
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
---
@@ -144,21 +150,46 @@ class
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3205
Actually, I didn't build Flink. Thanks @jgrier for your comment. My case shows
that documentation is needed :)
I think the following information will be useful for users who want to run fli
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3205
@jgrier thanks for your reply!
I tried to build the Flink image from your PR and got the following error:
>tar: invalid magic
tar: short read
The command '/bin/sh -c set -x &
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/3260
[FLINK-4604] Add support for standard deviation/variance
Add a rule to reduce standard deviation/variance functions.
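For context, the reduction such a rule performs can be illustrated with the standard algebraic identities that express variance and standard deviation via SUM and COUNT. The sketch below uses made-up names and is not code from the PR:
```scala
// Hedged illustration (not the PR's code): the algebraic identities a reduce
// rule can rely on, expressing variance and standard deviation via SUM and COUNT.
object VarianceReductionSketch {
  // population variance: SUM(x*x)/COUNT(x) - (SUM(x)/COUNT(x))^2
  def varPop(sum: Double, sumSq: Double, count: Long): Double =
    sumSq / count - (sum / count) * (sum / count)

  // sample variance: (SUM(x*x) - SUM(x)*SUM(x)/COUNT(x)) / (COUNT(x) - 1)
  def varSamp(sum: Double, sumSq: Double, count: Long): Double =
    (sumSq - sum * sum / count) / (count - 1)

  def main(args: Array[String]): Unit = {
    val xs = Seq(1.0, 2.0, 3.0, 4.0)
    val (s, sq, n) = (xs.sum, xs.map(x => x * x).sum, xs.size.toLong)
    println(math.sqrt(varPop(s, sq, n)))  // STDDEV_POP
    println(math.sqrt(varSamp(s, sq, n))) // STDDEV_SAMP
  }
}
```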
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97734069
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3149#discussion_r97730063
--- Diff:
flink-connectors/flink-hbase/src/main/java/org/apache/flink/addons/hbase/HBaseTableSchema.java
---
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3205
Hi @jgrier. Thanks for your PR!
Could you update
[README.md](https://github.com/apache/flink/blob/master/flink-contrib/docker-flink/README.md)
in accordance with your changes?
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96588722
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
---
@@ -46,6 +47,7 @@ class
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96585254
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
---
@@ -46,6 +47,7 @@ class
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96580679
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
---
@@ -46,6 +47,7 @@ class
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96397396
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/JoinITCase.scala
---
@@ -372,9 +372,163 @@ class JoinITCase
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/3033
Hi @fhueske. I updated the PR. Please take a look at my changes. Thanks!
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96359104
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/JoinITCase.scala
---
@@ -372,9 +372,163 @@ class JoinITCase
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96359044
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/JoinITCase.scala
---
@@ -372,9 +372,163 @@ class JoinITCase
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96356761
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/JoinITCase.scala
---
@@ -372,9 +372,163 @@ class JoinITCase
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96356416
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetJoinRule.scala
---
@@ -43,7 +43,7 @@ class
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r96356194
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetSingleRowJoin.scala
---
@@ -144,21 +150,46 @@ class
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3033#discussion_r93592588
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetJoinRule.scala
---
@@ -40,10 +41,17 @@ class
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/3033
[FLINK-5256] Extend DataSetSingleRowJoin to support Left and Right joins
Add support for outer joins with a single-record input via DataSetSingleRowJoin.
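As an illustration only, here is a hypothetical example (made-up table names and predicate, not taken from the PR) of the query shape this extension targets: an outer join whose right side is a single-row input such as a global aggregate.
```scala
// Hypothetical query shape (made-up tables/predicate): an outer join whose right
// side is a single-row input, here a global aggregate computed in a subquery.
object SingleRowOuterJoinSketch {
  val sqlQuery: String =
    """SELECT t.a, t.b, s.cnt
      |FROM t LEFT OUTER JOIN (SELECT COUNT(*) AS cnt FROM t2) AS s
      |  ON t.a < s.cnt""".stripMargin

  def main(args: Array[String]): Unit = println(sqlQuery)
}
```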
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2958
Thank you Fabian!
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2958
PR has been updated.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2958
Hi @fhueske and @twalthr, thanks for your comments.
I will update the PR.
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/2958
[FLINK-4704] Move Table API to org.apache.flink.table
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2840
Hi @fhueske, thanks for your review and comments!
I updated the PR.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2840
Hi,
I updated the PR, added a new rule for non-grouped aggregate data, and added
`AggregationTest` to check the query plan.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2840
Hi @StephanEwen,
I found a way to check these changes via `TableTestBase`, as @fhueske suggested:
```scala
@Test
def testAggregateQueryBatchSQL(): Unit = {
val util = batchTestUtil
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2840#discussion_r89277461
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetAggregate.scala
---
@@ -157,4 +161,41 @@ class
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2840
Hi @fhueske
>Actually, I think implementing the fix as an optimizer rule would be the
nicer solution. In that case we could transform one of the ITCases into a test
that extends the TableTestB
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/2840
[FLINK-4832] Count/Sum 0 elements
Hello.
Currently, if an `AggregateDataSet` is empty, we are unable to count or sum up
0 elements.
These changes allow getting the correct result of aggregate
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2818#discussion_r88429597
--- Diff: flink-libraries/flink-table/pom.xml ---
@@ -45,7 +45,7 @@ under the License.
org.codehaus.janino
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
@zentol, thanks for your review and comments!
I updated the tests.
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2650#discussion_r88007685
--- Diff:
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroupTest.java
---
@@ -44,4 +52,132 @@ protected QueryScopeInfo
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
I can remove ``staticCharacterFilter`` from the global test class and
initialize the filter in ``MetricRegistryTest`` as non-static. This static
filter is not applied in the ``TestReporter*`` classes i
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
Hi,
Are the latest changes correct?
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2650#discussion_r85543811
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java
---
@@ -169,19 +176,7 @@ public String
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
@StephanEwen, @zentol thanks for your comments!
Do we need to cache only the first filter, or not cache any filter at all?
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
Hello. I pushed new changes.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2650
Thanks for your review, zentol!
>oh, I'm afraid i misunderstood the code you posted in the JIRA; i assumed
it was to showcase the behavior from the perspective of a single reporter.
In
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/2650
[FLINK-4563] [metrics] scope caching not adjusted for multiple reporters
Hello.
This is an implementation of FLINK-4563.
In ``AbstractMetricGroup.java`` I added characterFilter and
firstReporterIndex
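As context, a loose sketch with assumed names (`ScopeStringCache`, not the PR's characterFilter/firstReporterIndex fields in `AbstractMetricGroup`) of the requirement behind FLINK-4563: a scope string computed with one reporter's character filter and delimiter must not be reused for a reporter with different settings. The map-based cache below is just one way to express that, not necessarily the PR's approach.
```scala
// Loose sketch (assumed names, not Flink's AbstractMetricGroup): a scope string
// cached for one character filter/delimiter pair is only reused for that pair.
trait CharacterFilter {
  def filterCharacters(input: String): String
}

class ScopeStringCache(components: Array[String]) {
  private val cache = scala.collection.mutable.Map.empty[(CharacterFilter, Char), String]

  // Build the scope string once per (filter, delimiter) combination.
  def scopeString(filter: CharacterFilter, delimiter: Char): String =
    cache.getOrElseUpdate(
      (filter, delimiter),
      components.map(filter.filterCharacters).mkString(delimiter.toString))
}

object ScopeStringCacheDemo {
  def main(args: Array[String]): Unit = {
    val noOp = new CharacterFilter { def filterCharacters(input: String): String = input }
    val cache = new ScopeStringCache(Array("host", "taskmanager", "job"))
    println(cache.scopeString(noOp, '.')) // host.taskmanager.job
    println(cache.scopeString(noOp, '-')) // host-taskmanager-job, not the cached '.' variant
  }
}
```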
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
Thanks.
I pushed new changes.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
Hello, zentol.
>but, i just realized that without this you would end up with concurrency
issues since multiple register calls can be active at the same time...I'll have
to think about th
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2517#discussion_r80927679
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricRegistry.java
---
@@ -216,17 +244,20 @@ public ScopeFormats getScopeFormats
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
zentol, thanks!
I have pushed the edited implementation again.
Github user ex00 commented on a diff in the pull request:
https://github.com/apache/flink/pull/2517#discussion_r80499420
--- Diff:
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricRegistry.java
---
@@ -219,9 +249,16 @@ public ScopeFormats getScopeFormats
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
zentol, thanks for your review.
I have pushed the edited implementation.
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
zentol, thanks for your comment.
I have pushed a new version of the implementation, please check it. Is the idea correct?
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
Could you explain how the API for getting the metric identifier should look?
Currently I am thinking about something like this:
```java
@Test
public void testConfigurableDelimiterForReporter
GitHub user ex00 reopened a pull request:
https://github.com/apache/flink/pull/2517
[FLINK-4564] [metrics] Delimiter should be configured per reporter
Hi,
This is my fix for FLINK-4564. I want to fix this issue; please send me your
comments on this implementation.
I could assign
Github user ex00 closed the pull request at:
https://github.com/apache/flink/pull/2517
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
Not everything is clear.
How should a `MetricGroup#getMetricIdentifier()` call look in this case?
If we use a single reporter for other reporters and know their indexes, we
must know the name or index where
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2517
Thanks zentol!
> Something i thought of was to create a "front" metric group for every
reporter that is aware of this index.
Do you mean a new MetricGroup implementation w
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/2517
[FLINK-4564] [metrics] Delimiter should be configured per reporter
Hi,
This is my fix for FLINK-4564. I want to fix this issue; please send me your
comments on this implementation.
I could assign to
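As a rough illustration, here is a minimal sketch with assumed names (`DelimiterRegistry`, not Flink's `MetricRegistry` API) of the idea behind FLINK-4564: each reporter may override the global scope delimiter, and a metric identifier is assembled with the delimiter of the reporter requesting it.
```scala
// Minimal sketch (assumed names, not Flink's MetricRegistry API): per-reporter
// delimiters with a fallback to the global one.
class DelimiterRegistry(globalDelimiter: Char) {
  private val perReporter = scala.collection.mutable.Map.empty[String, Char]

  def setReporterDelimiter(reporter: String, delimiter: Char): Unit =
    perReporter(reporter) = delimiter

  def delimiterFor(reporter: String): Char =
    perReporter.getOrElse(reporter, globalDelimiter)

  // Assemble the identifier with the delimiter of the reporter asking for it.
  def metricIdentifier(scope: Seq[String], metricName: String, reporter: String): String =
    (scope :+ metricName).mkString(delimiterFor(reporter).toString)
}

object DelimiterRegistryDemo {
  def main(args: Array[String]): Unit = {
    val registry = new DelimiterRegistry('.')
    registry.setReporterDelimiter("graphite", '-')
    val scope = Seq("host", "taskmanager", "job")
    println(registry.metricIdentifier(scope, "numRecordsIn", "jmx"))      // host.taskmanager.job.numRecordsIn
    println(registry.metricIdentifier(scope, "numRecordsIn", "graphite")) // host-taskmanager-job-numRecordsIn
  }
}
```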
Github user ex00 closed the pull request at:
https://github.com/apache/flink/pull/2516
Github user ex00 commented on the issue:
https://github.com/apache/flink/pull/2516
Sorry, I made a mistake. In the commit message I wrote the wrong issue number.
This PR is for issue FLINK-4564.
GitHub user ex00 opened a pull request:
https://github.com/apache/flink/pull/2516
[FLINK-4563] [metrics] Delimiter should be configured per reporter
Hi,
This is my fix for FLINK-4563. I want to fix this issue; please send me your
comments on this implementation.
I could assign to