orted issue [3].
The intention of this email is to give you a heads-up in case you encounter
[2], and to let you know that the fix is already provided and waiting for
review. If someone could do a review, I would appreciate it [3].
Thanks,
Krzysztof Chmielewski
[1] https://github.com/apache/flink
Hi,
Could someone set the Fix Version to 1.17 and 1.16.2 on these tickets: [1] [2],
please?
[1] was merged to master on Feb 6th and is present in the 1.17 branch,
hence we need to merge the bug fix [2] to that branch as well.
[1] https://issues.apache.org/jira/browse/FLINK-27246
[2] https://issues.apache
Hi,
happy to see such a feature.
Small note from my end regarding Catalog changes.
TL;DR
I don't think it is necessary to delegate this feature to the catalog. I
think that since "time travel" is a per-job/query property, it should not be
coupled with the Catalog or table definition. In my opinion t
ed to DynamicTableFactory so we could properly set Delta
standalone library.
Regards,
Krzysztof
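The point above, that time travel is a per-query property that should not be coupled to the catalog or table definition, can be sketched roughly as follows. All names here (`TimeTravelSketch`, `snapshotAsOf`, etc.) are hypothetical and not Flink or Delta API; the sketch only shows one logical table whose snapshot is chosen per query at scan time, so the catalog entry never needs to change:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: a single logical table with multiple snapshots,
// where each query independently picks the snapshot it wants to read.
public class TimeTravelSketch {

    // commit timestamp -> snapshot id; the catalog sees only "one table".
    private final TreeMap<Long, String> snapshots = new TreeMap<>();

    void addSnapshot(long timestamp, String snapshotId) {
        snapshots.put(timestamp, snapshotId);
    }

    // Per-query resolution: latest snapshot at or before the requested time.
    // Nothing about the chosen snapshot is stored in the table definition.
    String snapshotAsOf(long queryTimestamp) {
        Map.Entry<Long, String> entry = snapshots.floorEntry(queryTimestamp);
        return entry == null ? null : entry.getValue();
    }
}
```

Two queries against the same catalog entry can thus read different snapshots concurrently, which is the reason the thread argues the property belongs to the job/query, not the catalog.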
Wed, 31 May 2023 at 10:37 Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> wrote:
> Hi,
> happy to see such a feature.
> Small note from my end regarding Catalog changes.
flink-http-connector
Best Regards,
Krzysztof Chmielewski
On 2022/06/01 06:38:55 Maciej Bryński wrote:
> Hi Jeremy,
> We at GetinData have already started some work related to an HTTP connector.
> As a first step we created an HTTP Source [1], described in a blog post here
[2].
> During next 3
ector community) we are looking at ways to
work around this problem. Feel free to join the *flink-delta-connector* channel
on go.delta.io/slack for more info about the Delta/Flink connector.
Cheers,
Krzysztof Chmielewski
[1] https://github.com/delta-io/connectors/tree/master/flink
[2]
https://delta.io/blog/20
dInvoke(Task.java:927)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
at java.lang.Thread.run(Thread.java:748)
The test passes for Flink 1.12, 1.13 and 1.14.
I would like to ask for any suggestions as to what might be causing
A small update:
when I change the number of Sinks from 3 to 1, the test passes.
Wed, 7 Sep 2022 at 12:18 Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> wrote:
> Hi,
> I'm a co-author of the open-source Delta-Flink connector hosted on [1].
> The connector was originate
Hi,
Krzysztof Chmielewski [1] from Delta-Flink connector open source community
here [2].
I totally agree with Steven on this. Sink V1's GlobalCommitter is
exactly what the Flink-Delta Sink needs, since it is the place where
we do the actual commit to the Delta log, which should be
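The global-commit pattern discussed above can be sketched in miniature. This is an illustrative stand-in only: `FileCommittable`, `DeltaLogStub`, and `globalCommit` are hypothetical names, not the real Flink `GlobalCommitter` or Delta Standalone APIs. The point it shows is that per-subtask committables are gathered and applied to the log in one global commit per checkpoint, rather than each subtask committing on its own:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a "global commit": many subtasks each produce a
// committable (a written file); one component commits them all at once.
public class GlobalCommitSketch {

    // A data file produced by one sink subtask during a checkpoint.
    record FileCommittable(int subtaskId, String path) {}

    // Stand-in for the Delta transaction log.
    static class DeltaLogStub {
        final List<String> committedFiles = new ArrayList<>();

        // One atomic commit covering the whole checkpoint's files.
        void commit(List<FileCommittable> batch) {
            for (FileCommittable c : batch) {
                committedFiles.add(c.path());
            }
        }
    }

    // The "global committer": receives committables from ALL subtasks and
    // performs a single commit, so readers never see a partial checkpoint.
    static DeltaLogStub globalCommit(List<FileCommittable> fromAllSubtasks) {
        DeltaLogStub log = new DeltaLogStub();
        log.commit(fromAllSubtasks);
        return log;
    }
}
```

Under this shape, dropping the global stage would mean each subtask committing independently, exactly the situation the thread argues does not fit a transaction-log sink.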
re compact reproducer than the testFileSink
test from [1].
[1]
https://github.com/kristoffSC/connectors/blob/Flink_1.15/flink/src/test/java/io/delta/flink/sink/DeltaSinkStreamingExecutionITCase.java
Wed, 7 Sep 2022 at 13:22 Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> wrote:
does look good though.
Regards,
Krzysztof Chmielewski
[1] https://lists.apache.org/thread/otscy199g1l9t3llvo8s2slntyn2r1jc
Fri, 9 Sep 2022 at 20:49 Martijn Visser
wrote:
> Hi all,
>
> A couple of bits from when work was being done on the new sink: V1 is
> completely simulate
Hi Martijn
Could you clarify a little bit what you mean by:
"The important part to remember is that this
topology is lagging one checkpoint behind in terms of fault-tolerance: it
only receives data once the committer committed"
What are the implications?
Thanks,
Krzysztof Chmie
h GlobalCommitter and failover scenarios [4], and even the duplicated
key in [5] described above is another case; maybe we should never have two
entries for the same subtaskId. That I don't know.
P.S.
Steven, apologies for hijacking the thread a little bit.
Thanks,
Krzysztof Chmielewski
[1]
https:/
ectors/blob/Flink_1.15/flink/src/test/java/io/delta/flink/sink/DeltaSinkStreamingExecutionITCase.java
> Wed, 7 Sep 2022 at 13:22 Krzys
I had a similar use case.
What we did is we decided that the data for enrichment must be versioned;
for example, our enrichment data was "refreshed" once a day and we kept the
old data.
During the enrichment process we look up data for a given version based on
the record's metadata.
Regards,
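The versioned-enrichment idea described above can be sketched as follows. All names (`VersionedEnrichment`, `load`, `enrich`) are hypothetical, not any real Flink API; the sketch only shows the mechanism: each refresh is stored under its version and kept, and a record's own version metadata selects which snapshot it joins against, so late or replayed records see the data that was current when they were produced:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of versioned enrichment data.
public class VersionedEnrichment {

    // version (e.g. the refresh date) -> (key -> enrichment value)
    private final Map<String, Map<String, String>> byVersion = new HashMap<>();

    // Each daily refresh is loaded under a new version; old versions
    // are deliberately kept rather than overwritten.
    void load(String version, Map<String, String> data) {
        byVersion.put(version, data);
    }

    // Look up using the version carried in the record's metadata,
    // not simply "the latest" data.
    String enrich(String recordVersion, String key) {
        Map<String, String> snapshot = byVersion.get(recordVersion);
        return snapshot == null ? null : snapshot.get(key);
    }
}
```

The design choice is that correctness under replay comes from the record choosing its snapshot, rather than from trying to keep a single mutable enrichment table in sync with the stream.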
://issues.apache.org/jira/browse/FLINK-29627
[2] https://github.com/apache/flink/pull/21115
[3] https://issues.apache.org/jira/browse/FLINK-29509
[4] https://issues.apache.org/jira/browse/FLINK-29512
Regards,
Krzysztof Chmielewski
Thu, 20 Oct 2022 at 11:21 Xingbo Huang wrote:
> Hi every
://issues.apache.org/jira/browse/FLINK-29512
[3] https://issues.apache.org/jira/browse/FLINK-29627
Regards,
Krzysztof Chmielewski
Thu, 20 Oct 2022 at 11:51 Martijn Visser
wrote:
> Hi Krzysztof,
>
> Given that this issue already exists in previous Flink versions, I don't
> think it's
rs/tree/master/flink
[3] https://issues.apache.org/jira/browse/FLINK-29589
Regards,
Krzysztof Chmielewski
Thu, 20 Oct 2022 at 16:13 Xingbo Huang wrote:
> Hi Krzysztof,
>
> When I was building rc2, I tried to check whether any issues with a `fix
> version` of 1.16.0 had not been closed.
> http
Hi community,
I would like to work on fixing FLINK-27246 [1].
I verified that it still happens on the current master branch.
I also did an initial investigation and I believe I've found what seems to
be the cause of this problem. I have added a comment to the ticket [2] where
I've asked a couple of questions
Krzysztof Chmielewski created FLINK-31018:
-
Summary: SQL Client -j option does not load user jars to classpath.
Key: FLINK-31018
URL: https://issues.apache.org/jira/browse/FLINK-31018
Project
Krzysztof Chmielewski created FLINK-31197:
-
Summary: Exception while writing Parquet files containing Arrays
with complex types.
Key: FLINK-31197
URL: https://issues.apache.org/jira/browse/FLINK-31197
Krzysztof Chmielewski created FLINK-31202:
-
Summary: Add support for reading Parquet files containing Arrays
with complex types.
Key: FLINK-31202
URL: https://issues.apache.org/jira/browse/FLINK-31202
Krzysztof Chmielewski created FLINK-29589:
-
Summary: Data Loss in Sink GlobalCommitter during Task Manager
recovery
Key: FLINK-29589
URL: https://issues.apache.org/jira/browse/FLINK-29589
Krzysztof Chmielewski created FLINK-29627:
-
Summary: Sink - Duplicate key exception during recover more than 1
committable.
Key: FLINK-29627
URL: https://issues.apache.org/jira/browse/FLINK-29627