I've found more examples here:
https://www.ververica.com/blog/a-journey-to-beating-flinks-sql-performance
where a fact table is enriched using several dimension tables, but again
the temporal table functions are registered using the Table API, like so:
```java
tEnv.registerFunction(
    "dimension_table",  // exact name truncated in the snippet ("dime...")
    dimTable.createTemporalTableFunction("r_proctime", "id"));  // assumed time attribute and key
```
Hi James,
Your steps seem right. Could you check that your jar file
'~/repos/simple_protobuf/SimpleTest/SimpleTest.jar'
actually contains 'com.example.SimpleTest.class'?
Besides that, to use the Kafka connector in the SQL client, you should use
'flink-sql-connector-kafka' instead of
'flink-connector-kafka'.
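You can list the jar's contents with the `jar` tool; a quick programmatic check would look roughly like this (illustrative only, adjust the path for your machine):
```java
import java.util.jar.JarFile;

public class JarCheck {
    public static void main(String[] args) throws Exception {
        // Illustrative: verify the compiled class actually made it into the jar
        try (JarFile jar = new JarFile("SimpleTest.jar")) { // use your full path
            System.out.println(jar.getEntry("com/example/SimpleTest.class") != null
                    ? "class found"
                    : "class missing");
        }
    }
}
```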
Hey Martijn,
Well, as a user I think the Scala API still adds tremendous value, despite
all its issues. But I'm not a committer and I don't know what effort it
takes to keep maintaining it... so I prepare for the worst :)
Regarding the proposed timeline, I don't know all the specifics around
brea
Hi Qing:
> I think this is referring to the order between broadcasted elements and
> non-broadcasted elements, right?
No. Since the broadcast and non-broadcast streams are different streams,
they are usually transferred over different TCP connections, so we cannot
control the relative order of elements across connections.
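If ordering matters, the usual pattern (sketched below with plain Strings so it's self-contained; not code from this thread) is to buffer non-broadcast elements in keyed state until the broadcast element has arrived on that task:
```java
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

// Sketch only: buffers events that arrive before the broadcast "rule",
// since arrival order across the two streams is not guaranteed.
public class BufferingEnricher
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    private static final MapStateDescriptor<String, String> RULES =
            new MapStateDescriptor<>("rules", String.class, String.class);

    private transient ListState<String> buffer;

    @Override
    public void open(Configuration parameters) {
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered-events", String.class));
    }

    @Override
    public void processElement(String event, ReadOnlyContext ctx,
            Collector<String> out) throws Exception {
        String rule = ctx.getBroadcastState(RULES).get("active");
        if (rule == null) {
            buffer.add(event); // broadcast element not seen by this task yet
        } else {
            out.collect(rule + ":" + event);
        }
    }

    @Override
    public void processBroadcastElement(String rule, Context ctx,
            Collector<String> out) throws Exception {
        ctx.getBroadcastState(RULES).put("active", rule);
        // Drain the per-key buffers that filled up before the rule arrived.
        ctx.applyToKeyedState(
                new ListStateDescriptor<>("buffered-events", String.class),
                (key, state) -> {
                    Iterable<String> pending = state.get();
                    if (pending != null) {
                        for (String e : pending) {
                            out.collect(rule + ":" + e);
                        }
                    }
                    state.clear();
                });
    }
}
```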
Hi Flink Community,
I am trying to prove out the new protobuf functionality added in 1.16
([1]). I have built master locally and have attempted to follow the
Protobuf Format doc ([2]) to create a table with the Kafka connector using
the protobuf format.
I compiled the sample .proto file using protoc.
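For reference, the DDL I'm attempting has roughly this shape (a simplified sketch; topic, servers, and columns here are placeholders, following the format doc):
```
CREATE TABLE simple_test (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'simple-test',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'protobuf',
  'protobuf.message-class-name' = 'com.example.SimpleTest'
);
```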
Hi Mason,
Definitely! Feel free to open a PR and ping me for a review.
Cheers, Martijn
On Tue, Oct 4, 2022 at 3:51 PM Mason Chen wrote:
> Hi Martijn,
>
> I notice that this question comes up quite often. Would this be a good
> addition to the KafkaSource documentation? I'd be happy to contribute to
> the documentation.
Hi Martijn,
I notice that this question comes up quite often. Would this be a good
addition to the KafkaSource documentation? I'd be happy to contribute to
the documentation.
Best,
Mason
On Tue, Oct 4, 2022 at 11:23 AM Martijn Visser wrote:
> Hi Robert,
>
> Based on
> https://stackoverflow.com/questions/72870074/apache-flink-restoring-state-from-checkpoint-with-changes-kafka-topic
> I think you'll need to change the UID for your KafkaSource and restart your
> job with allowNonRestoredState enabled.
Hi Flink community,
According to the Flink docs, avro-confluent ([1]) is only supported by the
Kafka SQL connector and the Upsert Kafka SQL connector.
I'm wondering if there is any reason this format is not supported by the
Filesystem SQL connector ([2])?
We are looking to use the FileSystem sink to write to S3 i
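For context, a sketch of the kind of sink we have in mind, using the plain 'avro' format that the filesystem connector does list as supported (bucket, path, and columns are made up):
```
CREATE TABLE s3_sink (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'filesystem',
  'path' = 's3://my-bucket/output',
  'format' = 'avro'
);
```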
Hi Yaroslav,
If I could summarize your suggestion, would it mean that you would only be
in favour of dropping Scala API support if we introduce Java 17 support at
exactly the same time (say Flink 2.0)? I was first thinking that an
alternative would be to have a Flink 2.0 which supports Java 17 whi
Hi Martijn,
The 2.0 argument makes sense (I agree it's easier to introduce more
breaking changes in one major release), but I think my comment about Java
17 also makes sense in this case: 1) it's easier to introduce because
breaking changes are possible; 2) you'd need to give some syntax sugar as
an alt
By looking at the docs for older versions of Flink, e.g.,
https://nightlies.apache.org/flink/flink-docs-release-1.8/dev/table/streaming/joins.html
it seems that it's possible to rewrite this query:
```
SELECT
  o.amount * r.rate AS amount
FROM
  Orders AS o,
  LATERAL TABLE (Rates(o.rowtime)) AS r  -- remainder completed from the linked docs
WHERE
  r.currency = o.currency
```
It doesn't seem to be the case with processing time, unless I'm mistaken:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/joins/#processing-time-temporal-join
This case seems to require a different syntax based on LATERAL TABLE and a
temporal table function (FOR SYSTEM_TI
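Concretely, I mean something like this function-based form from the older docs (column names assumed):
```
SELECT
  o.amount * r.rate AS amount
FROM
  Orders AS o,
  LATERAL TABLE (Rates(o.proctime)) AS r
WHERE
  r.currency = o.currency
```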
Hi Robert,
Based on
https://stackoverflow.com/questions/72870074/apache-flink-restoring-state-from-checkpoint-with-changes-kafka-topic
I think you'll need to change the UID for your KafkaSource and restart your
job with allowNonRestoredState enabled.
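Roughly like this (a sketch with placeholder names, not your exact job):
```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NewTopicJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")   // placeholder
                .setTopics("new-topic")               // the new topic
                .setGroupId("my-group")               // placeholder
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Grokfailures")
                .uid("kafka-source-v2") // new uid, so the old operator state is dropped
                .print();

        env.execute();
    }
}
```
Then resubmit from the savepoint with `flink run -s <savepointPath> --allowNonRestoredState ...`.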
Best regards,
Martijn
On Tue, Oct 4, 2022 at
We've changed the KafkaSource to ingest from a new topic but the old name
is still being referenced:
2022-10-04 07:03:41
org.apache.flink.util.FlinkException: Global failure triggered by
OperatorCoordinator for 'Source: Grokfailures' (operator
feca28aff5a3958840bee985ee7de4d3).
    at org.apache.fli
Hi Salva,
The examples for temporal table joins can be found at
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/joins/#temporal-joins.
Your example is definitely possible using just SQL.
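For example, the event-time variant looks like this in plain SQL (table and column names assumed; Rates needs to be a versioned table with a primary key and an event-time attribute):
```
SELECT o.amount * r.rate AS amount
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency
```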
Best regards,
Martijn
On Tue, Oct 4, 2022 at 12:20 PM Salva Alcántara
Hi Ori,
Thanks for reaching out! I do fear that there's not much we can help
out with. As you mentioned, it looks like there's a network issue, which
would be on the Google side of things. I'm assuming that the mentioned
Flink version corresponds with Flink 1.12 [1], which isn't supported in t
Based on this:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/temporal_table_function/
It seems that the only way of registering temporal table functions is via
the Table API.
If that is the case, is there a way to make this example work?
```
SELECT
  SUM(amount * rate) AS amount  -- remainder reconstructed from the linked page; names may differ
FROM
  Orders,
  LATERAL TABLE (Rates(order_time))
WHERE
  Rates.currency = Orders.currency
```
Hi Yaroslav,
Thanks for the feedback, that's much appreciated! Regarding Java 17 as a
prerequisite, we would have to break compatibility already since Scala
2.12.7 doesn't compile on Java 17 [1].
Given that we can only remove Scala APIs with the next major Flink (2.0)
version, would that still im
+1
At my employer, we maintain several Flink jobs in Scala. We've been writing
newer jobs in Java, and we'd be fine with porting our Scala jobs over to
the Java API.
I'd like to request Java 17 support. Specifically, Java records are a
feature our Flink code would use a lot of, and they would make the Java sy
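For illustration (a hypothetical type, not from our codebase), the kind of event class that currently needs a full POJO collapses to one line:
```java
// Constructor, accessors, equals/hashCode/toString are all generated.
public record OrderEvent(String currency, double amount, long timestamp) {}
```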
Hi Martijn,
As a Scala user, this change would affect me a lot and I'm not looking
forward to rewriting my codebase, even though it's not a very large one :)
I'd like to suggest supporting Java 17 as a prerequisite (
https://issues.apache.org/jira/browse/FLINK-15736). Things like switch
expressions
Hi Martijn,
Thanks for bringing this up. It's generally a great idea, so +1.
Since both Scala extension projects mentioned in the FLIP are still very
young, I don't think they will attract as many Scala developers as Flink
itself could, simply because they are external projects. It will be a big
issue for
Hi Flink user group,
I have a question around broadcast.
Reading the docs
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/broadcast_state/#important-considerations,
it says the following:
> Order of events in Broadcast State may differ across tasks: Alt
Hi Marton,
You're making a good point. I originally wanted to include the User
mailing list already to get their feedback, but forgot to do so. I'll do
some more outreach via other channels as well.
@Users of Flink, I've made a proposal to deprecate and remove Scala API
support in a future version
This should do the trick:
job:
  upgradeMode: savepoint
  state: suspended
In the CR you should see something similar after applying the above change:
lastSavepoint:
  formatType: CANONICAL
  location: file:/flink-data/savepoints/savepoint-fc61e1-b9cf089c260e
  timeStamp