Hi, Eric
Firstly, FileSystemTableSource does not implement LookupTableSource, which means
we cannot directly look up a filesystem table.
In FLINK-19830, we plan to support processing-time temporal joins against any
table/view by looking up, in the join operator's state, the data scanned from the
filesystem table
Hi Jan.
Thanks for your reply. Did you set the options
`table.exec.source.idle-timeout` and `pipeline.auto-watermark-interval`?
If `pipeline.auto-watermark-interval` is zero, idle-source detection will not
be triggered.
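In case it helps, a minimal flink-conf sketch for the two options above (the values are illustrative assumptions, not recommendations):

```yaml
# Illustrative values only -- tune to your pipeline.
# Mark a source subtask idle after no records for this long:
table.exec.source.idle-timeout: 30 s
# Must be non-zero, or idle-source detection never triggers:
pipeline.auto-watermark-interval: 200 ms
```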
Best,
Shengkai
Jan Oelschlegel wrote on Friday, February 26, 2021 at 11:09 PM:
> Hi
This is great, Timo. Maybe it only works in SQL but not in the Table API in the
middle of a plan, which is fine. We'll give this a shot, thank you so much.
On Fri, Feb 26, 2021 at 2:00 AM Timo Walther wrote:
> Hi Rex,
>
> as far as I know, we recently allowed PROCTIME() also at arbitrary
> locations in
Digging around, it looks like Upsert Kafka, which requires a primary key,
will actually do what I want and uses compaction, but it doesn't look
compatible with the Debezium format. Is this on the roadmap?
In the meantime, we're considering consuming from Debezium Kafka (still
compacted) and then writing
David and Timo,
Firstly, thank you both so much for your contributions and advice. I believe
I've implemented things along the lines you both detailed, and things
appear to work just as expected (e.g. I can see records arriving, being added to
windows, late records being discarded, and ultimately
Hello!
I'm getting an exception running a modified version of the datastream/statefun
example. (See the exception details that follow.) The example was adapted from
the original DataStream example provided in the statefun repo. I was trying to
play with the example by chaining two functions (with the 1st func
Hi Team,
While running a Java Flink project locally, I am facing the following issue: *Could
not create actor system; Caused by: java.lang.NoSuchMethodError:
scala.Product.$init$(Lscala/Product;)V*
Could you suggest whether a Flink Java project needs Scala at runtime? What
versions might be incompatible
I believe bootstrap.servers is a mandatory Kafka property, but it looks like you
didn't set it.
From: Claude M
Sent: Friday, February 26, 2021 12:02:10 PM
To: user
Subject: Producer Configuration
Hello,
I created a simple Producer and when the job ran, it was get
Hi everybody,
I just wanted to say thanks again for all your input and share the
(surprisingly simple) solution that we came up with in the meantime:
class SensorRecordCounter extends KeyedProcessFunction<SensorRecord, SensorCount> {
private ValueState state;
private long windowSizeMs = 6L
Hello,
I created a simple Producer and when the job ran, it was getting the
following error:
Caused by: org.apache.kafka.common.errors.TimeoutException
I read about increasing the request.timeout.ms. Thus, I added the
following properties.
Properties properties = new Properties();
properties.s
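The snippet above is truncated; as a hedged sketch of the kind of Properties setup being described (the broker address and timeout value below are illustrative assumptions, and, per the advice elsewhere in this thread, bootstrap.servers must be set):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static Properties buildProducerProps() {
        Properties properties = new Properties();
        // bootstrap.servers is mandatory -- the address here is a placeholder.
        properties.setProperty("bootstrap.servers", "kafka-broker:9092");
        // Raising request.timeout.ms can help with slow brokers (value is illustrative).
        properties.setProperty("request.timeout.ms", "60000");
        return properties;
    }

    public static void main(String[] args) {
        Properties p = buildProducerProps();
        System.out.println(p.getProperty("bootstrap.servers"));
    }
}
```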
Thank you, Matthias.
It's version 1.9.
Best regards
Rainie
On Fri, Feb 26, 2021 at 6:33 AM Matthias Pohl
wrote:
> Hi Rainie,
> the network buffer pool was destroyed for some reason. This happens when
> the NettyShuffleEnvironment gets closed which is triggered when an operator
> is cleaned up, for
Does this also imply that it's not safe to compact the initial topic where
data is coming from Debezium? I'd think that Flink's Kafka source would
emit retractions on any existing data with a primary key as new data with
the same pk arrived (in our case all data has primary keys). I guess that
goes
>> Is it possible that you are generating too many watermarks that need to be
sent to all downstream tasks?
This was basically it. I had unexpected flooding on specific keys, which I'm
guessing caused intermittently hot partitions that were backpressuring the
rowtime task.
I do have another question: how
Hello,
We have a Flink job running in Kubernetes with Kubernetes HA enabled (the JM is
deployed as a Job, a single TM as a StatefulSet). We took a savepoint with
cancel=true. Now, when we try to start the job using --fromSavepoint A, where
A is the path we got from taking the savepoint (ClusterEntrypoint reports
Hi Debraj,
sorry for misleading you first. You're right. I looked through the code
once more and found something: There's the yarn.staging-directory [1] that
is set to the user's home folder by default. This parameter is used by the
YarnApplicationFileUploader [2] to upload the application files.
Hi Bariša,
have you had the chance to analyze the memory usage in more detail? An
OutOfMemoryError might be an indication of a memory leak, which should
be solved instead of lowering some memory configuration parameters. Or is
it that the off-heap memory is not actually used but blocks the JVM f
Thanks Matthias for replying.
Yes, there was a YARN configuration issue on my side, which I mentioned in
my last email.
I am just starting with Flink. So, just for my understanding: in a few links (posted
below) it is reported that Flink needs to create a .flink directory in the
user's home folder. Even tho
Hi Debraj,
thanks for reaching out to the Flink community. Without knowing the details
on how you've set up the Single-Node YARN cluster, I would still guess that
it is a configuration issue on the YARN side. Flink does not know about a
.flink folder. Hence, there is no configuration to set this fo
Hi Eric,
it looks like you ran into FLINK-19830 [1]. I'm gonna add Leonard to the
thread. Maybe, he has a workaround for your case.
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-19830
On Fri, Feb 26, 2021 at 11:40 AM eric hoffmann
wrote:
> Hello
> Working with flink 1.12.1 i r
Hi Shengkai,
I'm using Flink 1.11.2. The problem is that if I use a parallelism higher than my
Kafka partition count, the watermarks do not increase, and so the windows
never get fired.
I suspect that a source task is then not marked as idle and thus the watermark
is not advanced. In any ca
Hi Abhishek,
this might be caused by the switch from log4j to log4j2 as the default in
Flink 1.11 [1]. Have you had a chance to look at the logging documentation
[2] to enable log4j again?
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/release-notes/flink-1.11.html#swit
Hi Sandeep,
thanks for reaching out to the community. Unfortunately, the information
you're looking for is not exposed in a way that you could access it from
within your RichMapFunction. Could you elaborate a bit more on what you're
trying to achieve? Maybe, we can find another solution for your pr
Strong +1
Having two planners is confusing to users and the diverging semantics make
it difficult to provide useful learning material. It is time to rip the
bandage off.
Seth
On Fri, Feb 26, 2021 at 12:54 AM Kurt Young wrote:
> change.>
>
> Hi Timo,
>
> First of all I want to thank you for in
Hi Rainie,
the network buffer pool was destroyed for some reason. This happens when
the NettyShuffleEnvironment gets closed which is triggered when an operator
is cleaned up, for instance. Maybe, the timeout in the metric system caused
this. But I'm not sure how this is connected. I'm gonna add Che
Hi, Jan.
Could you tell us which Flink version you use? As far as I know, the Kafka
SQL connector has implemented `SupportsWatermarkPushDown` since Flink 1.12. The
`SupportsWatermarkPushDown` interface pushes the watermark generator into the
source, which emits the minimum watermark among all the partitions. For mo
In my setup, hadoop-yarn-nodemanager is running as the yarn user.
ubuntu@vrni-platform:/tmp/flink$ ps -ef | grep nodemanager
yarn 4953 1 2 05:53 ? 00:11:26
/usr/lib/jvm/java-8-openjdk/bin/java -Dproc_nodemanager
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/heap-dumps/yar
David,
Thank you again for the reply. It really looks like this situation happened
because of the parallel instances.
Best,
Yuri L.
>Friday, February 26, 2021, 15:40 +03:00, from Dawid Wysakowicz:
>
>Hi,
>What is exactly the problem? Is it that no patterns are being generated?
>Usually th
Hello, David.
Yes, I’m using 1.12. And my code is now working. Thank you very much for
your comment.
Yuri L.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi,
What is exactly the problem? Is it that no patterns are being generated?
Usually the problem is idle parallel instances [1]. You need to have
data flowing in each of the parallel instances for a watermark to
progress. You can also read about this in the context of Kafka partitions [2].
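A tiny plain-Java illustration of why one idle instance stalls everything (the values are hypothetical): the watermark a downstream task emits is the minimum over all its input channels, so a channel still at its initial watermark pins the combined result.

```java
import java.util.Arrays;

public class MinWatermarkSketch {
    // A downstream task's watermark is the minimum over all parallel input channels.
    public static long combinedWatermark(long[] channelWatermarks) {
        return Arrays.stream(channelWatermarks).min().orElse(Long.MIN_VALUE);
    }

    public static void main(String[] args) {
        // Three channels advancing normally, one idle channel stuck at its
        // initial watermark: the combined watermark never progresses.
        long[] wms = {5000L, 6000L, 5500L, Long.MIN_VALUE};
        System.out.println(combinedWatermark(wms)); // stuck at Long.MIN_VALUE
    }
}
```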
Best,
Hello,
I already asked a question today and got the solution:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-CEP-can-t-process-PatternStream-td41722.html
, and it's now clear to me how PatternStream works with ProcessingTime.
But I need help again: I can't write prope
Hello
Working with Flink 1.12.1, I read in the docs that a processing-time temporal
join is supported for KV-like joins, but when I try it I get:
Exception in thread "main" org.apache.flink.table.api.TableException:
Processing-time temporal join is not supported yet.
at
org.apache.flink.table.pla
Yes indeed, Timo is correct -- I am proposing that you not use timers at
all. Watermarks and event-time timers go hand in hand -- and neither
mechanism can satisfy your requirements.
You can instead put all of the timing logic in the processElement method --
effectively emulating what you would ge
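To make the suggestion concrete, here is a hedged, plain-Java sketch of doing the timing logic inside processElement itself (all names are hypothetical, and Flink-specific types like keyed state are replaced by plain fields): the element's own timestamp decides whether the emulated "timer" has fired, with no TimerService involved.

```java
// Hypothetical sketch: emulating window-firing logic inside processElement,
// without Flink timers. The "state" here stands in for keyed ValueState.
public class ManualTimingSketch {
    private Long windowEnd;          // end of the currently open window, or null
    private long count;              // elements seen in the open window
    private final long windowSizeMs;

    public ManualTimingSketch(long windowSizeMs) {
        this.windowSizeMs = windowSizeMs;
    }

    /** Returns the finished window's count, or null while the window is still open. */
    public Long processElement(long elementTimestampMs) {
        Long result = null;
        // "Timer" check: if this element falls past the open window, emit and roll over.
        if (windowEnd != null && elementTimestampMs >= windowEnd) {
            result = count;
            count = 0;
            windowEnd = null;
        }
        if (windowEnd == null) {
            // Align the new window to a windowSizeMs boundary.
            windowEnd = elementTimestampMs - (elementTimestampMs % windowSizeMs) + windowSizeMs;
        }
        count++;
        return result;
    }
}
```

One trade-off of this approach: a window is only "closed" when a later element arrives for the same key, so a key that goes silent never emits its final count.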
Just to clarify: intermediate topics should in most cases not be compacted,
for exactly these reasons, if your application depends on all the intermediate
data. For the final topic, compaction makes sense. If you also consume intermediate
topics for a web application, one solution is to split them into two topics
(lik
Hi,
I have no idea what's going on. There is no mechanism in DataStream to
react to deleted records.
Can you reproduce it locally and debug through it?
On Wed, Feb 24, 2021 at 5:21 PM bat man wrote:
> Hi Arvid,
>
> The Flink application was not re-started. I had checked on that.
> By adding
Hi Rex,
as far as I know, we recently allowed PROCTIME() also at arbitrary
locations in the query. So you don't have to pass it through the
aggregate but you can call it afterwards again.
Does that work in your use case? Something like:
SELECT i, COUNT(*) FROM customers GROUP BY i, TUMBLE(PR
Hi Aeden,
the rowtime task is actually just a simple map function that extracts
the event-time timestamp into a field of the row for the next operator.
It should not be the problem. Can you share a screenshot of your
pipeline? What is your watermarking strategy? Is it possible that you
are ge
Hi Barisa,
by looking at the 1.8 documentation [1] it was possible to configure the
off heap memory as well. Also other memory options were already present.
So I don't think that you need an upgrade to 1.11 immediately. Please
let us know if you could fix your problem, otherwise we can try to
Hi Yaroslav,
I think your approach is correct. Union is perfect to implement multiway
joins if you normalize the type of all streams before. It can simply be
a composite type with the key and a member variable for each stream
where only one of those variables is not null. A keyed process funct
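As a hedged sketch of the composite type Timo describes (the names and the two-stream shape are illustrative assumptions): each input stream is normalized into a wrapper carrying the join key plus one nullable member per stream, so the keyed process function downstream of the union can tell records apart by which member is non-null.

```java
// Hypothetical wrapper for a union-based multiway join: the join key plus one
// nullable slot per input stream. Exactly one slot is non-null per record.
public class UnionRecord {
    public final String key;
    public final String orderEvent;    // record from stream A (illustrative)
    public final String paymentEvent;  // record from stream B (illustrative)

    private UnionRecord(String key, String orderEvent, String paymentEvent) {
        this.key = key;
        this.orderEvent = orderEvent;
        this.paymentEvent = paymentEvent;
    }

    public static UnionRecord fromOrder(String key, String event) {
        return new UnionRecord(key, event, null);
    }

    public static UnionRecord fromPayment(String key, String event) {
        return new UnionRecord(key, null, event);
    }

    // Downstream logic branches on which member is set.
    public boolean isOrder() { return orderEvent != null; }
}
```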
Hi Yuri,
Which Flink version are you using? Is it 1.12? In 1.12 we changed the
default TimeCharacteristic to EventTime. Therefore, you need watermarks
and timestamps [1] for your program to work correctly. If you want to
apply your pattern in ProcessingTime, you can do:
PatternStream patternStream =
Hi Rion,
I think what David was referring to is that you do the entire time
handling yourself in the process function. That means not using the
`context.timerService()` or `onTimer()` that Flink provides but calling
your own logic based on the timestamps that enter your process function
and the st
Until we have more information, maybe this is also helpful:
https://ci.apache.org/projects/flink/flink-docs-stable/ops/debugging/debugging_classloading.html#inverted-class-loading-and-classloader-resolution-order
On 26.02.21 09:20, Timo Walther wrote:
If this problems affects multiple people, f
If this problems affects multiple people, feel free to open an issue
that explains how to easily reproduce the problem. This helps us or
contributors to provide a fix.
Regards,
Timo
On 26.02.21 05:08, sofya wrote:
What was the actual solution? Did you have to modify pom?
--
Sent from: htt
Hello everyone.
I’m trying to use Flink Cep library and I want to fetch some events by pattern.
At first I’ve created a simple HelloWorld project. But I have a problem exactly
like it described here:
https://stackoverflow.com/questions/39575991/flink-cep-no-results-printed
You can see my c