In his answer Biao said "currently there is no such API to access the middle
NFA state". Maybe such an API is planned? Or could I create an issue or pull
request that adds it?
On Fri, May 17, 2024 at 12:04, Anton Sidorov wrote:
> Ok, thanks for the reply.
>
> On Fri, May 17, 2024 at 09:22,
Ok, thanks for the reply.
On Fri, May 17, 2024 at 09:22, Biao Geng wrote:
> Hi Anton,
>
> I am afraid that currently there is no such API to access the middle NFA
> state in your case. For patterns that contain 'within()' condition, the
> timeout events could be retrieved via TimedOutPart
Hello mete.
I found this Stack Overflow article:
https://stackoverflow.com/questions/54293808/measuring-event-time-latency-with-flink-cep
If I'm not mistaken, you can use the Flink metrics system for operators and get
the processing time of an event in an operator.
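Not Flink API, just a library-free sketch of that idea with illustrative names (`LatencyGauge`, `onEvent` are made up for this example): the operator stores processing time minus event time for the latest event, which is the value a registered gauge would report to the metrics system.

```java
// Library-free sketch of the latency-measurement idea: record
// (processing time - event time) per event and expose the last value,
// the way a gauge registered on a Flink operator would.
// Class and method names are illustrative, not Flink API.
public class LatencyGauge {
    private long lastLatencyMs = -1; // -1 until the first event arrives

    // Call once per processed event.
    public void onEvent(long eventTimeMs, long processingTimeMs) {
        lastLatencyMs = processingTimeMs - eventTimeMs;
    }

    // What the gauge would report to the metrics system.
    public long getValue() {
        return lastLatencyMs;
    }
}
```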
On 2024/05/16 11:54:44 mete wrote:
> Hello,
>
> For an
Hello!
I have a Flink Job with CEP pattern.
Pattern example:
// Strict Contiguity
// a b+ c d e
Pattern.begin("a", AfterMatchSkipStrategy.skipPastLastEvent()).where(...)
.next("b").where(...).oneOrMore()
.next("c").where(...)
.next("d").where(...)
.next("e").where(...)
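As a plain-Java illustration (no Flink dependency) of what strict contiguity means for this pattern: the event shape "a b+ c d e" can be checked like a regular expression over event symbols, with no gaps allowed between steps. Class and method names here are made up for the sketch.

```java
import java.util.List;
import java.util.regex.Pattern;

// Library-free sketch of the strict-contiguity semantics of the CEP
// pattern "a b+ c d e": each event is reduced to one symbol and the
// whole sequence must match with no gaps, like chained next() steps.
public class StrictContiguity {
    // "a b+ c d e" as a regular expression over event symbols.
    private static final Pattern SHAPE = Pattern.compile("ab+cde");

    public static boolean matches(List<String> events) {
        StringBuilder symbols = new StringBuilder();
        for (String e : events) {
            symbols.append(e.charAt(0)); // first letter stands in for the event type
        }
        return SHAPE.matcher(symbols).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches(List.of("a", "b", "b", "c", "d", "e"))); // true
        System.out.println(matches(List.of("a", "b", "x", "c", "d", "e"))); // false: gap breaks strict contiguity
    }
}
```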
Makes sense, thank you!
On Tue, Jan 31, 2023 at 10:48 AM Gyula Fóra wrote:
> Thanks @Anton Ippolitov
> At this stage I would highly recommend the native mode if you have the
> liberty to try that.
> I think that has better production characteristics and will work out
ka.tcp://flink@JM_POD_IP:6123/user/rpc/dispatcher_1
>> *when HA enabled.
>>
>>
>> Best,
>> Yang
>>
>> Anton Ippolitov via user wrote on Tue, Jan 31, 2023 at 00:21:
>>
>>> This is actually what I'm already doing, I'm only setting high-availability:
>>>
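For context, a minimal flink-conf.yaml sketch of the Kubernetes HA settings being discussed; the storage bucket and cluster id are placeholders, not values from this thread:

```yaml
# Illustrative sketch only - storage path and cluster id are placeholders.
high-availability: kubernetes
high-availability.storageDir: s3://my-bucket/flink-ha
kubernetes.cluster-id: my-flink-cluster
```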
at can only cause problems and
> should not achieve anything :)
>
> Gyula
>
> On Fri, Jan 27, 2023 at 6:44 PM Anton Ippolitov via user <
> user@flink.apache.org> wrote:
>
>> Hi everyone,
>>
>> I've been experimenting with Kubernetes HA and the Kuberne
adc3579411224f510a72/pkg/controller/flink/container_utils.go#L212-L216
It explicitly sets jobmanager.rpc.address to the host IPs.
Am I misconfiguring or misunderstanding something? Is there any way to fix
these errors?
Thanks!
Anton
Hi,
We recently switched to the flink-s3-fs-presto library for checkpointing in
Flink 1.16.0 and we would like to get client-side metrics from the Presto
S3 client (request rate, throttling rate, etc).
I can see that the upstream client from Presto 0.272 already comes with a
metric collector
Looks like I set the wrong parameter. It should have been
taskmanager.memory.task.off-heap.size.
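A corresponding flink-conf.yaml fragment might look like this; the 256m value is only an example, not taken from the thread:

```yaml
# Reserve direct memory for tasks, e.g. for the HBase client.
# 256m is an illustrative value - tune it for your job.
taskmanager.memory.task.off-heap.size: 256m
```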
From: Anton [mailto:anton...@yandex.ru]
Sent: Friday, December 17, 2021 10:12 PM
To: 'Xintong Song'
Cc: 'user'
Subject: RE: Direct buffer memory in job with hbase client
Hi Xintong,
After
com]
Sent: Wednesday, December 15, 2021 12:17 PM
To: Anton
Cc: user
Subject: Re: Direct buffer memory in job with hbase client
Hi Anton,
You may want to try increasing the task off-heap memory, as your tasks are
using the HBase client, which needs off-heap (direct) memory. The default task
off-heap m
Hi, from time to time my job stops processing messages with the warning
message listed below. I tried to increase jobmanager.memory.process.size and
taskmanager.memory.process.size but it didn't help.
What else can I try? "Framework Off-heap" is 128 MB now, as seen in the task
manager dashboard, and Task
[mailto:tsreape...@gmail.com]
Sent: Wednesday, November 24, 2021 4:47 AM
To: Anton
Cc: user
Subject: Re: Working with HBase inside RichMapFunction
Hi!
Which Flink version are you using? Are you putting data into HBase inside
RichMapFunction, instead of using an HBase sink? If yes could you
Hi, I'm using RichMapFunction to enrich data from a stream generated from a
Kafka topic and put the enriched data back into HBase. And when there is a
failure on the HBase side I see in Flink's log that the HBase client attempts
several times to get the necessary data from HBase - I believe it makes it
Hello. Please suggest the best method to write data to HBase (a stream coming
from Kafka is enriched with HBase data and needs to be written back to HBase).
There is only one connector on flink.apache.org related to the Table API. At the
same time there is HBaseSinkFunction in the source code and I believe it
Best regards,
Anton
ine, if some of them will be duplicated - that’s also fine.
Regards,
Anton
ss":"akka.remote.RemoteActorRefProvider$RemotingTerminator","message":"Remoting
shut down.","host":"clickstream-flink08"}
{"time":"2019-09-02
11:33:14.836","loglevel":"INFO","class":"org.apache.flink.runtime.rpc.akka.AkkaRpcService","message":"Stopped
Akka RPC service.","host":"clickstream-flink08"}
But task manager process is still alive:
flink 29078 423 7.0 49191076 27790920 ? Sl 10:13 828:28 java
-Djava.net.preferIPv4Stack=true
-Dlog.file=/opt/flink/log/flink--taskexecutor-0-clickstream-flink08.log
-Dlog4j.configuration=file:/opt/flink/conf/log4j.properties
-Dlogback.configurationFile=file:/opt/flink/conf/logback.xml -classpath
/opt/flink/lib/flink-cep_2.12-1.8.0.jar:/opt/flink/lib/flink-queryable-state-runtime_2.12-1.8.0.jar:/opt/flink/lib/flink-s3-fs-presto-1.8.0.jar:/opt/flink/lib/flink-table_2.12-1.8.0.jar:/opt/flink/lib/log4j-1.2.17.jar:/opt/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/flink/lib/flink-dist_2.12-1.8.0.jar:::
org.apache.flink.runtime.taskexecutor.TaskManagerRunner --configDir
/opt/flink/conf
Is it acceptable behaviour?
Best regards,
Anton Ustinov
me result.
Additional information: Flink 1.8.0, runs on a single node with 56 CPUs, 256G
RAM, 10GB/s network.
Anton Ustinov
ustinov@gmail.com
We also have a requirement of using Drools in Flink. Drools brings a very
mature and usable business rules editor, and being able to integrate Drools
into Flink would be very useful.
On 23 June 2017 at 22:09, Suneel Marthi wrote:
> Sorry I didn't read the whole thread.
>
>
Hi Aljoscha,
Could you share your plans for resolving it?
Best,
Anton
From: Aljoscha Krettek [mailto:aljos...@apache.org]
Sent: Thursday, February 16, 2017 2:48 PM
To: user@flink.apache.org
Subject: Re: Flink batch processing fault tolerance
Hi,
yes, this is indeed true. We had some plans
Hi,
Could you update the List of contributors after that? ☺
Anton Solovev
Software Engineer
Office: +7 846 200 09 70 x 55621
Email: anton_solo...@epam.com
Samara, Russia (GMT+4) epam.com
be done without using TA-Lib, but
there are other functions in this library that I would like to use, plus it
would give me experience with integrating external analysis libraries.
Thanks and regards
Anton
On Sun, Nov 29, 2015 at 4:12 PM, Anton Polyakov <polyakov.an...@gmail.com>
wrote:
> Hi Fabian
>
> Defining a special flag for a record seems like a checkpoint barrier. I
> think I will end up re-implementing checkpointing myself. I found the
> discussion in flink-dev:
> mail-
> Hi Anton!
>
> That you can do!
>
> You can look at the interfaces "Checkpointed" and "checkpointNotifier".
> There you will get a call at every checkpoint (and can look at what records
> are before that checkpoint). You also get a call once the checkpoin
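A library-free sketch of that mechanism, with class and method names that are illustrative rather than Flink's: remember how many records preceded each checkpoint, and act when that checkpoint is reported complete.

```java
import java.util.HashMap;
import java.util.Map;

// Library-free sketch of the checkpoint-notification idea: the function
// records how many records preceded each checkpoint barrier and is told
// later when that checkpoint has completed everywhere.
// Names are illustrative, not Flink API.
public class CheckpointAwareCounter {
    private long recordsSeen = 0;
    private final Map<Long, Long> recordsAtCheckpoint = new HashMap<>();

    public void processRecord(Object record) {
        recordsSeen++;
    }

    // Called when a checkpoint barrier passes this function.
    public void onCheckpoint(long checkpointId) {
        recordsAtCheckpoint.put(checkpointId, recordsSeen);
    }

    // Called once the checkpoint is complete; returns the number of
    // records that were before it, so an action can be taken here.
    public long onCheckpointComplete(long checkpointId) {
        return recordsAtCheckpoint.remove(checkpointId);
    }
}
```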
e DAG, but you cannot force a
> barrier at a specific point.
>
> On Mon, Nov 30, 2015 at 3:33 PM, Anton Polyakov <polyakov.an...@gmail.com>
> wrote:
>
>> Hi Stephan
>>
>> sorry for misunderstanding, but
Javier,
sorry for jumping in, but I think your case is very similar to what I am
trying to achieve in the thread just next to yours (called "Watermarks as
"process completion" flags"). I also need to process a stream which is
produced for some time, but then take an action after a certain event. Also
ches the sink. The call to "checkpointComplete()" in the sources comes
> after all barriers have reached all sinks.
>
> Have a look here for an illustration about barriers flowing with the
> stream:
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/internals/str
what I need.
Can I possibly utilize Flink's internal checkpoint barriers (i.e. like
triggering a custom checkpoint or finishing the streaming job)?
> On 24 Nov 2015, at 21:53, Fabian Hueske <fhue...@gmail.com> wrote:
>
> Hi Anton,
>
> If I got your requirements right, you
Michels <m...@apache.org> wrote:
> Hi Anton,
>
> You should be able to model your problem using the Flink Streaming
> API. The actions you want to perform on the streamed records
> correspond to transformations on Windows. You can indeed use
> Watermarks to signal the window
either on infinite streams where nobody cares about completion or
classical batch examples which rely on the fact that all input data is ready.
Can you please give me a hint?
Thank you very much,
Anton