Re: Flink kinesis connector 4.3.0 release estimated date

2024-05-22 Thread Leonard Xu
Hey, Vararu

The kinesis connector 4.3.0 release is in the vote phase, and we hope to finalize 
the release work this week if everything goes well.


Best,
Leonard


> On May 22, 2024, at 11:51 PM, Vararu, Vadim wrote:
> 
> Hi guys,
>  
> Any idea when the 4.3.0 kinesis connector is estimated to be released?
>  
> Cheers,
> Vadim.



Flink Kubernetes Operator Pod Disruption Budget

2024-05-22 Thread Jeremy Alvis via user
Hello,

In order to maintain at least one pod for both the Flink Kubernetes Operator
and the JobManagers during Kubernetes processes that use the Eviction API,
such as when draining a node, we have deployed Pod Disruption Budgets in the
appropriate namespaces.

Here is the flink-kubernetes-operator PDB:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flink-kubernetes-operator
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: flink-kubernetes-operator

Where the Flink Kubernetes Operator has the flink-kubernetes-operator app
label defined:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flink-kubernetes-operator


Here is the jobmanager PDB (deployed alongside each FlinkDeployment):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: jobmanager  # assumed name; the original message omits the metadata block
spec:
  minAvailable: 1
  selector:
    matchLabels:
      name: jobmanager

Where the FlinkDeployment has the jobmanager name label defined:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
spec:
  jobManager:
    podTemplate:
      metadata:
        labels:
          name: jobmanager

We were wondering if it would make sense for the Flink Kubernetes Operator to
create these PDBs automatically, since they are a native Kubernetes resource
like the Ingress that the operator already creates.

Thanks,
Jeremy


Flink kinesis connector 4.3.0 release estimated date

2024-05-22 Thread Vararu, Vadim
Hi guys,

Any idea when the 4.3.0 kinesis connector is estimated to be released?

Cheers,
Vadim.


[ANNOUNCE] Apache Celeborn 0.4.1 available

2024-05-22 Thread Nicholas Jiang
Hi all,

The Apache Celeborn community is glad to announce the
new release of Apache Celeborn 0.4.1.

Celeborn is dedicated to improving the efficiency and elasticity of
different map-reduce engines and provides an elastic, highly efficient
service for intermediate data, including shuffle data, spilled data,
result data, etc.


Download Link: https://celeborn.apache.org/download/

GitHub Release Tag:

- https://github.com/apache/celeborn/releases/tag/v0.4.1

Release Notes:

- https://celeborn.apache.org/community/release_notes/release_note_0.4.1


Home Page: https://celeborn.apache.org/

Celeborn Resources:

- Issue Management: https://issues.apache.org/jira/projects/CELEBORN
- Mailing List: d...@celeborn.apache.org

Regards,
Nicholas Jiang
On behalf of the Apache Celeborn community

StateMigrationException while using stateTTL

2024-05-22 Thread irakli.keshel...@sony.com
Hello,

I'm using Flink 1.17.1 and have stateTTL enabled in one of my Flink jobs, where 
I'm using RocksDB for checkpointing. I have a value state of a POJO class (which 
is generated from an Avro schema). I added a new field to the schema along with a 
default value to make sure it is backwards compatible; however, when I redeployed 
the job, I got a StateMigrationException. I have a similar setup in other Flink 
jobs where adding a column doesn't cause any trouble.

This is my stateTTL config:

import org.apache.flink.api.common.state.StateTtlConfig
import org.apache.flink.api.common.time.Time

StateTtlConfig
  .newBuilder(Time.days(7))
  .cleanupInRocksdbCompactFilter(1000)
  .build

This is how I enable it:

val myStateDescriptor: ValueStateDescriptor[MyPojoClass] =
 new ValueStateDescriptor[MyPojoClass](
   "test-name",
   classOf[MyPojoClass])

myStateDescriptor.enableTimeToLive(initStateTTLConfig())

This is the exception I end up with:

Caused by: org.apache.flink.util.StateMigrationException: The new state 
serializer 
(org.apache.flink.runtime.state.ttl.TtlStateFactory$TtlSerializer@a13bf51) must 
not be incompatible with the old state serializer 
(org.apache.flink.runtime.state.ttl.TtlStateFactory$TtlSerializer@a13bf51).
at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.updateRestoredStateMetaInfo(RocksDBKeyedStateBackend.java:755)
at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.tryRegisterKvStateInformation(RocksDBKeyedStateBackend.java:667)
at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.createOrUpdateInternalState(RocksDBKeyedStateBackend.java:883)
at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.createOrUpdateInternalState(RocksDBKeyedStateBackend.java:870)
at 
org.apache.flink.runtime.state.ttl.TtlStateFactory.createTtlStateContext(TtlStateFactory.java:222)
at 
org.apache.flink.runtime.state.ttl.TtlStateFactory.createValueState(TtlStateFactory.java:145)
at 
org.apache.flink.runtime.state.ttl.TtlStateFactory.createState(TtlStateFactory.java:129)
at 
org.apache.flink.runtime.state.ttl.TtlStateFactory.createStateAndWrapWithTtlIfEnabled(TtlStateFactory.java:69)
at 
org.apache.flink.runtime.state.AbstractKeyedStateBackend.getOrCreateKeyedState(AbstractKeyedStateBackend.java:362)
at 
org.apache.flink.runtime.state.AbstractKeyedStateBackend.getPartitionedState(AbstractKeyedStateBackend.java:413)
at 
org.apache.flink.runtime.state.DefaultKeyedStateStore.getPartitionedState(DefaultKeyedStateStore.java:115)
at 
org.apache.flink.runtime.state.DefaultKeyedStateStore.getState(DefaultKeyedStateStore.java:60)
... 25 more

Does anyone know what is causing the issue?
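
One thing that might be worth ruling out (a minimal Java sketch, not a confirmed 
fix; the Scala descriptor above would change analogously): if MyPojoClass extends 
Avro's SpecificRecordBase, passing Avro type information to the descriptor makes 
Flink use its AvroSerializer, which supports schema evolution, instead of a 
reflectively picked serializer:

// Hedged sketch: MyPojoClass and initStateTTLConfig() are the names from the
// mail above; AvroTypeInfo requires the generated class to extend
// org.apache.avro.specific.SpecificRecordBase.
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.formats.avro.typeutils.AvroTypeInfo;

ValueStateDescriptor<MyPojoClass> myStateDescriptor =
    new ValueStateDescriptor<>(
        "test-name",
        new AvroTypeInfo<>(MyPojoClass.class));

myStateDescriptor.enableTimeToLive(initStateTTLConfig());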

Cheers,
Irakli




Re: Get access to unmatching events in Apache Flink Cep

2024-05-22 Thread Anton Sidorov
In his answer, Biao said that "currently there is no such API to access the middle
NFA state". Maybe that API exists in the plans? Or could I create an issue or a
pull request that adds such an API?
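
For context, the TimedOutPartialMatchHandler mentioned in the reply below only
surfaces partial matches discarded by a within() timeout. A minimal sketch of how
it is wired (Event and the side-output tag are hypothetical names; the CEP
interfaces are the real Flink API):

import java.util.List;
import java.util.Map;

import org.apache.flink.cep.functions.PatternProcessFunction;
import org.apache.flink.cep.functions.TimedOutPartialMatchHandler;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class MatchHandler extends PatternProcessFunction<Event, String>
        implements TimedOutPartialMatchHandler<Event> {

    // side output that receives partial matches hitting the within() timeout
    public static final OutputTag<Map<String, List<Event>>> TIMED_OUT =
            new OutputTag<Map<String, List<Event>>>("timed-out-partial-matches") {};

    @Override
    public void processMatch(Map<String, List<Event>> match, Context ctx,
                             Collector<String> out) {
        out.collect("matched: " + match);
    }

    @Override
    public void processTimedOutMatch(Map<String, List<Event>> match, Context ctx) {
        // invoked only for partial matches pruned by a within() timeout
        ctx.output(TIMED_OUT, match);
    }
}

The handler would be attached with CEP.pattern(stream, pattern).process(new
MatchHandler()) and the timed-out side read via
getSideOutput(MatchHandler.TIMED_OUT). A pattern without within(), like the one
quoted below, never produces such timeouts, which matches the point that other
non-matching events are pruned immediately.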

On Fri, 17 May 2024 at 12:04, Anton Sidorov wrote:

> Ok, thanks for the reply.
>
> On Fri, 17 May 2024 at 09:22, Biao Geng wrote:
>
>> Hi Anton,
>>
>> I am afraid that currently there is no such API to access the middle NFA
>> state in your case. For patterns that contain a 'within()' condition, the
>> timed-out events can be retrieved via the TimedOutPartialMatchHandler
>> interface, but other non-matching events are pruned immediately once
>> they are considered unnecessary to keep.
>>
>> Best,
>> Biao Geng
>>
>>
>> On Thu, May 16, 2024 at 16:12, Anton Sidorov wrote:
>>
>>> Hello!
>>>
>>> I have a Flink Job with CEP pattern.
>>>
>>> Pattern example:
>>>
>>> // Strict Contiguity
>>> // a b+ c d e
>>> Pattern.begin("a", AfterMatchSkipStrategy.skipPastLastEvent()).where(...)
>>> .next("b").where(...).oneOrMore()
>>> .next("c").where(...)
>>> .next("d").where(...)
>>> .next("e").where(...);
>>>
>>> My input stream contains events in the wrong order:
>>>
>>> a b d c e
>>>
>>> On the output I don't get any match, but I want to have access to the
>>> events that did not match.
>>>
>>> Can I get access to the middle NFA state of the CEP pattern, or is there
>>> some other way to view the non-matching events?
>>>
>>> There is an example project with the CEP pattern on GitHub, and my
>>> question on SO.
>>>
>>> Thanks in advance
>>>
>>
>
> --
> Best regards, Anton.
>


-- 
Best regards, Anton.