Hi Gordon
Any chance we could get this reviewed and released to the central repo?
We're currently forced to use a Flink version that has a nasty bug causing
an operational nightmare
Many thanks
Fil
On Sat, 19 Aug 2023 at 01:38, Galen Warren via user
wrote:
> Gotcha, makes sense as to the origi
job fails.
On Fri, 4 Aug 2023 at 05:03, Filip Karnicki
wrote:
> Hi, we recently went live with a job on a shared cluster, which is managed
> with Yarn
>
> The job was started using
>
> flink run -s hdfs://PATH_TO_A_CHECKPOINT_FROM_A_PREVIOUS_RUN_HERE
>
>
> Everything
Hi, we recently went live with a job on a shared cluster, which is managed
with Yarn
The job was started using
flink run -s hdfs://PATH_TO_A_CHECKPOINT_FROM_A_PREVIOUS_RUN_HERE
Everything worked fine for a few days, but then the job needed to be
restored for whatever reason
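A fuller version of that restore invocation, as a sketch: the execution target, checkpoint path, and jar name below are illustrative placeholders, not values from the thread.

```shell
# Resume from a retained checkpoint on a YARN cluster; -s accepts a
# retained-checkpoint directory the same way it accepts a savepoint path.
# Path and jar name are placeholders.
flink run -t yarn-per-job \
  -s hdfs:///flink/checkpoints/<job-id>/chk-42 \
  my-job.jar
```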
2023-08-03 16:34:44
This is great news, as we're using statefun as well.
Please don't hesitate to let me know if you need me to do some additional
testing on a real life prod-like setup.
On Sat, 24 Jun 2023 at 18:41, Galen Warren via user
wrote:
> Great -- thanks!
>
> I'm going to be out of town for about a week b
Hi Galen
I was thinking about the same thing recently and reached a point where I
see that async io does not have access to the keyed state because:
"* State related apis in
[[org.apache.flink.api.common.functions.RuntimeContext]] are not supported
* yet because the key may get changed while acc
not namespaced.
>
> Just curious, which specific state in FunctionGroupOperator are you trying
> to transform? I assume all other internal state in FunctionGroupOperator
> you want to remain untouched, and only wish to carry them over to be
> included in the transformed savepoint?
>
ger stop
> in a process function and walk up the call stack to find the proper
> components implementing it.
>
>
>
> If you share a little more of your code it is much easier to provide
> specific guidance 😊
>
> (e.g. ‘savepoint’ is never used again in your code snippet …)
>
Hi, I'm trying to load a list state using the State Processor API (Flink
1.14.3)
Cluster settings:
state.backend: rocksdb
state.backend.incremental: true
(...)
Code:
val env = ExecutionEnvironment.getExecutionEnvironment
val savepoint = Savepoint.load(env, pathToSavepoint, new
EmbeddedRocksDBS
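A minimal sketch of what loading and reading a list state with the State Processor API can look like on Flink 1.14, assuming the RocksDB backend from the cluster settings above; the operator uid, state name, and element type are illustrative:

```scala
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend
import org.apache.flink.state.api.Savepoint

val env = ExecutionEnvironment.getExecutionEnvironment
// The backend passed here should match the one the job ran with
// (RocksDB, per the state.backend setting above).
val savepoint =
  Savepoint.load(env, pathToSavepoint, new EmbeddedRocksDBStateBackend())
// Read a list state registered under this operator uid and state name
// (both hypothetical here).
val myListState =
  savepoint.readListState("my-operator-uid", "my-list-state", Types.STRING)
```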
On Thu, 30 Jun 2022 at 15:52, Weihua Hu wrote:
>>
>>> Hi, Filip
>>>
>>> You can modify the InfluxdbReporter code to override the
>>> notifyOfAddedMetric method and filter the required metrics for reporting.
>>>
>>> Best,
>>> Weihua
>
Hi All
We're using the influx reporter (flink 1.14.3), which seems to create a
series per:
- [task|job]manager
- host
- job_id
- job_name
- subtask_index
- task_attempt_id
- task_attempt_num
- task_id
- tm_id
which amounts to about 4k new series each time our job restarts
We are currently e
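The suggestion quoted above can be sketched as follows, assuming InfluxdbReporter is non-final and the whitelist is purely illustrative; only notifyOfAddedMetric is overridden, so any metric not forwarded to super never becomes an InfluxDB series:

```scala
import org.apache.flink.metrics.{Metric, MetricGroup}
import org.apache.flink.metrics.influxdb.InfluxdbReporter

// Sketch: forward only whitelisted metric names to the reporter; every
// other metric is dropped before it can create an InfluxDB series.
class FilteringInfluxdbReporter extends InfluxdbReporter {
  // Illustrative whitelist; replace with the metrics you actually need.
  private val allowed = Set("numRecordsIn", "numRecordsOut", "numRestarts")

  override def notifyOfAddedMetric(metric: Metric,
                                   metricName: String,
                                   group: MetricGroup): Unit =
    if (allowed.contains(metricName))
      super.notifyOfAddedMetric(metric, metricName, group)
}
```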
ors/flink-connector-kafka/src/main/java/org/apache/flink/connector/kafka/sink/KafkaWriter.java#L302-L321
> [4]
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/ops/debugging/application_profiling/
>
>
> On Sat, Mar 26, 2022 at 2:11 PM Filip Karnicki
> wrote:
>
>
Hi, I noticed that with each added (kafka) sink with exactly-once
guarantees, there appears to be a penalty of ~100ms in sync
checkpointing time.
Would anyone be able to explain and/or point me in the right direction in
the source code so that I could understand why that is? Specifically, w
u mentioned to be loaded as
> plugins? [1]
>
> I don't see any other ways to solve this problem.
> Probably Chesnay or Seth will suggest a better solution.
>
> [1]
>
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/plugins/
>
>
Hi Zhanghao
it's 3.5.5
Thank you
Fil
On Sat, 5 Mar 2022 at 08:12, Zhanghao Chen
wrote:
> Hi Filip,
>
> Could you share the version of the ZK server you are connecting to?
>
>
> Best,
> Zhanghao Chen
> --
> *From:* Filip Karnicki
>
Hi, I believe there's a mismatch in shaded zookeeper/curator dependencies.
I see that curator 4.2.0 needs zookeeper 3.5.4-beta, but it's still used in
flink-shaded-zookeeper-34, which as far as I can tell is used by flink
runtime 1.14.3
https://mvnrepository.com/artifact/org.apache.flink/flink-run
Hi All!
We're running a statefun uber jar on a shared cloudera flink cluster,
the latter of which launches with some ancient protobuf dependencies
because of reasons [1].
Setting the following flink-config settings on the entire cluster
classloader.parent-first-patterns.additional:
org.apache.fli
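For context, the Stateful Functions deployment docs recommend a parent-first pattern list along these lines when running the flink-distribution jar on an existing cluster; the exact value for this cluster is truncated above, so the line below is the documented recommendation, not the thread's actual setting:

```yaml
classloader.parent-first-patterns.additional: org.apache.flink.statefun;org.apache.kafka;com.google.protobuf
```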
and it seems that
> they've gone down a similar path [2].
>
> Cheers,
> Igal.
>
> [1] https://stackoverflow.com/a/35304873/4405470
> [2] https://lists.apache.org/thread/nxf7yk5ctcvndyygnvx7l34gldn0xgj3
>
>
> On Mon, Jan 24, 2022 at 7:08 PM Filip Karnicki
> wrote:
Hi All!
I was wondering if there's a way to secure a remote function by requiring
the client (flink) to use a client cert. Preferably a base64 encoded string
from the env properties, but that might be asking for a lot :)
I had a look at the code, and NettySharedResources seems to use
SslContextBu
Hi, we're currently using statefun 3.1.1 on a shared cloudera cluster,
which is going to be updated to 1.14.x
We think this update might break our jobs, since 3.1.1 is not explicitly
compatible with 1.14.x (
https://flink.apache.org/downloads.html#apache-flink-stateful-functions-311)
Is there any
Hi, is there a neat way to access kafka headers from within a remote
function without using the datastream api to insert the headers as part of
a RoutableMessage payload?
Many thanks
Fil
ete((response: Slice, exception: Throwable) =>
onComplete(exchange, response, exception))
})
}
def onComplete(exchange: HttpServerExchange, slice: Slice, throwable:
Throwable) = (... as per the example)
}
Many thanks again for your help, Igal
On Wed, 27 Oct 2021 at 13:59, Filip
> You've mentioned that you are using the data stream integration. Was
> there any particular reason you chose that? It has some limitations at the
> moment with respect to remote functions.
>
>
> Cheers,
> Igal
>
> On Wed 27. Oct 2021 at 08:49, Filip Karnicki
> w
Hi
I have a kafka topic with json messages that I map to protobufs within a
data stream, and then send those to embedded stateful functions using the
datastream integration api (DataStream[RoutableMessage]). From there I need
to make an idempotent long-running blocking IO call.
I noticed that I w
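Building the `DataStream[RoutableMessage]` described above can be sketched like this; the function type, the key accessor, and the upstream `protobufStream` are illustrative stand-ins for the JSON-to-protobuf mapping in the original message:

```scala
import org.apache.flink.statefun.flink.datastream.RoutableMessageBuilder
import org.apache.flink.statefun.sdk.FunctionType

// Hypothetical function type for the embedded function receiving the
// messages; namespace and name are placeholders.
val Greeter = new FunctionType("example", "greeter")

// protobufStream stands in for the stream of protobufs mapped from the
// kafka JSON topic; msg.getKey is an assumed id accessor.
val routable = protobufStream.map { msg =>
  RoutableMessageBuilder.builder()
    .withTargetAddress(Greeter, msg.getKey)
    .withMessageBody(msg)
    .build()
}
```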
Hi, I modified the Stateful Functions 2.2.0 async example to include a real
binding to kafka, I included statefun-flink-distribution and
statefun-kafka-io in the pom and I created a fat jar using the
maven-assembly-plugin,
and my flink cluster complains about:
java.lang.IllegalStateException: Unab