Hi,

I have set up such a configuration for a local environment (minikube); it can be found at [1] and [2]. It is somewhat older, but it might serve as inspiration. If you would like to write up your solution for the documentation, that would be awesome, and I'd be happy to review it. :)
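
For what it's worth, once the Flink cluster and the Beam job server are up, submitting a Python pipeline through the portable runner might look roughly like the sketch below. The endpoint addresses, Kafka brokers, and topic name are placeholders, so adjust them to whatever your minikube setup exposes:

    import apache_beam as beam
    from apache_beam.io.kafka import ReadFromKafka
    from apache_beam.options.pipeline_options import PipelineOptions

    # Placeholder endpoints: point these at your Beam job server (which fronts
    # the Flink cluster) and at an external SDK harness, e.g. a sidecar container.
    options = PipelineOptions([
        "--runner=PortableRunner",
        "--job_endpoint=localhost:8099",
        "--environment_type=EXTERNAL",
        "--environment_config=localhost:50000",
    ])

    with beam.Pipeline(options=options) as pipeline:
        (pipeline
         # ReadFromKafka is a cross-language transform, so an expansion service
         # must be reachable when the pipeline is constructed.
         | ReadFromKafka(
             consumer_config={"bootstrap.servers": "kafka:9092"},
             topics=["input-topic"])
         | beam.Map(print))

This is only a sketch of the submission side; the manifests in [1] and [2] are about getting the cluster itself running.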

Best,
 Jan

[1] https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam/blob/main/env/manifests/flink.yaml

[2] https://github.com/PacktPublishing/Building-Big-Data-Pipelines-with-Apache-Beam/blob/main/env/docker/flink/Dockerfile

On 2/23/24 00:48, Jaehyeon Kim wrote:
Hello,

I'm playing with the Beam portable runner to read/write data from Kafka. I see a Spark runner example on Kubernetes (https://beam.apache.org/documentation/runners/spark/#kubernetes), but the Flink runner section doesn't include such an example.

Is there a resource I can learn from? Ideally it would be good if the documentation were updated as well.

Cheers,
Jaehyeon
