Hi Kyle,

[document doesn't have comments enabled currently]

As noted, worker deployment is an open question. I believe pipeline
submission and worker execution need to be considered together for a
complete deployment story. The idea of creating a self-contained jar file
is interesting, but there are trade-offs:

* The pipeline construction code itself may need access to cluster
resources. In such cases the jar file cannot be created offline (a rough
sketch of this situation follows below the list).
* For k8s deployment, a container image with the SDK and application code
is required for the worker. The jar file (which is really a derived
artifact) would need to be built in addition to the container image.
* To build such a jar file, the user would need a build environment with
the job server and the application code. Do we want to make that assumption?
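
To make the first point concrete, here is a rough Python sketch. The
helper list_hdfs_partitions, the HDFS path, and the image/endpoint values
are made-up placeholders (not from either document); the point is only
that construction code may have to talk to the cluster while the pipeline
graph is being built:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def list_hdfs_partitions(base_path):
        # Placeholder: would query the cluster file system to find out
        # which partitions exist at submission time.
        raise NotImplementedError

    options = PipelineOptions([
        '--runner=PortableRunner',
        '--job_endpoint=localhost:8099',   # Flink job server
        '--environment_type=DOCKER',
        '--environment_config=my-registry/beam-python-worker:latest',
    ])

    with beam.Pipeline(options=options) as p:
        # The set of inputs is only known by asking the cluster, so this
        # construction step cannot run on an offline build machine.
        for path in list_hdfs_partitions('hdfs:///data/events'):
            p | ('Read ' + path) >> beam.io.ReadFromText(path)

Since the expansion happens at construction time, a jar built offline
would bake in whatever the build machine could see, if it could be built
at all.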

The document that I shared discusses options for pipeline submission. It
would be interesting to explore whether your proposal for building such a
jar can be integrated there. Do you have other comments?

Thomas



On Tue, Aug 6, 2019 at 5:03 PM Kyle Weaver <kcwea...@google.com> wrote:

> Hi all,
>
> Following up on discussion about portable Beam on Flink on Kubernetes [1],
> I have drafted a short document on how I propose we bundle portable Beam
> applications into jars that can be run on OSS runners, similar to Dataflow
> templates (but without the actual template part, at least for the first
> iteration). It's pretty straightforward, but I thought I would broadcast it
> here in case anyone is interested.
>
>
> https://docs.google.com/document/d/1kj_9JWxGWOmSGeZ5hbLVDXSTv-zBrx4kQRqOq85RYD4/edit#
>
> [1]
> https://lists.apache.org/thread.html/a12dd939c4af254694481796bc08b05bb1321cfaadd1a79cd3866584@%3Cdev.beam.apache.org%3E
>
> Kyle Weaver | Software Engineer | github.com/ibzib | kcwea...@google.com
>
