Docker is not necessary to expand the transform (indeed, by default it
should just pull the jar and invoke that directly to start the expansion
service), but it is used as the environment in which to execute the
expanded transform.
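
For the expansion side, you can start the service yourself and point the
connector at it. A minimal sketch (the jar version, port, topic, and broker
address below are placeholders); note that by default the runner will still
want Docker at execution time to run the expanded Java transform:

    # In one terminal, start the expansion service manually, e.g.:
    #   java -jar beam-sdks-java-io-expansion-service-2.46.0.jar 8097

    # Then point the connector at that service from Python:
    import apache_beam as beam
    from apache_beam.io.kafka import ReadFromKafka

    with beam.Pipeline() as p:
        _ = (
            p
            | ReadFromKafka(
                consumer_config={'bootstrap.servers': 'localhost:9092'},
                topics=['my_topic'],
                expansion_service='localhost:8097')
            | beam.Map(print))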

It would in theory be possible to run the worker without Docker as well.
This would involve manually starting up a worker in Java, manually starting
up an expansion service that points to this worker as its environment, and
then using that expansion service from Python. I've never done that myself,
so I don't know how easy it would be, but the "LOOPBACK" runner in Java
could give some insight into how this could be done. A rough sketch follows
below.
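
Untested, and only a guess: the option names are taken from Beam's
PortablePipelineOptions, and the worker-startup step is the part I'm least
sure about, but I'd expect the pieces to look roughly like this:

    # 1. Start a Java SDK worker manually and note the address it listens
    #    on (this is the step I haven't done myself).
    #
    # 2. Start the expansion service with a non-Docker default environment
    #    that points at that worker, e.g. (assuming the expansion service
    #    jar forwards extra arguments as pipeline options):
    #
    #      java -jar beam-sdks-java-io-expansion-service-2.46.0.jar 8097 \
    #        --defaultEnvironmentType=EXTERNAL \
    #        --defaultEnvironmentConfig=localhost:50000
    #
    # 3. Pass expansion_service='localhost:8097' to ReadFromKafka as in the
    #    snippet above, so that neither expansion nor execution needs Docker.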



On Tue, Apr 18, 2023 at 5:22 PM Juan Romero <[email protected]> wrote:

> Hi.
>
> I have an issue when I try to run a Kafka IO pipeline in Python on my
> local machine, because it is not possible to install Docker on that
> machine. It seems that Beam tries to use Docker to pull and start the
> Beam Java SDK in order to start the expansion service. I tried to start
> the expansion service manually and set the expansion service URL in the
> connector properties, but it still keeps asking for the Docker process.
> My question is whether we can run a pipeline with external transforms
> without installing Docker.
>
> Looking forward to it. Thanks!!
>
