Hi,

On Mon, Jun 1, 2020 at 5:47 AM Omid Bakhshandeh <omidbakhshan...@gmail.com> wrote:

> Hi,
>
> I'm very confused about the StateFun 2.0 new architecture.
>
> Is it possible to have both remote and embedded functions in the same
> deployment?

Yes, that is possible. Embedded functions simply run within the Flink StateFun workers (which are essentially Flink TMs), while remote functions run as standalone services, independent of the Flink StateFun cluster.

> Is there a tutorial that shows the deployment of the two types in the same
> Kubernetes cluster alongside Flink (possibly in Python and Java)?

You simply have to include all modules [1] (embedded and remote modules alike) when packaging your StateFun application image [2]. The Dockerfile in [2] demonstrates exactly that: it copies an embedded module (a JAR containing the service files and the Java functions / ingresses / egresses) and a remote module (a YAML file defining the endpoints of remote functions, plus ingresses / egresses) into a pre-defined modules directory in the image.

> Also, is there a path towards KNative support and a scale-to-zero option?

I've cc'ed Igal, who has been looking into KNative support. As for scale-to-zero, that would already work for your remote function deployments if you are deploying them with, for example, FaaS solutions like KNative / AWS Lambda / GCP Cloud Functions, etc. That is already supported with the new StateFun 2.0 architecture.

> Best,
> --
> -------------------------------------------
> Omid

Cheers,
Gordon

[1] https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.0/sdk/modules.html
[2] https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.0/deployment-and-operations/packaging.html
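P.S. As a rough starting point, a remote module definition and a Dockerfile along the lines described above could look like the sketch below. The function type (`example/greeter`), endpoint host, state name, and file paths are placeholders I've made up for illustration, not values taken from the docs; consult [1] and [2] for the authoritative layout.

```yaml
# module.yaml -- a remote module declaring one HTTP-invoked function.
# All names and endpoints here are illustrative placeholders.
version: "1.0"

module:
  meta:
    type: remote
  spec:
    functions:
      - function:
          meta:
            kind: http
            type: example/greeter            # <namespace>/<function name>
          spec:
            endpoint: http://python-functions:8000/statefun
            states:
              - seen_count                   # persisted state shipped with each invocation
            maxNumBatchRequests: 500
```

And the corresponding packaging step, copying both module types into the image's modules directory:

```dockerfile
# Illustrative Dockerfile packaging an embedded and a remote module together.
FROM flink-statefun:2.0.0

# Embedded module: a JAR with the Java functions and service files
COPY target/embedded-module.jar /opt/statefun/modules/embedded-module/

# Remote module: a YAML file defining remote function endpoints
COPY module.yaml /opt/statefun/modules/remote-module/
```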