Hi Fanjia,

Thanks for opening this discussion, and thanks to everyone who
contributed ideas; it seems we have found a way to split the connector
jars out of our fat job jar.

There are three things that need to be done:
1. For the Spark engine, we can use `--jars` to submit the connector
jars to the Spark cluster; this is already supported.
2. For the Flink engine, we can add our connector jars to the Flink
environment via `pipeline.jars`. This requires writing a client like
`CliFrontend`, since Flink doesn't support setting `pipeline.jars`
from the shell.
3. Change the current distribution layout so that the connectors are
packaged into our plugin directory instead of the core jar.
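
To make the steps concrete, here is a rough sketch. All paths, jar
names, and the main class below are hypothetical and only for
illustration; `--jars` itself is standard `spark-submit` syntax:

```
# Step 1 (Spark): ship connector jars alongside the job instead of
# bundling them into the fat jar.
spark-submit \
  --jars /opt/seatunnel/plugins/connector-jdbc.jar,/opt/seatunnel/plugins/connector-kafka.jar \
  --class org.apache.seatunnel.example.SparkStarter \
  seatunnel-core-spark.jar --config job.conf

# Step 2 (Flink): the client we write would set the equivalent of
#   pipeline.jars: file:///opt/seatunnel/plugins/connector-jdbc.jar
# programmatically before submitting, since `flink run` has no
# direct flag for it.

# Step 3: a possible plugin directory layout after repackaging,
# which would also allow multiple versions of one connector:
# plugins/
#   connector-jdbc/connector-jdbc-1.0.jar
#   connector-jdbc/connector-jdbc-2.0.jar
#   connector-kafka/connector-kafka-1.0.jar
```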

If I have misunderstood, please let me know.

Thanks,
Wenjun Ruan


On Wed, Apr 20, 2022 at 2:24 PM 范佳 <[email protected]> wrote:
>
> Hi all,
> Currently, all the connector jars in the SeaTunnel binary distribution
> are packaged into one jar file: the core jar.
> This makes it impossible for us to implement multi-version support
> for the same component.
> So, together with some other wonderful people, I have designed a new
> packaging and submission method.
>
> Check it out:
> https://github.com/apache/incubator-seatunnel/issues/1669 
>
> Hope to get your advice and ideas.
>
> Jia  Fan
>
