Hi Averell,

Hadoop itself supports S3AFileSystem directly. When you deploy a Flink job
on YARN, the Hadoop classpath is added to the JobManager/TaskManager
automatically. That means you can use the "s3a" scheme without putting
"flink-s3-fs-hadoop.jar" in the plugins directory.

In a K8s deployment, there is no Hadoop filesystem on the classpath by
default, so you need to add the plugin manually.
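A minimal sketch of how that could look when building a custom image, assuming the official flink base image, which ships the plugin jar under /opt/flink/opt (the image tag and jar version pattern are illustrative):

```dockerfile
FROM flink:1.10

# Flink loads filesystem plugins from their own subdirectory under
# plugins/; copy the bundled jar there so the "s3a" scheme resolves.
RUN mkdir -p /opt/flink/plugins/s3-fs-hadoop && \
    cp /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/s3-fs-hadoop/
```

With that in place, the JobManager/TaskManager pods pick up the plugin at startup; S3 credentials can then be supplied via flink-conf.yaml or environment variables.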


Best,
Yang

Averell <lvhu...@gmail.com> wrote on Mon, Apr 27, 2020 at 1:46 PM:

> Hi David, Yang,
>
> Thanks. But I just tried to submit the same job on a YARN cluster using
> that
> same uberjar, and it was successful. I don't have flink-s3-fs-hadoop.jar
> anywhere in the lib or plugin folder.
>
> Thanks and regards,
> Averell
>
>
>
>
