Hi Navneeth,

Did you follow the plugin folder structure? [1]
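
For reference, this is roughly how the plugins directory should look inside the
image (the jar name below is only a placeholder, use the jar matching your
Flink version):

  plugins/
    s3-fs-hadoop/
      flink-s3-fs-hadoop-<flink-version>.jar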

For checkpointing there is another plugin, flink-s3-fs-presto, that you can use.
If you want to use both plugins at the same time, use the s3a:// scheme for
flink-s3-fs-hadoop (output) and the s3p:// scheme for flink-s3-fs-presto
(checkpointing), so that each URI is routed to the right plugin.
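
As a rough, untested sketch (bucket names and paths are placeholders, and the
FsStateBackend/StreamingFileSink combination is just one way to wire this up):

  import org.apache.flink.api.common.serialization.SimpleStringEncoder;
  import org.apache.flink.core.fs.Path;
  import org.apache.flink.runtime.state.filesystem.FsStateBackend;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

  public class S3SchemesExample {
    public static void main(String[] args) throws Exception {
      StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

      // Checkpoints go to S3 through the presto plugin via the s3p:// scheme.
      env.enableCheckpointing(60_000);
      env.setStateBackend(new FsStateBackend("s3p://my-bucket/checkpoints"));

      // Job output goes to S3 through the hadoop plugin via the s3a:// scheme.
      StreamingFileSink<String> sink = StreamingFileSink
          .forRowFormat(new Path("s3a://my-bucket/output"),
              new SimpleStringEncoder<String>("UTF-8"))
          .build();

      env.fromElements("a", "b", "c").addSink(sink);
      env.execute("s3 plugins example");
    }
  }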

[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/plugins.html#isolation-and-plugin-structure

On Thu, Jan 30, 2020 at 10:26 AM Navneeth Krishnan <reachnavnee...@gmail.com>
wrote:

> Hi All,
>
> I'm trying to migrate from NFS to S3 for checkpointing and I'm facing a few
> issues. I have Flink running in Docker with the flink-s3-fs-hadoop jar copied
> to the plugins folder. Even after adding the jar I'm getting the following
> error: Caused by:
> org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is
> not in the classpath/dependencies. Am I missing something?
>
> In the documentation it says "Presto is the recommended file system for
> checkpointing to S3". How can I enable this? Is there a specific
> configuration that I need to do for this?
>
> Also, I couldn't figure out how the entropy injection works. Should I just
> create the bucket with a checkpoints folder so that Flink automatically
> injects the entropy and creates a per-job checkpoint folder, or do I need to
> create it myself?
>
> bucket/checkpoints/_entropy_/dashboard-job/
>
> s3.entropy.key: _entropy_
> s3.entropy.length: 4 (default)
>
> Thanks
>
