I've placed a flink-conf.yaml file in the conf dir, but
StreamExecutionEnvironment.getExecutionEnvironment doesn't pick it up. If
the keys are set programmatically, they are visible in the Flink Web UI;
they are just not passed to the Hadoop FS.
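
As a sketch of a possible workaround (the conf path below is a placeholder;
GlobalConfiguration is Flink's own loader for flink-conf.yaml), loading the
file explicitly should at least get the keys into a local environment:

  import org.apache.flink.configuration.GlobalConfiguration
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

  // Load flink-conf.yaml explicitly from a known directory instead of
  // relying on getExecutionEnvironment to discover it.
  val conf = GlobalConfiguration.loadConfiguration("/path/to/flink/conf")
  val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf)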

On 2021/10/18 03:04:04, Yangze Guo <k...@gmail.com> wrote:
> Hi, Pavel.
>
> From my understanding of the doc[1], you need to set it in
> flink-conf.yaml instead of your job.
>
> [1] https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/filesystems/s3/#hadooppresto-s3-file-systems-plugins
>
> Best,
> Yangze Guo
>
> On Sat, Oct 16, 2021 at 5:46 AM Pavel Penkov <eb...@gmail.com> wrote:
> >
> > Apparently Flink 1.14.0 doesn't correctly translate S3 options when
> > they are set programmatically. I'm creating a local environment like
> > this to connect to a local MinIO instance:
> >
> >   val flinkConf = new Configuration()
> >   flinkConf.setString("s3.endpoint", "http://127.0.0.1:9000")
> >   flinkConf.setString("s3.aws.credentials.provider",
> >     "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
> >
> >   val env =
> >     StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(flinkConf)
> >
> > Then StreamingFileSink fails with a huge stack trace, with the most
> > relevant messages being:
> >
> >   Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException:
> >   No AWS Credentials provided by SimpleAWSCredentialsProvider
> >   EnvironmentVariableCredentialsProvider
> >   InstanceProfileCredentialsProvider :
> >   com.amazonaws.SdkClientException: Failed to connect to service
> >   endpoint:
> >
> > which means that Hadoop tried to enumerate all of the credential
> > providers instead of using the one set in the configuration. What am
> > I doing wrong?
>
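
For reference, a minimal flink-conf.yaml sketch along the lines the doc
describes, mirroring the programmatic settings above (s3.path.style.access
is an addition here, since MinIO setups typically need path-style access):

  s3.endpoint: http://127.0.0.1:9000
  s3.path.style.access: true
  s3.aws.credentials.provider: org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider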
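
The provider list in the exception matches Hadoop S3A's default credential
chain, which suggests fs.s3a.aws.credentials.provider never reached the
filesystem. A hypothetical illustration of the key mapping, using a plain
Hadoop Configuration rather than the path a Flink job actually takes:

  import org.apache.hadoop.conf.Configuration

  // The Flink key "s3.aws.credentials.provider" is forwarded by the
  // flink-s3-fs-hadoop plugin as "fs.s3a.aws.credentials.provider"; if
  // nothing is set there, S3A falls back to its default provider chain.
  val hadoopConf = new Configuration()
  hadoopConf.set("fs.s3a.endpoint", "http://127.0.0.1:9000")
  hadoopConf.set("fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")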
