Finally, I worked out how to build a custom Flink image. The Dockerfile is
just:
>
> FROM flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins
> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins
>
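
To sanity-check an image like this before pushing it, something along these lines should work (my-flink-s3 is just a placeholder tag, and the run step assumes your local CPU architecture matches the image; the second command lists where Flink looks for plugins):

> docker build -t my-flink-s3 .
> docker run --rm --entrypoint ls my-flink-s3 -lR /opt/flink/plugins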

The wrong Dockerfile is:

> FROM apache/flink:1.13.1-scala_2.11
> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins
> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins

It uses the wrong base image.
I don't know why apache/flink:1.13.1-scala_2.11 behaves so differently from
flink:1.13.1-scala_2.11 (as far as I can tell, flink is the Docker Official
Image, while apache/flink is published by the Flink project itself). My guess
is that "exec format error" means the image was built for a different CPU
architecture than the host it runs on, so the two images may simply be
published for different platforms. Hope you are all doing well.
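
One way to check that guess (a sketch; docker image inspect prints the platform the local copy of an image was built for):

> docker image inspect flink:1.13.1-scala_2.11 --format '{{.Os}}/{{.Architecture}}'
> docker image inspect apache/flink:1.13.1-scala_2.11 --format '{{.Os}}/{{.Architecture}}'

If the two commands print different values, e.g. linux/amd64 vs linux/arm64, that would explain the "exec format error".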

On Thu, Aug 5, 2021 at 11:42 AM Joshua Fan <joshuafat...@gmail.com> wrote:

> It seems I set a wrong high-availability.storageDir: s3://flink-test/recovery
> works, but s3:///flink-test/recovery does not; one / had to be removed. With
> the extra slash the URI host (the bucket name) is empty, which matches the
> "null uri host" NPE below.
>
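> For the record, the working line in flink-conf.yaml is simply:
>
>> high-availability.storageDir: s3://flink-test/recovery
>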
> On Thu, Aug 5, 2021 at 10:43 AM Joshua Fan <joshuafat...@gmail.com> wrote:
>
>> Hi Robert, Tobias
>>
>> I have tried many ways to build and validate the image.
>>
>> 1. Put the s3 dependencies into their own plugin subdirectories; the
>> Dockerfile content is below:
>>
>>> FROM apache/flink:1.13.1-scala_2.11
>>> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins/s3-hadoop/flink-s3-fs-hadoop-1.13.1.jar
>>> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins/s3-presto/flink-s3-fs-presto-1.13.1.jar
>>>
>> This time the image can run on k8s, but it still hits an error like
>> "org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not
>> find a file system implementation for scheme 's3'."; it seems Flink cannot
>> find the s3 filesystem support dynamically. When I try to run the image
>> with 'docker run -it', it also reports
>> 'standard_init_linux.go:190: exec user process caused "exec format error"'.
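>>
>> One thing worth verifying inside the running pod (the pod name below is a
>> placeholder) is that the plugin jars really sit in their own subfolders,
>> since the plugins docs expect one subdirectory per plugin under
>> /opt/flink/plugins:
>>
>>> kubectl exec -it my-session-jobmanager -- find /opt/flink/plugins -name "*.jar"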
>>
>> 2. Put the s3 dependencies directly into the plugins directory; the
>> Dockerfile content is below:
>>
>>> FROM apache/flink:1.13.1-scala_2.11
>>> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/plugins
>>> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/plugins
>>>
>> This image cannot run on k8s, and it reports the same error as running it
>> with 'docker run -it': 'standard_init_linux.go:190: exec user process
>> caused "exec format error"'.
>>
>> 3. Just running the community image flink:1.13.1-scala_2.11 locally with
>> 'docker run -it' also hits the same error,
>> 'standard_init_linux.go:190: exec user process caused "exec format error"',
>> but flink:1.13.1-scala_2.11 can run on k8s without the s3 requirement.
>>
>> 4. Import the s3 dependency via a Kubernetes parameter.
>> I submit the session with
>> '-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.13.0.jar \
>> -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.13.0.jar'.
>> The session can be started, but it reports the error below:
>>
>>> Caused by: java.lang.NullPointerException: null uri host.
>>>         at java.util.Objects.requireNonNull(Objects.java:228)
>>>         at
>>> org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:72)
>>>         at
>>> org.apache.hadoop.fs.s3a.S3AFileSystem.setUri(S3AFileSystem.java:467)
>>>         at
>>> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:234)
>>>         at
>>> org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.create(AbstractS3FileSystemFactory.java:123)
>>>
>> But I have set the s3 stuff in flink-conf.yaml as below:
>>
>>> high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
>>> high-availability.storageDir: s3:///flink-test/recovery
>>> s3.endpoint: http://xxx.yyy.zzz.net
>>> s3.path.style.access: true
>>> s3.access-key: 111111111111111111111111111
>>> s3.secret-key: 222222222222222222222222222
>>
>> I think I supplied all the s3 information in the flink-conf.yaml, but it
>> did not work.
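>>
>> (With hindsight from the rest of this thread, two things to double-check
>> here: the plugin name above is flink-s3-fs-hadoop-1.13.0.jar while the
>> image is 1.13.1, and ENABLE_BUILT_IN_PLUGINS expects the jar version that
>> ships with the image; and the storageDir has an extra slash. A corrected
>> submission might look like the sketch below, though I have not re-run it
>> with exactly these values.)
>>
>>> -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.13.1.jar \
>>> -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.13.1.jar \
>>> -Dhigh-availability.storageDir=s3://flink-test/recovery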
>>
>> I will try other ways to get s3 HA working on k8s. Thank you, guys.
>>
>> Yours sincerely
>> Joshua
>>
>> On Wed, Aug 4, 2021 at 11:35 PM Robert Metzger <rmetz...@apache.org> wrote:
>>
>>> Hey Joshua,
>>>
>>> Can you first validate if the docker image you've built is valid by
>>> running it locally on your machine?
>>>
>>> I would recommend putting the s3 filesystem files into the plugins [1]
>>> directory to avoid classloading issues.
>>> Also, you don't need to build custom images if you want to use built-in
>>> plugins [2].
>>>
>>> [1]
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/filesystems/plugins/
>>> [2]
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/#using-plugins
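>>>
>>> As a minimal sketch of the layout [1] expects (folder names are arbitrary,
>>> but each plugin needs its own subfolder):
>>>
>>>> /opt/flink/plugins/
>>>>   s3-fs-hadoop/
>>>>     flink-s3-fs-hadoop-1.13.1.jar
>>>>   s3-fs-presto/
>>>>     flink-s3-fs-presto-1.13.1.jar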
>>>
>>> On Wed, Aug 4, 2021 at 3:06 PM Joshua Fan <joshuafat...@gmail.com>
>>> wrote:
>>>
>>>> Hi All,
>>>> I want to build a custom Flink image to run on k8s; below is my
>>>> Dockerfile content:
>>>>
>>>>> FROM apache/flink:1.13.1-scala_2.11
>>>>> ADD ./flink-s3-fs-hadoop-1.13.1.jar /opt/flink/lib
>>>>> ADD ./flink-s3-fs-presto-1.13.1.jar /opt/flink/lib
>>>>>
>>>> I just put the s3 fs dependencies into {flink home}/lib, then built the
>>>> image and pushed it to the repo.
>>>>
>>>> When I submit the Flink session from the custom image, an error is
>>>> reported: "exec /docker-entrypoint.sh failed: Exec format error".
>>>>
>>>> I googled a lot, but found no useful information.
>>>>
>>>> Thanks for your help.
>>>>
>>>> Yours sincerely
>>>> Joshua
>>>>
>>>
