Hi Averell,

I think David's answer is right. The user uber jar is loaded lazily by the
user classloader, so its classes cannot be seen by Flink's system
classloader. You need to put the jar directly in the /opt/flink/lib
directory, or load it via the plugin mechanism.
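A minimal sketch of the plugin approach as a custom Docker image, layered on
the official Flink image (the Flink version and jar file name below are
examples and should match your distribution):

```dockerfile
FROM flink:1.10.0-scala_2.12

# The filesystem connector ships in $FLINK_HOME/opt/; copying it into a
# subdirectory of plugins/ makes Flink load it in an isolated plugin
# classloader at startup, instead of from the user uber jar.
RUN mkdir -p /opt/flink/plugins/s3-fs-hadoop \
 && cp /opt/flink/opt/flink-s3-fs-hadoop-1.10.0.jar \
       /opt/flink/plugins/s3-fs-hadoop/
```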

Best,
Yang

David Magalhães <speeddra...@gmail.com> 于2020年4月25日周六 上午12:05写道:

> I think the classloaders for the uber jar and for Flink itself are different.
> Not sure if this is the right explanation, but that is why you need to
> add flink-s3-fs-hadoop inside the plugin folder in the cluster.
>
> On Fri, Apr 24, 2020 at 4:07 PM Averell <lvhu...@gmail.com> wrote:
>
>> Thank you Yun Tang.
>> Building my own docker image as suggested solved my problem.
>>
>> However, I don't understand why I need that when I already have the
>> s3-hadoop jar included in my uber jar?
>>
>> Thanks.
>> Regards,
>> Averell
>>
>>
>>
>> --
>> Sent from:
>> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>>
>
