Hello,

I'm not sure whether this is a Hadoop or a Flink-specific question, but since I
ran into it in the context of Flink, I'm asking here. I'd be glad if anyone can
suggest a more appropriate place.

I have a native library that I need to use in my Flink batch job, which I run
on EMR, and I'm trying to point the JVM to the location of that native library.
Normally I'd do this with the java.library.path parameter, so I try to run as
follows:
`
HADOOP_CONF_DIR=/etc/hadoop/conf \
JVM_ARGS=-Djava.library.path=<native_lib_dir> \
flink-1.0.0/bin/flink run -m yarn-cluster -yn 1 -yjm 768 -ytm 768 <my.jar>
`
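
The job's use of the library is nothing special; stripped down, the loading
part looks roughly like the sketch below ("mynativelib" stands in for the real
library name, and the System.getProperty line is only there to check what the
TaskManager JVM actually sees):
`
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class NativeLibCheck {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<String> result = env.fromElements(1)
                .map(new MapFunction<Integer, String>() {
                    @Override
                    public String map(Integer ignored) {
                        // What the TaskManager JVM actually has configured.
                        String libPath = System.getProperty("java.library.path");
                        // This is where the UnsatisfiedLinkError shows up
                        // ("mynativelib" is a placeholder for the real name).
                        System.loadLibrary("mynativelib");
                        return "java.library.path = " + libPath;
                    }
                });

        result.print();
    }
}
`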
Running it this way does not work: it fails with `java.lang.UnsatisfiedLinkError`
when trying to load the native library. It probably has to do with YARN not
passing this parameter on to the task nodes, but my understanding of this
mechanism is quite limited so far.
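
If that is what's happening, maybe JVM_ARGS only affects the client-side JVM,
and the option has to go into flink-conf.yaml instead? The configuration docs
mention an env.java.opts setting, so I'm guessing at something like the
following, though I haven't verified that it reaches the YARN containers:
`
# flink-conf.yaml -- just a guess on my part, <native_lib_dir> is a placeholder
env.java.opts: -Djava.library.path=<native_lib_dir>
`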

I dug up this Jira ticket:
https://issues.apache.org/jira/browse/MAPREDUCE-3693, but setting
LD_LIBRARY_PATH in mapreduce.admin.user.env did not solve the problem
either.
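
For reference, what I set based on that ticket was roughly the following in
mapred-site.xml (<native_lib_dir> is again a placeholder):
`
<!-- mapred-site.xml, per MAPREDUCE-3693 -->
<property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=<native_lib_dir></value>
</property>
`
I suspect this setting only applies to MapReduce tasks and not to arbitrary
YARN containers such as Flink's, which would explain why it has no effect, but
I may be wrong.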

Any help or hints on where to look are highly appreciated.

Thanks,
Timur