Hello,

I'm trying to run a PyFlink job in cluster mode (on YARN). The job defines
its source and sink with the Table API, and the table is converted to a
DataStream and back. Unfortunately I'm getting an unusual exception at:

    table = t_env.from_data_stream(ds, 'user_id, first_name, last_name')

The exception is:

Traceback (most recent call last):
  File "users_job.py", line 40, in <module>
    table = t_env.from_data_stream(ds, 'user_id, first_name, last_name')
  File "/jobs/venv/lib/python3.7/site-packages/pyflink/table/table_environment.py", line 1734, in from_data_stream
    JPythonConfigUtil.declareManagedMemory(
  File "/jobs/venv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1516, in __getattr__
    "{0}.{1} does not exist in the JVM".format(self._fqn, name))
py4j.protocol.Py4JError: org.apache.flink.python.util.PythonConfigUtil.declareManagedMemory does not exist in the JVM

Python version: 3.7 (venv built by the setup-python-environment.sh script
from the documentation)
Flink version: 1.12.3

Any help would be appreciated.

Kind regards,
Kamil
