When we submit a job that uses Hive UDFs, the job depends on the UDFs' jars and configuration files.
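To make the dependency concrete, here is a hedged sketch of what we would like to happen. Everything in it is a plain-Java stand-in: `UDF_RESOURCES`, `collectUdfResourcePaths`, and the HDFS paths are all hypothetical, not actual Flink or Hive APIs. The idea is that, since the metastore already records each UDF's resource URIs, the client would only need to hand the distinct HDFS paths to the job and let the cluster fetch the files at runtime:

```java
import java.util.*;

// Illustrative only: models a metastore's view of "UDF name -> HDFS URIs of
// its jars/config files". None of these names are real Flink/Hive APIs.
public class UdfResourceSketch {

    static final Map<String, List<String>> UDF_RESOURCES = Map.of(
            "my_upper", List.of("hdfs:///udf/my_upper.jar", "hdfs:///udf/common-utils.jar"),
            "my_lower", List.of("hdfs:///udf/my_lower.jar", "hdfs:///udf/common-utils.jar"));

    // Collect the distinct resource paths the job actually needs, so the
    // client can submit paths instead of shipping the files themselves.
    static Set<String> collectUdfResourcePaths(Collection<String> udfsUsedByJob) {
        Set<String> paths = new LinkedHashSet<>();
        for (String udf : udfsUsedByJob) {
            paths.addAll(UDF_RESOURCES.getOrDefault(udf, List.of()));
        }
        return paths;
    }

    public static void main(String[] args) {
        // Shared jars are deduplicated, so each path is shipped (or fetched) once.
        System.out.println(collectUdfResourcePaths(List.of("my_upper", "my_lower")));
    }
}
```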
We already store the UDFs' jars and configuration files in the Hive metastore, so we expected that Flink could obtain those files' HDFS paths through the hive-connector and fetch the files from HDFS at runtime. In this code, it seems the UDF resource paths are already available in FunctionInfo, but they are not used: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/module/hive/HiveModule.java#L80

Currently we ship the UDFs' jars and configuration files to YARN together with the job via the client, and we are trying to find a way to avoid submitting the UDF resources every time we submit a job. Is that possible?

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/