Hi Till,
we are not using HBase at the moment. We managed to run the job
successfully, but it was a pain to find the right combination of
dependencies, library shading, and HADOOP_CLASSPATH.
The problem was the combination of parquet, jaxrs, hadoop and jackson.
Moreover we had to run the cl
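For what it's worth, the usual way to defuse such jackson clashes is to relocate the conflicting packages with the maven-shade-plugin. A minimal sketch follows; the shaded prefix `myorg.shaded` is illustrative, not from this thread:

```xml
<!-- In the job's pom.xml: relocate jackson so the job's copy cannot
     clash with the version that hadoop/parquet put on the classpath.
     The shadedPattern prefix is an assumption; pick your own. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.fasterxml.jackson</pattern>
            <shadedPattern>myorg.shaded.com.fasterxml.jackson</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```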
Hi Flavio,
I haven't seen this problem before. Are you using Flink's HBase connector?
Judging by similar problems with Spark, one needs to make sure that the
HBase jars are on the classpath [1, 2]. If not, then it might be a problem
with the MR1 version 2.6.0-mr1-cdh5.11.2 which caused problems f
I forgot to mention that I'm using Flink 1.6.2 compiled for Cloudera CDH
5.11.2:
/opt/shared/devel/apache-maven-3.3.9/bin/mvn clean install
-Dhadoop.version=2.6.0-cdh5.11.2 -Dhbase.version=1.2.0-cdh5.11.2
-Dhadoop.core.version=2.6.0-mr1-cdh5.11.2 -DskipTests -Pvendor-repos
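On CDH clusters the vendor Hadoop jars are typically exposed to Flink through HADOOP_CLASSPATH. A minimal sketch of the environment setup, assuming the `hadoop` CLI is on the PATH (exact jar locations vary per installation):

```shell
# Put the cluster's Hadoop (and, if used, HBase) jars on Flink's classpath.
# `hadoop classpath` prints the vendor jar locations for this install.
export HADOOP_CLASSPATH=$(hadoop classpath)
```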
On Wed, Nov 7, 2018 at
Hi to all,
we tried to upgrade our jobs to Flink 1.6.2 but now we get the following
error (we saw a similar issue with Spark that was caused by different Java
versions on the cluster servers, so we checked them and they are all at the
same version, oracle-8-191):
Caused by: org.apache.flink.runtime
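As an aside, the per-host Java check mentioned above can be scripted. A sketch with simulated `java -version` output; in real use you would gather one line per host over ssh (hostnames and the sample output below are assumptions):

```shell
# Simulated first line of `java -version` from three cluster hosts.
# Real use: for h in node1 node2 node3; do ssh "$h" 'java -version 2>&1 | head -n1'; done
versions='java version "1.8.0_191"
java version "1.8.0_191"
java version "1.8.0_191"'

# Count distinct version strings; exactly one means the cluster is consistent.
unique=$(printf '%s\n' "$versions" | sort -u | wc -l)
if [ "$unique" -eq 1 ]; then
  echo "all hosts match"
else
  echo "version mismatch detected"
fi
```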