Hi all, we are having problems using a custom Hadoop lib in a Spark image
when running it on a Kubernetes cluster, even though we followed the steps in the documentation. Details are in the description below. Has anyone else had similar problems? Is something missing in the setup below, or is this a bug?

Hadoop-free Spark on Kubernetes: using custom Hadoop libraries in a Spark image does not work when following the steps of the documentation (*) for running SparkPi on a Kubernetes cluster.

(*) Usage of the Hadoop-free build: https://spark.apache.org/docs/2.4.0/hadoop-provided.html

Steps:

1. Download the Hadoop-free Spark distribution spark-2.4.0-bin-without-hadoop.tgz (https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-without-hadoop.tgz).
2. Build a Spark image without Hadoop from it using docker-image-tool.sh.
3. Create a Dockerfile that adds an image layer on top of the Hadoop-free Spark image containing a custom Hadoop, version 2.9.2 (see the Dockerfile and conf/spark-env.sh in the gist).
4. Use the custom-Hadoop Spark image to run the Spark examples (see the k8s submit command in the gist).
5. This produces a JNI error (see the error message in the gist); the expected result is the computation of pi.

All files are in this gist: https://gist.github.com/HectorOvid/c0bdad1b9dc8f64540b5b34e73f2a4a1

Regards,

Tobias Sommer
Software Engineer, e.solutions GmbH
tobias.som...@esolutions.de
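For readers without access to the gist, the layering in step 3 can be sketched roughly as follows. This is an assumption about the general shape, not the actual gist contents: the base image tag, the Hadoop archive path, and the install locations are all placeholders.

```dockerfile
# Hypothetical layer on top of the Hadoop-free Spark 2.4.0 image
# built in step 2 with docker-image-tool.sh (tag name is assumed).
FROM spark:2.4.0-without-hadoop

# Unpack a custom Hadoop 2.9.2 distribution into the image
# (archive name and target path are assumptions).
ADD hadoop-2.9.2.tar.gz /opt/
ENV HADOOP_HOME=/opt/hadoop-2.9.2

# Per the hadoop-provided docs, spark-env.sh must export
# SPARK_DIST_CLASSPATH so Spark can find the Hadoop jars, e.g.:
#   export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.9.2/bin/hadoop classpath)
COPY conf/spark-env.sh /opt/spark/conf/
```

The key requirement from the hadoop-provided documentation is that SPARK_DIST_CLASSPATH ends up containing the Hadoop package jars at runtime; if spark-env.sh is not sourced inside the container, the JVM cannot find the Hadoop classes, which would match the JNI error seen in step 5.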