Hi everyone,

I'm trying to use the logstash-logback-encoder
<https://github.com/logstash/logstash-logback-encoder> in my spark jobs but
I'm having some problems with the Spark classloader. The
logstash-logback-encoder uses a special version of the slf4j BasicMarker
<https://github.com/qos-ch/slf4j/blob/master/slf4j-api/src/main/java/org/slf4j/helpers/BasicMarker.java>,
called LogstashBasicMarker
<https://github.com/logstash/logstash-logback-encoder/blob/master/src/main/java/org/slf4j/helpers/LogstashBasicMarker.java>,
that exposes the slf4j marker's package-private constructor as public. This
works well inside a plain Java application but not inside a Spark job: when
these markers are created inside a Spark job, a security exception is thrown
because Spark loads its own version of slf4j instead of the one from the
job's assembly. One solution could be to set
spark.driver.userClassPathFirst=true, but this option is experimental and
affects all loaded libraries, not just slf4j. I would like to force Spark to
load only the slf4j library from the assembly of my job. Is this possible
and, if it is, is it safe?
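For reference, this is the kind of invocation I've been experimenting with (a sketch only; the class name com.example.MyJob and the assembly jar name are placeholders, and the executor-side property is my assumption based on the Spark configuration docs):

```shell
# Experimental: prefer classes from the job's assembly over Spark's own copies.
# Note this affects EVERY library in the assembly, not just slf4j.
spark-submit \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.MyJob \
  my-job-assembly.jar
```
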

Another thing: does anybody know why this happens? I understand the problem
comes from where slf4j is loaded, but I'm not sure why the Marker
constructor is not visible in a Spark job.

You can find details about the problem at
https://github.com/logstash/logstash-logback-encoder/issues/104.
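For what it's worth, here is the kind of snippet I've been using inside the job to check which classloader and jar a class actually comes from (the class name WhereLoaded is mine, and java.lang.String is just a stand-in; inside the job you'd pass "org.slf4j.Marker" or "org.slf4j.helpers.LogstashBasicMarker"):

```java
import java.security.CodeSource;

public class WhereLoaded {

    // Describes which classloader defined a class and which jar it came from.
    // A null classloader means the JVM's bootstrap loader.
    static String describe(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        ClassLoader loader = c.getClassLoader();
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return (loader == null ? "bootstrap" : loader.toString())
                + " / "
                + (src == null ? "no code source" : src.getLocation().toString());
    }

    public static void main(String[] args) throws Exception {
        // java.lang.String is only a stand-in here; in the Spark job, query
        // "org.slf4j.Marker" to see whether Spark's copy or the assembly's wins.
        System.out.println(describe("java.lang.String"));
    }
}
```

Running this for both org.slf4j.Marker and the LogstashBasicMarker should show whether the two classes end up in different classloaders, which I suspect is the root of the visibility problem.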

Thanks,
Mario
