It appears to me that the org.apache.livy.shaded.json4s.DefaultFormats class
in the shaded jar for Livy 0.7.1 is incompatible with Java 11.
You could try upgrading the json4s dependency within the shaded jar.
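If it helps, you can confirm what the JVM is complaining about by inspecting the
constant pool of the shaded class yourself (the error names entry index 156 as
CONSTANT_MethodRef where an InterfaceMethodRef is expected). A rough sketch in
Python below just walks the constant-pool layout from the class-file spec; you'd
extract DefaultFormats.class from whichever Livy jar on your cluster bundles the
shaded json4s classes (the exact jar and path depend on your EMR install):

```python
import struct

# Constant-pool tag numbers from the JVM class-file spec;
# 10 = Methodref, 11 = InterfaceMethodref.
CONSTANT_NAMES = {
    1: "Utf8", 3: "Integer", 4: "Float", 5: "Long", 6: "Double",
    7: "Class", 8: "String", 9: "Fieldref", 10: "Methodref",
    11: "InterfaceMethodref", 12: "NameAndType", 15: "MethodHandle",
    16: "MethodType", 17: "Dynamic", 18: "InvokeDynamic",
    19: "Module", 20: "Package",
}

def constant_pool_tags(data: bytes) -> dict:
    """Return {index: constant kind} for the constant pool of a .class file."""
    assert data[:4] == b"\xca\xfe\xba\xbe", "not a class file"
    count = struct.unpack(">H", data[8:10])[0]  # constant_pool_count
    pos, index, tags = 10, 1, {}
    while index < count:
        tag = data[pos]
        tags[index] = CONSTANT_NAMES.get(tag, "tag %d" % tag)
        if tag == 1:                      # Utf8: u2 length + bytes
            length = struct.unpack(">H", data[pos + 1:pos + 3])[0]
            pos += 3 + length
        elif tag in (7, 8, 16, 19, 20):   # single u2 payload
            pos += 3
        elif tag in (5, 6):               # Long/Double take two pool slots
            pos += 9
            index += 1
        elif tag == 15:                   # MethodHandle: u1 + u2
            pos += 4
        else:                             # the remaining kinds are u2 + u2
            pos += 5
        index += 1
    return tags
```

Then `constant_pool_tags(open("DefaultFormats.class", "rb").read())[156]` should
tell you whether the entry really is a Methodref. `javap -v` on the shaded class
would show the same thing; this is just a self-contained alternative.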

Others may have more thoughts?

On Thu, Dec 8, 2022 at 3:14 PM Xiaowei Wang <xiaowei...@gmail.com> wrote:

> To whom it may concern,
>
> We are building a Spark connector for our own database. For technical
> reasons, the connector jar is only compatible with Java 11 and above. We
> plan to deploy the connector jar on Amazon EMR and use it from a Jupyter
> notebook. Because of this requirement, we must set up an EMR cluster
> running the Spark + YARN + Livy + Hadoop services on *Java 11*.
>
> However, we can't even open an idle Spark session in the Jupyter notebook
> through Livy. Below is the exception we got from YARN and Livy:
>
>> YARN Diagnostics:
>> Application application_1670453222303_0001 failed 1 times (global limit =2; 
>> local limit is =1) due to AM Container for 
>> appattempt_1670453222303_0001_000001 exited with  exitCode: 13
>> Failing this attempt.Diagnostics: [2022-12-07 22:56:20.869]Exception from 
>> container-launch.
>> Container id: container_1670453222303_0001_01_000001
>> Exit code: 13
>>
>> [2022-12-07 22:56:20.897]Container exited with a non-zero exit code 13. 
>> Error file: prelaunch.err.
>> Last 4096 bytes of prelaunch.err :
>> Last 4096 bytes of stderr :
>> veMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>      at 
>> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>      at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:742)
>> )
>> 22/12/07 22:56:20 ERROR ApplicationMaster: Uncaught exception:
>> org.apache.spark.SparkException: Exception thrown in awaitResult:
>>      at 
>> org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301) 
>> ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:514)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:278)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:929)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:928)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
>>      at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]
>>      at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
>>  ~[hadoop-client-api-3.3.3-amzn-0.jar:?]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:928)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala) 
>> ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
>> Caused by: java.util.concurrent.ExecutionException: Boxed Error
>>      at scala.concurrent.impl.Promise$.resolver(Promise.scala:87) 
>> ~[scala-library-2.12.15.jar:?]
>>      at 
>> scala.concurrent.impl.Promise$.scala$concurrent$impl$Promise$$resolveTry(Promise.scala:79)
>>  ~[scala-library-2.12.15.jar:?]
>>      at 
>> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284) 
>> ~[scala-library-2.12.15.jar:?]
>>      at scala.concurrent.Promise.tryFailure(Promise.scala:112) 
>> ~[scala-library-2.12.15.jar:?]
>>      at scala.concurrent.Promise.tryFailure$(Promise.scala:112) 
>> ~[scala-library-2.12.15.jar:?]
>>      at 
>> scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:187) 
>> ~[scala-library-2.12.15.jar:?]
>>      at 
>> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:761)
>>  ~[spark-yarn_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]*Caused by: 
>> java.lang.IncompatibleClassChangeError: Inconsistent constant pool data in 
>> classfile for class* org/apache/livy/shaded/json4s/DefaultFormats. Method 
>> 'java.text.SimpleDateFormat 
>> $anonfun$df$1(org.apache.livy.shaded.json4s.DefaultFormats)' at index 156 is 
>> CONSTANT_MethodRef and should be CONSTANT_InterfaceMethodRef
>>      at 
>> org.apache.livy.shaded.json4s.DefaultFormats.$init$(Formats.scala:318) ~[?:?]
>>      at 
>> org.apache.livy.shaded.json4s.DefaultFormats$.<init>(Formats.scala:296) 
>> ~[?:?].
>>
>>
> Our EMR cluster is configured with 1 master and 3 workers, so there should
> always be enough resources. The cluster is on *emr-6.9.0*, with Spark
> 3.3.0, JupyterHub 1.4.1, and Livy 0.7.1 installed.
>
> We've been blocked by this issue for a while, and our investigation leads
> us to believe it is a compatibility problem between Livy and Java 11. Also,
> we only see the exception for interactive sessions; for batch sessions,
> Livy works as expected with Java 11. Any feedback here would be super
> helpful to us. Thank you in advance for all your help!
>
> Regards,
> Xiaowei
>