Hello,

Spark ships with json4s 3.2.x, while Ignite pulls in the newest version. This
seems to cause errors with some Spark SQL commands that call json4s
methods that no longer exist in the newer release.
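To confirm which json4s jar actually wins on the classpath, one trick is to ask the classloader where a class is served from. This sketch has no compile-time dependency on json4s (the class name is passed as a string), so it can run on the driver as-is; the json4s class name in the comment is the one from the stack trace below:

```scala
// Ask the classloader which jar (or directory) a class is loaded from.
// Works for any class name, with no compile-time dependency on it.
def jarOf(className: String): Option[String] = {
  val resource = className.replace('.', '/') + ".class"
  Option(getClass.getClassLoader.getResource(resource)).map(_.toString)
}

// On the Spark driver, this should reveal which json4s jar wins:
//   jarOf("org.json4s.jackson.JsonMethods$")
println(jarOf("scala.util.Try")) // stdlib example: prints the scala-library jar URL
```

If the printed URL points at Ignite's json4s jar rather than Spark's, that confirms the eviction went the wrong way for Spark.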

Adding Ignite to our existing Spark codebases seems to break things.

How do people work around this issue?
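One workaround we're considering is forcing the json4s version Spark was built against, so Ignite's newer copy gets evicted instead. A minimal build.sbt sketch, assuming an sbt build; the version shown is illustrative and should match your Spark distribution:

```scala
// build.sbt (sketch): pin json4s to the version Spark was compiled against,
// so Ignite's newer json4s is evicted. 3.2.11 is illustrative -- check the
// json4s version your Spark release actually ships.
dependencyOverrides += "org.json4s" %% "json4s-jackson" % "3.2.11"
```

The obvious risk is the mirror-image failure: if Ignite calls json4s methods that only exist in the newer release, pinning to 3.2.x breaks Ignite instead. In that case shading one side's json4s (e.g. with sbt-assembly's `assemblyShadeRules`) may be the only way to keep both on the classpath.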

Stack trace:

[info] Caused by: java.lang.NoSuchMethodError:
org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
[info]     at org.apache.spark.sql.types.DataType$.fromJson(DataType.scala:108)
[info]     at org.apache.spark.sql.types.StructType$$anonfun$6.apply(StructType.scala:414)
[info]     at org.apache.spark.sql.types.StructType$$anonfun$6.apply(StructType.scala:414)
[info]     at scala.util.Try$.apply(Try.scala:192)
[info]     at org.apache.spark.sql.types.StructType$.fromString(StructType.scala:414)
[info]     at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.init(ParquetWriteSupport.scala:80)
[info]     at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:341)
[info]     at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:302)
[info]     at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
[info]     at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:159)
[info]     at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:303)
[info]     at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:312)
[info]     at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
[info]     at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:254)
[info]     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1371)
[info]     at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:259)
[info]     ... 8 more
