[
https://issues.apache.org/jira/browse/PIG-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947062#comment-14947062
]
Srikanth Sundarrajan commented on PIG-4693:
-------------------------------------------
Yes. After the AM is launched, the client requests a newAPIHadoopRDD from the AM
corresponding to the Load statement; that RDD is shipped through the Kryo
serializer, since I had configured Kryo as the default Spark serializer. The
stack trace in the original comment has more detail; posting a section of it
here for quick reference.
{noformat}
java.lang.NoSuchMethodError: com.esotericsoftware.kryo.Kryo.setInstantiatorStrategy(Lorg/objenesis/strategy/InstantiatorStrategy;)V
    at com.twitter.chill.KryoBase.setInstantiatorStrategy(KryoBase.scala:86)
    at com.twitter.chill.EmptyScalaKryoInstantiator.newKryo(ScalaKryoInstantiator.scala:59)
    at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:80)
    at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:227)
    at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:212)
    at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:128)
    at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:201)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:102)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
    at org.apache.spark.rdd.NewHadoopRDD.<init>(NewHadoopRDD.scala:77)
    at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1099)
    at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1094)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:681)
    at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1094)
    at org.apache.pig.backend.hadoop.executionengine.spark.converter.LoadConverter.convert(LoadConverter.java:88)
{noformat}
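For anyone hitting the same NoSuchMethodError: the failure happens because chill's KryoBase was compiled against a newer Kryo whose setInstantiatorStrategy takes an org.objenesis.strategy.InstantiatorStrategy, while the Kryo copy that actually wins on the classpath (Pig's bundled one) lacks that overload. A quick way to see which copy you got is a small reflection check; the class name KryoClasspathCheck below is a hypothetical diagnostic helper, not part of any patch for this issue.

```java
import java.lang.reflect.Method;
import java.security.CodeSource;

public class KryoClasspathCheck {

    // Report where a class was loaded from and whether it exposes any
    // setInstantiatorStrategy method (the one chill's KryoBase calls).
    static String report(String className) {
        try {
            Class<?> cls = Class.forName(className);
            CodeSource cs = cls.getProtectionDomain().getCodeSource();
            String from = (cs == null) ? "the bootstrap classpath"
                                       : cs.getLocation().toString();
            for (Method m : cls.getMethods()) {
                if (m.getName().equals("setInstantiatorStrategy")) {
                    return className + " loaded from " + from + "; has " + m;
                }
            }
            return className + " loaded from " + from
                    + "; setInstantiatorStrategy missing";
        } catch (ClassNotFoundException e) {
            return className + " not on classpath";
        }
    }

    public static void main(String[] args) {
        // The class whose duplicate copies (Spark's vs Pig's) collide here.
        System.out.println(report("com.esotericsoftware.kryo.Kryo"));
    }
}
```

Running this with the same classpath the AM uses shows which jar supplies Kryo and whether the method chill expects is actually there.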
> Class conflicts: Kryo bundled in spark vs kryo bundled with pig
> ---------------------------------------------------------------
>
> Key: PIG-4693
> URL: https://issues.apache.org/jira/browse/PIG-4693
> Project: Pig
> Issue Type: Sub-task
> Components: spark
> Affects Versions: spark-branch
> Reporter: Srikanth Sundarrajan
> Assignee: Srikanth Sundarrajan
> Labels: spork
> Fix For: spark-branch
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)