[ https://issues.apache.org/jira/browse/SPARK-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-7436.
-------------------------------
    Resolution: Fixed
    Fix Version/s: 1.4.0
                   1.3.2

Issue resolved by pull request 5975
[https://github.com/apache/spark/pull/5975]

> Cannot implement nor use custom StandaloneRecoveryModeFactory implementations
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-7436
>                 URL: https://issues.apache.org/jira/browse/SPARK-7436
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.3.1
>            Reporter: Jacek Lewandowski
>             Fix For: 1.3.2, 1.4.0
>
> At least this code fragment is buggy ({{Master.scala}}):
> {code}
> case "CUSTOM" =>
>   val clazz = Class.forName(conf.get("spark.deploy.recoveryMode.factory"))
>   val factory = clazz.getConstructor(conf.getClass, Serialization.getClass)
>     .newInstance(conf, SerializationExtension(context.system))
>     .asInstanceOf[StandaloneRecoveryModeFactory]
>   (factory.createPersistenceEngine(), factory.createLeaderElectionAgent(this))
> {code}
> The call {{clazz.getConstructor(conf.getClass, Serialization.getClass)}} looks up a constructor that accepts {{org.apache.spark.SparkConf}} and the class of the companion object of {{akka.serialization.Serialization}}, while the subsequent {{newInstance(conf, SerializationExtension(context.system))}} is invoked with an instance of {{SparkConf}} and an instance of the {{Serialization}} class, not the companion objects, so the constructor lookup and the actual invocation do not match.
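For reference, below is a minimal sketch of how the constructor lookup could be made consistent with the invocation. It is illustrative only and not necessarily the exact change made in pull request 5975: the idea is to resolve the constructor by the runtime classes of the values actually passed to {{newInstance}}, i.e. {{classOf[SparkConf]}} and {{classOf[Serialization]}}, rather than by the companion-object classes. The fragment assumes it lives inside {{Master.scala}}'s recovery-mode match expression, as in the quoted code above.

{code}
import akka.serialization.{Serialization, SerializationExtension}
import org.apache.spark.SparkConf

// Sketch only: look up the constructor by the classes of the arguments that
// are actually supplied below (a SparkConf and an akka Serialization
// extension), so getConstructor and newInstance agree.
case "CUSTOM" =>
  val clazz = Class.forName(conf.get("spark.deploy.recoveryMode.factory"))
  val factory = clazz
    .getConstructor(classOf[SparkConf], classOf[Serialization])
    .newInstance(conf, SerializationExtension(context.system))
    .asInstanceOf[StandaloneRecoveryModeFactory]
  (factory.createPersistenceEngine(), factory.createLeaderElectionAgent(this))
{code}

With this lookup, a custom {{StandaloneRecoveryModeFactory}} only needs to expose a public constructor taking a {{SparkConf}} and a {{Serialization}} for the {{CUSTOM}} recovery mode to instantiate it.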