[jira] [Updated] (SPARK-7436) Cannot implement nor use custom StandaloneRecoveryModeFactory implementations

2015-05-08 Thread Josh Rosen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Rosen updated SPARK-7436:
--
Assignee: Jacek Lewandowski

 Cannot implement nor use custom StandaloneRecoveryModeFactory implementations
 -

 Key: SPARK-7436
 URL: https://issues.apache.org/jira/browse/SPARK-7436
 Project: Spark
  Issue Type: Bug
  Components: Deploy
Affects Versions: 1.3.1
Reporter: Jacek Lewandowski
Assignee: Jacek Lewandowski
 Fix For: 1.3.2, 1.4.0


 At least, this code fragment is buggy ({{Master.scala}}):
 {code}
 case CUSTOM =>
   val clazz = Class.forName(conf.get("spark.deploy.recoveryMode.factory"))
   val factory = clazz.getConstructor(conf.getClass, Serialization.getClass)
     .newInstance(conf, SerializationExtension(context.system))
     .asInstanceOf[StandaloneRecoveryModeFactory]
   (factory.createPersistenceEngine(), factory.createLeaderElectionAgent(this))
 {code}
 The call {{clazz.getConstructor(conf.getClass, Serialization.getClass)}} looks up a 
 constructor accepting an {{org.apache.spark.SparkConf}} and the class of the 
 companion object of {{akka.serialization.Serialization}} (i.e. {{Serialization$}}), 
 but {{newInstance(conf, SerializationExtension(context.system))}} is then invoked 
 with an instance of {{SparkConf}} and an instance of the {{Serialization}} class - 
 not the companion objects - so no constructor of a custom factory can ever match. 
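 The mismatch can be reproduced outside Spark. Below is a minimal, hypothetical Java 
 sketch (Java rather than Scala, because at the JVM level a Scala companion object's 
 class such as {{Serialization$}} is just another class; {{Demo}}, {{MyFactory}}, and 
 the stand-in types are invented for illustration): looking up a constructor with the 
 companion's class fails with {{NoSuchMethodException}}, while looking it up with the 
 class itself succeeds.

```java
import java.lang.reflect.Constructor;

public class Demo {
    // Stand-in for akka.serialization.Serialization (the class itself).
    public static class Serialization {
        public final String name;
        public Serialization(String name) { this.name = name; }
    }

    // Stand-in for the companion object's class, Serialization$ -
    // on the JVM it is a distinct class from Serialization.
    public static class SerializationCompanion { }

    // Stand-in for a custom StandaloneRecoveryModeFactory whose
    // constructor takes (conf, serialization).
    public static class MyFactory {
        public final String conf;
        public final Serialization serialization;
        public MyFactory(String conf, Serialization serialization) {
            this.conf = conf;
            this.serialization = serialization;
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> clazz = MyFactory.class;

        // Buggy lookup, mirroring Serialization.getClass in Master.scala:
        // asks for a constructor taking the companion's class.
        try {
            clazz.getConstructor(String.class, SerializationCompanion.class);
            System.out.println("found");
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException"); // this branch runs
        }

        // Correct lookup, mirroring classOf[Serialization] in Scala:
        // the declared parameter type matches, so newInstance succeeds.
        Constructor<?> ctor = clazz.getConstructor(String.class, Serialization.class);
        MyFactory factory = (MyFactory) ctor.newInstance("conf", new Serialization("akka"));
        System.out.println(factory.serialization.name); // prints "akka"
    }
}
```

 The corresponding fix on the Scala side would be to pass the classes themselves to 
 the lookup, i.e. {{classOf[SparkConf]}} and {{classOf[Serialization]}} instead of 
 {{conf.getClass}} and {{Serialization.getClass}}.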



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-7436) Cannot implement nor use custom StandaloneRecoveryModeFactory implementations

2015-05-07 Thread Jacek Lewandowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacek Lewandowski updated SPARK-7436:
-
Description: 
At least, this code fragment is buggy ({{Master.scala}}):

{code}
case CUSTOM =>
  val clazz = Class.forName(conf.get("spark.deploy.recoveryMode.factory"))
  val factory = clazz.getConstructor(conf.getClass, Serialization.getClass)
    .newInstance(conf, SerializationExtension(context.system))
    .asInstanceOf[StandaloneRecoveryModeFactory]
  (factory.createPersistenceEngine(), factory.createLeaderElectionAgent(this))
{code}

The call {{clazz.getConstructor(conf.getClass, Serialization.getClass)}} looks up a 
constructor accepting an {{org.apache.spark.SparkConf}} and the class of the 
companion object of {{akka.serialization.Serialization}} (i.e. {{Serialization$}}), 
but {{newInstance(conf, SerializationExtension(context.system))}} is then invoked 
with an instance of {{SparkConf}} and an instance of the {{Serialization}} class - 
not the companion objects - so no constructor of a custom factory can ever match. 


  was:
At least, this code fragment is buggy ({{Master.scala}}):

{code}
case CUSTOM =>
  val clazz = Class.forName(conf.get("spark.deploy.recoveryMode.factory"))
  val factory = clazz.getConstructor(conf.getClass, Serialization.getClass)
    .newInstance(conf, SerializationExtension(context.system))
    .asInstanceOf[StandaloneRecoveryModeFactory]
  (factory.createPersistenceEngine(), factory.createLeaderElectionAgent(this))
{code}



 Cannot implement nor use custom StandaloneRecoveryModeFactory implementations
 -

 Key: SPARK-7436
 URL: https://issues.apache.org/jira/browse/SPARK-7436
 Project: Spark
  Issue Type: Bug
Affects Versions: 1.3.1
Reporter: Jacek Lewandowski







[jira] [Updated] (SPARK-7436) Cannot implement nor use custom StandaloneRecoveryModeFactory implementations

2015-05-07 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated SPARK-7436:
---
Component/s: Deploy

 Cannot implement nor use custom StandaloneRecoveryModeFactory implementations
 -

 Key: SPARK-7436
 URL: https://issues.apache.org/jira/browse/SPARK-7436
 Project: Spark
  Issue Type: Bug
  Components: Deploy
Affects Versions: 1.3.1
Reporter: Jacek Lewandowski







[jira] [Updated] (SPARK-7436) Cannot implement nor use custom StandaloneRecoveryModeFactory implementations

2015-05-07 Thread Patrick Wendell (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Wendell updated SPARK-7436:
---
Target Version/s: 1.3.2, 1.4.0

 Cannot implement nor use custom StandaloneRecoveryModeFactory implementations
 -

 Key: SPARK-7436
 URL: https://issues.apache.org/jira/browse/SPARK-7436
 Project: Spark
  Issue Type: Bug
  Components: Deploy
Affects Versions: 1.3.1
Reporter: Jacek Lewandowski



