[jira] [Commented] (SPARK-2691) Allow Spark on Mesos to be launched with Docker

2014-09-24 Thread Ryan D Braley (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146246#comment-14146246
 ] 

Ryan D Braley commented on SPARK-2691:
--

+1 Spark typically lags behind Mesos in version numbers, so if you run Mesos 
today you have to choose between Spark and Docker. With this we could have our 
cake and eat it too :) 

 Allow Spark on Mesos to be launched with Docker
 ---

 Key: SPARK-2691
 URL: https://issues.apache.org/jira/browse/SPARK-2691
 Project: Spark
  Issue Type: Improvement
  Components: Mesos
Reporter: Timothy Chen
  Labels: mesos

 Currently, to launch Spark on Mesos one must upload a tarball and specify 
 the executor URI to be passed in, which is then downloaded on each slave, or 
 even on each execution, depending on whether coarse-grained mode is used.
 We want to make Spark able to support launching executors via a Docker image, 
 utilizing the recent Docker and Mesos integration work. 
 With that integration, Spark can simply specify a Docker image and the 
 options it needs, and it should continue to work as-is.
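
 For illustration only, a hypothetical configuration sketch of what this could 
 look like (these property names did not exist at the time of this comment and 
 are assumptions; Spark later introduced settings along these lines):

```properties
# Hypothetical spark-defaults.conf sketch: run Mesos executors from a
# Docker image instead of downloading a tarball via spark.executor.uri.
spark.master                           mesos://zk://zk1:2181/mesos
spark.mesos.executor.docker.image      my-registry/spark-executor:1.1.0
# Optionally map host volumes and ports into the executor container.
spark.mesos.executor.docker.volumes    /var/log/spark:/var/log/spark:rw
spark.mesos.executor.docker.portmaps   8080:8080:tcp
```

 The key point is that each slave pulls the image once, rather than fetching 
 and unpacking a tarball per slave (or per execution in fine-grained mode).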



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-2593) Add ability to pass an existing Akka ActorSystem into Spark

2014-09-12 Thread Ryan D Braley (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14131333#comment-14131333
 ] 

Ryan D Braley commented on SPARK-2593:
--

This would be quite useful. It is hard to use actorStream with Spark Streaming 
when remote actors send messages to Spark, because we need two actor systems. 
Right now the name of the actor system in Spark is hardcoded to "spark". For 
actors to join an Akka cluster, the actor systems need to share the same name. 
Thus it is currently difficult to distribute work from an external actor 
system to the Spark cluster without this change.
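
To illustrate why the hardcoded name matters: Akka actor paths embed the 
actor-system name, so a remote application can only address and cluster with 
Spark's system under the name "spark". A hypothetical HOCON sketch (hostnames 
and ports are made up for illustration):

```hocon
# Hypothetical akka config for an external app joining Spark's actor system.
# Actor paths include the system name, e.g.:
#   akka.tcp://spark@spark-host:7077/user/Supervisor0/receiver
# so the joining node's ActorSystem must also be created with the name "spark".
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp {
    hostname = "app-host"
    port = 2552
  }
  cluster.seed-nodes = ["akka.tcp://spark@spark-host:7077"]
}
```

Without the ability to pass an existing ActorSystem into Spark, the external 
application is forced to run a second, separately named actor system on the 
same node.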

 Add ability to pass an existing Akka ActorSystem into Spark
 ---

 Key: SPARK-2593
 URL: https://issues.apache.org/jira/browse/SPARK-2593
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Reporter: Helena Edelson

 As a developer I want to pass an existing ActorSystem into StreamingContext 
 at load time so that I do not have two actor systems running on a node in an 
 Akka application.
 This would mean having Spark's actor system on its own named dispatchers, as 
 well as exposing the currently private creation of its own actor system.
  
 I would like to create an Akka Extension that wraps Spark/Spark 
 Streaming and Cassandra, so that the programmatic creation would simply be 
 this for a user:
 val extension = SparkCassandra(system)
  


