[ https://issues.apache.org/jira/browse/SPARK-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109818#comment-14109818 ]

Vida Ha commented on SPARK-3213:
--------------------------------

Joseph, Josh, and I discussed this in person.

There is a quick workaround:

1) If using "Launch more like this", use an old version of the spark_ec2 script, 
which uses security groups to identify the slaves.

But now I need to investigate:

When using "Launch more like this", it does seem like Amazon tries to copy the 
tags, but I'm wondering if it refuses to copy them when multiple machines would 
end up with the same "Name" tag.  I will try using a different tag, like 
"spark-ec2-cluster-id" or something like that, to identify the machines.  If 
that tag does copy over, then we can properly support "Launch more like this".
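To make the idea concrete, here is a minimal sketch of that lookup logic. It is not the actual spark_ec2.py code: the tag name "spark-ec2-cluster-id", the dict representation of instances, and the "<cluster>-slaves" security-group fallback are all assumptions for illustration. It shows why a dedicated cluster-id tag would find a slave launched via "Launch More Like This" even if its "Name" tag was not copied.

```python
# Hypothetical slave lookup: match on an assumed "spark-ec2-cluster-id" tag,
# falling back to the old security-group convention ("<cluster>-slaves").
# Instances are modeled as plain dicts for illustration, not real boto objects.

def find_slaves(instances, cluster_id):
    """Return instances belonging to cluster_id, by tag or security group."""
    matched = []
    for inst in instances:
        tags = inst.get("tags", {})
        groups = inst.get("security_groups", [])
        if tags.get("spark-ec2-cluster-id") == cluster_id:
            matched.append(inst)                     # tag-based match
        elif cluster_id + "-slaves" in groups:
            matched.append(inst)                     # old security-group match
    return matched

instances = [
    # original slave: has both the Name tag and the cluster-id tag
    {"tags": {"Name": "my-cluster-slave", "spark-ec2-cluster-id": "my-cluster"},
     "security_groups": ["my-cluster-slaves"]},
    # slave launched via "Launch More Like This": Name tag not copied,
    # but the cluster-id tag (assumed to copy over) still identifies it
    {"tags": {"spark-ec2-cluster-id": "my-cluster"},
     "security_groups": ["my-cluster-slaves"]},
    # unrelated instance
    {"tags": {"Name": "other"}, "security_groups": ["other-slaves"]},
]

print(len(find_slaves(instances, "my-cluster")))  # 2
```

If the cluster-id tag does not survive "Launch More Like This" either, only the security-group fallback would catch the copied slave, which is essentially the old script's behavior.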

> spark_ec2.py cannot find slave instances
> ----------------------------------------
>
>                 Key: SPARK-3213
>                 URL: https://issues.apache.org/jira/browse/SPARK-3213
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.1.0
>            Reporter: Joseph K. Bradley
>            Priority: Blocker
>
> spark_ec2.py cannot find all slave instances.  In particular:
> * I created a master & slave and configured them.
> * I created new slave instances from the original slave ("Launch More Like 
> This").
> * I tried to relaunch the cluster, and it could only find the original slave.
> Old versions of the script worked.  The latest working commit which edited 
> that .py script is: a0bcbc159e89be868ccc96175dbf1439461557e1
> There may be a problem with this PR: 
> [https://github.com/apache/spark/pull/1899].



