Oh, sorry, I forgot to add: here are the extra lines in my spark_ec2.py, at
line 205:

   "r3.large":    "hvm",
    "r3.xlarge":   "hvm",
    "r3.2xlarge":  "hvm",
    "r3.4xlarge":  "hvm",
    "r3.8xlarge":  "hvm"

Clearly a masterpiece of hacking. :-) I haven't tested all of them, but the
r3 types seem to behave like the i2 ones.
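
For anyone else patching by hand: in my copy those lines go inside the
instance-type-to-virtualization map in get_spark_ami() in spark_ec2.py. The
function and variable names below are from my checkout, so treat this as an
untested sketch rather than gospel:

# spark_ec2.py, inside get_spark_ami(opts): maps EC2 instance types to the
# virtualization style ("pvm" or "hvm") that decides which AMI to use.
instance_types = {
    # ... existing m1/m3/c1/c3/i2/hs1 entries ...
    "i2.xlarge":   "hvm",   # existing entry; r3 seems to behave like i2
    "r3.large":    "hvm",   # the new r3 entries, all HVM
    "r3.xlarge":   "hvm",
    "r3.2xlarge":  "hvm",
    "r3.4xlarge":  "hvm",
    "r3.8xlarge":  "hvm",
}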



On Sun, Jun 1, 2014 at 12:45 AM, Jeremy Lee <unorthodox.engine...@gmail.com>
wrote:

> Hi there, Patrick. Thanks for the reply...
>
> It wouldn't surprise me if AWS Ubuntu had Python 2.7. Ubuntu is cool
> like that. :-)
>
> Alas, the Amazon Linux AMI (2014.03.1) does not, and it's the very first
> one on the recommended instance list. (Ubuntu is #4, after Amazon, RedHat,
> and SUSE.) So users like me, who deliberately pick the most obvious, most
> Amazon-ish first choice, find they have picked the wrong one.
>
> But that's trivial compared to the failure of the cluster to come up,
> apparently due to the master's httpd configuration. Any help on that would
> be much appreciated... it's giving me serious grief.
>
>
>
> On Sat, May 31, 2014 at 1:37 PM, Patrick Wendell <pwend...@gmail.com>
> wrote:
>
>> Hi Jeremy,
>>
>> That's interesting; I don't think anyone has ever reported an issue
>> running these scripts due to Python incompatibility, but they may require
>> Python 2.7+. I regularly run them from the AWS Ubuntu 12.04 AMI... that
>> might be a good place to start. But if there is a straightforward way to
>> make them compatible with 2.6, we should do that.
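>>
>> I haven't checked which calls are actually the 2.6-breakers, but if it's
>> something like subprocess.check_output (which was only added in 2.7), a
>> small shim would keep 2.6 working. Untested sketch:
>>
>> import subprocess
>>
>> def check_output(cmd):
>>     # subprocess.check_output() exists only on Python 2.7+; this does
>>     # the same job on 2.6.
>>     proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
>>     output = proc.communicate()[0]
>>     if proc.returncode != 0:
>>         raise subprocess.CalledProcessError(proc.returncode, cmd)
>>     return output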
>>
>> For r3.large, we can add that to the script. It's a newer type. Any
>> interest in contributing this?
>>
>> - Patrick
>>
>> On May 30, 2014 5:08 AM, "Jeremy Lee" <unorthodox.engine...@gmail.com>
>> wrote:
>>
>>>
>>> Hi there! I'm relatively new to the list, so sorry if this is a repeat:
>>>
>>> I just wanted to mention there are still problems with the EC2 scripts.
>>> Basically, they don't work.
>>>
>>> First, if you run the scripts on Amazon's own suggested version of
>>> Linux, they break, because Amazon installs Python 2.6.9 and the scripts
>>> use a couple of Python 2.7 commands. I have to "sudo yum install
>>> python27" and then edit the spark-ec2 shell script to use that specific
>>> version. Annoying, but minor.
>>>
>>> (The base "python" command isn't upgraded to 2.7 on many systems,
>>> apparently because upgrading it would break yum.)
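>>>
>>> A version check near the top of spark_ec2.py would at least make this
>>> fail loudly up front instead of with a confusing error halfway through.
>>> Something like this (untested, and assuming the script stays Python 2):
>>>
>>> import sys
>>>
>>> if sys.version_info < (2, 7):
>>>     sys.stderr.write("spark-ec2 requires Python 2.7+ "
>>>                      "(on Amazon Linux: sudo yum install python27)\n")
>>>     sys.exit(1)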
>>>
>>> The second minor problem is that the script doesn't know about the
>>> "r3.large" instance type. That is also easily fixed by adding it to the
>>> spark_ec2.py script. Again, minor.
>>>
>>> The big problem is that after the EC2 cluster is provisioned, installed,
>>> set up, and everything, it fails to start up the webserver on the master.
>>> Here's the tail of the log:
>>>
>>> Starting GANGLIA gmond:                                    [  OK  ]
>>> Shutting down GANGLIA gmond:                               [FAILED]
>>> Starting GANGLIA gmond:                                    [  OK  ]
>>> Connection to ec2-54-183-82-48.us-west-1.compute.amazonaws.com closed.
>>> Shutting down GANGLIA gmond:                               [FAILED]
>>> Starting GANGLIA gmond:                                    [  OK  ]
>>> Connection to ec2-54-183-82-24.us-west-1.compute.amazonaws.com closed.
>>> Shutting down GANGLIA gmetad:                              [FAILED]
>>> Starting GANGLIA gmetad:                                   [  OK  ]
>>> Stopping httpd:                                            [FAILED]
>>> Starting httpd: httpd: Syntax error on line 153 of
>>> /etc/httpd/conf/httpd.conf: Cannot load modules/mod_authn_alias.so into
>>> server: /etc/httpd/modules/mod_authn_alias.so: cannot open shared object
>>> file: No such file or directory
>>>                                                            [FAILED]
>>>
>>> Basically, the AMI you have chosen does not seem to have a "full"
>>> install of Apache; it is missing several modules that the installed
>>> httpd.conf refers to. The full list of missing modules is:
>>>
>>> authn_alias_module modules/mod_authn_alias.so
>>> authn_default_module modules/mod_authn_default.so
>>> authz_default_module modules/mod_authz_default.so
>>> ldap_module modules/mod_ldap.so
>>> authnz_ldap_module modules/mod_authnz_ldap.so
>>> disk_cache_module modules/mod_disk_cache.so
>>>
>>> Alas, even if these modules are commented out, the server still fails to
>>> start.
>>>
>>> [root@ip-172-31-11-193 ~]$ service httpd start
>>> Starting httpd: AH00534: httpd: Configuration error: No MPM loaded.
>>>
>>> That means Spark 1.0.0 clusters on EC2 are Dead-On-Arrival when run
>>> according to the instructions. Sorry.
>>>
>>> Any suggestions on how to proceed? I'll keep trying to fix the
>>> webserver, but (a) changes to httpd.conf get blown away by "resume", and
>>> (b) anything I do has to be redone every time I provision another cluster.
>>> Ugh.
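>>>
>>> (A throwaway script could at least re-apply the httpd.conf edit after
>>> each launch/resume. It wouldn't solve the "No MPM loaded" error, just
>>> the retyping; the module list is the one above. Untested sketch:
>>>
>>> import re
>>>
>>> CONF = "/etc/httpd/conf/httpd.conf"
>>> MISSING = ("mod_authn_alias", "mod_authn_default", "mod_authz_default",
>>>            "mod_ldap", "mod_authnz_ldap", "mod_disk_cache")
>>>
>>> conf = open(CONF).read()
>>> for mod in MISSING:
>>>     # Comment out any LoadModule line that references a missing .so file.
>>>     conf = re.sub(r"(?m)^(LoadModule .*%s\.so)" % mod, r"#\1", conf)
>>> open(CONF, "w").write(conf)
>>> )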
>>>
>>> --
>>> Jeremy Lee  BCompSci(Hons)
>>>   The Unorthodox Engineers
>>>
>>
>
>
> --
> Jeremy Lee  BCompSci(Hons)
>   The Unorthodox Engineers
>



-- 
Jeremy Lee  BCompSci(Hons)
  The Unorthodox Engineers
