You're probably requesting more instances than allowed by your account, so
the error gets generated for the extra instances. Try launching a smaller
cluster.
On Wed, Apr 1, 2015 at 12:41 PM, Vadim Bichutskiy <
vadim.bichuts...@gmail.com> wrote:
> Hi all,
>
> I just tried launching a Spark cluster
Thank You Akhil. Will look into it.
It's free, isn't it? I am still a student :)
On Tue, Feb 24, 2015 at 9:06 PM, Akhil Das
wrote:
> If you sign up for Google Compute Cloud, you will get $300 in free credits
> for 3 months, and you can start a pretty good cluster for your testing purposes.
> :)
>
> Th
Yes it is :)
Thanks
Best Regards
On Tue, Feb 24, 2015 at 9:09 PM, Deep Pradhan
wrote:
> Thank You Akhil. Will look into it.
> It's free, isn't it? I am still a student :)
>
> On Tue, Feb 24, 2015 at 9:06 PM, Akhil Das
> wrote:
>
>> If you signup for Google Compute Cloud, you will get free $300
If you sign up for Google Compute Cloud, you will get $300 in free credits
for 3 months, and you can start a pretty good cluster for your testing purposes.
:)
Thanks
Best Regards
On Tue, Feb 24, 2015 at 8:25 PM, Deep Pradhan
wrote:
> Hi,
> I have just signed up for Amazon AWS because I learnt that i
Thank You All.
I think I will look into paying ~$0.07/hr as Sean suggested.
On Tue, Feb 24, 2015 at 9:01 PM, gen tang wrote:
> Hi,
>
> I am sorry that I made a mistake about AWS pricing. You can read the email
> of Sean Owen, which better explains the strategies to run Spark on AWS.
>
> For your questi
Hi,
I am sorry that I made a mistake about AWS pricing. You can read the email of
Sean Owen, which better explains the strategies to run Spark on AWS.
For your question: it means that you just download Spark and unzip it. Then
run the Spark shell via ./bin/spark-shell or ./bin/pyspark. It is useful to get
f
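gen tang's local-mode suggestion boils down to something like the following sketch. (The release version and download URL here are illustrative assumptions, not taken from the thread; pick whatever release is current.)

```shell
# Download a Spark binary release, unpack it, and start a shell in
# local mode. No cluster, no AWS charges.
wget https://archive.apache.org/dist/spark/spark-1.2.1/spark-1.2.1-bin-hadoop2.4.tgz
tar -xzf spark-1.2.1-bin-hadoop2.4.tgz
cd spark-1.2.1-bin-hadoop2.4

# Scala shell, using all available cores on this one machine:
./bin/spark-shell --master "local[*]"

# Or the Python shell:
./bin/pyspark --master "local[*]"
```

In `local[*]` mode the driver and executors all run inside a single JVM on the machine you launched the shell from, which is why it costs nothing beyond that one instance.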
No, I think I am OK with the time it takes.
It's just that, as the number of partitions increases along with the number
of workers, I want to see the performance of an application improve.
I just want to see this happen.
Any comments?
Thank You
On Tue, Feb 24, 2015 at 8:52 PM,
You can definitely, easily, try a 1-node standalone cluster for free.
Just don't be surprised when the CPU capping kicks in within about 5
minutes of any non-trivial computation and suddenly the instance is
very s-l-o-w.
I would consider just paying the ~$0.07/hour to play with an
m3.medium, which
This should help you understand the cost of running a Spark cluster for a
short period of time:
http://www.ec2instances.info/
If you run an instance for even 1 second of a single hour, you are charged
for that complete hour. So before you shut down your miniature cluster, make
sure you really are d
Thank You Sean.
I was just trying to experiment with the performance of Spark applications
with various worker instances (I hope you remember that we discussed the
worker instances).
I thought it would be a good one to try in EC2. So, it doesn't work out,
does it?
Thank You
On Tue, Feb 24,
The free tier includes 750 hours of t2.micro instance time per month.
http://aws.amazon.com/free/
That's basically a month of hours, so it's all free if you run one
instance only at a time. If you run 4, you'll be able to run your
cluster of 4 for about a week free.
A t2.micro has 1GB of memory,
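The free-tier arithmetic above is easy to check: 750 monthly hours divided across concurrently running instances. A quick sketch:

```python
# Back-of-the-envelope check of the free-tier math: 750 free t2.micro
# hours per month, consumed in parallel by each running instance.
FREE_HOURS_PER_MONTH = 750

def free_runtime_hours(num_instances: int) -> float:
    """Hours each instance can run before the monthly allowance is gone."""
    return FREE_HOURS_PER_MONTH / num_instances

# One instance: roughly the whole month.
print(free_runtime_hours(1))       # 750.0 hours
# A 4-node cluster burns the allowance 4x as fast: about a week.
print(free_runtime_hours(4) / 24)  # ~7.8 days
```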
Kindly bear with my questions as I am new to this.
>> If you run spark on local mode on a ec2 machine
What does this mean? Is it that I launch a Spark cluster from my local
machine, i.e., by running the shell script that is there in /spark/ec2?
On Tue, Feb 24, 2015 at 8:32 PM, gen tang wrote:
> Hi,
Hi,
As a real Spark cluster needs at least one master and one slave, you need
to launch two machines; therefore the second machine is not free.
However, if you run Spark in local mode on an EC2 machine, it is free.
The AWS charge depends on how many machines you launch and their types, bu
Oh yeah, they picked up changes after restart, thanks!
On Thu, Feb 5, 2015 at 8:13 PM, Charles Feduke
wrote:
> I don't see anything that says you must explicitly restart them to load
> the new settings, but usually there is some sort of signal trapped [or
> brute force full restart] to get a con
I don't see anything that says you must explicitly restart them to load the
new settings, but usually there is some sort of signal trapped [or brute
force full restart] to get a configuration reload for most daemons. I'd
take a guess and use the $SPARK_HOME/sbin/{stop,start}-slaves.sh scripts on
yo
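Charles's guess amounts to something like the following, run on the master node (script names here match the standalone-mode `sbin` scripts shipped with Spark at the time; they may differ between versions):

```shell
# Bounce the workers so they reload their configuration.
# $SPARK_HOME is wherever Spark is installed on the master.
$SPARK_HOME/sbin/stop-slaves.sh
$SPARK_HOME/sbin/start-slaves.sh
```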
Hi Gilberto,
Could you please attach the driver logs as well, so that we can pinpoint what's
going wrong? Could you also add the flag
`--driver-memory 4g` while submitting your application and try that as well?
Best,
Burak
- Original Message -
From: "Gilberto Lira"
To: user@spark.apach
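Burak's suggested flag slots into the `spark-submit` invocation roughly as below. The class name, master URL, and jar are placeholders, not details from Gilberto's setup:

```shell
# Resubmit with a 4 GB driver heap; everything except --driver-memory
# is a placeholder to be replaced with your own values.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://master-host:7077 \
  --driver-memory 4g \
  my-app.jar
```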
No, you don't have to set up your own AMI. Actually it's probably simpler
and less error prone if you let spark-ec2 manage that for you as you first
start to get comfortable with Spark. Just spin up a cluster without any
explicit mention of AMI and it will do the right thing.
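A launch "without any explicit mention of AMI" looks roughly like this sketch (the key-pair name, key path, and cluster name are placeholders; `-s` sets the number of slaves):

```shell
# Launch a small cluster and let spark-ec2 choose a suitable default AMI.
./ec2/spark-ec2 \
  -k my-keypair -i ~/.ssh/my-keypair.pem \
  -s 2 \
  launch my-spark-cluster
```

Omitting the AMI option is the point: spark-ec2 then provisions instances from an image it already knows works with that Spark release.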
On Sunday, June 1, 2014, supe
I haven't set up an AMI yet. I am just trying to run a simple job on the EC2
cluster. So, is setting up an AMI a prerequisite for running a simple Spark
example like org.apache.spark.examples.GroupByTest?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-EC2
Hmm.. you've gotten further than me. Which AMIs are you using?
On Sun, Jun 1, 2014 at 2:21 PM, superback
wrote:
> Hi,
> I am trying to run an example on Amazon EC2 and have successfully
> set up one cluster with two nodes on EC2. However, when I was testing an
> example using the follo