On Jun 12, 2014, at 6:02 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
Yes, I am launching a cluster with the spark_ec2 script. I checked
/root/spark/conf/spark-env.sh on the master node and on slaves and it looks
like this:
#!/usr/bin/env bash
export SPARK_LOCAL_DIRS=/mnt/spark
it useful. Or maybe someone will want to join the development.
The application is available at https://github.com/alitouka/spark_dbscan
Any questions, comments, and suggestions, as well as criticism, are welcome :)
Best regards,
Aliaksei Litouka
is configured to set it to 512,
and is overriding the application’s settings. Take a look in there and
delete that line if possible.
Matei
On Jun 10, 2014, at 2:38 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
I am testing my application in an EC2 cluster of m3.medium machines
[Double,Double] as well, instead of just a file.
val data = IOHelper.readDataset(sc, "/path/to/my/data.csv")
And other distance measures, of course.
Thanks,
Vipul
On Jun 12, 2014, at 2:31 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
Hi.
I'm not sure if messages like
Well... the reason was an out-of-date version of Python (2.6.6) on the
machine where I ran the script. If anyone else experiences this issue,
just update your Python.
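For context on why the Python version matters here: subprocess.check_output was only added to the standard library in Python 2.7, so on 2.6 any script that calls it fails with exactly this AttributeError. A minimal sketch of a backport shim (this is an illustration, not the spark-ec2 code itself):

```python
import subprocess

# subprocess.check_output exists only in Python 2.7+; on 2.6 the attribute
# is missing, which produces:
#   AttributeError: 'module' object has no attribute 'check_output'
if not hasattr(subprocess, "check_output"):
    def check_output(*args, **kwargs):
        # Run the command, capture stdout, and raise on a nonzero exit code,
        # mirroring the behavior of the real check_output.
        process = subprocess.Popen(stdout=subprocess.PIPE, *args, **kwargs)
        output, _ = process.communicate()
        retcode = process.poll()
        if retcode:
            raise subprocess.CalledProcessError(retcode, args[0])
        return output
    subprocess.check_output = check_output

print(subprocess.check_output(["echo", "hello"]))
```

Upgrading to Python 2.7 or later, as suggested above, is the simpler fix; the shim only shows what the missing function does.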
On Sun, May 4, 2014 at 7:51 PM, Aliaksei Litouka aliaksei.lito...@gmail.com
wrote:
I am using Spark 0.9.1. When I'm trying to start an EC2 cluster with the
spark-ec2 script, an error occurs and the following message is issued:
AttributeError: 'module' object has no attribute 'check_output'. By this
time, EC2 instances are up and running but Spark doesn't seem to be
installed on