I started my EC2 Spark cluster with

    ./ec2/spark-ec2 --ebs-vol-size=100 --ebs-vol-num=8 --ebs-vol-type=gp2 -t m3.xlarge -s 10 launch mycluster

I can see the additional volumes attached to the instances, but they do not seem to be set up for HDFS.
How can I check whether they are being used on all the workers,
and how can I get every worker to use the extra volumes for HDFS?
I do not have experience using Hadoop directly, only through Spark.
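
Is something like the following the right way to check? This is only a guess, based on my assumption that spark-ec2 mounts the extra EBS volumes at /vol0, /vol1, ..., keeps the worker hostnames in /root/spark-ec2/slaves, and installs HDFS under /root/persistent-hdfs on the master; I have not verified any of those paths.

    # 1. Are the extra volumes mounted on every worker?
    for host in $(cat /root/spark-ec2/slaves); do
        echo "== $host =="
        ssh -o StrictHostKeyChecking=no "$host" 'df -h | grep "/vol"'
    done

    # 2. Which local directories is HDFS configured to store blocks in?
    grep -A 1 "dfs.data.dir" /root/persistent-hdfs/conf/hdfs-site.xml

    # 3. How much capacity does the NameNode report across all DataNodes?
    /root/persistent-hdfs/bin/hadoop dfsadmin -report | grep "Configured Capacity"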

thanks
Daniel
