Thanks Akhil. I tried changing /root/ephemeral-hdfs/conf/hdfs-site.xml to
have

  <property>
    <name>dfs.data.dir</name>

<value>/vol,/vol0,/vol1,/vol2,/vol3,/vol4,/vol5,/vol6,/vol7,/mnt/ephemeral-hdfs/data,/mnt2/ephemeral-hdfs/data</value>
  </property>
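One thing I have not checked is whether the datanode can actually write to
the new directories. My understanding (an assumption, not something I have
confirmed on this AMI) is that the datanode silently skips any dfs.data.dir
entry it cannot write to. A quick check across the workers (assuming the
spark-ec2 slaves file at /root/spark-ec2/slaves):

    for h in $(cat /root/spark-ec2/slaves); do
      ssh $h 'for d in /vol /vol0 /vol1 /vol2 /vol3 /vol4 /vol5 /vol6 /vol7; do
                test -w $d || echo "$(hostname): $d not writable"
              done'
    done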

and then running

/root/ephemeral-hdfs/bin/stop-all.sh
copy-dir /root/ephemeral-hdfs/conf/
/root/ephemeral-hdfs/bin/start-all.sh

to try to make sure the new configuration takes effect on the entire cluster.
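After the restart I also want to verify that the workers actually picked up
the new setting, and how much capacity each datanode reports (again assuming
the spark-ec2 slaves file at /root/spark-ec2/slaves):

    # confirm the edited file reached a worker
    ssh $(head -1 /root/spark-ec2/slaves) \
        'grep -A 2 dfs.data.dir /root/ephemeral-hdfs/conf/hdfs-site.xml'

    # list each datanode and the capacity it reports
    /root/ephemeral-hdfs/bin/hadoop dfsadmin -report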
I then ran Spark to write to the local HDFS.
It failed after filling the original /mnt* mounted drives,
without writing anything to the attached /vol* drives.
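My current guess (unconfirmed) is that the datanodes rejected the /vol*
directories, e.g. for permission reasons, and quietly fell back to the
/mnt* dirs. I believe Hadoop 1.x logs a warning like "Invalid directory in
dfs.data.dir" when that happens, so grepping a worker's datanode log might
confirm it (the log path is my assumption based on the spark-ec2 layout):

    ssh $(head -1 /root/spark-ec2/slaves) \
        'grep -i "invalid directory" /mnt/ephemeral-hdfs/logs/*datanode*.log'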

I also tried completely stopping and restarting the cluster,
but restarting resets /root/ephemeral-hdfs/conf/hdfs-site.xml to the
default state.
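For now the workaround I am considering (just a sketch) is to keep a
patched copy of the file around and restore it after every restart, before
bringing HDFS back up; /root/hdfs-site.xml.patched is a name I made up:

    cp /root/hdfs-site.xml.patched /root/ephemeral-hdfs/conf/hdfs-site.xml
    copy-dir /root/ephemeral-hdfs/conf/
    /root/ephemeral-hdfs/bin/start-all.sh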

thanks
Daniel



On Thu, Oct 30, 2014 at 1:56 AM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> I think you can check the core-site.xml or hdfs-site.xml file under
> /root/ephemeral-hdfs/etc/hadoop/, where you will see the data node dir
> property as a comma-separated list of volumes.
>
> Thanks
> Best Regards
>
> On Thu, Oct 30, 2014 at 5:21 AM, Daniel Mahler <dmah...@gmail.com> wrote:
>
>> I started my ec2 spark cluster with
>>
>>     ./ec2/spark-ec2 --ebs-vol-{size=100,num=8,type=gp2} -t m3.xlarge -s 10
>> launch mycluster
>>
>> I see the additional volumes attached, but they do not seem to be set up
>> for HDFS.
>> How can I check whether they are being utilized on all workers,
>> and how can I get all workers to utilize the extra volumes for HDFS?
>> I do not have experience using hadoop directly, only through spark.
>>
>> thanks
>> Daniel
>>
>
>
