Re: Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2
Thanks! I found the HDFS UI via this port: http://[master-ip]:50070/. It shows a 1-node HDFS, though, although I have 4 slaves in my cluster. Any idea why?

On Sun, Sep 7, 2014 at 4:29 PM, Ognen Duzlevski wrote:
> On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
>> 2. What should I do to increase the quota? Should I bring down the
>> existing slaves and upgrade to ones with more storage? Is there a way
>> to add disks to existing slaves? I'm using the default m1.large slaves
>> set up using the spark-ec2 script.
>
> Take a look at: http://www.ec2instances.info/
>
> There you will find the available EC2 instances with their associated
> costs and how much ephemeral space they come with. Once you pick an
> instance, you get only so much ephemeral space. You can always add
> drives, but they will be EBS volumes and not physically attached to the
> instance.
>
> Ognen

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
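[A single node showing in the HDFS UI usually means the slave datanodes never registered with the namenode (often they were simply not started). A minimal diagnostic sketch, assuming the spark-ec2 layout where the ephemeral HDFS lives under /root/ephemeral-hdfs (that path is an assumption; adjust for your install). Since no live cluster is available here, the parsing step runs against a hypothetical saved line of `hadoop dfsadmin -report` output:]

```shell
# On the master, these commands (paths assumed from spark-ec2) would show
# how many datanodes the namenode sees, and restart HDFS if needed:
#   /root/ephemeral-hdfs/bin/hadoop dfsadmin -report
#   /root/ephemeral-hdfs/bin/stop-dfs.sh && /root/ephemeral-hdfs/bin/start-dfs.sh

# Extract the live-datanode count from a report line (sample value made up):
report_line='Datanodes available: 1 (1 total, 0 dead)'
live=$(printf '%s\n' "$report_line" | sed -n 's/^Datanodes available: \([0-9]*\).*/\1/p')
echo "live datanodes: $live"   # → live datanodes: 1
```

[If the count is lower than your number of slaves, check the datanode logs on each slave for registration or disk errors.]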
Re: Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2
On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
> 2. What should I do to increase the quota? Should I bring down the
> existing slaves and upgrade to ones with more storage? Is there a way
> to add disks to existing slaves? I'm using the default m1.large slaves
> set up using the spark-ec2 script.

Take a look at: http://www.ec2instances.info/

There you will find the available EC2 instances with their associated costs and how much ephemeral space they come with. Once you pick an instance, you get only so much ephemeral space. You can always add drives, but they will be EBS volumes and not physically attached to the instance.

Ognen
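[To make an added EBS drive usable by HDFS, its mount point has to be listed among the datanode data directories. A hedged sketch of the relevant hdfs-site.xml fragment, assuming a Hadoop 1.x-era install (as deployed by spark-ec2) and an EBS volume already attached, formatted, and mounted at /vol — the paths are illustrative assumptions:]

```xml
<!-- hdfs-site.xml on each datanode: append the EBS mount point to the
     existing comma-separated list of data directories (examples assumed). -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/ephemeral-hdfs/data,/vol/hdfs/data</value>
</property>
```

[After editing, the datanodes need a restart to pick up the new directory; HDFS then spreads new blocks across all listed volumes.]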
Adding quota to the ephemeral hdfs on a standalone spark cluster on ec2
Hi,

I would like to make sure I'm not exceeding the quota on the local cluster's HDFS. I have a couple of questions:

1. How do I know the quota? Here's the output of hadoop fs -count -q, which essentially does not tell me a lot:

[root@ip-172-31-7-49 ~]$ hadoop fs -count -q /
        2147483647      2147482006  none  inf  4  163725412205559 /

2. What should I do to increase the quota? Should I bring down the existing slaves and upgrade to ones with more storage? Is there a way to add disks to existing slaves? I'm using the default m1.large slaves set up using the spark-ec2 script.

Thanks,
Tomer
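[The `-count -q` output is hard to read because the columns are unlabeled. They are, in order: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME — so `none`/`inf` above mean no space quota is set. A small sketch that labels the fields (the sample values below are made up for illustration, not the real cluster's numbers):]

```python
# Label the columns of a `hadoop fs -count -q` output line.
FIELDS = ["quota", "remaining_quota", "space_quota",
          "remaining_space_quota", "dir_count", "file_count",
          "content_size", "path"]

def parse_count_q(line: str) -> dict:
    """Zip the whitespace-separated tokens with their column names."""
    return dict(zip(FIELDS, line.split()))

# Hypothetical sample line (values invented for illustration):
sample = "2147483647 2147482006 none inf 4 1637 25412205559 /"
row = parse_count_q(sample)
print(row["space_quota"], row["content_size"])  # → none 25412205559
```

[Quotas themselves are managed with `hadoop dfsadmin -setQuota` (name quota) and `hadoop dfsadmin -setSpaceQuota` (byte quota) on a directory, but note that neither adds physical disk space.]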