Hi,
I'd like to make sure I'm not exceeding the quota on the local
cluster's HDFS. I have a couple of questions:
1. How do I find out the quota? Here's the output of hadoop fs -count -q,
which essentially does not tell me a lot:
[root@ip-172-31-7-49 ~]$ hadoop fs -count -q /
2147483647
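For reference, hadoop fs -count -q prints eight columns: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME. The lone 2147483647 is the namespace quota on / (Integer.MAX_VALUE, i.e. an effectively unlimited number of names); "none"/"inf" in the space-quota columns would mean no byte limit is set. A sketch with a made-up output line, showing which column is which:

```shell
# Column order of `hadoop fs -count -q`, per the Hadoop FsShell docs:
#   QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIRS FILES CONTENT_SIZE PATH
# The line below is an invented example ("none"/"inf" = no space quota set):
line="2147483647 2147483106 none inf 342 199 2100000000 /"
set -- $line
echo "namespace quota: $1 (remaining: $2)"
echo "space quota:     $3 (remaining: $4)"
```

So the output you pasted is only telling you about the name count, not about bytes of storage.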
On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
> 2. What should I do to increase the quota? Should I bring down the
> existing slaves and upgrade to ones with more storage? Is there a way
> to add disks to existing slaves? I'm using the default m1.large slaves
> set up using the spark-ec2 script.
Take a
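Two separate things may be conflated here: quotas are per-directory administrative limits set with dfsadmin, while running out of physical disk needs bigger or additional datanodes. A sketch of the quota commands (the directory name is a made-up example), plus the replication arithmetic that often surprises people:

```shell
# Quotas are per-directory limits, set by the admin -- they do not add disk:
#
#   hadoop dfsadmin -setQuota 5000000 /user/tomer            # max names (files+dirs)
#   hadoop dfsadmin -setSpaceQuota 536870912000 /user/tomer  # limit in raw bytes
#   hadoop dfsadmin -clrSpaceQuota /user/tomer               # remove the byte limit
#
# The space quota counts *replicated* bytes, so with the default replication
# factor of 3 a 500 GB quota holds only ~167 GB of user data:
quota_bytes=$((500 * 1024 * 1024 * 1024))
usable=$((quota_bytes / 3))
echo "500 GB space quota / replication 3 -> $usable bytes of user data"
```

If the problem is raw capacity rather than a quota, changing the quota won't help; you need more or larger disks on the datanodes.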
Thanks! I found the HDFS UI via this port - http://[master-ip]:50070/.
It shows only 1 HDFS node, though, even though I have 4 slaves in my
cluster. Any idea why?
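One way to see what HDFS itself thinks, independent of the web UI, is hadoop dfsadmin -report, which lists the live datanodes. A sketch that parses a made-up excerpt of that report (the real command has to run against the cluster):

```shell
# Invented excerpt of `hadoop dfsadmin -report` output, for illustration:
report="Datanodes available: 1 (1 total, 0 dead)"
live=$(printf '%s\n' "$report" | sed -E 's/Datanodes available: ([0-9]+).*/\1/')
echo "live datanodes: $live"
# If this reports 1 while 4 slaves exist, the datanode daemons on the other
# slaves are probably not running or cannot reach the namenode (on spark-ec2
# clusters the HDFS scripts typically live under /root/ephemeral-hdfs).
```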
On Sun, Sep 7, 2014 at 4:29 PM, Ognen Duzlevski
<ognen.duzlev...@gmail.com> wrote:
> On 9/7/2014 7:27 AM, Tomer Benyamini wrote:
>> 2. What should