Re: How to mount HDFS as a local file system?

2016-11-10 Thread Alexandr Porunov
Thank you for the suggestions. I tried to install MountableHDFS but failed. The wiki says: *2. fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /export/hdfs -d and from another terminal, try ls /export/hdfs* Where do I get this script? I have installed libfuse:
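For reference, fuse_dfs_wrapper.sh is not shipped in the binary tarball; it lives in the Hadoop source tree and is built with the native profile. A rough sketch for a 2.7.x source checkout (the exact path and build flags vary between releases, so treat them as assumptions):

    # build the native bits, fuse_dfs among them
    # (needs libfuse headers, cmake, protobuf 2.5 and a JDK installed)
    tar xzf hadoop-2.7.1-src.tar.gz && cd hadoop-2.7.1-src
    mvn package -Pnative -DskipTests
    # the wrapper script sits next to the fuse-dfs sources, e.g.
    ls hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
    # then mount as the wiki describes
    ./fuse_dfs_wrapper.sh dfs://hadoop_server1.foo.com:9000 /export/hdfs -d

The wrapper mostly just sets CLASSPATH and LD_LIBRARY_PATH before invoking the fuse_dfs binary, so the Hadoop jars and libhdfs must be reachable from wherever you run it.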

Re: How to mount HDFS as a local file system?

2016-11-10 Thread Ravi Prakash
Or you could use NFS: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html . In our experience, both of them (fuse-dfs and the NFS gateway) still need some work for stability and correctness. On Thu, Nov 10, 2016 at 10:00 AM, wrote: > Fuse is your tool:
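For reference, the linked doc boils down to starting the gateway daemons and then doing a plain NFSv3 mount; the mount options below follow that page for 2.7.x, and the host name and mount point are placeholders:

    # on the gateway host: stop the system nfs/rpcbind services first, then
    hdfs portmap
    hdfs nfs3
    # on the client:
    mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync hadoop_server1.foo.com:/ /mnt/hdfs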

How to mount HDFS as a local file system?

2016-11-10 Thread Alexandr Porunov
Hello, I am trying to figure out how to mount HDFS as a local file system, so far without success. I already have a running Hadoop 2.7.1 cluster, but I can only access HDFS with the hdfs dfs tool. For example: hdfs dfs -mkdir /test Can somebody help me figure out how to mount it? Sincerely, Alexandr

RE: How to mount HDFS as a local file system?

2016-11-10 Thread wget.null
Fuse is your tool: https://wiki.apache.org/hadoop/MountableHDFS -- m: wget.n...@gmail.com b: https://mapredit.blogspot.com
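If your distribution packages fuse-dfs (CDH, for instance, ships it as hadoop-hdfs-fuse), the wiki's build steps can be skipped and the mount is a single command; the package and binary names below are distro-specific assumptions:

    mkdir -p /export/hdfs
    hadoop-fuse-dfs dfs://hadoop_server1.foo.com:9000 /export/hdfs
    ls /export/hdfs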

Re: Yarn 2.7.3 - capacity scheduler container allocation to nodes?

2016-11-10 Thread Rafał Radecki
I have already used maximum-capacity for both queues (70 and 30) to limit their resource usage, but it seems that this mechanism does not work at the node level but rather at the cluster level. We have Samza tasks on the cluster and they run for a very long time, so we cannot depend on the elasticity
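For context, the setup described above corresponds roughly to the following capacity-scheduler.xml entries (queue names are invented for illustration). Both capacity and maximum-capacity are percentages of the whole cluster, which is why the cap is enforced cluster-wide and not per node:

    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>big,small</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.big.capacity</name>
      <value>70</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.big.maximum-capacity</name>
      <value>70</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.small.capacity</name>
      <value>30</value>
    </property>
    <property>
      <name>yarn.scheduler.capacity.root.small.maximum-capacity</name>
      <value>30</value>
    </property>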

RE: Yarn 2.7.3 - capacity scheduler container allocation to nodes?

2016-11-10 Thread Bibinchundatt
Hi Rafał, There are probably two options you can look into: 1. Elasticity - Free resources can be allocated to any queue beyond its capacity. When there is demand for these resources from queues running below capacity at a future point in time, as tasks scheduled on these resources
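A quick way to watch elasticity in action is to compare a queue's configured and current capacity while jobs run; a queue that has borrowed free resources reports a current capacity above its configured one (the queue name is illustrative):

    yarn queue -status big
    # prints State, Capacity, Current Capacity and Maximum Capacity for the queue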

Re: Yarn 2.7.3 - capacity scheduler container allocation to nodes?

2016-11-10 Thread Rafał Radecki
We have 4 nodes and 4 large tasks (~30 GB each); additionally we have about 25 small tasks (~2 GB each). All tasks can be started in random order. On each node we have 50 GB for YARN. So if we start all 4 large tasks at the beginning, they are correctly scheduled across all 4 nodes. But in
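Working through the numbers above makes the packing problem concrete (assuming the stated sizes and no other overhead):

    per node:            50 GB for YARN
    large task:          30 GB -> at most one per node (2 x 30 GB = 60 GB > 50 GB)
    headroom per node:   50 - 30 = 20 GB -> room for up to 10 small 2 GB tasks
    all small tasks:     25 x 2 GB = 50 GB -> enough to fill one node completely

So if small tasks start first and more than 10 of them (over 20 GB) land on the same node, that node no longer has the 30 GB a large task needs, even though the cluster as a whole has capacity; cluster-level queue limits cannot prevent that placement.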