Hi all,
I fixed the previous issue but now I am getting this:
Entry in /etc/fstab:
fuse_dfs#dfs://7071bcce81d9:54310 /home/jony/FreshHadoop/mnt fuse -oallow_other,rw,-ousetrash 0 0
$ sudo mount /home/jony/FreshHadoop/mnt
port=54310,server=7071bcce81d9
fuse-dfs didn't recognize /home/jony/Fresh
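For a quick way to see what fuse-dfs actually parses, a sketch of mounting the same target by hand with debug output; the wrapper invocation and the -d flag follow the MountableHDFS wiki referenced later in the thread, and the host/port are simply the ones from the fstab line above:

  # unmount any stale attempt first (ignore errors if nothing is mounted)
  sudo umount /home/jony/FreshHadoop/mnt 2>/dev/null

  # mount manually in debug mode so argument-parsing errors show up in the foreground
  fuse_dfs_wrapper.sh dfs://7071bcce81d9:54310 /home/jony/FreshHadoop/mnt -d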
+1 for Arun and Todd
Alexander Lorenz
http://mapredit.blogspot.com
On Jan 4, 2012, at 9:40 AM, Todd Lipcon wrote:
> On Wed, Jan 4, 2012 at 9:28 AM, Arun C Murthy wrote:
>> Other than these technical discussions, I don't see why ASF lists should be
>> used to discuss or market products of _any
Hi Martinus,
As Harsha mentioned, HA is under development.
A couple of things you can do for a HOT-COLD setup are:
1. Multiple dirs for ${dfs.name.dir}
2. Place ${dfs.name.dir} on a RAID 1 mirror setup
3. NFS as one of the ${dfs.name.dir} (see the config sketch below)
-Bharath
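For illustration, a minimal hdfs-site.xml sketch of points 1 and 3 above; the local path and the NFS mount point are hypothetical and need to match your own layout:

  <property>
    <name>dfs.name.dir</name>
    <!-- local disk plus an NFS-backed directory (hypothetical paths) -->
    <value>/data/1/dfs/name,/mnt/nn-nfs/dfs/name</value>
  </property>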
On Wed, Jan 4, 2012 at 1:19 AM, Harsh J wrote:
On Wed, Jan 4, 2012 at 9:28 AM, Arun C Murthy wrote:
> Other than these technical discussions, I don't see why ASF lists should be
> used to discuss or market products of _any_ vendor for several good reasons:
> # ASF developer community cannot help users of vendor-specific products for
> obvious
On Jan 3, 2012, at 7:52 PM, M. C. Srivas wrote:
>
> On Tue, Jan 3, 2012 at 4:01 PM, Arun C Murthy wrote:
> Stuti - it's best to stick to questions about Apache Hadoop on
> *@hadoop.apache.org lists. The Apache Hadoop mailing lists exist to help
> users and developers of Apache Hadoop.
>
> this
Hi,
Please ping the host you want to reach and check your hosts file and your
resolv.conf.
- Alex
Alexander Lorenz
http://mapredit.blogspot.com
On Jan 4, 2012, at 7:28 AM, Oren wrote:
> so it seems but doing a dig from terminal command line returns the results
> correctly.
> the same settin
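A quick sketch of those checks, using the s06.xxx.local name from the exception quoted further down; substitute a real host from your cluster:

  ping -c 3 s06.xxx.local        # does the name resolve and answer at all?
  cat /etc/hosts                 # any stale or conflicting static entries?
  cat /etc/resolv.conf           # which nameservers are consulted, and in what order?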
so it seems but doing a dig from terminal command line returns the
results correctly.
the same settings have been running on production servers (not hadoop) for
months without problems.
clarification - i changed server names in the logs; the domain isn't xxx.local
originally.
On 01/04/2012 05:19 PM, Har
Looks like your caching DNS servers aren't really functioning as you'd
expect them to?
> org.apache.hadoop.hbase.ZooKeeperConnectionException:
> java.net.UnknownHostException: s06.xxx.local
(That .local also worries me, you probably have a misconfiguration in
resolution somewhere.)
On Wed, Jan 4
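One reason dig can succeed while the JVM still throws UnknownHostException: dig queries the nameservers in resolv.conf directly, while Java typically resolves through the system resolver (nsswitch), so the two can disagree. A short sketch to compare the paths, again using the hostname from the exception above:

  dig +short s06.xxx.local            # direct DNS query, bypasses /etc/hosts and nsswitch
  getent hosts s06.xxx.local          # libc/nsswitch lookup, closer to what the JVM sees
  grep '^hosts:' /etc/nsswitch.conf   # if mdns4_minimal comes before dns, .local names may never reach DNS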
hi.
i have a small hadoop grid connected with a 1g network.
when the servers are configured to use the local dns server, the jobs run
without a problem and copy speed during reduce is tens of MB.
once i change the servers to work with a cache-only named server on each
node, i start to get fa
Hi Stuti,
Do a search for libhdfs.so* and also do an ldd /path/to/fuse_dfs. It could be that
only a symlink is missing. With ldd you will see which libraries the binary
wants; if libhdfs.so.1 is not in the path, export the path where you found
it.
- Alex
Alexander Lorenz
http://mapredit.blog
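A sketch of those checks; /sbin/fuse_dfs and the libhdfs build path are the ones quoted elsewhere in the thread, and the symlink line is only a hypothetical example of the kind of fix meant:

  # where did the build put libhdfs?
  find / -name 'libhdfs.so*' 2>/dev/null

  # which shared libraries does the binary want, and which of them are missing?
  ldd /sbin/fuse_dfs | grep 'not found'

  # make the directory containing libhdfs visible to the loader
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/jony/FreshHadoop/hadoop-0.20.2/build/libhdfs

  # if a versioned name such as libhdfs.so.1 is wanted but only libhdfs.so exists,
  # a symlink may be all that is missing, e.g. (run inside the libhdfs build dir):
  # sudo ln -s libhdfs.so libhdfs.so.1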
Per recommendations I received in Cloudera's Hadoop Administrator training, I
configured our dfs.name.dir property with 3 directories: one on the NN, one on
an NFS mount to a Hadoop client machine (in the same rack as the NN), and one
on an NFS mount to a NAS (different rack, same datacenter).
I have already exported it in the env. Output of the "export" command:
declare -x
LD_LIBRARY_PATH="/usr/lib:/usr/local/lib:/home/jony/FreshHadoop/hadoop-0.20.2/build/libhdfs:/usr/lib/jvm/java-6-openjdk/jre/lib/i386/server/:/usr/lib/libfuse.so"
Stuti
From: Ha
Stuti,
Your env needs to carry this:
export
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/dir/where/libhdfs/files/are/present
Otherwise the fuse_dfs binary won't be able to find and load it. The
wrapper script does this as part of its setup if you read it.
On Wed, Jan 4, 2012 at 5:29 PM, Stuti Awa
I'm able to mount using the command:
fuse_dfs_wrapper.sh dfs://: /export/hdfs
-Original Message-
From: Stuti Awasthi
Sent: Wednesday, January 04, 2012 5:24 PM
To: hdfs-user@hadoop.apache.org
Subject: RE: Mounting HDFS
Harsh,
Output of $ file `which fuse_dfs`
/sbin/fuse_dfs: ELF 32-bit LSB
Harsh,
Output of $ file `which fuse_dfs`
/sbin/fuse_dfs: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
Same output for $ file /sbin/fuse_dfs
Thanks
From: Harsh J [ha...
Stuti,
My original command was "file `which fuse_dfs`", not just the which command.
Can you run "file /sbin/fuse_dfs"? You need the utility called 'file' available
(it's usually present).
On 04-Jan-2012, at 5:08 PM, Stuti Awasthi wrote:
> Hi Harsh,
>
> Currently I am using 32 bit Ubuntu11.1
Hi Harsh,
Currently I am using 32-bit Ubuntu 11.10 and Hadoop 0.20.2
Output of : $ which fuse_dfs
/sbin/fuse_dfs
I searched on the net and found this URL:
http://wiki.apache.org/hadoop/MountableHDFS
How can I get HDFS fuse deb or rpm packages? Thanks for pointing this out; can
you please guide me further?
Stuti,
What's your platform - 32-bit or 64-bit? Which one have you built libhdfs for?
What's the output of the following?
$ file `which fuse_dfs`
FWIW, the most hassle-free way to do these things today is to use proper
packages available for your platform, instead of compiling it yourself.
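A short sketch of the kind of consistency check being asked for: the fuse_dfs binary, libhdfs, and the JVM all need to be the same word size. The libhdfs and JRE paths below are the ones from the LD_LIBRARY_PATH quoted earlier and may differ on other setups:

  uname -m                                                              # machine architecture
  file /sbin/fuse_dfs                                                   # 32-bit or 64-bit ELF?
  file /home/jony/FreshHadoop/hadoop-0.20.2/build/libhdfs/libhdfs.so*   # must match the binary
  file /usr/lib/jvm/java-6-openjdk/jre/lib/i386/server/libjvm.so        # an i386 JRE implies 32-bit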
Hi All,
I am following http://wiki.apache.org/hadoop/MountableHDFS for mounting HDFS.
I have successfully followed the steps up to "Installing" and I am able to mount
it properly. After that I am trying the "Deploying" step and followed these steps:
1. add the following to /etc/fstab
fuse_dfs#dfs://hado
Martinus,
High-Availability NameNode is being worked on and an initial version
will be out soon. Check out the
https://issues.apache.org/jira/browse/HDFS-1623 JIRA for its
state/discussions.
You can also clone the Hadoop repo and switch to branch 'HDFS-1623' to
give it a whirl, although it is s
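For reference, a sketch of trying that branch from a git checkout; the mirror URL is an assumption (use whichever Apache Hadoop git mirror you normally clone), and the branch name is the one given above:

  git clone https://github.com/apache/hadoop.git   # or the ASF git mirror
  cd hadoop
  git checkout HDFS-1623                           # the HA NameNode development branch
  git log --oneline -5                             # confirm the checkout picked up the branch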
Hi Bharath,
Thanks for your answer. I remember Hadoop has a single point of failure,
which is its namenode. Is there a way to make my Hadoop clusters become
fault tolerant, even when the master node (namenode) fails?
Thanks and Happy New Year 2012.
On Tue, Jan 3, 2012 at 2:20 AM, Bharath Mund