Hi all
I was not satisfied with the approach mentioned above, so I tried the Hadoop
SOCKS server config at the client end and used ssh with the -D option, as
suggested by Hariharan Iyer (thank you for that). It worked as expected,
without needing to open separate ssh tunnels for the datanodes.
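A sketch of the setup described above; the gateway hostname, username, and local port 1080 are placeholders, not values from this thread:

```shell
# Open a dynamic (SOCKS) tunnel to the cluster network through the gateway host
ssh -N -D 1080 user@cluster-gateway &

# The client-side core-site.xml then routes all Hadoop RPC sockets through it:
#   hadoop.rpc.socket.factory.class.default = org.apache.hadoop.net.SocksSocketFactory
#   hadoop.socks.server                     = localhost:1080
# With that in place, both namenode and datanode traffic go via the one tunnel:
hadoop fs -ls /
```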
Thanks.
On
Thank you all for your help.
The solution that worked for me is as follows:
I opened an ssh tunnel to the namenode, which ensures that hadoop fs -ls works.
In order for hadoop fs -put to work (it was timing out because the namenode
was returning private IP addresses of the datanodes, which can't be resolved by the
edge ma
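The first step described here can be sketched as follows; the hostnames, username, and the default NameNode RPC port 8020 are assumptions, not values from this thread:

```shell
# Forward the NameNode RPC port through the gateway host
ssh -N -L 8020:namenode.internal:8020 user@cluster-gateway &

# With fs.defaultFS pointed at hdfs://localhost:8020, metadata operations work:
hadoop fs -ls /
# ...but 'hadoop fs -put' still times out, because the NameNode hands back the
# datanodes' private addresses, which the edge machine cannot reach directly.
```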
You will have to use a SOCKS proxy (the -D option in the ssh tunnel). In addition,
when invoking the hadoop fs command, you will have to add -Dsocks.proxyHost and
-Dsocks.proxyPort.
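Put together, the suggestion might look like this; the gateway hostname, username, port 1080, and file paths are placeholders:

```shell
# Dynamic SOCKS tunnel into the cluster network
ssh -N -D 1080 user@cluster-gateway &

# Pass the SOCKS proxy settings on the hadoop fs command line, as suggested,
# so block transfers to the datanodes also go through the tunnel:
hadoop fs -Dsocks.proxyHost=localhost -Dsocks.proxyPort=1080 -put local.txt /tmp/
```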
Thanks,
Hariharan
On Thu, 12 Sep 2019, 23:26 saurabh pratap singh, wrote:
> Thank you so much for your reply .
> I have furt
Hi
Hadoop is designed to avoid proxies, since a proxy would become a bottleneck.
The namenode is used to obtain direct socket connections between the client
and the datanodes, specific to each job.
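This direct data path can be made visible by asking the NameNode for a file's block locations; the path below is a placeholder:

```shell
# fsck prints, for each block, the datanode addresses a client would dial
# directly -- the connections a plain proxy in front of the namenode would miss
hdfs fsck /tmp/example.txt -files -blocks -locations
```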
On Fri, 13 Sept 2019 at 14:21, Tony S. Wu wrote:
> You need connectivity from edge node to the entire cluster, not just
> namenode
Thank you so much for your reply.
I have a further question: there are some blogs that talk about a similar
setup, like this one:
https://github.com/vkovalchuk/hadoop-2.6.0-windows/wiki/How-to-access-HDFS-behind-firewall-using-SOCKS-proxy
I am just curious how that works.
On Thu, Sep 12,
You need connectivity from the edge node to the entire cluster, not just the
namenode. Your topology, unfortunately, probably won't work too well. A
proper VPN / IPsec tunnel might be a better idea.
On Thu, Sep 12, 2019 at 12:04 AM saurabh pratap singh <
saurabh.cs...@gmail.com> wrote:
> Hadoop version :
Hi Markus,
The HDFS NFS gateway currently does not support snapshots. The following
issues are tracking this in HDFS:
https://issues.apache.org/jira/browse/HDFS-5084
https://issues.apache.org/jira/browse/HDFS-11315
However, we do not yet have a fix for these JIRAs.
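For reference, snapshots are normally reached through HDFS's reserved .snapshot path, which the NFS gateway does not expose; the directory and snapshot names below are placeholders:

```shell
# Enable and take a snapshot of a directory
hdfs dfsadmin -allowSnapshot /data
hdfs dfs -createSnapshot /data s1

# Readable through the HDFS client, but not through an NFS gateway mount
hdfs dfs -ls /data/.snapshot/s1
```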
Thanks,
Mukul
On 5/9/19 1:16 AM, mark
Thanks! They are fine; I was just confused seeing them talked about in forums.
John
-----Original Message-----
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Friday, July 05, 2013 8:01 PM
To:
Subject: Re: Accessing HDFS
These APIs (ClientProtocol, DFSClient) are not for Public access.
Please do not use them in production. The only APIs we take care not to
change incompatibly are the FileContext and FileSystem APIs. They
provide much of what you want; if not, log a JIRA.
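A minimal sketch of going through the public FileSystem API instead; the namenode URI and port are placeholders, and this assumes the standard hadoop-common client library is on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS would normally come from core-site.xml; set here for illustration
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        // FileSystem is the supported, stable public API
        // (unlike the internal DFSClient / ClientProtocol classes)
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```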
On Fri, Jul 5, 2013 at 11:40 PM, John Lilley