ssh localhost returns "Connection closed by ::1" under Cygwin installation on Windows 8

2015-07-23 Thread Yepeng Sun
Hi,

I tried to install Hadoop on Windows 8 to form multi-node clusters. So first I 
have to install Cygwin in order to get SSH working. I installed Cygwin with the 
sshd service successfully, and I also generated passwordless keys. But when I 
run "ssh localhost", it returns "Connection closed by ::1". Does anyone have the 
same experience? How did you solve it?

Thanks!

Yepeng



Re: ssh localhost returns "Connection closed by ::1" under Cygwin installation on Windows 8

2015-07-23 Thread sajid mohammed
This may help you:

I experienced this same issue. The problem, for me at least, was the
creation of the cyg_server user by ssh-host-config. It was created
without a home directory and with its shell set to /bin/false. So, I
altered the /etc/passwd entry for the cyg_server user to use
/home/cyg_server (changed from /var/empty) and /bin/bash (changed from
/bin/false), and created the home directory for the user. Then I tried to
reconnect as the cyg_server user and voila.

In short:

  mkdir /home/cyg_server
  vim /etc/passwd

and change the cyg_server line from

  cyg_server:...(bunch of stuff)...:/var/empty:/bin/false

to

  cyg_server:...(bunch of stuff)...:/home/cyg_server:/bin/bash

Presumably Cygwin has a usermod command, or an equivalent, that would do this a
little more safely, but I was impatient and this is what I did. If anyone follows
this, please be extremely careful when editing the /etc/passwd file.
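For reference, a minimal sketch of the same change as commands, assuming the
default cyg_server entry and that the service registered by ssh-host-config is
simply named "sshd" (both assumptions; back up /etc/passwd first):

  # back up /etc/passwd before touching it
  cp /etc/passwd /etc/passwd.bak

  # create the missing home directory and hand it to cyg_server
  mkdir -p /home/cyg_server
  chown cyg_server /home/cyg_server

  # rewrite the home and shell fields of the cyg_server entry
  # (the same edit as the manual vim change above)
  sed -i 's|^\(cyg_server:\([^:]*:\)\{4\}\)/var/empty:/bin/false$|\1/home/cyg_server:/bin/bash|' /etc/passwd

  # restart the service so sshd picks up the change
  # (service name is an assumption; check with: cygrunsrv --list)
  net stop sshd
  net start sshd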

On Fri, Jul 24, 2015 at 3:05 AM, Yepeng Sun yepeng@llamasoft.com
wrote:

 Hi,

 I tried to install Hadoop on Windows 8 to form multi-node clusters. So
 first I have to install Cygwin in order to get SSH working. I installed
 Cygwin with the sshd service successfully, and I also generated
 passwordless keys. But when I run “ssh localhost”, it returns “Connection
 closed by ::1”. Does anyone have the same experience? How did you solve it?

 Thanks!

 Yepeng





How to install fuse-dfs correctly in the current Hadoop version

2015-07-23 Thread 罗辉

Hi Chris and all,

I am also trying to access and operate on HDFS from a Perl application via
fuse-dfs. How can I do this successfully?

I installed the hadoop-2.7.0 source and ran "mvn clean package -Pnative
-Drequire.fuse=true -DskipTests -Dmaven.javadoc.skip=true" successfully.
However, running "fuse_dfs_wrapper.sh dfs://master:9000 /export/hdfs -d"
fails with the error message "./fuse_dfs_wrapper.sh: line 54: fuse_dfs:
command not found". I checked line 54 of fuse_dfs_wrapper.sh; it is
"fuse_dfs $@".

In the Hadoop 2.7.0 source there is no fuse_dfs.sh in the path
hadoop-2.7.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs.
Any idea how to solve this problem?

Thanks.
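
A note on the "fuse_dfs: command not found" error above: fuse_dfs_wrapper.sh
simply invokes a fuse_dfs executable, so that binary has to be on PATH. A rough
sketch for checking whether the -Pnative build produced it and wiring it up
(the output location is an assumption and varies between versions):

  cd hadoop-2.7.0-src

  # look for the native binary produced by the -Pnative -Drequire.fuse=true build
  find . -type f -name fuse_dfs

  # if it was built (typically somewhere under hadoop-hdfs-project/hadoop-hdfs/target),
  # put its directory on PATH before calling the wrapper
  FUSE_DFS_BIN="$(find . -type f -name fuse_dfs | head -n 1)"
  export PATH="$PATH:$(cd "$(dirname "$FUSE_DFS_BIN")" && pwd)"

  ./hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh \
      dfs://master:9000 /export/hdfs -d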

Best wishes.

San.Luo
Celloud

On 2015-07-23 at 03:18, Chris Nauroth cnaur...@hortonworks.com wrote:

The only fuse-dfs documentation I'm aware of is here:


https://github.com/apache/hadoop/tree/release-2.7.1/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/doc


(This is a link into the source for the recent 2.7.1 release.)


Unfortunately, this is somewhat outdated.  I can tell because the build command 
shows ant, but we've converted to Maven.  Running this command would build it:


mvn clean package -Pnative -Drequire.fuse=true -DskipTests 
-Dmaven.javadoc.skip=true


If you need more information on setting up a Hadoop build environment, see 
BUILDING.txt in the root of the project's source tree.
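
For example, on Ubuntu a full pass might look roughly like this (a sketch only:
the package list is the usual set of native-build prerequisites from
BUILDING.txt, ProtocolBuffers 2.5.0 is required, and the source directory name
is illustrative):

  # native-build prerequisites (see BUILDING.txt for the authoritative list)
  sudo apt-get install build-essential cmake zlib1g-dev libssl-dev \
      protobuf-compiler libprotobuf-dev libfuse-dev

  # build the native bits, including fuse-dfs
  cd hadoop-2.7.1-src
  mvn clean package -Pnative -Drequire.fuse=true -DskipTests -Dmaven.javadoc.skip=true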


--Chris Nauroth




From: Caesar Samsi caesarsa...@mac.com
Date: Wednesday, July 22, 2015 at 11:20 AM
To: Chris Nauroth cnaur...@hortonworks.com, user@hadoop.apache.org 
user@hadoop.apache.org
Subject: RE: fuse-dfs



Hi Chris, and all,
 
fuse-dfs is also of interest; would you have a getting-started link for it? The 
link I have is rather old, from version 0.x.
 
Thank you! Caesar.
 
From: Chris Nauroth [mailto:cnaur...@hortonworks.com]
Sent: Wednesday, July 22, 2015 11:56 AM
To: user@hadoop.apache.org
Subject: Re: hadoop-hdfs-fuse missing?


 
Hello Caesar,

 

Since this is a question specific to a vendor's distribution and how to consume 
their packaging, I recommend taking the question to the vendor's own forums.  
Questions about how to use fuse-dfs or build it from Apache source definitely 
would be a good fit for this list though.

 

Thank you!

 

--Chris Nauroth



 

From: Caesar Samsi caesarsa...@mac.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Tuesday, July 21, 2015 at 5:10 PM
To: user@hadoop.apache.org user@hadoop.apache.org
Subject: hadoop-hdfs-fuse missing?

 

Hi,
 
I’m trying to install the hadoop-hdfs-fuse package on an Ubuntu machine.
 
I’ve added the Cloudera repository:

  deb [arch=amd64] http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
 
I’ve also run sudo apt-get update.
 
When I do sudo apt-get install hadoop-hdfs-fuse I get an “Unable to locate 
package” error.
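
A small sketch of how one could check what that repository actually provides
after the update (the sources file name below is illustrative, and whether this
particular repository carries hadoop-hdfs-fuse is a question for the vendor's
documentation):

  # repository line as added above, typically kept in its own sources file
  cat /etc/apt/sources.list.d/cloudera.list
  # deb [arch=amd64] http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib

  sudo apt-get update

  # see whether the package is visible from the configured repositories at all
  apt-cache policy hadoop-hdfs-fuse
  apt-cache search hadoop | grep -i fuse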
 
Did I use the right repository? If not, what is the correct one?
 
Thank you, Caesar.
 













Re: NNBench on external HDFS

2015-07-23 Thread Alexander Striffeler

Hi Chris,

Wow, thanks a lot for your swift and extensive response! I'll try your 
suggestion with the local copy and in a second step I'll open a jira 
request...


Have a good day,
--Alex

On 22.07.2015 20:14, Chris Nauroth wrote:

Hi Alexander,

Your NNBench usage looks basically correct, but NNBench is not a standard
Hadoop tool.  It does not implement the org.apache.hadoop.util.Tool
interface, it does not execute through org.apache.hadoop.util.ToolRunner,
and therefore it does not support the command line arguments that a lot of
other Hadoop tools like the FsShell support.  Specifically, it does not
support passing -D arguments to override fs.defaultFS or any other
configuration properties.

An alternative way to handle this would be to get a local copy of the
configuration directory from the remote cluster that you want to test.  I
expect those configuration files would have fs.defaultFS set to the URL of
that remote cluster in core-site.xml.  Before launching NNBench, run
"export HADOOP_CONF_DIR=<path to local copy of configuration files>".
After exporting that environment variable, you can run "hadoop classpath"
to print the classpath that will be used by all hadoop commands and
confirm that the correct configuration directory for the target cluster is
on the classpath.  Then, you can run NNBench again, but drop the -D
argument, since it's going to get ignored anyway.
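
As a concrete sketch of that workaround (the configuration path is a
placeholder, and the jar location and base directory are taken from earlier in
this thread, so treat this as illustrative rather than a verified recipe):

  # point the client at a local copy of the remote cluster's configuration,
  # whose core-site.xml carries the remote fs.defaultFS
  export HADOOP_CONF_DIR=/path/to/remote-cluster-conf

  # confirm that directory is on the classpath used by the hadoop commands
  hadoop classpath | tr ':' '\n' | grep remote-cluster-conf

  # run NNBench again, without the (ignored) -D option
  hadoop jar /usr/hdp/2.2.6.0-2800/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar \
      nnbench -operation create_write -bytesToWrite 10 -maps 2 -reduces 1 \
      -numberOfFiles 100 -baseDir /user/username/nnbench-`hostname -s`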

I don't see any reason why NNBench shouldn't implement the standard Tool
interface and thus support the command line arguments that you were
expecting.  If you'd like to request that as an enhancement, please go
ahead and file an HDFS jira to request it.  Feel free to post a patch too
if you're inclined.  Otherwise, someone else in the community can pick it
up.

I hope this helps.

--Chris Nauroth




On 7/22/15, 12:41 AM, Alexander Striffeler
a.striffe...@students.unibe.ch wrote:


Hi all

I'm pretty new to the Hadoop environment and I'm about to perform some
micro-benchmarks. In particular, I'm struggling with executing NNBench
against an external file system:

hadoop jar /usr/hdp/2.2.6.0-2800/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar \
  nnbench -Dfs.defaultFS='hfds://external.file.system' -operation create_write \
  -bytesToWrite 10 -maps 2 -reduces 1 -numberOfFiles 100 \
  -baseDir hdfs://dapsilon.daplab.ch/user/username/nnbench-`hostname -s`

yields:

java.lang.IllegalArgumentException: Wrong FS:
hdfs://external.file.system/user/username/nnbench-hostname/data,
expected: hdfs://<native fs>

If I omit the external FS prefix in the baseDir, NNBench simply ignores
the -D option and writes the files to the native DFS. Does anyone have
an idea how to solve this and point NNBench at an external DFS?

Thanks a lot, any hints are much appreciated!
Regards,
Alex




RE: NNBench on external HDFS

2015-07-23 Thread Brahma Reddy Battula
Hi Alex,

HDFS-7651 has been raised for the same issue. Please take a look.

Thanks & Regards
 Brahma Reddy Battula
___
From: Alexander Striffeler [a.striffe...@students.unibe.ch]
Sent: Thursday, July 23, 2015 1:00 PM
To: user@hadoop.apache.org
Subject: Re: NNBench on external HDFS

Hi Chris,

Wow, thanks a lot for your swift and extensive response! I'll try your
suggestion with the local copy and in a second step I'll open a jira
request...

Have a good day,
--Alex
