Re: fuse-dfs

2008-08-08 Thread Sebastian Vieira
Hi Pete,

From within the 0.19 source I did:

ant jar
ant metrics.jar
ant test-core

This resulted in three jar files within $HADOOP_HOME/build:

[EMAIL PROTECTED] hadoop-0.19]# ls -l build/*.jar
-rw-r--r-- 1 root root 2201651 Aug  8 08:26 build/hadoop-0.19.0-dev-core.jar
-rw-r--r-- 1 root root 1096699 Aug  8 08:29 build/hadoop-0.19.0-dev-test.jar
-rw-r--r-- 1 root root   55695 Aug  8 08:26
build/hadoop-metrics-0.19.0-dev.jar

I've added these to the CLASSPATH in the wrapper script:

for f in $HADOOP_HOME/build/*.jar; do
  export CLASSPATH=$CLASSPATH:$f
done

This still produced the same error, so (thanks to the more detailed error
output your patch provided) I renamed hadoop-0.19.0-dev-core.jar to
hadoop-core.jar to match the regexp.
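
For anyone else hitting this, the rename amounted to something like:

cd $HADOOP_HOME/build
mv hadoop-0.19.0-dev-core.jar hadoop-core.jar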

Then I figured out that I can't use dfs://master:9000, because in
hadoop-site.xml I specified that dfs should run on port 54310 (doh!). So I
issued this command:

./fuse_dfs_wrapper.sh dfs://master:54310 /mnt/hadoop -d
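
For reference, the port comes from the fs.default.name property in
hadoop-site.xml; the relevant bit of mine looks something like this:

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>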

Success! Although the output from df -h is a bit weird:

fuse  512M 0  512M   0% /mnt/hadoop

I added some data:

for x in `seq 1 25`; do
  dd if=/dev/zero of=/mnt/hadoop/test-$x.raw bs=1MB count=10
done
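
(That's 25 files of 10 MB each, so roughly 250 MB of data; with HDFS's
default replication factor of 3, that comes to about 750 MB of actual block
storage.)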

And now the output from df -h is:

fuse  512M -3.4G  3.9G   -  /mnt/hadoop

Note that my HDFS setup now consists of 20 nodes exporting 15G each, so df
is a little confused. Hadoop's status page (dfshealth.jsp) does report the
usage correctly though, with the blocks evenly divided over all the nodes.
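
The same numbers are available on the command line; something like this
should agree with the status page:

$HADOOP_HOME/bin/hadoop dfsadmin -report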

What I didn't understand, however, is why there's no fuse-dfs in the
downloadable tarballs. Am I looking in the wrong place, perhaps?

Anyway, now that I've got things mounted, I've come upon the next problem: I
can't do much other than dd :)

[EMAIL PROTECTED] fuse-dfs]# touch /mnt/hadoop/test.tst
touch: setting times of `/mnt/hadoop/test.tst': Function not implemented
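
My guess is that touch creates the file fine and it's the follow-up
utime()/utimes() call that this fuse-dfs version doesn't implement;
something like this should show which syscall actually returns the error:

strace touch /mnt/hadoop/test.tst 2>&1 | tail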


regards,

Sebastian


Re: fuse-dfs

2008-08-07 Thread Sebastian Vieira
On Thu, Aug 7, 2008 at 4:25 PM, Pete Wyckoff <[EMAIL PROTECTED]> wrote:

>
> Hi Sebastian,
>
> Those 2 things are just warnings and shouldn't cause any problems.  What
> happens when you ls /mnt/hadoop ?


[EMAIL PROTECTED] fuse-dfs]# ls /mnt/hadoop
ls: /mnt/hadoop: Transport endpoint is not connected

Also, this happens when I start fuse-dfs in one terminal and do a df -h in
another:

[EMAIL PROTECTED] fuse-dfs]# ./fuse_dfs_wrapper.sh dfs://master:9000 /mnt/hadoop
-d
port=9000,server=master
fuse-dfs didn't recognize /mnt/hadoop,-2
fuse-dfs ignoring option -d
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x0003
max_readahead=0x0002
   INIT: 7.8
   flags=0x0001
   max_readahead=0x0002
   max_write=0x0010
   unique: 1, error: 0 (Success), outsize: 40
unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40

-now I do a df -h in the other terminal-

Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/hadoop/conf/Configuration
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.conf.Configuration
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClassInternal(Unknown Source)

Then the output from df is:

df: `/mnt/hadoop': Software caused connection abort
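
That NoClassDefFoundError suggests the JVM that libhdfs spawns can't see the
Hadoop core classes at all. A hypothetical debug line just before the
wrapper launches fuse_dfs would confirm what's actually on the classpath:

# hypothetical: list the classpath one entry per line
echo "$CLASSPATH" | tr ':' '\n' | grep -i hadoop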



>  And also what version of fuse-dfs are you
> using? The handling of options is different in trunk than in the last
> release.


[EMAIL PROTECTED] fuse-dfs]# ./fuse_dfs --version
./fuse_dfs 0.1.0

I did a checkout of the latest SVN trunk and compiled using the command you
gave in one of your previous mails.


>
> You can also look in /var/log/messages.
>

Only one line:
Aug  7 20:21:05 master fuse_dfs: mounting dfs://master:9000/


Thanks for your time,


Sebastian


Re: fuse-dfs

2008-08-07 Thread Sebastian Vieira
Thanks. After a lot of experimenting (and of course, right before you sent
this reply) I figured it out. I also had to add the path to libhdfs.so to my
ld.so.conf and run ldconfig (roughly the commands shown after the error
below) before I was able to successfully compile fuse_dfs. However, when I
try to mount the HDFS, it fails. I have tried both the wrapper script and
the single binary. Both display the following error:

fuse-dfs didn't recognize /mnt/hadoop,-2
fuse-dfs ignoring option -d
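
For the record, that libhdfs step amounted to roughly this (the path is from
my build tree and may differ):

echo /usr/local/src/hadoop-core-trunk/build/libhdfs >> /etc/ld.so.conf
ldconfig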

regards,

Sebastian

On Wed, Aug 6, 2008 at 5:29 PM, Pete Wyckoff <[EMAIL PROTECTED]> wrote:

>
> Sorry - I see the problem now: should be:
>
> ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
>
> compile-contrib depends on compile-libhdfs, which also requires the
> -Dlibhdfs=1 property to be set.
>
> pete
>
>
> On 8/6/08 5:04 AM, "Sebastian Vieira" <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> >
> > I have installed Hadoop on 20 nodes (data storage) and one master
> > (namenode) to which I want to add data.
> >
> > -snip-
>
>


fuse-dfs

2008-08-06 Thread Sebastian Vieira
Hi,

I have installed Hadoop on 20 nodes (data storage) and one master (namenode)
to which I want to add data. I have learned that this is possible through a
Java API or via the Hadoop shell. However, I would like to mount the HDFS
using FUSE, and I discovered that there's a contrib/fuse-dfs within the
Hadoop tar.gz package. I read the README file and found that I was unable to
compile using the command it gives:

ant compile-contrib -Dcompile.c++=1 -Dfusedfs=1

If I change the command to:

ant compile-contrib -Dcompile.c++=1 -Dlibhdfs-fuse=1

It goes a little further: it now starts the configure script, but still
fails. I've tried a lot of different things, but I'm unable to compile
fuse-dfs. This is a piece of the error I get from ant:

compile:
 [echo] contrib: fuse-dfs
-snip-
 [exec] Making all in src
 [exec] make[1]: Entering directory
`/usr/local/src/hadoop-core-trunk/src/contrib/fuse-dfs/src'
 [exec] gcc  -Wall -O3 -L/usr/local/src/hadoop-core-trunk/build/libhdfs
-lhdfs -L/usr/lib -lfuse -L/usr/java/jdk1.6.0_07/jre/lib/i386/server -ljvm
-o fuse_dfs  fuse_dfs.o
 [exec] /usr/bin/ld: cannot find -lhdfs
 [exec] collect2: ld returned 1 exit status
 [exec] make[1]: *** [fuse_dfs] Error 1
 [exec] make[1]: Leaving directory
`/usr/local/src/hadoop-core-trunk/src/contrib/fuse-dfs/src'
 [exec] make: *** [all-recursive] Error 1

BUILD FAILED
/usr/local/src/hadoop-core-trunk/build.xml:413: The following error occurred
while executing this line:
/usr/local/src/hadoop-core-trunk/src/contrib/build.xml:30: The following
error occurred while executing this line:
/usr/local/src/hadoop-core-trunk/src/contrib/fuse-dfs/build.xml:40: exec
returned: 2
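
The link line above points at build/libhdfs, so presumably the first thing
to check is whether libhdfs.so actually exists there:

ls -l /usr/local/src/hadoop-core-trunk/build/libhdfs/libhdfs.so*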


Could somebody shed some light on this?


thanks,

Sebastian.