Thanks, Noah!
It worked!
I managed to run the wordcount example!

Can you remove the jar that is posted online? It is misleading...

Thanks!
Rolando



On Mon, Sep 23, 2013 at 5:07 PM, Noah Watkins <noah.watk...@inktank.com> wrote:
> You need to stick the CephFS jar files in the hadoop lib folder.
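> For example, something like this, assuming the Ubuntu package installed
> libcephfs.jar under /usr/share/java (adjust both paths to your setup):
>
>     cp /usr/share/java/libcephfs.jar /home/ubuntu/Projects/hadoop-common/lib/
>
> so that the ant build can resolve the com.ceph.fs classes.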
>
> On Mon, Sep 23, 2013 at 2:02 PM, Rolando Martins
> <rolando.mart...@gmail.com> wrote:
>> I tried to compile it, but the compilation failed.
>> The error log starts with:
>> compile-core-classes:
>>   [taskdef] 2013-09-23 20:59:25,540 INFO  mortbay.log
>> (Slf4jLog.java:info(67)) - Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>>     [javac] /home/ubuntu/Projects/hadoop-common/build.xml:487:
>> warning: 'includeantruntime' was not set, defaulting to
>> build.sysclasspath=last; set to false for repeatable builds
>>     [javac] Compiling 440 source files to
>> /home/ubuntu/Projects/hadoop-common/build/classes
>>     [javac] 
>> /home/ubuntu/Projects/hadoop-common/src/core/org/apache/hadoop/fs/ceph/CephFS.java:31:
>> package com.ceph.fs does not exist
>>     [javac] import com.ceph.fs.CephStat;
>>     [javac]                   ^
>>
>> What are the dependencies that I need to have installed?
>>
>>
>> On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins <noah.watk...@inktank.com> 
>> wrote:
>>> Ok thanks. That narrows things down a lot. It seems like the keyring
>>> property is not being recognized (I don't see it being set in the log), so
>>> I'm wondering if it is possible that the jar file is out of date and
>>> doesn't include these configuration features.
>>>
>>> If you clone http://github.com/ceph/hadoop-common/ and checkout the
>>> cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
>>> file.
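>>>
>>> Roughly, something like this (paths and the exact location of the built
>>> jar may differ on your machine):
>>>
>>>     git clone http://github.com/ceph/hadoop-common.git
>>>     cd hadoop-common
>>>     git checkout cephfs/branch-1.0
>>>     ant cephfs
>>>
>>> and then drop the resulting jar into your Hadoop lib directory in place of
>>> the downloaded one.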
>>>
>>> On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
>>> <rolando.mart...@gmail.com> wrote:
>>>> My bad, I associated conf_read_file with conf_set.
>>>> No, it does not appear in the logs.
>>>>
>>>> On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins <noah.watk...@inktank.com> 
>>>> wrote:
>>>>> I'm not sure what you grepped for. Does this output mean that the
>>>>> string "conf_set" didn't show up in the log?
>>>>>
>>>>> On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
>>>>> <rolando.mart...@gmail.com> wrote:
>>>>>> 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
>>>>>> 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
>>>>>> 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
>>>>>> missing keyring, cannot use cephx for authentication
>>>>>> 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
>>>>>> 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
>>>>>> 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
>>>>>> 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
>>>>>> ....
>>>>>>
>>>>>>
>>>>>> On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins <noah.watk...@inktank.com> 
>>>>>> wrote:
>>>>>>> In the log file that you are showing, do you see where the keyring file is
>>>>>>> being set by Hadoop? You can find it by grepping for: "jni: conf_set"
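>>>>>>> e.g. something like (pointing at whatever log file you configured):
>>>>>>>
>>>>>>>     grep 'jni: conf_set' /path/to/your/client.log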
>>>>>>>
>>>>>>> On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
>>>>>>> <rolando.mart...@gmail.com> wrote:
>>>>>>>> bin/hadoop fs -ls
>>>>>>>>
>>>>>>>> Bad connection to FS. command aborted. exception:
>>>>>>>>
>>>>>>>> (no other information is thrown)
>>>>>>>>
>>>>>>>> ceph log:
>>>>>>>> 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
>>>>>>>> missing keyring, cannot use cephx for authentication
>>>>>>>> 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 
>>>>>>>> max 0
>>>>>>>> 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2
>>>>>>>>
>>>>>>>> On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins 
>>>>>>>> <noah.watk...@inktank.com> wrote:
>>>>>>>>> What happens when you run `bin/hadoop fs -ls` ? This is entirely
>>>>>>>>> local, and a bit simpler and easier to grok.
>>>>>>>>>
>>>>>>>>> On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
>>>>>>>>> <rolando.mart...@gmail.com> wrote:
>>>>>>>>>> I am trying to start hadoop using bin/start-mapred.sh.
>>>>>>>>>> In the HADOOP_HOME/lib, I have:
>>>>>>>>>> lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
>>>>>>>>>> (the first I downloaded from
>>>>>>>>>> http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I 
>>>>>>>>>> copied
>>>>>>>>>> from my system (after installing the ubuntu package for the ceph java
>>>>>>>>>> client))
>>>>>>>>>>
>>>>>>>>>> I added to conf/hadoop-env.sh:
>>>>>>>>>> export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib
>>>>>>>>>>
>>>>>>>>>> I confirmed using bin/hadoop classpath that both jars are in the
>>>>>>>>>> classpath.
>>>>>>>>>>
>>>>>>>>>> On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
>>>>>>>>>> <noah.watk...@inktank.com> wrote:
>>>>>>>>>>> How are you invoking Hadoop? Also, I forgot to ask, are you using 
>>>>>>>>>>> the
>>>>>>>>>>> wrappers located in github.com/ceph/hadoop-common (or the jar linked
>>>>>>>>>>> to on http://ceph.com/docs/master/cephfs/hadoop/)?
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
>>>>>>>>>>> <rolando.mart...@gmail.com> wrote:
>>>>>>>>>>>> Hi Noah,
>>>>>>>>>>>> I enabled the debugging and got:
>>>>>>>>>>>>
>>>>>>>>>>>> 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): 
>>>>>>>>>>>> ERROR:
>>>>>>>>>>>> missing keyring, cannot use cephx for authentication
>>>>>>>>>>>> 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache 
>>>>>>>>>>>> size 0 max 0
>>>>>>>>>>>> 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit 
>>>>>>>>>>>> ret -2
>>>>>>>>>>>>
>>>>>>>>>>>> I have the ceph.client.admin.keyring file in /etc/ceph and I tried
>>>>>>>>>>>> with and without the keyring parameter in core-site.xml.
>>>>>>>>>>>> Unfortunately, without success :(
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Rolando
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>         <name>fs.ceph.impl</name>
>>>>>>>>>>>>         <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>         <name>fs.default.name</name>
>>>>>>>>>>>>         <value>ceph://hyrax1:6789/</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>         <name>ceph.conf.file</name>
>>>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>         <name>ceph.root.dir</name>
>>>>>>>>>>>>         <value>/</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>> <property>
>>>>>>>>>>>>         <name>ceph.auth.keyring</name>
>>>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
>>>>>>>>>>>> </property>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins 
>>>>>>>>>>>> <noah.watk...@inktank.com> wrote:
>>>>>>>>>>>>> Shoot, I thought I had it figured out :)
>>>>>>>>>>>>>
>>>>>>>>>>>>> There is a default admin user created when you first create your
>>>>>>>>>>>>> cluster. After a typical install via ceph-deploy, there should be 
>>>>>>>>>>>>> a
>>>>>>>>>>>>> file called 'ceph.client.admin.keyring', usually sibling to 
>>>>>>>>>>>>> ceph.conf.
>>>>>>>>>>>>> If this is in a standard location (e.g. /etc/ceph) you shouldn't 
>>>>>>>>>>>>> need
>>>>>>>>>>>>> the keyring option, otherwise point 'ceph.auth.keyring' at that
>>>>>>>>>>>>> keyring file. You shouldn't need both the keyring and the keyfile
>>>>>>>>>>>>> options set, but it just depends on how your authentication / 
>>>>>>>>>>>>> users
>>>>>>>>>>>>> are all set up.
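>>>>>>>>>>>>>
>>>>>>>>>>>>> For the non-default case, that would look something like this in
>>>>>>>>>>>>> core-site.xml (the path here is just an example):
>>>>>>>>>>>>>
>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>         <name>ceph.auth.keyring</name>
>>>>>>>>>>>>>         <value>/path/to/ceph.client.admin.keyring</value>
>>>>>>>>>>>>> </property>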
>>>>>>>>>>>>>
>>>>>>>>>>>>> The easiest thing to do if that doesn't solve your problem is 
>>>>>>>>>>>>> probably
>>>>>>>>>>>>> to turn on logging so we can see what is blowing up.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In your ceph.conf you can add 'debug client = 20' and 'debug
>>>>>>>>>>>>> javaclient = 20' to the client section. You may also need to set 
>>>>>>>>>>>>> the
>>>>>>>>>>>>> log file 'log file = /path/...'. You don't need to do this on all 
>>>>>>>>>>>>> your
>>>>>>>>>>>>> nodes, just one node where you get the failure.
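>>>>>>>>>>>>>
>>>>>>>>>>>>> i.e. roughly this in the [client] section of that node's ceph.conf
>>>>>>>>>>>>> (the log path is just an example):
>>>>>>>>>>>>>
>>>>>>>>>>>>> [client]
>>>>>>>>>>>>>     debug client = 20
>>>>>>>>>>>>>     debug javaclient = 20
>>>>>>>>>>>>>     log file = /var/log/ceph/client.log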
>>>>>>>>>>>>>
>>>>>>>>>>>>> - Noah
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Rolando
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> P.S.: I have the cephFS mounted locally, so the cluster is ok.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> cluster d9ca74d0-d9f4-436d-92de-762af67c6534
>>>>>>>>>>>>>>    health HEALTH_OK
>>>>>>>>>>>>>>    monmap e1: 9 mons at
>>>>>>>>>>>>>> {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
>>>>>>>>>>>>>> election epoch 6, quorum 0,1,2,3,4,5,6,7,8
>>>>>>>>>>>>>> hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
>>>>>>>>>>>>>>    osdmap e30: 9 osds: 9 up, 9 in
>>>>>>>>>>>>>>     pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB used, 168 GB / 221 GB avail
>>>>>>>>>>>>>>    mdsmap e4: 1/1/1 up {0=hyrax1=up:active}
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>fs.ceph.impl</name>
>>>>>>>>>>>>>> <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>fs.default.name</name>
>>>>>>>>>>>>>> <value>ceph://hyrax1:6789/</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>ceph.conf.file</name>
>>>>>>>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>ceph.root.dir</name>
>>>>>>>>>>>>>> <value>/</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>ceph.auth.keyfile</name>
>>>>>>>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>> <name>ceph.auth.keyring</name>
>>>>>>>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins 
>>>>>>>>>>>>>> <noah.watk...@inktank.com> wrote:
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>ceph.root.dir</name>
>>>>>>>>>>>>>>>>         <value>/mnt/mycephfs</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This is probably causing the issue. Is this meant to be a local 
>>>>>>>>>>>>>>> mount
>>>>>>>>>>>>>>> point? The 'ceph.root.dir' property specifies the root directory
>>>>>>>>>>>>>>> /inside/ CephFS, and the Hadoop implementation doesn't require 
>>>>>>>>>>>>>>> a local
>>>>>>>>>>>>>>> CephFS mount--it uses a client library to interact with the file
>>>>>>>>>>>>>>> system.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The default value for this property is "/", so you can probably 
>>>>>>>>>>>>>>> just
>>>>>>>>>>>>>>> remove this from your config file unless your CephFS directory
>>>>>>>>>>>>>>> structure is carved up in a special way.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>ceph.conf.file</name>
>>>>>>>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>ceph.auth.keyfile</name>
>>>>>>>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>ceph.auth.keyring</name>
>>>>>>>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> These files will need to be available locally on every node 
>>>>>>>>>>>>>>> Hadoop
>>>>>>>>>>>>>>> runs on. I think the error below will occur after these are 
>>>>>>>>>>>>>>> loaded, so
>>>>>>>>>>>>>>> it probably isn't your issue, though I don't recall exactly at 
>>>>>>>>>>>>>>> which
>>>>>>>>>>>>>>> point different configuration files are loaded.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>fs.hdfs.impl</name>
>>>>>>>>>>>>>>>>         <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I don't think this is part of the problem you are seeing, but 
>>>>>>>>>>>>>>> this
>>>>>>>>>>>>>>> 'fs.hdfs.impl' property should probably be removed. We aren't
>>>>>>>>>>>>>>> overriding HDFS, just replacing it.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>         <name>ceph.mon.address</name>
>>>>>>>>>>>>>>>>         <value>hyrax1:6789</value>
>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This was already specified in your 'fs.default.name' property. 
>>>>>>>>>>>>>>> I don't
>>>>>>>>>>>>>>> think that duplicating it is an issue, but I should probably 
>>>>>>>>>>>>>>> update
>>>>>>>>>>>>>>> the documentation to make it clear that the monitor only needs 
>>>>>>>>>>>>>>> to be
>>>>>>>>>>>>>>> listed once.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>> Noah
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
