Have you configured JCE?

On 01/07/16 6:36 AM, Aneela Saleem wrote:
Thanks Vinayakumar and Gurmukh,

I have got it working successfully through the auth_to_local configs, but I faced quite a few issues along the way.
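
For reference, the kind of mapping I mean looks roughly like this in
core-site.xml (a sketch only; the exact rules depend on your principals
and realm):

    <property>
      <name>hadoop.security.auth_to_local</name>
      <value>
        RULE:[2:$1@$0](nn@platalyticsrealm)s/.*/hdfs/
        RULE:[2:$1@$0](dn@platalyticsrealm)s/.*/hdfs/
        DEFAULT
      </value>
    </property>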

Actually I have a two-node cluster: one node runs both the namenode and a datanode, and the second runs a datanode only. I ran into some keytab-related authentication issues, for example:

I added nn/hadoop-master to both nn.keytab and dn.keytab, and did the same with dn/hadoop-slave (following your GitHub dn.keytab file). But when I started the cluster I got the following error:

*Login failure for nn/hadoop-master@platalyticsrealm from keytab /etc/hadoop/conf/hdfs.keytab: javax.security.auth.login.LoginException: Checksum failed*

I tried to verify authentication of nn/hadoop-master against that keytab through kinit, but couldn't, because of an error like *could not verify credentials*.
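
For reference, the check was along these lines (a sketch, assuming the
MIT Kerberos client tools and the keytab path from the error above):

    # authenticate using only the keytab, no password
    kinit -kt /etc/hadoop/conf/hdfs.keytab nn/hadoop-master@platalyticsrealm
    # list keytab entries with key version numbers (kvno) and enctypes
    klist -kte /etc/hadoop/conf/hdfs.keytab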

I then removed nn/hadoop-master from dn.keytab, and it authenticated successfully. I also removed all the hadoop-master principals from dn.keytab and all the hadoop-slave principals from nn.keytab. So does this mean that a principal can't belong to more than one keytab? And please make some time to review the attached hdfs-site.xml for both the namenode and the datanode, along with the keytab files, and point out anything that is wrong.

Thanks


On Thu, Jun 30, 2016 at 1:21 PM, Vinayakumar B <vinayakumar...@huawei.com> wrote:

    Please note, there are two different configs.

    “dfs.datanode.kerberos.principal” and
    “dfs.namenode.kerberos.principal”

    The following configs can be set, as required:

    dfs.datanode.kerberos.principal -> dn/_HOST

    dfs.namenode.kerberos.principal -> nn/_HOST

    “nn/_HOST” will be used only on the namenode side.
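
    As a sketch, in hdfs-site.xml these would look something like the
    following (assuming the realm platalyticsrealm; _HOST is expanded
    to the local hostname at runtime):

        <property>
          <name>dfs.namenode.kerberos.principal</name>
          <value>nn/_HOST@platalyticsrealm</value>
        </property>
        <property>
          <name>dfs.datanode.kerberos.principal</name>
          <value>dn/_HOST@platalyticsrealm</value>
        </property>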

    -Vinay

    *From:* Aneela Saleem [mailto:ane...@platalytics.com]
    *Sent:* 30 June 2016 13:24
    *To:* Vinayakumar B <vinayakumar...@huawei.com>
    *Cc:* user@hadoop.apache.org


    *Subject:* Re: datanode is unable to connect to namenode

    Thanks Vinayakumar

    Yes, you got it right: I was using different principal names, i.e.
    *nn/_HOST* for the namenode and *dn/_HOST* for the datanode. Setting
    the same principal name for both the datanode and the namenode, i.e.
    hdfs/_HOST@platalyticsrealm, solved the issue. Now the datanode

    can connect to the namenode successfully.

    So my question is: is it mandatory to have the same principal name on
    all hosts, i.e. hdfs/_HOST@platalyticsrealm? I found in many

    tutorials that the convention is to have different principals for
    the different services, like:

    dn/_HOST for the datanode

    nn/_HOST for the namenode

    sn/_HOST for the secondarynamenode, etc.

    Secondly, for MapReduce and YARN, would mapred-site.xml and
    yarn-site.xml be the same on all cluster nodes, just like
    hdfs-site.xml?

    Thanks

    On Thu, Jun 30, 2016 at 10:51 AM, Vinayakumar B
    <vinayakumar...@huawei.com> wrote:

        Hi Aneela,

        1. Looks like you have attached the hdfs-site.xml from the
        'hadoop-master' node. For this node the datanode connection is
        successful, as shown in the logs below.

        2016-06-29 10:01:35,700 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS)

        2016-06-29 10:01:35,744 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for nn/hadoop-master@platalyticsrealm (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol

        2016-06-29 10:01:36,845 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.23.206:1004

        2. For the other node, 'hadoop-slave', Kerberos authentication
        is successful, but the ServiceAuthorizationManager check failed.

        2016-06-29 10:01:37,474 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)

        2016-06-29 10:01:37,512 WARN SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization failed for dn/hadoop-slave@platalyticsrealm (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol, expected client Kerberos principal is nn/hadoop-slave@platalyticsrealm

        2016-06-29 10:01:37,514 INFO org.apache.hadoop.ipc.Server: Connection from 192.168.23.207:32807 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/hadoop-slave@platalyticsrealm (auth:KERBEROS)

        The reason is most likely that the "dfs.datanode.kerberos.principal"
        configuration differs between the two nodes. I can see that this
        configuration in hadoop-master's hdfs-site.xml is set to
        'nn/_HOST@platalyticsrealm', but it might have been set to
        'dn/_HOST@platalyticsrealm' in the hadoop-slave node's configuration.

        Please change this configuration on all nodes to
        'dn/_HOST@platalyticsrealm', restart all NNs and DNs, and
        check again.
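
        A typical restart would then be along these lines (a sketch,
        assuming a Hadoop 2.x sbin layout; a secure datanode may need
        to be started as root via jsvc):

            # on hadoop-master
            sbin/hadoop-daemon.sh stop namenode && sbin/hadoop-daemon.sh start namenode
            sbin/hadoop-daemon.sh stop datanode && sbin/hadoop-daemon.sh start datanode
            # on hadoop-slave
            sbin/hadoop-daemon.sh stop datanode && sbin/hadoop-daemon.sh start datanode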

        If this does not help, then please share the hdfs-site.xml of the
        hadoop-slave node too.

        -Vinay

        *From:* Aneela Saleem [mailto:ane...@platalytics.com]
        *Sent:* 29 June 2016 21:35
        *To:* user@hadoop.apache.org
        *Subject:* Fwd: datanode is unable to connect to namenode



        Sent from my iPhone


        Begin forwarded message:

            *From:* Aneela Saleem <ane...@platalytics.com>
            *Date:* 29 June 2016 at 10:16:36 GMT+5
            *To:* "sreebalineni ." <sreebalin...@gmail.com>
            *Subject:* *Re: datanode is unable to connect to namenode*

            Attached are the log files for the datanode and the namenode.
            I have also attached the hdfs-site.xml for the namenode;
            please check whether there are any issues in the
            configuration file.

            I have the following two Kerberos principals:

            nn/hadoop-master

            dn/hadoop-slave

            I have copied kdc.conf and krb5.conf to both nodes. I also
            copied the keytab file to the datanode, and I am starting
            the services with the principal nn/hadoop-master.
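
            For reference, such principals and keytabs would typically
            be created along these lines with MIT Kerberos kadmin (a
            sketch, not the exact commands used here):

                kadmin: addprinc -randkey nn/hadoop-master@platalyticsrealm
                kadmin: addprinc -randkey dn/hadoop-slave@platalyticsrealm
                kadmin: ktadd -k nn.keytab nn/hadoop-master@platalyticsrealm
                kadmin: ktadd -k dn.keytab dn/hadoop-slave@platalyticsrealm
                # note: ktadd re-randomizes the key (new kvno) by default,
                # so copies of that principal in older keytabs stop working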

            On Wed, Jun 29, 2016 at 9:35 AM, sreebalineni .
            <sreebalin...@gmail.com> wrote:

                Sharing both the namenode and datanode logs may help.

                On Wed, Jun 29, 2016 at 10:02 AM, Aneela Saleem
                <ane...@platalytics.com> wrote:

                    Following is the result of telnet:

                    Trying 192.168.23.206...

                    Connected to hadoop-master.

                    Escape character is '^]'.

                    On Wed, Jun 29, 2016 at 3:57 AM, Aneela Saleem
                    <ane...@platalytics.com> wrote:

                        Thanks Sreebalineni for the response.

                        This is the result of the *netstat -a | grep
                        8020* command:

                        tcp        0      0 hadoop-master:8020      *:*                     LISTEN
                        tcp        0      0 hadoop-master:33356     hadoop-master:8020      ESTABLISHED
                        tcp        0      0 hadoop-master:8020      hadoop-master:33356     ESTABLISHED
                        tcp        0      0 hadoop-master:55135     hadoop-master:8020      TIME_WAIT

                        And this is my */etc/hosts* file

                        #127.0.0.1      localhost

                        #127.0.1.1  vm6-VirtualBox

                        192.168.23.206  hadoop-master platalytics.com vm6-VirtualBox

                        192.168.23.207  hadoop-slave

                        # The following lines are desirable for IPv6
                        capable hosts

                        ::1 ip6-localhost ip6-loopback

                        fe00::0 ip6-localnet

                        ff00::0 ip6-mcastprefix

                        ff02::1 ip6-allnodes

                        ff02::2 ip6-allrouters



                        Can you please tell me what's wrong with the
                        above configuration, and how I can check whether
                        it is a firewall issue?

                        Thanks

                        On Wed, Jun 29, 2016 at 12:11 AM, sreebalineni .
                        <sreebalin...@gmail.com> wrote:

                            Are you able to telnet and ping? Check the
                            firewalls as well.
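
                            For example, checks along these lines (a
                            sketch; tools and ports may vary):

                                # can the datanode reach the namenode RPC port?
                                telnet hadoop-master 8020
                                # basic reachability
                                ping -c 3 hadoop-master
                                # list firewall rules on each node (as root)
                                iptables -L -n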

                            On Jun 29, 2016 12:39 AM, "Aneela Saleem"
                            <ane...@platalytics.com> wrote:

                                Hi all,

                                I have set up a two-node cluster with
                                security enabled. I have everything
                                running successfully: namenode, datanode,
                                resourcemanager, nodemanager,
                                jobhistoryserver, etc. But the datanode
                                is unable to connect to the namenode, as
                                I can see only one node on the web UI.
                                Checking the datanode logs gives the
                                following warning:

                                *WARN
                                org.apache.hadoop.hdfs.server.datanode.DataNode:
                                Problem connecting to server:
                                hadoop-master/192.168.23.206:8020*

                                The rest of the things look fine. Please
                                help me in this regard; what could be the
                                issue?





--
Thanks and Regards

Gurmukh Singh
