Hello Ms. Megan,


I am happy it is resolved.


It was a UUID problem.


I will post the problem and the solution later.
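
(For anyone who hits something similar: a non-destructive way to review the
target configuration stored on the device, assuming the same /dev/sda used
later in this thread, is tunefs.lustre in dry-run mode.)

 # read-only: print the target name, index, flags and parameters on disk
 tunefs.lustre --dryrun /dev/sda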



Cheers

On 19/05/2021 at 13:45, Abdeslam Tahari wrote:
Hello Ms. Megan,

Thank you for the reply and your help.

I have checked with lctl ping; the result seems to be OK:

 lctl ping 10.0.1.70
12345-0@lo
12345-10.0.1.70@tcp


The ping is good; it is always OK.
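
(In case it is useful, a quick way to double-check which NIDs this node
itself is advertising -- just a sketch using the standard tools:)

 lctl list_nids        # NIDs configured on this node
 lnetctl net show      # LNet networks/interfaces and their state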

The problem is when I mount the Lustre file system:

mount -t lustre /dev/sda /mds

I have the following output:
 lctl dl
  0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 3
  2 UP mgc MGC10.0.1.70@tcp 3ec79ce9-5167-9661-9bd6-0b897fcc42f2 4
  3 UP mds MDS MDS_uuid 2


If I execute the command a second time, I get no output at all,
and the filesystem is in reality not mounted.

I think, but I am not sure, that it complains about the UUID of the MDT.

Here is the output of lctl dk:

00000100:00080000:78.0:1621365812.955564:0:84913:0:(pinger.c:413:ptlrpc_pinger_del_import()) removing pingable import lustre-MDT0000-lwp-MDT0000_UUID->lustre-MDT0000_UUID
00000100:00080000:78.0:1621365812.955567:0:84913:0:(import.c:86:import_set_state_nolock()) ffff9b985701b800 lustre-MDT0000_UUID: changing import state from DISCONN to CLOSED
*00000100:00080000:78.0:1621365812.955571:0:84913:0:(import.c:157:ptlrpc_deactivate_import_nolock()) setting import lustre-MDT0000_UUID INVALID*
10000000:01000000:78.0:1621365812.965420:0:84913:0:(mgc_request.c:151:config_log_put()) dropping config log lustre-mdtir
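
(For reference, one way to capture a fresh dump to a file and pull out those
import state changes -- the file name is only an example:)

 lctl dk /tmp/lustre-debug.log
 grep -E 'import_set_state|deactivate_import' /tmp/lustre-debug.log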

Kind regards


On Wed, 19 May 2021 at 03:15, Ms. Megan Larko via lustre-discuss <lustre-discuss@lists.lustre.org> wrote:

    Hello Tahari,
    What is the result of "lctl ping 10.0.1.70@tcp_0" from the box on
    which you are trying to mount the Lustre File System?  Is the
    ping successful and does it then fail after 3 seconds?  If yes, you may
    wish to check the /etc/lnet.conf file for the Lustre LNet settings
    "discovery" (1 allows LNet discovery while 0 does not) and
    drop_asym_route (1 drops asymmetrically routed messages while 0
    permits them).  I have worked with a few complex networks in which we
    chose to turn off LNet discovery and specify the routes via
    /etc/lnet.conf.  On one system the asymmetrical routing (we have 16
    LNet boxes between the system and the Lustre storage) seemed to be a
    problem, but we couldn't pin it to any particular box.  On that
    system disallowing asymmetrical routing seemed to help maintain
    LNet/Lustre connectivity.
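
    (A minimal sketch of those two settings, assuming the stock lnetctl
    tooling; the values shown are just the "no discovery, no asymmetrical
    routing" configuration described above, not a recommendation:)

        lnetctl set discovery 0          # 0 = LNet discovery off, 1 = on
        lnetctl set drop_asym_route 1    # 1 = drop asymmetrically routed messages
        lnetctl export > /etc/lnet.conf  # persist the running configuration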

    One may use lctl ping to separate basic network connectivity from
    other possibilities.

    Cheers,
    megan

    On Mon, May 17, 2021 at 3:50 PM
    <lustre-discuss-requ...@lists.lustre.org> wrote:



        Today's Topics:

           1. Re: problems to mount MDS and MDT (Abdeslam Tahari)
           2. Re: problems to mount MDS and MDT (Colin Faber)


        ----------------------------------------------------------------------

        Message: 1
        Date: Mon, 17 May 2021 21:35:34 +0200
        From: Abdeslam Tahari <abes...@gmail.com>
        To: Colin Faber <cfa...@gmail.com>
        Cc: lustre-discuss <lustre-discuss@lists.lustre.org>
        Subject: Re: [lustre-discuss] problems to mount MDS and MDT
        Message-ID:
                <CA+LuYSL9_TTcHopwHYbFRosZNgUFK=bxecepen5dzzd+qxn...@mail.gmail.com>
        Content-Type: text/plain; charset="utf-8"

        Thank you, Colin.

        No, I don't have any iptables rules.

        firewalld is stopped and SELinux is disabled as well:
         iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
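
        (Two quick checks to confirm that, using stock CentOS 7 tools --
        nothing Lustre-specific:)

         systemctl is-active firewalld   # expect "inactive" or "unknown"
         getenforce                      # expect "Permissive" or "Disabled"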


        Regards

        On Mon, 17 May 2021 at 21:29, Colin Faber <cfa...@gmail.com> wrote:

        > Firewall rules dealing with localhost?
        >
        > On Mon, May 17, 2021 at 11:33 AM Abdeslam Tahari via lustre-discuss
        > <lustre-discuss@lists.lustre.org> wrote:
        >
        >> Hello
        >>
        >> I have a problem mounting the Lustre MDS/MDT; it won't mount at
        >> all, and there are no error messages on the console.
        >>
        >> - It does not show errors or messages while mounting.
        >>
        >> Here are some debug file logs.
        >>
        >>
        >> I should mention that this is a new project I am setting up.
        >>
        >> The versions and packages of Lustre installed:
        >> kmod-lustre-2.12.5-1.el7.x86_64
        >> kernel-devel-3.10.0-1127.8.2.el7_lustre.x86_64
        >> lustre-2.12.5-1.el7.x86_64
        >> lustre-resource-agents-2.12.5-1.el7.x86_64
        >> kernel-3.10.0-1160.2.1.el7_lustre.x86_64
        >> kernel-debuginfo-common-x86_64-3.10.0-1160.2.1.el7_lustre.x86_64
        >> kmod-lustre-osd-ldiskfs-2.12.5-1.el7.x86_64
        >> kernel-3.10.0-1127.8.2.el7_lustre.x86_64
        >> lustre-osd-ldiskfs-mount-2.12.5-1.el7.x86_64
        >>
        >>
        >>
        >> The system (OS): CentOS 7
        >>
        >> The kernel:
        >> Linux lustre-mds1 3.10.0-1127.8.2.el7_lustre.x86_64
        >>  cat /etc/redhat-release
        >>
        >>
        >> When I mount the Lustre file system it does not show up, and
        >> there are no errors:
        >>
        >> mount -t lustre /dev/sda /mds
        >>
        >> lctl dl  shows nothing
        >>
        >> df -h    shows no mount point for /dev/sda
        >>
        >>
        >> lctl dl
        >>
        >> shows this:
        >> lctl dl
        >>   0 UP osd-ldiskfs lustre-MDT0000-osd lustre-MDT0000-osd_UUID 3
        >>   2 UP mgc MGC10.0.1.70@tcp
        57e06c2d-5294-f034-fd95-460cee4f92b7 4
        >>   3 UP mds MDS MDS_uuid 2
        >>
        >>
        >> but unfortunately it disappears after 3 seconds;
        >>
        >> lctl dl then shows nothing.
        >>
        >> lctl dk
        >>
        >> shows this debug output:
        >>
        >>
        >>
        
00000020:00000080:18.0:1621276062.004338:0:13403:0:(obd_config.c:1128:class_process_config())
        >> processing cmd: cf006
        >>
        
00000020:00000080:18.0:1621276062.004341:0:13403:0:(obd_config.c:1147:class_process_config())
        >> removing mappings for uuid MGC10.0.1.70@tcp_0
        >>
        
00000020:01000004:18.0:1621276062.004346:0:13403:0:(obd_mount.c:661:lustre_put_lsi())
        >> put ffff9bbbf91d5800 1
        >>
        
00000020:00000080:18.0:1621276062.004351:0:13403:0:(genops.c:1501:class_disconnect())
        >> disconnect: cookie 0x256dd92fc5bf929c
        >>
        
00000020:00000080:18.0:1621276062.004354:0:13403:0:(genops.c:1024:class_export_put())
        >> final put ffff9bbf3e66a400/lustre-MDT0000-osd_UUID
        >>
        
00000020:01000000:18.0:1621276062.004361:0:13403:0:(obd_config.c:2100:class_manual_cleanup())
        >> Manual cleanup of lustre-MDT0000-osd (flags='')
        >>
        
00000020:00000080:18.0:1621276062.004368:0:821:0:(genops.c:974:class_export_destroy())
        >> destroying export ffff9bbf3e66a400/lustre-MDT0000-osd_UUID for
        >> lustre-MDT0000-osd
        >>
        
00000020:00000080:18.0:1621276062.004376:0:13403:0:(obd_config.c:1128:class_process_config())
        >> processing cmd: cf004
        >>
        
00000020:00000080:18.0:1621276062.004379:0:13403:0:(obd_config.c:659:class_cleanup())
        >> lustre-MDT0000-osd: forcing exports to disconnect: 0/0
        >>
        
00000020:00080000:18.0:1621276062.004382:0:13403:0:(genops.c:1590:class_disconnect_exports())
        >> OBD device 0 (ffff9bbf47141080) has no exports
        >>
        
00000020:00000080:18.0:1621276062.004788:0:13403:0:(obd_config.c:1128:class_process_config())
        >> processing cmd: cf002
        >>
        
00000020:00000080:18.0:1621276062.004791:0:13403:0:(obd_config.c:589:class_detach())
        >> detach on obd lustre-MDT0000-osd (uuid lustre-MDT0000-osd_UUID)
        >>
        
00000020:00000080:18.0:1621276062.004794:0:13403:0:(genops.c:1024:class_export_put())
        >> final put ffff9bbf48800c00/lustre-MDT0000-osd_UUID
        >>
        
00000020:00000080:18.0:1621276062.004796:0:13403:0:(genops.c:974:class_export_destroy())
        >> destroying export ffff9bbf48800c00/lustre-MDT0000-osd_UUID for
        >> lustre-MDT0000-osd
        >>
        
00000020:01000000:18.0:1621276062.004799:0:13403:0:(genops.c:481:class_free_dev())
        >> finishing cleanup of obd lustre-MDT0000-osd
        (lustre-MDT0000-osd_UUID)
        >>
        
00000020:01000004:18.0:1621276062.450759:0:13403:0:(obd_mount.c:605:lustre_free_lsi())
        >> Freeing lsi ffff9bbbf91d6800
        >>
        
00000020:01000000:18.0:1621276062.450805:0:13403:0:(obd_config.c:2100:class_manual_cleanup())
        >> Manual cleanup of MDS (flags='F')
        >>
        
00000020:00000080:18.0:1621276062.450806:0:13403:0:(obd_config.c:1128:class_process_config())
        >> processing cmd: cf004
        >>
        
00000020:00000080:18.0:1621276062.450807:0:13403:0:(obd_config.c:659:class_cleanup())
        >> MDS: forcing exports to disconnect: 0/0
        >>
        
00000020:00080000:18.0:1621276062.450809:0:13403:0:(genops.c:1590:class_disconnect_exports())
        >> OBD device 3 (ffff9bbf43fdd280) has no exports
        >>
        
00000020:00000080:58.0F:1621276062.490781:0:13403:0:(obd_config.c:1128:class_process_config())
        >> processing cmd: cf002
        >>
        
00000020:00000080:58.0:1621276062.490787:0:13403:0:(obd_config.c:589:class_detach())
        >> detach on obd MDS (uuid MDS_uuid)
        >>
        
00000020:00000080:58.0:1621276062.490788:0:13403:0:(genops.c:1024:class_export_put())
        >> final put ffff9bbf3e668800/MDS_uuid
        >>
        
00000020:00000080:58.0:1621276062.490790:0:13403:0:(genops.c:974:class_export_destroy())
        >> destroying export ffff9bbf3e668800/MDS_uuid for MDS
        >>
        
00000020:01000000:58.0:1621276062.490791:0:13403:0:(genops.c:481:class_free_dev())
        >> finishing cleanup of obd MDS (MDS_uuid)
        >>
        
00000020:02000400:58.0:1621276062.490877:0:13403:0:(obd_mount_server.c:1642:server_put_super())
        >> server umount lustre-MDT0000 complete
        >>
        
00000400:02020000:42.0:1621276086.284109:0:5400:0:(acceptor.c:321:lnet_accept())
        >> 120-3: Refusing connection from 127.0.0.1 for
        127.0.0.1@tcp: No matching
        >> NI
        >>
        
00000800:00020000:6.0:1621276086.284152:0:5383:0:(socklnd_cb.c:1817:ksocknal_recv_hello())
        >> Error -104 reading HELLO from 127.0.0.1
        >>
        
00000400:02020000:6.0:1621276086.284174:0:5383:0:(acceptor.c:127:lnet_connect_console_error())
        >> 11b-b: Connection to 127.0.0.1@tcp at host 127.0.0.1 on
        port 988 was
        >> reset: is it running a compatible version of Lustre and is
        127.0.0.1@tcp
        >> one of its NIDs?
        >>
        
00000800:00000100:6.0:1621276086.284189:0:5383:0:(socklnd_cb.c:438:ksocknal_txlist_done())
        >> Deleting packet type 2 len 0 10.0.1.70@tcp->127.0.0.1@tcp
        >>
        
00000800:00000100:34.0:1621276136.363882:0:5401:0:(socklnd_cb.c:979:ksocknal_launch_packet())
        >> No usable routes to 12345-127.0.0.1@tcp
        >>
        
00000400:02020000:42.0:1621276186.440095:0:5400:0:(acceptor.c:321:lnet_accept())
        >> 120-3: Refusing connection from 127.0.0.1 for
        127.0.0.1@tcp: No matching
        >> NI
        >>
        
00000800:00020000:44.0:1621276186.446533:0:5386:0:(socklnd_cb.c:1817:ksocknal_recv_hello())
        >> Error -104 reading HELLO from 127.0.0.1
        >>
        
00000400:02020000:44.0:1621276186.452996:0:5386:0:(acceptor.c:127:lnet_connect_console_error())
        >> 11b-b: Connection to 127.0.0.1@tcp at host 127.0.0.1 on
        port 988 was
        >> reset: is it running a compatible version of Lustre and is
        127.0.0.1@tcp
        >> one of its NIDs?
        >>
        
00000800:00000100:44.0:1621276186.461433:0:5386:0:(socklnd_cb.c:438:ksocknal_txlist_done())
        >> Deleting packet type 2 len 0 10.0.1.70@tcp->127.0.0.1@tcp
        >> Debug log: 872 lines, 872 kept, 0 dropped, 0 bad.
        >>
        >>
        >>
        >> I just can't figure it out; any help would be very appreciated.
        >>
        >>
        >> Thanks, all
        >>
        >>
        >>
        >>
        >>
        >>
        >> --
        >> Tahari.Abdeslam

        --
        Tahari.Abdeslam

        ------------------------------

        Message: 2
        Date: Mon, 17 May 2021 13:50:03 -0600
        From: Colin Faber <cfa...@gmail.com>
        To: Abdeslam Tahari <abes...@gmail.com>
        Cc: lustre-discuss <lustre-discuss@lists.lustre.org>
        Subject: Re: [lustre-discuss] problems to mount MDS and MDT
        Message-ID:
                <CAJcXmB=T884j=5n8nhwspfbvns+naooma9b8xjudhxt-fbo...@mail.gmail.com>
        Content-Type: text/plain; charset="utf-8"

        It appears part of the debug data is missing (the part before the
        section you posted). Can you try again: run "lctl dk > /dev/null" to
        clear it, then try your mount and grab the debug output again?
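
        (Something like the following, assuming the same device and mount
        point used earlier in the thread; the log path is only an example:)

         lctl dk > /dev/null                 # drain the existing debug buffer
         mount -t lustre /dev/sda /mds       # retry the mount
         lctl dk > /tmp/mds-mount-debug.log  # capture only the new messages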




        ------------------------------

        End of lustre-discuss Digest, Vol 182, Issue 12
        ***********************************************




--
Tahari.Abdeslam
_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
