What keepalived version are you running?

It's supposed to be keepalived-2.0.20-2.1.x86_64, but yours looks different.


On 2020-10-09 at 15:57, Jeff Linden wrote:

There is one warning in the log during restart of keepalived.

# journalctl -f | grep keepalived

Oct 09 15:54:05 nadc1-pfence-01 sudo[152287]:     root : TTY=pts/1 ; PWD=/root ; USER=root ; COMMAND=/bin/systemctl restart packetfence-keepalived
Oct 09 15:54:09 nadc1-pfence-01 packetfence[152297]: -e(152297) INFO: main, -e, 1 (pf::services::manager::keepalived::generateConfig)
Oct 09 15:54:09 nadc1-pfence-01 Keepalived[152324]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Oct 09 15:54:09 nadc1-pfence-01 Keepalived[152324]: Opening file '/usr/local/pf/var/conf/keepalived.conf'.
Oct 09 15:54:09 nadc1-pfence-01 Keepalived_vrrp[152328]: Opening file '/usr/local/pf/var/conf/keepalived.conf'.
Oct 09 15:54:09 nadc1-pfence-01 packetfence[152108]: pfcmd.pl(152108) INFO: Daemon keepalived took 3.692 seconds to start. (pf::services::manager::restartService)
Oct 09 15:54:09 nadc1-pfence-01 Keepalived_healthcheckers[152327]: Opening file '/usr/local/pf/var/conf/keepalived.conf'.
Oct 09 15:54:09 nadc1-pfence-01 sudo[152333]:     root : TTY=pts/1 ; PWD=/root ; USER=root ; COMMAND=/bin/systemctl show -p MainPID packetfence-keepalived
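
The 'keepalived_script' warning above is cosmetic, but it can be silenced by creating the user keepalived expects for running scripts. A minimal sketch, assuming a system account with no login shell is acceptable on your distribution:

# Create the unprivileged system user keepalived runs scripts as by default
# (-r = system account, -M = no home directory, no login shell).
useradd -r -M -s /sbin/nologin keepalived_script

keepalived's global_defs also accepts script_user and enable_script_security if you would rather control this in the config.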

Here is the keepalived.conf:

# This file is generated from a template at /usr/local/pf/conf/keepalived.conf
# Any changes made to this file will be lost on restart

global_defs {
    notification_email {
        jlin...@jerviswebb.com
    }
    notification_email_from packetfe...@daifukuna.com
    smtp_server 10.22.0.92
    smtp_connect_timeout 30
    router_id PacketFence-nadc1-pfence-01
}

vrrp_track_process radius_load_balancer {
    process /usr/sbin/freeradius -d /usr/local/pf/raddb -n load_balancer -fm
    full_command
    quorum 1
    delay 15
}

vrrp_track_process haproxy_portal {
    process /usr/sbin/haproxy -Ws -f /usr/local/pf/var/conf/haproxy-portal.conf -p /usr/local/pf/var/run/haproxy-portal.pid
    full_command
    quorum 1
    delay 15
}

static_ipaddress {
    66.70.255.147 dev lo scope link
}

static_routes {
    10.20.254.0/24 via 10.30.247.2 dev eth0.247
    10.20.16.0/24 via 10.30.247.2 dev eth0.247
    10.20.31.0/24 via 10.30.247.2 dev eth0.247
    10.20.253.0/24 via 10.30.247.2 dev eth0.247
    10.20.252.0/24 via 10.30.247.2 dev eth0.247
}
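
With the static_ipaddress block above, keepalived is expected to add 66.70.255.147 to lo as soon as it loads this config. A quick way to confirm (hypothetical commands, not taken from the logs above):

# The portal address should appear on the loopback interface...
ip addr show dev lo | grep 66.70.255.147

# ...and the static routes should land in the routing table.
ip route show | grep 'via 10.30.247.2'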

*From:* Fabrice Durand <fdur...@inverse.ca>
*Sent:* Friday, October 9, 2020 3:51 PM
*To:* Jeff Linden <jlin...@jerviswebb.com>; packetfence-users@lists.sourceforge.net
*Subject:* Re: [PacketFence-users] captive_portal.ip_address in pf.conf.defaults

Can I see the keepalived.conf?

And do you see anything (like an error) in the logs about keepalived (journalctl -f | grep keepalived) when you restart it?

On 2020-10-09 at 15:46, Jeff Linden wrote:

    Keepalived restarts successfully, but the IP is not showing up on the lo interface.

    I performed the restart of keepalived using this…

    # /usr/local/pf/bin/pfcmd service keepalived restart

    Service Status    PID

    Checking configuration sanity...

    packetfence-keepalived.service started   145901

    But, no, the address is still not assigned to lo:


    # ip a

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever

    Jeff

    *From:* Fabrice Durand <fdur...@inverse.ca>
    *Sent:* Friday, October 9, 2020 3:30 PM
    *To:* Jeff Linden <jlin...@jerviswebb.com>; packetfence-users@lists.sourceforge.net
    *Subject:* Re: [PacketFence-users] captive_portal.ip_address in pf.conf.defaults

    When you restart keepalived, does the IP appear on lo?

    Does keepalived start?

    On 2020-10-09 at 15:20, Jeff Linden wrote:

        Fabrice,

        I realized that I previously tested with the line commented out of pf.conf.defaults.

        I've put the line back into pf.conf.defaults and re-run the tests you asked for. Here are the better results: still no on the IP being assigned to lo, but yes to it being in keepalived.conf.

        Is the IP assigned to lo? (ip a)

            No, it is not assigned to lo. Only 127.0.0.1/8 is assigned.

        Check whether the keepalived.conf file contains the IP 66.70.255.147 (var/conf/keepalived.conf).

            Yes, keepalived.conf does contain the IP 66.70.255.147.

        Also check that there is no keepalived.conf.rpmnew somewhere.

            No, there is no keepalived.conf.rpmnew anywhere.

        Jeff

        *From:* Jeff Linden via PacketFence-users <packetfence-users@lists.sourceforge.net>
        *Sent:* Friday, October 9, 2020 3:10 PM
        *To:* Fabrice Durand <fdur...@inverse.ca>; packetfence-users@lists.sourceforge.net
        *Cc:* Jeff Linden <jlin...@jerviswebb.com>
        *Subject:* Re: [PacketFence-users] captive_portal.ip_address in pf.conf.defaults

        Is the IP assigned to lo? (ip a)

            No, it is not assigned to lo. Only 127.0.0.1/8 is assigned.

        Check whether the keepalived.conf file contains the IP 66.70.255.147 (var/conf/keepalived.conf).

            No, keepalived.conf does not contain the IP 66.70.255.147.

        Also check that there is no keepalived.conf.rpmnew somewhere.

            No, there is no keepalived.conf.rpmnew anywhere.

        Jeff



        *From:* Fabrice Durand <fdur...@inverse.ca>
        *Sent:* Friday, October 9, 2020 2:59 PM
        *To:* Jeff Linden <jlin...@jerviswebb.com>; packetfence-users@lists.sourceforge.net
        *Subject:* Re: [PacketFence-users] captive_portal.ip_address in pf.conf.defaults

        Is the IP assigned to lo? (ip a)

        Check whether the keepalived.conf file contains the IP 66.70.255.147 (var/conf/keepalived.conf).

        Also check that there is no keepalived.conf.rpmnew somewhere.
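
        In shell form, those three checks might look like this (a sketch; paths as above):

        # 1. Is the portal IP on lo?
        ip addr show dev lo

        # 2. Does the generated config contain the portal IP?
        grep -n '66.70.255.147' /usr/local/pf/var/conf/keepalived.conf

        # 3. Did the RPM upgrade leave a .rpmnew behind anywhere?
        find / -name 'keepalived.conf.rpmnew' 2>/dev/null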

        Regards

        Fabrice

        On 2020-10-09 at 14:52, Jeff Linden wrote:

            Fabrice,

            ps -fe | grep keepalive

            root      98543      1  0 13:56 ?        00:00:00 /usr/sbin/keepalived -f /usr/local/pf/var/conf/keepalived.conf --pid=/usr/local/pf/var/run/keepalived.pid
            root      98549  98543  0 13:56 ?        00:00:00 /usr/sbin/keepalived -f /usr/local/pf/var/conf/keepalived.conf --pid=/usr/local/pf/var/run/keepalived.pid
            root      98550  98543  0 13:56 ?        00:00:00 /usr/sbin/keepalived -f /usr/local/pf/var/conf/keepalived.conf --pid=/usr/local/pf/var/run/keepalived.pid
            root     115221 111126  0 14:45 pts/0    00:00:00 grep keepalive

            Keepalived is running fine. I didn't mention it before, but I can see that the haproxy.log entries presented below are repeating over and over.

            And, as I run the systemctl status command, I can see the PID change and the "time since it started activating" update as well.

            In the web interface, when I tell the service to stop, it immediately restarts in the same state I describe below: Managed and Active, but not Alive.

            Additionally, there is a log entry in packetfence.log that repeats each time the haproxy-portal service tries to start. It says “packetfence: -e(82711) WARN: requesting member ips for an undefined interface... (pf::cluster::members_ips)”.

            Jeff Linden | Corporate Infrastructure Specialist

            *DAIFUKU NORTH AMERICA*

            30100 Cabot Drive, Novi MI 48377

            (248) 553-1234 x1013

            *DAIFUKU* <http://www.daifukuna.com/>

            *Always an Edge Ahead*

            *From:* Fabrice Durand via PacketFence-users <packetfence-users@lists.sourceforge.net>
            *Sent:* Friday, October 9, 2020 2:18 PM
            *To:* packetfence-users@lists.sourceforge.net
            *Cc:* Fabrice Durand <fdur...@inverse.ca>
            *Subject:* Re: [PacketFence-users] captive_portal.ip_address in pf.conf.defaults

            Hello Jeff,

            Your issue is that keepalived is not running.

            Let's try:

            /usr/local/pf/bin/pfcmd service pf updatesystemd

            systemctl restart packetfence-keepalived.service
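
            Then confirm keepalived stayed up and added the portal address to lo (a quick check, assuming the commands above succeeded):

            systemctl status packetfence-keepalived.service
            ip addr show dev lo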

            Regards

            Fabrice

            On 2020-10-09 at 14:11, Jeff Linden via PacketFence-users wrote:

                Hello,

                I’ve upgraded PacketFence from 9.2 to 10.1. Since then, I’ve had trouble getting the captive portal to function. Since I noticed a newer version was available, I upgraded to 10.2 before writing this.

                In the web interface, under Status -> Services, haproxy-portal is enabled and running. All green, except the PID is 0.

                Also in the web interface, under Advanced Access Configuration -> Captive Portal, the haproxy-portal dropdown shows green. But when I expand the dropdown to look further, I notice Enabled and Managed are green, while Alive is red.

                systemctl status packetfence-haproxy-portal returns the following result:

                ● packetfence-haproxy-portal.service - PacketFence HAProxy Load Balancer for the captive portal
                   Loaded: loaded (/lib/systemd/system/packetfence-haproxy-portal.service; enabled; vendor preset: enabled)
                   Active: activating (start-pre) since Fri 2020-10-09 10:57:14 EDT; 2s ago
                  Process: 230643 ExecStart=/usr/sbin/haproxy -Ws -f /usr/local/pf/var/conf/haproxy-portal.conf -p /usr/local/pf/var/run/haproxy-portal.pid (code=exited, status=1/FAILU
                 Main PID: 230643 (code=exited, status=1/FAILURE); Control PID: 230652 (perl)
                    Tasks: 1 (limit: 36864)
                   CGroup: /packetfence.slice/packetfence-haproxy-portal.service
                           └─control
                             └─230652 /usr/bin/perl -I/usr/local/pf/lib -Mpf::services::manager::haproxy_portal -e pf::services::manager::haproxy_portal->new()->generateConfig()

                Oct 09 10:57:16 nadc1-pfence-01 haproxy[230643]: [ALERT] 282/105714 (230643) : Starting frontend portal-http-66.70.255.147: cannot bind socket [66.70.255.147:80]
                Oct 09 10:57:16 nadc1-pfence-01 haproxy[230643]: [ALERT] 282/105714 (230643) : Starting frontend portal-https-66.70.255.147: cannot bind socket [66.70.255.147:443]
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Main process exited, code=exited, status=1/FAILURE
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Failed to start PacketFence HAProxy Load Balancer for the captive portal.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Unit entered failed state.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Failed with result 'exit-code'.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Service hold-off time over, scheduling restart.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Stopped PacketFence HAProxy Load Balancer for the captive portal.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Starting PacketFence HAProxy Load Balancer for the captive portal...

                In /var/log/haproxy.log, I find the following:

                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy proxy started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy static started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: [ALERT] 282/114838 (17789) : Starting frontend portal-http-66.70.255.147: cannot bind socket [66.70.255.147:80]
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: [ALERT] 282/114838 (17789) : Starting frontend portal-https-66.70.255.147: cannot bind socket [66.70.255.147:443]
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-http-10.30.247.1 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-https-10.30.247.1 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy 10.30.247.1-backend started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-http-10.30.3.162 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-https-10.30.3.162 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy 10.30.3.162-backend started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-http-10.30.248.1 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy portal-https-10.30.248.1 started.
                Oct  9 11:48:38 nadc1-pfence-01 haproxy[17789]: Proxy 10.30.248.1-backend started.

                I notice the error about binding to 66.70.255.147. That is not an IP I recognize; it is certainly not assigned to any of the interfaces on my system.
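
                haproxy can normally only bind a listener to an address the kernel considers local, so if 66.70.255.147 is not on any interface, the bind fails exactly like this. Two hypothetical checks (not from my logs):

                # If this prints nothing, the kernel does not own the address
                # and haproxy's bind will fail.
                ip addr show | grep 66.70.255.147

                # Setting this sysctl to 1 would allow non-local binds, but the
                # intended design is for keepalived to put the IP on lo, so this
                # is a diagnostic, not the fix.
                sysctl net.ipv4.ip_nonlocal_bind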

                I find the address 66.70.255.147 in the pf.conf.defaults file, under this header:

                # The IP address the portal uses in the registration and isolation networks.
                # This IP address should point to an IP outside the registration and isolation networks.
                # Do not change unless you know what you are doing.
                ip_address=66.70.255.147

                I found a GitHub entry that discusses the captive portal IP here: https://github.com/inverse-inc/packetfence/pull/5682. It says the previously hardcoded address of 192.0.2.1 was removed and an Inverse-owned IP was put in its place. I see that 66.70.255.147 is owned by OVH Hosting in Montreal, not Inverse specifically, but I believe this GitHub entry is talking about the captive portal section of pf.conf.defaults.
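
                For what it's worth, a quick way to check that ownership (a hypothetical command, not something I ran above):

                whois 66.70.255.147 | grep -iE 'OrgName|NetName'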

                So, I set the address in the Captive Portal web page to 192.0.2.1 and experienced the same results: no captive portal, and the error with the haproxy-portal service still exists.

                systemctl status packetfence-haproxy-portal now returns the following result:

                ● packetfence-haproxy-portal.service - PacketFence HAProxy Load Balancer for the captive portal
                   Loaded: loaded (/lib/systemd/system/packetfence-haproxy-portal.service; enabled; vendor preset: enabled)
                   Active: activating (start-pre) since Fri 2020-10-09 10:57:14 EDT; 2s ago
                  Process: 230643 ExecStart=/usr/sbin/haproxy -Ws -f /usr/local/pf/var/conf/haproxy-portal.conf -p /usr/local/pf/var/run/haproxy-portal.pid (code=exited, status=1/FAILU
                 Main PID: 230643 (code=exited, status=1/FAILURE); Control PID: 230652 (perl)
                    Tasks: 1 (limit: 36864)
                   CGroup: /packetfence.slice/packetfence-haproxy-portal.service
                           └─control
                             └─230652 /usr/bin/perl -I/usr/local/pf/lib -Mpf::services::manager::haproxy_portal -e pf::services::manager::haproxy_portal->new()->generateConfig()

                Oct 09 10:57:16 nadc1-pfence-01 haproxy[230643]: [ALERT] 282/105714 (230643) : Starting frontend portal-http-192.0.2.1: cannot bind socket [192.0.2.1:80]
                Oct 09 10:57:16 nadc1-pfence-01 haproxy[230643]: [ALERT] 282/105714 (230643) : Starting frontend portal-https-192.0.2.1: cannot bind socket [192.0.2.1:443]
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Main process exited, code=exited, status=1/FAILURE
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Failed to start PacketFence HAProxy Load Balancer for the captive portal.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Unit entered failed state.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Failed with result 'exit-code'.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: packetfence-haproxy-portal.service: Service hold-off time over, scheduling restart.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Stopped PacketFence HAProxy Load Balancer for the captive portal.
                Oct 09 10:57:14 nadc1-pfence-01 systemd[1]: Starting PacketFence HAProxy Load Balancer for the captive portal...

                /var/log/haproxy.log now shows:

                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy proxy started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: [ALERT] 282/104756 (223396) : Starting frontend portal-http-192.0.2.1: cannot bind socket [192.0.2.1:80]
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: [ALERT] 282/104756 (223396) : Starting frontend portal-https-192.0.2.1: cannot bind socket [192.0.2.1:443]
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy static started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-http-10.30.247.1 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-https-10.30.247.1 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy 10.30.247.1-backend started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-http-10.30.3.162 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-https-10.30.3.162 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy 10.30.3.162-backend started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-http-10.30.248.1 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy portal-https-10.30.248.1 started.
                Oct  9 10:47:56 nadc1-pfence-01 haproxy[223396]: Proxy 10.30.248.1-backend started.

                In the pf.conf.defaults file, I commented out the IP. This produces a warning when restarting the services: “pf.conf value captive_portal.ip_address is not defined!”
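
                That is, assuming the stock comment block shown earlier, the line in pf.conf.defaults now reads:

                # Do not change unless you know what you are doing.
                #ip_address=66.70.255.147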

                The haproxy-portal service is now started and I
                successfully performed guest registration.

                Sorry to trouble you with all of this, but the first time I performed these steps, I was still experiencing trouble with the captive portal. It wasn’t until I went through it all again, to collect the information to include with my question, that I found the captive portal working. It is working with the captive_portal.ip_address line of pf.conf.defaults commented out. I’m not certain commenting this line out is the correct solution; it must be there for a reason, no?

                I will leave these questions for the group then…

                Why is the haproxy-portal showing green in the web
                interface when, in fact, it is not successfully started?

                What is the story with the captive_portal.ip_address
                section of pf.conf.defaults?  Is it a mistake to leave
                it commented?

                Thank you,

                Jeff







--
Fabrice Durand
fdur...@inverse.ca ::  +1.514.447.4918 (x135) ::  www.inverse.ca
Inverse inc. :: Leaders behind SOGo (http://www.sogo.nu) and PacketFence 
(http://packetfence.org)

_______________________________________________
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users
