Hi Fabrice,

Sorry for the slow response.  My latency issues seem to have calmed down
after upgrading the UniFi firmware on the APs, so perhaps that was an
element in the mix.  I'll keep monitoring for now.
My portal auth source is LDAPS, yes. Given that I rarely have more than
one unregistered user at a time, I was surprised that there were so many
open connections, but my understanding of what pfstats might be polling
LDAP for is limited.  What would you expect to see in terms of open
connections to LDAPS from pfstats?
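For reference, here is roughly how I am counting them on a Linux box (a
sketch assuming iproute2's ss, the standard LDAPS port 636, and root for
the -p process-name lookup):

```shell
# Count established connections to LDAPS (tcp/636); -p (root only)
# annotates each line with the owning process, so grep for pfstats.
# The || true keeps the pipeline from failing when the count is zero.
ss -tnp state established '( dport = :636 )' | grep -c pfstats || true

# Related sanity check for "too many open files": count a process's
# open file descriptors. $$ (this shell) stands in for pfstats' PID,
# which on a real system would come from e.g. pgrep -o pfstats.
ls "/proc/$$/fd" | wc -l
```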
I'll monitor for now and take a pcap if things get lumpy again or the
number of open LDAPS connections merits further investigation.
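On the curl certificate complaint quoted further down: that error is just
hostname verification failing against 127.0.0.1, not necessarily a bad
cert. Two quick ways to poke the API locally (a sketch; -k is fine for a
loopback-only test but should never be used across the network):

```shell
# Skip verification entirely for a loopback-only check; the || echo
# keeps the command from aborting a script when the API is down.
curl -sk --max-time 5 https://127.0.0.1:9999/api/v1/queues/stats \
  || echo "API not reachable"

# Or keep verification and map the certificate's subject name (the
# pf.internal-scrubbed.net placeholder from this thread) to loopback:
curl -s --max-time 5 \
  --resolve pf.internal-scrubbed.net:9999:127.0.0.1 \
  https://pf.internal-scrubbed.net:9999/api/v1/queues/stats \
  || echo "API not reachable"
```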

Thanks as always,

David

On Sat, Aug 11, 2018 at 3:06 AM, Durand fabrice via PacketFence-users <
packetfence-users@lists.sourceforge.net> wrote:

> Hello David,
> Maybe you can take a pcap to see whether there are any errors.
> Is it LDAPS?
>
> Also, pfstats crashed when it tried to fetch the eduroam config, and I am
> not sure whether it's related.
>
> Regards
> Fabrice
>
>
> Le 2018-08-10 à 10:16, David Harvey via PacketFence-users a écrit :
>
> Detail I should have included: pf 8.1.0 on Debian
>
> Detail I have since seen (IPs removed/swapped out for IPSCRUBBED):
>
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/IPSCRUBBED" pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=eror
> msg="API error: Get https://127.0.0.1:9999/api/v1/dhcp/stats/eth1/
> IPSCRUBBED: dial tcp 127.0.0.1:9999: socket: too many open files"
> pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth2/1IPSCRUBBED" pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=eror
> msg="API error: Get https://127.0.0.1:9999/api/v1/dhcp/stats/eth2/
> IPSCRUBBED: dial tcp 127.0.0.1:9999: socket: too many open files"
> pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth3/IPSCRUBBED" pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=eror
> msg="API error: Get https://127.0.0.1:9999/api/v1/dhcp/stats/eth3
> IPSCRUBBED: dial tcp 127.0.0.1:9999: socket: too many open files"
> pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> queues/stats" pid=26534
> Aug 10 12:23:37 pf pfstats[26534]: t=2018-08-10T12:23:37+0100 lvl=eror
> msg="API error: Get https://127.0.0.1:9999/api/v1/queues/stats: dial tcp
> 127.0.0.1:9999: socket: too many open files" pid=26534
> Aug 10 12:23:38 pf pfstats[26534]: t=2018-08-10T12:23:38+0100 lvl=eror
> msg="Cannot connect to pfconfig socket..." pid=26534
> Aug 10 12:23:38 pf pfstats[26534]: t=2018-08-10T12:23:38+0100 lvl=eror
> msg="Cannot connect to pfconfig socket..." pid=26534
> [the "Cannot connect to pfconfig socket..." line repeats every few
> hundred milliseconds through 12:23:43]
>
> Which seems to sort itself out after:
>
> Aug 10 12:24:17 pf pfstats[26534]: t=2018-08-10T12:24:17+0100 lvl=eror
> msg="Cannot connect to pfconfig socket..." pid=26534
> Aug 10 12:24:17 pf pfstats[26534]: panic: Can't connect to pfconfig socket
> Aug 10 12:24:17 pf pfstats[26534]: goroutine 37 [running]:
> Aug 10 12:24:17 pf pfstats[26534]: github.com/inverse-inc/
> packetfence/go/pfconfigdriver.connectSocket(0x8a0820,
> 0xc42026d0b0, 0x444707, 0x0)
> Aug 10 12:24:17 pf pfstats[26534]: /tmp/buildd/packetfence-8.1.0/
> debian/tmp.7VfKM79Nh5/src/github.com/inverse-inc/
> packetfence/go/pfconfigdriver/fetch.go:95 +0x190
> Aug 10 12:24:17 pf pfstats[26534]: github.com/inverse-inc/
> packetfence/go/pfconfigdriver.FetchSocket(0x8a0820,
> 0xc42026d0b0, 0xc42227a000, 0x55, 0x7ee8d4, 0x4, 0x74034c)
> Aug 10 12:24:17 pf pfstats[26534]: /tmp/buildd/packetfence-8.1.0/
> debian/tmp.7VfKM79Nh5/src/github.com/inverse-inc/
> packetfence/go/pfconfigdriver/fetch.go:114 +0x4d
> Aug 10 12:24:17 pf pfstats[26534]: github.com/inverse-inc/
> packetfence/go/pfconfigdriver.FetchDecodeSocket(0x8a0820,
> 0xc42026d0b0, 0x89e9c0, 0xc4230dea80, 0x0, 0x0)
> Aug 10 12:24:17 pf pfstats[26534]: /tmp/buildd/packetfence-8.1.0/
> debian/tmp.7VfKM79Nh5/src/github.com/inverse-inc/
> packetfence/go/pfconfigdriver/fetch.go:277 +0x103
> Aug 10 12:24:17 pf pfstats[26534]: main.main.func5(0x8a0820, 0xc42026d0b0)
> Aug 10 12:24:17 pf pfstats[26534]: /tmp/buildd/packetfence-8.1.0/
> debian/tmp.7VfKM79Nh5/src/github.com/inverse-inc/
> packetfence/go/stats/main.go:329 +0x27b
> Aug 10 12:24:17 pf pfstats[26534]: created by main.main
> Aug 10 12:24:17 pf pfstats[26534]: /tmp/buildd/packetfence-8.1.0/
> debian/tmp.7VfKM79Nh5/src/github.com/inverse-inc/
> packetfence/go/stats/main.go:324 +0x315
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Starting stats server" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=dbug
> msg="Adding struct with address 0xa588d8 to the pool" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=dbug
> msg="Resource is not valid anymore. Was loaded at 0001-01-01 00:00:00 +0000
> UTC" pid=1465 PfconfigObject=hash_element|config::Pf;database
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/IPSCRUBBED" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/IPSCRUBBED" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/IPSCRUBBED" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/IPSCRUBBED" pid=1465
> Aug 10 12:24:17 pf pfstats[1465]: t=2018-08-10T12:24:17+0100 lvl=info
> msg="Calling Unified API on uri: https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth2/IPSCRUBBED" pid=1465
>
>
> I also noted that this doesn't make curl too happy either, so I'm not sure
> whether the cert is an issue, or whether for 127.0.0.1 pfstats uses the
> equivalent of "curl -k" and allows insecure connections.
>
> root@pf:/home/tm-admin# curl https://127.0.0.1:9999/api/v1/queues/stats
> curl: (51) SSL: certificate subject name 'pf.internal-scrubbed.net' does
> not match target host name '127.0.0.1'
> root@pf:/home/tm-admin# curl https://127.0.0.1:9999/api/v1/
> dhcp/stats/eth1/10.17.10.0
> curl: (51) SSL: certificate subject name 'pf.internal-scrubbed.net' does
> not match target host name '127.0.0.1'
>
>
>
>
> On Thu, Aug 9, 2018 at 4:41 PM, David Harvey <da...@thoughtmachine.net>
> wrote:
>
>> Hi again!
>>
>> I'm investigating some latency issues with RADIUS being a bit lumpy and
>> noticed that the number of open IPv4 sockets was incredibly high.
>>
>>
>> Checking with netstat -anp showed a vast number of pfstats -> LDAP:636
>> connections (and yes, I use LDAP as a portal auth source).  The drop-off
>> is from after restarting pfstats.
>>
>> Any idea whether this is working as expected, or whether some
>> total-connections setting I might have missed could be at play?
>>
>> Appreciating your time as ever.
>>
>> David
>>
>
>
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>
>
>
> _______________________________________________
> PacketFence-users mailing list
> PacketFence-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/packetfence-users
>
>
