Re: 2.2.0 dhcp: regression

2013-07-15 Thread Eugene Grosbein
once using the private flag, and that solved my problem: every thread creates only one file descriptor (a persistent DB connection) and I do not hit the limit of 1024. Basically, I solved our problem, but one question persists: why is CLONE_SKIP called so many times at radiusd start time? Eugene Grosbein
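The effect described above (one persistent descriptor per worker thread, opened once and reused) can be sketched in C with POSIX thread-specific storage. This is a minimal illustration of the pattern, not FreeRADIUS or rlm_perl code; `/dev/null` stands in for a real DB connection, and `thread_fd` is a hypothetical helper name.

```c
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

/* One key shared by all threads; each thread stores its own fd under it. */
static pthread_key_t fd_key;
static pthread_once_t fd_key_once = PTHREAD_ONCE_INIT;

/* Destructor: runs at thread exit, releasing that thread's descriptor. */
static void close_thread_fd(void *p)
{
    close((int)(long)p - 1);
}

static void make_key(void)
{
    pthread_key_create(&fd_key, close_thread_fd);
}

/* Returns this thread's persistent descriptor, opening it on first use.
 * /dev/null stands in for a real database connection. */
int thread_fd(void)
{
    pthread_once(&fd_key_once, make_key);
    void *v = pthread_getspecific(fd_key);
    if (v == NULL) {
        int fd = open("/dev/null", O_WRONLY);
        if (fd < 0) return -1;
        /* Store fd + 1 so descriptor 0 is distinguishable from "not set". */
        pthread_setspecific(fd_key, (void *)(long)(fd + 1));
        return fd;
    }
    return (int)(long)v - 1;
}
```

Repeated calls from the same thread return the same descriptor, so N worker threads hold exactly N descriptors instead of opening a new one per request.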

2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
of thread pool size breaks after opening the 1024th file descriptor. Please help. We need at least 1000 concurrent threads to deal with the load here. Our hardware has enough raw power and we do not want to create needless queueing delays. Eugene Grosbein

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 17:17, Eugene Grosbein wrote: Hi! We have been running FreeRADIUS 2.1.12 with the dhcp module successfully for a long time on FreeBSD 8. Our DHCP perl script opens two file descriptors (per thread): one for the database connection TCP socket and one for syslog (/var/run/log unix domain

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 17:38, Phil Mayers wrote: On 12/07/13 11:17, Eugene Grosbein wrote: Please help. We need at least 1000 concurrent threads to deal with the load here. 1000 threads is a crazy number. Can you explain why you think you need that many? Are you doing very slow logic/lookups

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 18:10, Alan DeKok wrote: Eugene Grosbein wrote: Forgot to mention that the operating system's open-files limit for freeradius is over 11000. And file descriptors are numbered starting from zero, so descriptor 1024 is really the 1025th. radiusd works fine until it has descriptors 0

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 19:07, Alan DeKok wrote: Eugene Grosbein wrote: The extra sockets get opened just fine; I see that with lsof/fstat here. OK. But I'm not aware of any change in any code which would limit the number of sockets. 2.1.12 does not have this issue with the same Perl. OK. The rlm_perl

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 18:39, Phil Mayers wrote: Our database is powerful enough to deal with so many requests. We may easily get that many requests and want to be able to process them in parallel without needless queueing. With respect, this is pretty basic logic. The figure of merit here is

Re: 2.2.0 dhcp: regression

2013-07-12 Thread Eugene Grosbein
On 12.07.2013 19:57, Alan DeKok wrote: Eugene Grosbein wrote: The problem is always reproducible and has an obvious hard limit consistent with the number of open files. I'm not sure what changes from 2.1.12 to 2.2.0 would cause that. I understand. With one exception - we have