Hi,
HAProxy 2.0.8 was released on 2019/10/23. It added 47 new commits
after version 2.0.7.
There is one bug tagged major, but also a significant number of medium bugs
which arguably stand a better chance of impacting more people, so I preferred
not to wait too long before a release.
The main fix is a ris
Hi Ben,
after Brian reported the thread performance regression affecting
pattern matching in haproxy when relying on the LRU cache, I had a
look at other users of the LRU cache and found that 51d.c uses
it with a lock as well and may also suffer from a lack of linearity
with threads.
You
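For illustration, a minimal sketch of the contention problem (hypothetical
names, not haproxy's actual lru64 API): a shared cache serializes every
lookup on one lock, while per-thread caches keep lookups lock-free and
linear with the number of threads.

#include <pthread.h>

#define CACHE_SIZE  1024
#define MAX_THREADS 64

struct cache_entry {
    unsigned long long key;
    void *data;
};

/* direct-mapped stand-in for the real LRU structure */
struct cache {
    struct cache_entry slots[CACHE_SIZE];
};

/* Shared cache: every lookup from every thread takes the same lock,
 * so adding threads adds contention instead of throughput. */
static struct cache shared_cache;
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

void *lookup_shared(unsigned long long key)
{
    struct cache_entry *e;
    void *data = NULL;

    pthread_mutex_lock(&shared_lock);
    e = &shared_cache.slots[key % CACHE_SIZE];
    if (e->key == key)
        data = e->data;
    pthread_mutex_unlock(&shared_lock);
    return data;
}

/* Per-thread caches: each thread only ever touches its own slots,
 * so no lock is needed and lookups scale with the thread count. */
static struct cache per_thread_cache[MAX_THREADS];
static __thread int tid; /* assumed to be set once at thread startup */

void *lookup_per_thread(unsigned long long key)
{
    struct cache_entry *e = &per_thread_cache[tid].slots[key % CACHE_SIZE];

    return (e->key == key) ? e->data : NULL;
}

The per-thread variant duplicates hot entries once per thread, which is
usually a modest memory price for removing the shared lock from the
lookup path.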
Hi Brian,
On Tue, Oct 22, 2019 at 04:19:58PM +0200, Willy Tarreau wrote:
> At the moment I don't know what it requires to break it down per thread,
> so I'll add a github issue referencing your report so that we don't forget.
> Depending on the complexity, it may make sense to backport it once don
On Wed, Oct 23, 2019 at 08:52:58AM +1100, Igor Cicimov wrote:
> Sorry, I misread your issue. It is a strange setup you've got there; I
> wonder why you need cross-DC load balancing on the k8s ingress when you
> are already doing it globally via DNS?
Agreed, for me the setup is completely shifted by one
Hi haproxy.com
Digital marketing is more important now than ever before. These days, lack
of organic keyword data and traffic are common issues. Your business needs
to have a concrete SEO strategy in place if you want to succeed in online
marketing.
*We can deliver you the exact solution you are
On Wed, Oct 23, 2019, 8:36 AM Igor Cicimov
wrote:
>
> On Tue, Oct 22, 2019, 10:27 PM Morotti, Romain D <
> romain.d.moro...@jpmorgan.com> wrote:
>
>> Hello,
>>
>> The use case is to load balance applications in multiple datacenters or
>> regions.
>>
>> The common pattern today to cover mu
On Tue, Oct 22, 2019, 10:27 PM Morotti, Romain D <
romain.d.moro...@jpmorgan.com> wrote:
> Hello,
>
> The use case is to load balance applications in multiple datacenters or
> regions.
>
> The common pattern today to cover multiple locations is to deploy services
> in each location separately
Hello Willy,
On Tue, Oct 22, 2019 at 04:46:43PM +0200, Willy Tarreau wrote:
> To be honest, I'm quite embarrassed by such a change. I think the main
> reason for the initial warning instead of an error was that most people
> using haproxy in unprivileged environments never knew what to use as
> a uli
Hi William,
On Mon, Oct 21, 2019 at 11:14:20AM +0200, William Dauchy wrote:
> On a production environment with a given maxconn, the fd limit can be
> successfully set at a given time. While raising maxconn, a new max fd
> limit is calculated. If the setrlimit call fails (e.g. if sysctl fs.nr_open
> is l
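A rough sketch of that logic (assumed arithmetic, not haproxy's exact
formula): derive the fd limit from maxconn, and fall back gracefully when
setrlimit() is refused, e.g. because fs.nr_open is lower than the request.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>

int apply_fd_limit(int maxconn)
{
    /* each connection needs roughly two fds (client side + server
     * side), plus headroom for listeners, checks, logs, pipes... */
    rlim_t wanted = (rlim_t)maxconn * 2 + 100;
    struct rlimit lim = { .rlim_cur = wanted, .rlim_max = wanted };

    if (setrlimit(RLIMIT_NOFILE, &lim) == 0)
        return 0;

    /* refused (EPERM, or wanted above fs.nr_open): keep the current
     * hard limit instead of failing outright, and report the gap so
     * the user can lower maxconn or raise fs.nr_open. */
    int err = errno;
    if (getrlimit(RLIMIT_NOFILE, &lim) == 0) {
        fprintf(stderr, "cannot raise fd limit to %llu (%s), "
                "staying at %llu\n",
                (unsigned long long)wanted, strerror(err),
                (unsigned long long)lim.rlim_max);
        lim.rlim_cur = lim.rlim_max;
        setrlimit(RLIMIT_NOFILE, &lim);
    }
    return -1;
}

Whether a failed setrlimit() should be a warning or a hard error is
exactly the question discussed above: erroring out is safer, but it breaks
setups that always ran with just a warning.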
Hello,
On Tue, Oct 22, 2019 at 10:23:05AM -0400, PR Bot wrote:
> Patch title(s):
> updating req.body_param doc with chunk-encoding limitation
>
> Link:
> https://github.com/haproxy/haproxy/pull/333
diff --git a/doc/configuration.txt b/doc/configuration.txt
index b79795c68..fc988b320 100644
Dear list!
Author: fclerg <29798784+fcl...@users.noreply.github.com>
Number of patches: 1
This is an automated relay of the Github pull request:
updating req.body_param doc with chunk-encoding limitation
Patch title(s):
updating req.body_param doc with chunk-encoding limitation
Link:
https://github.com/haproxy/haproxy/pull/333
Hi Brian,
On Mon, Oct 14, 2019 at 11:28:17PM +, Brian Diekelman wrote:
> Just wanted to provide some information on what appears to be lock contention
> around ACL lookups.
>
> We recently upgraded from haproxy-1.6 to haproxy-1.8.20 and switched from
> 'nbproc 8' to 'nbproc 1, nbthread 1
Hi Amin,
On Mon, Oct 14, 2019 at 12:23:53AM +0330, Amin Shayan wrote:
> Hello,
>
> I've several installations with different config and usage on 1.8.21 and no
> problem so far. I found one installation on a cluster of 6 servers, all of
> which had one cpu core stuck at 100% and Idle_pct lowe
Hello,
On Thu, Oct 17, 2019 at 02:11:19PM +0200, Gaetan Deputier wrote:
> Hello!
>
> I'm reaching out regarding the contstats option and the "small performance
> drop". What exactly is the performance drop described here? Induced
> latency, CPU usage?
>
> From what I've seen, it triggers a recount
On 22/10/2019 at 13:42, Baptiste wrote:
> My comment is wrong.
> A server weight can have a value of 256.
> Please update the comment :)

Ok, thanks. Merged now.
--
Christopher Faulet
Hi Luke,
I remember I first did that intentionally, to avoid values below 255 being
"rounded" down to 0, and I assumed people would remove servers from their
DNS if they wanted a weight of 0.
Now, with some feedback, I can see I was wrong.
Next time, don't hesitate to ask the question on the ML, or on
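To make the rounding issue concrete, a sketch with assumed ranges (SRV
weights 0-65535 mapped onto haproxy's 0-256; not the actual haproxy code):

#include <stdio.h>

/* plain integer scaling: every SRV weight below 256 collapses to 0 */
static int srv_weight_naive(int dns_weight)
{
    return dns_weight * 256 / 65536;   /* 255 * 256 / 65536 == 0 */
}

/* round up instead, so small but non-zero weights stay alive */
static int srv_weight_fixed(int dns_weight)
{
    if (dns_weight == 0)
        return 0;                      /* 0 keeps meaning "no traffic" */
    return (dns_weight * 256 + 65535) / 65536;   /* ceiling, so >= 1 */
}

int main(void)
{
    int samples[] = { 0, 1, 255, 256, 32768, 65535 };
    unsigned i;

    for (i = 0; i < sizeof(samples) / sizeof(*samples); i++)
        printf("dns=%5d  naive=%3d  fixed=%3d\n", samples[i],
               srv_weight_naive(samples[i]), srv_weight_fixed(samples[i]));
    return 0;
}

With the naive scaling, a record with weight 255 lands on 0 and the server
silently stops receiving traffic; the ceiling version maps it to 1 and
reserves weight 0 for servers that really should be drained.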
My comment is wrong.
A server weight can have a value of 256.
Please update the comment :)
Baptiste
On Mon, Oct 21, 2019 at 4:35 PM Christopher Faulet
wrote:
> On 21/10/2019 at 16:20, Baptiste wrote:
> > Thanks to the 2 people who spotted a bug in my patch (missing parenthesis).
> >
> > here is th
Hello,
The use case is to load balance applications in multiple datacenters or regions.
The common pattern today to cover multiple locations is to deploy services in
each location separately and independently.
This happens with kubernetes for example, where a cluster is typically limited
to a d
On 22/10/2019 at 08:55, Awais Azeem wrote:
> I have seen your website; it looks amazing, but there is one issue: its
> SSL certificate.
> Your website is not secure; a screenshot is also attached here.
> If you're interested, let me know and I'll help you.
> Happy to Help 😊
Hi,
It is not an SSL cert i