Re: regression? scheme and hostname logged with %r with 2.6.13

2023-06-07 Thread Robert Newson
Hi,

Yeah, I addressed this with "%HM %HPO%HQ %HV", which looks right in my logs under
some light testing, but I will check the pathq option as well.
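
For reference, a minimal sketch of the kind of frontend I'm testing this with
(the frontend name, bind line and certificate path are only placeholders, and
%HPO availability depends on the haproxy version in use):

    frontend fe_main
        mode http
        bind :8443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
        # reconstruct a relative request line instead of relying on %r,
        # which may be logged in absolute-form for HTTP/2 requests
        log-format "%ci:%cp [%tr] %ft %b/%s %ST %B \"%HM %HPO%HQ %HV\""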

B.

> On 7 Jun 2023, at 22:39, Lukas Tribus  wrote:
> 
> Hello,
> 
> 
> yes, H2 behaves very differently, due to protocol differences but also
> due to other changes. In the beginning H2 was only implemented in the
> frontend and every transaction was downgraded to HTTP/1.1 internally.
> This was later changed to an internal generic "HTX" representation
> that allowed to unify the protocol stack.
> 
> 
> To return URIs in relative form in the logs, I guess you could
> reconstruct the string manually with pathq (untested):
> \"%HM %[pathq] %HV\" as opposed to %r
> 
> 
> pathq is an HTTP sample designed to do exactly this, always returning
> the URI in a relative format:
> 
> http://docs.haproxy.org/2.6/configuration.html#7.3.6-pathq
> 
> 
> Not sure what %HU does; I assume it refers to the URL, not pathq.
> 
> 
> 
> I agree that doc updates are needed at least in section "8.2.3. HTTP
> log format" and "8.2.6. Custom log format".
> 
> 
> 
> Lukas




Re: regression? scheme and hostname logged with %r with 2.6.13

2023-06-07 Thread Lukas Tribus
Hello,


yes, H2 behaves very differently, due to protocol differences but also
due to other changes. In the beginning H2 was only implemented in the
frontend and every transaction was downgraded to HTTP/1.1 internally.
This was later changed to an internal generic "HTX" representation
that allowed to unify the protocol stack.


To return URIs in relative form in the logs, I guess you could
reconstruct the string manually with pathq (untested):
\"%HM %[pathq] %HV\" as opposed to %r


pathq is an HTTP sample designed to do exactly this, always returning
the URI in a relative format:

http://docs.haproxy.org/2.6/configuration.html#7.3.6-pathq
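
Something along these lines (still untested; the surrounding variables are
just the usual custom HTTP log format trimmed down for the example):

    log-format "%ci:%cp [%tr] %ft %b/%s %ST %B \"%HM %[pathq] %HV\""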


Not sure what %HU does; I assume it refers to the URL, not pathq.



I agree that doc updates are needed at least in section "8.2.3. HTTP
log format" and "8.2.6. Custom log format".



Lukas



re: regression? scheme and hostname logged with %r with 2.6.13

2023-06-07 Thread Robert Newson
Hi,

Figured this out (my reply might not be threaded; the mailing list daemon
doesn't add me after I confirm my subscription).

It was 
https://github.com/haproxy/haproxy/commit/30ee1efe676e8264af16bab833c621d60a72a4d7
 in haproxy 2.1 that caused this change. It's deliberate but the documentation 
wasn't updated to match.

I found this by bisecting between 2.0 and 2.1 after noticing that only HTTP/2 
requests were being logged this way.
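
For anyone wanting to reproduce that kind of bisect, a rough sketch (the exact
build target and the manual test step are whatever reproduces the logging
difference in your setup):

    git clone https://github.com/haproxy/haproxy.git && cd haproxy
    git bisect start v2.1.0 v2.0.0   # first the "bad" tag, then the "good" one
    # at each step: build, send an HTTP/2 request through it, check the %r log
    make -j TARGET=linux-glibc
    git bisect bad                   # or "git bisect good", depending on the log
    # ...repeat until git reports the first bad commit, then:
    git bisect reset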

B.


[ANNOUNCE] haproxy-2.7.9

2023-06-07 Thread Christopher Faulet

Hi,

HAProxy 2.7.9 was released on 2023/06/07. It added 118 new commits
after version 2.7.8.

This release, like the previous one, is a bit huge. We were busy releasing
2.8.0, so it is high time for us to issue new releases for the other stable
versions. 2.7.9 is the first one of a long series.

In this release, Amaury and Fred continued to stabilize the QUIC stack. It
is now pretty stable, but it is probably better to deploy 2.8 to use QUIC
in production because it is an LTS version. 2.7 will still receive bug
fixes, but most improvements will not be backported. In this release, some
patches fixed the reporting of the end of the request to the upper layer,
mainly to conform to the stream-connector layer refactoring. A few minor bugs
on error paths were also addressed, and comments were added in various places
to help understand some BUG_ON()s. Fred also added a number of event counters
that had been missing over the last few troubleshooting sessions.

The SPOE was fixed to limit the number of idle applets in edge cases. On
sporadic bursts, it was possible to systematically start new applets because
the SPOE processing frequency was lower than the message rate, independently
of the number of idle applets. The tracking of idle applets was improved so
that they can be properly reused.
This fix revealed a flaw in the way synchronous frames were handled, leading
to an increase in message processing latency. To fix this issue, in
synchronous mode, a SPOE applet will now systematically try to send a frame
when it is woken up, unless it is still waiting for an ACK frame after a
receive attempt.
Finally, a crash for engines configured on disabled proxies was fixed. SPOE
engines must not be released for such proxies during startup because some
resources may be shared with other engines, for instance the ACLs.

Two issues were fixed in the H2 multiplexer:
  * First, we now take care not to refresh the idle timeout when control
frames are received. Because of this bug, it was possible to keep a
connection alive by periodically sending control frames, like PING or
PRIORITY, even after a GOAWAY frame was sent. Among other things, it was
possible to hit this bug during a soft-stop or a reload.
  * Then, the request state at the H2 stream level is now properly reported
to the upper layer when the stream-connector is created. This bug was
introduced in 2.4. A request may be fully received by the time the
stream-connector is created; in this case, all subsequent receives may
be skipped. This was an issue when an error was also detected, because
the upper layer was not aware of it and the session could be frozen.

The FCGI multiplexer was fixed to make sure it never requests more room from
the channel when the mux is waiting for more data. It is especially important
not to do so if the channel buffer is empty; otherwise, the situation cannot
evolve and the session remains stuck.

A race condition was fixed in the thread isolation code that could allow a
thread that was running under isolation to continue running while another
one enters isolation.

The total boot time is now measured. It is used to postpone the startup of
health checks. This is pretty useful for very large configurations taking a
few seconds to start, so that some servers' checks are not scheduled in the
past. It also helps to get a better distribution of health checks when the
"spread-checks" option is used. In addition, spread-checks is now also applied
at boot time, making the load much smoother from the start.
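
As an illustration (the backend name, server addresses and intervals below are
arbitrary), spread-checks is a global setting expressed as a percentage of the
check interval:

    global
        # add up to +/- 10% of random jitter to each check interval
        spread-checks 10

    backend be_app
        default-server inter 5s rise 2 fall 3
        server s1 192.0.2.10:8080 check
        server s2 192.0.2.11:8080 check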

More actions were added to "http-after-response" (set-map, set-log-level,
sc-inc-gpc, etc.).
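
A quick sketch of what these can look like in a configuration (the frontend,
table name, counter index and conditions are only illustrative):

    frontend fe_main
        mode http
        bind :8080
        # track clients by source address so the sc0 counters below exist
        http-request track-sc0 src table st_per_ip
        # examples of the newly backported actions:
        http-after-response set-log-level silent if { status 304 }
        http-after-response sc-inc-gpc(0,0)     if { status ge 500 }

    backend st_per_ip
        stick-table type ip size 100k expire 10m store gpc(1)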

Finally, as usual, several minor bugs were fixed. The doc was improved. Most
notably, a section about the size format was added in the configuration
manual. And the development tools were extended: a script to decode most
flags in the "show sess all" output was added.

If you are running a 2.7, please upgrade. But keep in mind it is not an LTS
version. Now that 2.8.0 has been released, it could be good to start
evaluating it. However, keep cool, there is no rush to upgrade. You have one
year to do so ;)

Thanks everyone for your help and your contributions!

Please find the usual URLs below :
   Site index   : https://www.haproxy.org/
   Documentation: https://docs.haproxy.org/
   Wiki : https://github.com/haproxy/wiki/wiki
   Discourse: https://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : https://www.haproxy.org/download/2.7/src/
   Git repository   : https://git.haproxy.org/git/haproxy-2.7.git/
   Git Web browsing : https://git.haproxy.org/?p=haproxy-2.7.git
   Changelog: https://www.haproxy.org/download/2.7/src/CHANGELOG
   Dataplane API: 
https://github.com/haproxytech/dataplaneapi/releases/latest
   Pending bugs : https://www.haproxy.org/l/pending-bugs
   Reviewed bugs: 

Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-07 Thread Artur

Hello Willy,

I understand, thank you for the explanation.

Have a nice holiday! ;)

On 07/06/2023 at 14:55, Willy Tarreau wrote:

Hello Artur,

On Tue, Jun 06, 2023 at 03:18:31PM +0200, Artur wrote:

About the backporting instructions I was not sure how far it should be
backported. I preferred to skip it instead of giving an erroneous
instruction.
Maybe someone can explain if this backport instruction is really required
and what to do if one is unsure about how to backport.

You should see them as a time saver for the person doing the backports,
that's why we like patch authors to provide as much useful information
as they can. Sometimes even just adding "this patch probably needs to be
backported" or "the feature was already there in 2.7 and maybe before so
the patch may need to be backported at least there" will be a hint to
the person that they should really check twice if they don't find it
the first time.

As a rule of thumb, just keep in mind that the commit message part of
a patch is the one where humans talk to humans, and that anything that
crosses your mind and that can help decide whether a patch has to be
backported, could be responsible for a regression, needs to be either
fixed or reverted, etc., is welcome.

Thanks,
Willy


--
Best regards,
Artur


Re: maint, drain: the right approach

2023-06-07 Thread Matteo Piva
Hi Willy,

> > Seems that it's considered an expected behavior to consider 
> > optimistically the server as UP 
> > when leaving MAINT mode, even if the L4 health checks are not completed 
> > yet. 

> Normally using the existing API you could forcefully 
> mark the server's check as down using this before leaving maintenance: 

> set server <backend>/<server> health [ up | stopping | down ] 

> Doesn't it work to force it to down before leaving maintenance and wait 
> for it to succeed its checks ? That would give this to leave maintenance: 

> set server blah health down; set server blah state ready 

> By the way that reminds me that a long time ago we envisioned a server 
> option such as "init-state down" but with the ability to add servers on 
> the fly via the CLI it seemed a bit moot precisely because you should be 
> able to do the above. But again, do not hesitate to tell me if I'm wrong 
> somewhere, my goal is not to reject any change but to make sure we're not 
> trying to do something that's already possible (and possibly not obvious, 
> I concede). 


I just did some tests to share with you:


1) - Forcing health "DOWN" before exiting "MAINT" mode -
COMMANDS:
set server test_backend/s1 state maint
set server test_backend/s1 health down
set server test_backend/s1 state ready

LOG:
Server test_backend/s1 is going DOWN for maintenance. 1 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server test_backend/s1 is UP/READY (leaving forced maintenance).

In this case, the health forced to "DOWN" while in MAINT mode isn't taken into
account by haproxy, and the server optimistically comes back "UP" once the
state is set to READY.


2) - Entering "MAINT" mode when health is "DOWN":
COMMANDS:
set server test_backend/s1 health down
set server test_backend/s1 state maint
set server test_backend/s1 state ready

LOG:
Server test_backend/s1 is DOWN, changed from CLI. 1 active and 0 backup servers 
left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server test_backend/s1 was DOWN and now enters maintenance.
Server test_backend/s1 is UP/READY (leaving forced maintenance).

In this case, health is successfully forced to "DOWN" before entering MAINT
mode, but it's again optimistically restored to "UP" once the state is set to
READY.


3) - Exiting "MAINT" mode passing through "DRAIN", forcing health "DOWN" -
COMMANDS:
set server test_backend/s1 state maint
set server test_backend/s1 state drain
set server test_backend/s1 health down
set server test_backend/s1 state ready

LOG:
Server test_backend/s1 is going DOWN for maintenance. 1 active and 0 backup 
servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server test_backend/s1 is UP/DRAIN (leaving forced maintenance).
Server test_backend/s1 remains in forced drain mode.
Server test_backend/s1 is DOWN, changed from CLI. 1 active and 0 backup servers 
left. 0 sessions active, 0 requeued, 0 remaining in queue.
Server test_backend/s1 remains in forced drain mode.
Server test_backend/s1 is DOWN (leaving forced drain).

That one is working as intended, since it's exiting from MAINT with health
"DOWN", and then haproxy evaluates the health before going "UP".
... but it's passing through "DRAIN", and I don't know if that's intended.



Do you think something should behave differently in the first two tests I did?
Maybe the forced health "DOWN" should have been honored when moving from
"MAINT" to the "READY" state?


Thanks,

Matteo


Re: Contribute to HaProxy

2023-06-07 Thread Willy Tarreau
Hi Umesh,

On Fri, Jun 02, 2023 at 10:27:48AM +0530, umesh patel wrote:
> Hi There,
> 
> I am looking for an SCTP protocol based load balancer. I see that HaProxy has
> a solid platform for TCP load balancing. However, SCTP is not supported. I
> would like to develop and contribute to HaProxy SCTP support. I will start
> with SCTP unihomed connections and then graduate to supporting SCTP
> multi-homing.
> 
> I would like to know, how can I join the development and contribute to the
> enrichment of the HaProxy load-balancer.

There are several aspects in your question. Some of them are purely
technical (i.e. use IPPROTO_SCTP vs IPPROTO_TCP), other aspects will
touch the architecture (SCTP being multi-stream, how to support this),
and the last one is how to integrate yourself into the development
process.

As for every significant addition, what is important is to figure how
the feature will be added, not in the short term but in the long term,
so that it is possible to identify the changes to be performed on the
various areas to be compatible with what you'll need. This is critically
important because we don't want to add a partial feature, then figure
we're in a dead end and say "sorry users, we're driving back, there's
no exit here, we'll completely change the way you're using it and trying
another approach". This often means that it can take quite some time to
add some new features, but when you finish, you feel like they fit
perfectly in place. There will always be rough edges, but they must not
affect how end users will use them over time, nor hinder the evolution
of the rest of the features around. Most of the features that have been
delayed were delayed because of a road block in the way that was left
there along another change completed in a hurry. That's why we've become
extremely cautious over time not to leave road blocks anymore on the way.

QUIC is a great example of how things have evolved. Initially a PoC was
needed to progress on the code, so it was based on a static code base
that didn't change. In parallel, the various parts that were identified as
problematic were improved. Then the code started to be rebased on more recent
versions to benefit from these parallel changes.
Over time the muxes API, listeners, protocols, threads, datagram
processing, polling, buffers etc had to evolve to offer a natural
interface to the lower layers that QUIC needed, and if you look at it
today it fits smoothly there, but internally it has not always been the
case.

Similarly I expect SCTP to require some changes. Which ones, I don't
know. It also depends on what you want to transport on top of it. For
example, right now our muxes deal with low-level stream multiplexing
that is found in HTTP/2 and QUIC, and to a lesser extent HTTP/1 (which
supports multiple streams per connection, though only one at a time,
perfectly delimited for the request and the response). Thus it would
seem natural that SCTP gets its own mux and offers multiple streams
per connection. But to transport what? Because if we need to place
some HTTP on top of it, then we'll face a new problem given that the
various HTTP versions already have their own muxes. And if the goal
is to just transport SCTP from a client to a server and maintain the
streams together over the same connection, this could probably be
done as well but it will involve outgoing connection reuse and for
now this is only done for HTTP. Also the action of choosing a server
(and possibly inspecting contents) is currently done per stream, so
if we want to tie them all together it will mean again something a
bit different that has to be defined.

Last, but not least, a very important aspect is the maintenance. Are
you sufficiently interested in SCTP to accept to keep working on it
for 5-10 years after it's offered to end users? That's important
because you don't want the feature you freshly added to start failing
in the field with nobody able to debug it. And sometimes when analysing
the cause of some failures, the conclusion is harsh, like "OK the
design is totally f*cked, it needs to be redone from scratch", which
indicates that you'll need to continue supporting existing code base
in best effort mode (i.e. plugging holes) while trying to rework on a
better approach for a future version.

We've seen all this over time, that's why I prefer to warn in order to
gauge your motivation ;-)

If you think this matches what you have in mind, I recommend that you
start thinking about what your long-term goal with SCTP would be, and
what intermediary steps would be acceptable, then study how that would
fit with what exists and if it looks sane or hackish. Then you'll need
to think, based on the services that will be transported over that
protocol, what will users need in the future and if it will be possible
at all to serve them (e.g. interface TCP<->SCTP, mux/demux SCTP
connections, content inspection and processing, act on the connection
or on the streams, will it ever be 

Re: [PATCH] DOC: quic: fix misspelled tune.quic.socket-owner

2023-06-07 Thread Willy Tarreau
Hello Artur,

On Tue, Jun 06, 2023 at 03:18:31PM +0200, Artur wrote:
> About the backporting instructions I was not sure how far it should be
> backported. I preferred to skip it instead of giving an erroneous
> instruction.
> Maybe someone can explain if this backport instruction is really required
> and what to do if one is unsure about how to backport.

You should see them as a time saver for the person doing the backports,
that's why we like patch authors to provide as much useful information
as they can. Sometimes even just adding "this patch probably needs to be
backported" or "the feature was already there in 2.7 and maybe before so
the patch may need to be backported at least there" will be a hint to
the person that they should really check twice if they don't find it
the first time.

As a rule of thumb, just keep in mind that the commit message part of
a patch is the one where humans talk to humans, and that anything that
crosses your mind and that can help decide whether a patch has to be
backported, could be responsible for a regression, needs to be either
fixed or reverted, etc., is welcome.
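
For example (everything below is invented, just to illustrate the kind of
hint that helps), the end of a commit message could look like:

    BUG/MINOR: config: fix parsing of the example-keyword argument

    The argument was silently ignored when quoted, because ... (description
    of the problem and of the fix).

    This should be backported as far as 2.6, where example-keyword was
    introduced.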

Thanks,
Willy