Re[2]: The case for changing the documentation syntax

2019-07-09 Thread Nick Ramirez
It sounds like reStructuredText and AsciiDoc are the top choices. They 
both look capable:


http://hyperpolyglot.org/lightweight-markup

I can, as a next step, post this as an Issue on the Github project and 
it can be triaged and tracked.


For something like this, it might even make sense to create a new branch 
so that multiple people can work on it. In that case, splitting the 
documentation into multiple files would be helpful too. If approved,  an 
empty file for each section of the documentation could be created in 
order to have the skeleton of the project. Having the documentation 
split into multiple files may make maintaining the documentation easier 
in the future too (i.e. someone could change one section without 
conflicting with a person making a change in another section).


How have collaborative efforts like this been done in the past? How 
would multiple people be able to commit changes to this branch?


Other thoughts?


-- Original Message --
From: "Pavlos Parissis" 
To: "Nick Ramirez" 
Sent: 7/3/2019 10:44:11 AM
Subject: Re: The case for changing the documentation syntax


On Monday, 1 July 2019 5:01:33 PM CEST Nick Ramirez wrote:

 Hello all,




[...snip...]


 The solution I am proposing:

 Rather than using a home-grown, difficult-to-parse,
 inconsistently used grammar, we should use a standard. We should use
 reStructuredText: http://docutils.sourceforge.net/rst.html
 

 The reStructuredText syntax gives us the following benefits:

 * It is well documented
 * Tools exist to parse this and convert it to other formats (such as
 HTML; see the sketch after this list)
 * Tools exist that will "error check" the document to ensure that the
 correct syntax is used throughout configuration.txt (which would become
 configuration.rst)
 * Tools such as Jekyll can easily parse reStructuredText and build
 sophisticated, beautiful webpages that feature search functionality,
 table-of-contents, images, graphs, links, etc. We could really start to
 make the documentation shine!
 * We won't have to worry about updating special tools because
 reStructuredText syntax will allow us to reliably parse it forever
 * reStructuredText is still easily human-readable using a terminal,
 plain-text editor, etc.
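
As a quick, hypothetical illustration of the tooling point above (assuming only that docutils is installed; the file names are illustrative), converting the document to HTML takes just a few lines of Python:

```
# Hypothetical sketch: render configuration.rst to HTML with docutils.
# Assumes "pip install docutils"; file names are only examples.
from docutils.core import publish_file

publish_file(
    source_path="configuration.rst",
    destination_path="configuration.html",
    writer_name="html",
)
```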

 I and others are fully willing to make the conversion to
 reStructuredText, too. What do you all think?




+1 from me. Asciidoctor is something you should have a look at and consider as 
well.
I know that people don't like Markdown, but it is very simple to use, and that 
is sometimes more important than standards.

My two cents,
Pavlos

Re: Re[2]: The case for changing the documentation syntax

2019-07-02 Thread Hugues Alary
And for comparison's sake, here's how Asciidoc renders on GitHub:
https://github.com/asciidoctor/asciidoctor/blob/master/README.adoc

Other features of the asciidoc/asciidoctor ecosystem are:
- Asciidoc is also standardized
- https://antora.org/ allows you to build one documentation website from
multiple documentation repositories.
- asciidoctor is extensible, either by writing an extension in JavaScript (
https://asciidoctor-docs.netlify.com/asciidoctor.js/extend/extensions/register/)
or in Ruby (https://asciidoctor.org/docs/user-manual/#example-extensions)
and it supports custom backends

I have no skin in the game, though. ReStructuredText is great; I'm merely
presenting other options.

On Tue, Jul 2, 2019 at 9:05 AM Nick Ramirez  wrote:

> I found this page on Github. It uses reStructuredText and demonstrates how
> Github would render various elements out of the box. Of course, it can be
> made more visually appealing with other tools, but it's a free benefit that
> it renders on Github.
>
> https://gist.github.com/ionelmc/e876b73e2001acd2140f
>
>
> -- Original Message --
> From: "Lukas Tribus" 
> To: "Nick Ramirez" 
> Cc: "haproxy@formilux.org" ; "Cyril" <
> cyril.bo...@free.fr>
> Sent: 7/1/2019 6:49:50 PM
> Subject: Re: The case for changing the documentation syntax
>
> Hello Nick,
>
>
> On Mon, 1 Jul 2019 at 17:02, Nick Ramirez  wrote:
>
>
> Hello all,
>
> I'd like to propose something radical, but that will greatly help us in
> terms of documentation. (And documentation is very important when it comes
> to people choosing whether to use a piece of software, as I am sure you
> agree!)
>
> First, the problem: Our documentation at
> https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt is
> written using a sort of home-grown syntax that uses various conventions for
> indicating sections, keywords, etc.
>
> However, parsing this home-grown documentation is difficult. For example,
> I contribute to the HAProxy Syntax Support for Atom project (
> https://github.com/abulimov/atom-language-haproxy). This is a python
> program that must parse the HAProxy configuration.txt file and find the
> keywords by first finding specific section titles, then looking for lines
> that don't have spaces in front of them. That's how we find the keywords in
> each section. It must be updated when new versions of HAProxy are released
> because new sections are added and the section numbers may change, and some
> sections are not reliably using the home-grown syntax. In short, parsing
> configuration.txt is difficult, error-prone and requires regular
> maintenance because its syntax is:
>
> * Not a standard
> * Not used consistently throughout the document
> * Not easily parsed by existing tools (home-grown tools must be created
> and maintained)
>
> You may wonder, why do we need to parse configuration.txt? The reasons are:
>
> * A text file without any styling is difficult to read, so we want to add
> styling (e.g. convert it to HTML with CSS or offer a PDF download)
> * We want search functionality of the document and an auto-generated table
> of contents
> * We want to write haproxy.cfg files and have them displayed in
> syntax-highlighted color when using Github Gist or any modern text editor
> (Atom, VS Code, Sublime Text, etc.). For that, we must currently parse
> configuration.txt to learn the keywords (as in the atom-language-haproxy
> project mentioned). For example, we use Github Gist, with the
> atom-language-haproxy project, to display HAProxy configuration snippets in
> color on the haproxy.com/blog. It would be easier to maintain this if we
> could parse configuration.txt more easily.
>
>
>
> Actually, we have been doing 2 of the 3 things you mention here for 7 years;
> configuration.txt and the other docs are parsed automatically (in Python) and
> generate a very nice HTML site, searchable and indexed with a table of
> contents, etc:
>
> https://www.mail-archive.com/haproxy@formilux.org/msg07040.html
> https://github.com/cbonte/haproxy-dconv
> https://cbonte.github.io/haproxy-dconv/
>
>
> We use this extensively and are able to point people to specific
> sections or keywords of the documentation. When the documentation is
> inconsistent and breaks the tool, we (or more specifically Cyril)
> fix it. I don't see any 2.0-specific changes in haproxy-dconv, and
> I'm not sure if a structured text would fix all the issues you have
> with the atom project.
>
> I'm not saying we should maintain configuration.txt as it currently
> is, but to me the status quo does not feel as dire as you suggested.
>
>
> haproxy-dconv also mentions:
>
>
> The purpose of this project is to ultimately convert the HAProxy
> documentation into a more generic format (example : ReStructuredText) that
> would allow for an easier spreading of various output files (.pdf, .html,
> .epub, etc).
>
>
> So it seems like there is common ground. I'm CCing Cyril who has
> invested a lot of time for this already.
>
>
> I think I agree that we would 

Re[2]: The case for changing the documentation syntax

2019-07-02 Thread Nick Ramirez
I found this page on Github. It uses reStructuredText and demonstrates 
how Github would render various elements out of the box. Of course, it 
can be made more visually appealing with other tools, but it's a free 
benefit that it renders on Github.


https://gist.github.com/ionelmc/e876b73e2001acd2140f


-- Original Message --
From: "Lukas Tribus" 
To: "Nick Ramirez" 
Cc: "haproxy@formilux.org" ; "Cyril" 


Sent: 7/1/2019 6:49:50 PM
Subject: Re: The case for changing the documentation syntax


Hello Nick,


On Mon, 1 Jul 2019 at 17:02, Nick Ramirez  wrote:


 Hello all,

 I'd like to propose something radical, but that will greatly help us in terms 
of documentation. (And documentation is very important when it comes to people 
choosing whether to use a piece of software, as I am sure you agree!)

 First, the problem: Our documentation at 
https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt is written 
using a sort of home-grown syntax that uses various conventions for indicating 
sections, keywords, etc.

 However, parsing this home-grown documentation is difficult. For example, I 
contribute to the HAProxy Syntax Support for Atom project 
(https://github.com/abulimov/atom-language-haproxy). This is a python program 
that must parse the HAProxy configuration.txt file and find the keywords by 
first finding specific section titles, then looking for lines that don't have 
spaces in front of them. That's how we find the keywords in each section. It 
must be updated when new versions of HAProxy are released because new sections 
are added and the section numbers may change, and some sections are not 
reliably using the home-grown syntax. In short, parsing configuration.txt is 
difficult, error-prone and requires regular maintenance because its syntax is:

 * Not a standard
 * Not used consistently throughout the document
 * Not easily parsed by existing tools (home-grown tools must be created and 
maintained)
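
A rough, hypothetical sketch of the heuristic described above (not the actual atom-language-haproxy code; the section-title pattern is an assumption) could look like this in Python:

```
# Hypothetical sketch of the keyword-extraction heuristic described above.
# Not the real atom-language-haproxy parser; the patterns are assumptions.
import re

def extract_keywords(path="configuration.txt"):
    keywords = []
    in_keyword_section = False
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Numbered section titles, e.g. "4.2. Alphabetically sorted keywords reference"
            if re.match(r"^\d+(\.\d+)*\.\s", line):
                in_keyword_section = "keyword" in line.lower()
                continue
            # Inside a keyword section, keyword entries start in column 0.
            if in_keyword_section and line.strip() and not line[0].isspace():
                keywords.append(line.split()[0])
    return keywords
```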

 You may wonder, why do we need to parse configuration.txt? The reasons are:

 * A text file without any styling is difficult to read, so we want to add 
styling (e.g. convert it to HTML with CSS or offer a PDF download)
 * We want search functionality of the document and an auto-generated table of 
contents
 * We want to write haproxy.cfg files and have them displayed in 
syntax-highlighted color when using Github Gist or any modern text editor 
(Atom, VS Code, Sublime Text, etc.). For that, we must currently parse 
configuration.txt to learn the keywords (as in the atom-language-haproxy 
project mentioned). For example, we use Github Gist, with the 
atom-language-haproxy project, to display HAProxy configuration snippets in 
color on the haproxy.com/blog. It would be easier to maintain this if we could 
parse configuration.txt more easily.



Actually, we have been doing 2 of the 3 things you mention here for 7 years;
configuration.txt and the other docs are parsed automatically (in Python) and
generate a very nice HTML site, searchable and indexed with a table of
contents, etc:

https://www.mail-archive.com/haproxy@formilux.org/msg07040.html
https://github.com/cbonte/haproxy-dconv
https://cbonte.github.io/haproxy-dconv/


We use this extensively and are able to point people to specific
sections or keywords of the documentation. When the documentation is
inconsistent and breaks the tool, we (or more specifically Cyril)
fix it. I don't see any 2.0-specific changes in haproxy-dconv, and
I'm not sure if a structured text would fix all the issues you have
with the atom project.

I'm not saying we should maintain configuration.txt as it currently
is, but to me the status quo does not feel as dire as you suggested.


haproxy-dconv also mentions:


 The purpose of this project is to ultimately convert the HAProxy documentation 
into a more generic format (example : ReStructuredText) that would allow for an 
easier spreading of various output files (.pdf, .html, .epub, etc).


So it seems like there is common ground. I'm CCing Cyril who has
invested a lot of time for this already.


I think I agree that we would benefit from moving towards a
standardized, structured text.


Regarding Markdown vs reStructuredText vs asciidoc, I don't have a lot
of experience with any of those, but if we go down this road I feel
like we should pick a format that is here to stay and is standardized,
and for me, that is reStructuredText. Markdown is probably the worst
possible choice, and I know first-hand how the lack of standardization
negatively affects its interoperability (specifically, a site had a
JS-based preview that looked different from what the server-side code
produced after submission ... so I have a strong negative opinion
about Markdown).

Readthedocs supports reStructuredText (and supports but discourages
Markdown); asciidoc, however, is not supported. Not that we need to use
Readthedocs, but it's something to keep in mind.



cheers,
lukas

Re: Re[2]: The case for changing the documentation syntax

2019-07-01 Thread Hugues Alary
Adding my 2 cents here: I write documentation a lot and would like to
mention the Asciidoc format, and more specifically asciidoctor (
https://asciidoctor.org/). Asciidoc is a _very_ powerful syntax yet
extremely simple to use.

Here's a link to their cheat sheet to give you a quick idea of the syntax:
https://asciidoctor.org/docs/asciidoc-syntax-quick-reference/
And a more in depth manual:
https://asciidoctor.org/docs/user-manual/#introduction-to-asciidoctor

On Mon, Jul 1, 2019 at 1:51 PM Nick Ramirez  wrote:

> Yes, either reStructuredText or Markdown would be okay. They both have a
> very intuitive syntax, so newcomers would pick it up and become
> productive with it quickly. It is quite easy to learn either one.
>
>
>
> -- Original Message --
> From: "Aleksandar Lazic" 
> To: "Nick Ramirez" ; "haproxy@formilux.org"
> 
> Sent: 7/1/2019 12:05:15 PM
> Subject: Re: The case for changing the documentation syntax
>
> >Hi Nick.
> >
> >On 01.07.2019 at 17:01, Nick Ramirez wrote:
> >>  Hello all,
> >>
> >>  I'd like to propose something radical, but that will greatly help us
> in terms of
> >>  documentation. (And documentation is very important when it comes to
> people
> >>  choosing whether to use a piece of software, as I am sure you agree!)
> >
> >Full Ack. This discussion comes up from time to time and I agree with you
> that a
> >more mainstream format would be nice.
> >
> >>  First, the problem: Our documentation
> >>  at
> https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt is
> >>  written using a sort of home-grown syntax that uses various
> conventions for
> >>  indicating sections, keywords, etc.
> >>
> >>  However, parsing this home-grown documentation is difficult. For
> example, I
> >>  contribute to the HAProxy Syntax Support for Atom project
> >>  (https://github.com/abulimov/atom-language-haproxy). This is a python
> program
> >>  that must parse the HAProxy configuration.txt file and find the
> keywords by
> >>  first finding specific section titles, then looking for lines that
> don't have
> >>  spaces in front of them. That's how we find the keywords in each
> section. It
> >>  must be updated when new versions of HAProxy are released because new
> sections
> >>  are added and the section numbers may change, and some sections are
> not reliably
> >>  using the home-grown syntax. In short, parsing configuration.txt is
> difficult,
> >>  error-prone and requires regular maintenance because its syntax is:
> >>
> >>  * Not a standard
> >>  * Not used consistently throughout the document
> >>  * Not easily parsed by existing tools (home-grown tools must be
> created and
> >>  maintained)
> >>
> >>  You may wonder, why do we need to parse configuration.txt? The reasons
> are:
> >>
> >>  * A text file without any styling is difficult to read, so we want to
> add
> >>  styling (e.g. convert it to HTML with CSS or offer a PDF download)
> >>  * We want search functionality of the document and an auto-generated
> table of
> >>  contents
> >>  * We want to write haproxy.cfg files and have them displayed in
> >>  syntax-highlighted color when using Github Gist or any modern text
> editor (Atom,
> >>  VS Code, Sublime Text, etc.). For that, we must currently parse
> >>  configuration.txt to learn the keywords (as in the
> atom-language-haproxy project
> >>  mentioned). For example, we use Github Gist, with the
> atom-language-haproxy
> >>  project, to display HAProxy configuration snippets in color on the
> >>  haproxy.com/blog. It would be easier to maintain this if we could
> parse
> >>  configuration.text more easily.
> >>
> >>  The solution I am proposing:
> >>
> >>  Rather than using a home-grown, difficult-to-parse, inconsistently-used
> >>  grammar, we should use a standard. We should use
> >>  reStructuredText: http://docutils.sourceforge.net/rst.html
> >>
> >>  The reStructuredText syntax gives us the following benefits:
> >>
> >>  * It is well documented
> >>  * Tools exist to parse this and convert it to other formats (such as
> HTML)
> >>  * Tools exist that will "error check" the document to ensure that the
> correct
> >>  syntax is used throughout configuration.txt (which would become
> configuration.rst)
> >>  * Tools such as Jekyll can easily parse reStructuredText and build
> >>  sophisticated, beautiful webpages that feature search functionality,
> >>  table-of-contents, images, graphs, links, etc. We could really start
> to make the
> >>  documentation shine!
> >>  * We won't have to worry about updating special tools because
> reStructuredText
> >>  syntax will allow us to reliably parse it forever
> >>  * reStructuredText is still easily human-readable using a terminal,
> plain-text
> >>  editor, etc.
> >>
> >>  I and others are fully willing to make the conversion to
> reStructuredText, too.
> >>  What do you all think?
> >
> >I would prefer Markdown with pandoc as I don't like the rst format, but
> I'm fine
> >with what the community and contributes 

Re[2]: The case for changing the documentation syntax

2019-07-01 Thread Nick Ramirez
Yes, either reStructuredText or Markdown would be okay. They both have a 
very intuitive syntax, so newcomers would pick it up and become 
productive with it quickly. It is quite easy to learn either one.




-- Original Message --
From: "Aleksandar Lazic" 
To: "Nick Ramirez" ; "haproxy@formilux.org" 


Sent: 7/1/2019 12:05:15 PM
Subject: Re: The case for changing the documentation syntax


Hi Nick.

On 01.07.2019 at 17:01, Nick Ramirez wrote:

 Hello all,

 I'd like to propose something radical, but that will greatly help us in terms 
of
 documentation. (And documentation is very important when it comes to people
 choosing whether to use a piece of software, as I am sure you agree!)


Full Ack. This discussion comes up from time to time and I agree with you that a
more mainstream format would be nice.


 First, the problem: Our documentation
 at https://github.com/haproxy/haproxy/blob/master/doc/configuration.txt is
 written using a sort of home-grown syntax that uses various conventions for
 indicating sections, keywords, etc.

 However, parsing this home-grown documentation is difficult. For example, I
 contribute to the HAProxy Syntax Support for Atom project
 (https://github.com/abulimov/atom-language-haproxy). This is a python program
 that must parse the HAProxy configuration.txt file and find the keywords by
 first finding specific section titles, then looking for lines that don't have
 spaces in front of them. That's how we find the keywords in each section. It
 must be updated when new versions of HAProxy are released because new sections
 are added and the section numbers may change, and some sections are not 
reliably
 using the home-grown syntax. In short, parsing configuration.txt is difficult,
 error-prone and requires regular maintenance because its syntax is:

 * Not a standard
 * Not used consistently throughout the document
 * Not easily parsed by existing tools (home-grown tools must be created and
 maintained)

 You may wonder, why do we need to parse configuration.txt? The reasons are:

 * A text file without any styling is difficult to read, so we want to add
 styling (e.g. convert it to HTML with CSS or offer a PDF download)
 * We want search functionality of the document and an auto-generated table of
 contents
 * We want to write haproxy.cfg files and have them displayed in
 syntax-highlighted color when using Github Gist or any modern text editor 
(Atom,
 VS Code, Sublime Text, etc.). For that, we must currently parse
 configuration.txt to learn the keywords (as in the atom-language-haproxy 
project
 mentioned). For example, we use Github Gist, with the atom-language-haproxy
 project, to display HAProxy configuration snippets in color on the
 haproxy.com/blog. It would be easier to maintain this if we could parse
 configuration.text more easily.

 The solution I am proposing:

 Rather than using a home-grown, difficult-to-parse, inconsistently-used
 grammar, we should use a standard. We should use
 reStructuredText: http://docutils.sourceforge.net/rst.html

 The reStructuredText syntax gives us the following benefits:

 * It is well documented
 * Tools exist to parse this and convert it to other formats (such as HTML)
 * Tools exist that will "error check" the document to ensure that the correct
 syntax is used throughout configuration.txt (which would become 
configuration.rst)
 * Tools such as Jekyll can easily parse reStructuredText and build
 sophisticated, beautiful webpages that feature search functionality,
 table-of-contents, images, graphs, links, etc. We could really start to make 
the
 documentation shine!
 * We won't have to worry about updating special tools because reStructuredText
 syntax will allow us to reliably parse it forever
 * reStructuredText is still easily human-readable using a terminal, plain-text
 editor, etc.

 I and others are fully willing to make the conversion to reStructuredText, too.
 What do you all think?


I would prefer Markdown with pandoc as I don't like the rst format, but I'm fine
with what the community and contributors decide.


 Thanks,
 Nick Ramirez


Regards
Aleks





Re[2]: agent-check requires newline in response?

2018-12-15 Thread Nick Ramirez

Yep, your suggestion reads better to me.

-- Original Message --
From: "Willy Tarreau" 
To: "Nick Ramirez" 
Cc: haproxy@formilux.org
Sent: 12/15/2018 10:30:58 AM
Subject: Re: agent-check requires newline in response?


Hi Nick,

On Fri, Dec 14, 2018 at 05:43:49PM +, Nick Ramirez wrote:

 In the documentation for agent-check
 (https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#agent-check)
 it says that the string returned by the agent may be optionally terminated
 by '\r' or '\n'. However, in my tests, it was mandatory to end the response
 with this. Should the word "optionally" be removed from the docs?


At least one of them is required. Since it's TCP we need to find the end
of the message and to know it was not truncated. I agree that the wording
is confusing, it says :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, optionally terminated by '\r' and/or '\n'"

Which was meant to say that one of them was optional. Maybe we should
say this instead :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, terminated by the first '\r' or '\n' met"

(this basically is what the comments in the code say BTW). What do you
think ?
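
For illustration, here is a minimal, hypothetical agent sketch in Python that satisfies this requirement: the status string it returns ends with '\n', so HAProxy can detect the end of the message.

```
# Minimal, hypothetical agent-check responder (not from the HAProxy sources).
# HAProxy connects over TCP; the reply must end with '\r' or '\n'.
import socketserver

class AgentHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Report the server as up at 100% weight; note the trailing newline.
        self.request.sendall(b"up 100%\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 12345), AgentHandler) as server:
        server.serve_forever()
```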

Thanks,
Willy





Re[2]: HTTP/2 to backend server fails health check when 'option httpchk' set

2018-12-15 Thread Nick Ramirez
Thanks! That points me in the right direction. I found that to enable 
Layer 7 health checks in this case, I would open another port on the web 
server that does not advertise HTTP/2 support (ALPN HTTP/1.1) or does 
not use TLS (which also turns off HTTP/2 in the case of the Caddy web 
server), and then use the "port" parameter on the server line to point 
to that port.


backend webservers
  balance roundrobin
  option httpchk HEAD /
  server server1 web:443 ssl verify none alpn h2,http/1.1 check port 80


Layer 7 health checks back up and running. :-)

-- Original Message --
From: "Willy Tarreau" 
To: "Nick Ramirez" 
Cc: haproxy@formilux.org
Sent: 12/15/2018 10:25:42 AM
Subject: Re: HTTP/2 to backend server fails health check when 'option 
httpchk' set



Hi Nick,

On Fri, Dec 14, 2018 at 10:43:04PM +, Nick Ramirez wrote:

 This may be something very simple that I am missing. I am using the latest
 HAProxy Docker image, which is using HAProxy 1.9-dev10 2018/12/08. It is
 using HTTP/2 to the backend web server (Caddy).

 It fails its health check if I uncomment the "option httpchk" line:

 backend webservers
   balance roundrobin
   #option httpchk
   server server1 web:443 check ssl verify none alpn h2


 With that line commented out, it works.

 The project is on Github:
 https://github.com/NickMRamirez/experiment-haproxy-http2

 Am I doing something wrong? It also works if I remove "option http-use-htx"
 and "alpn h2" and uncomment "option httpchk".


You're not really doing anything wrong, it's just the current limitation
of health checks that we've wanted to redesign for years and that deserves
a year's worth of work. Currently health checks are only made of a TCP string
sent over the socket and checked in return. Since 1.6 or so, we introduced
the ability to send this string over SSL (when "check-ssl" is set) but that's
basically the limit.

In fact, health checks are completely separate from the traffic. You can
see them as being part of the control plane while the traffic is the data
plane. You're not even forced to send them to the same IP, ports, nor
protocol as your traffic. They only pre-set the same target IP and port
for convenience, but that's all.

I've thought we could at least implement an H2 preface+settings check but
this would provide a very low value for quite some hassle to make it work
for the user, so I think it would only steer the efforts away from a real
redesign of better HTTP checks.

However we should at the very least document this limitation more clearly
for 1.9, as chances are that a number of people will want to try this :-/

Thanks,
Willy

Re[2]: [PATCH] MINOR: introduce proxy-v2-options for send-proxy-v2

2018-02-07 Thread Aleksandar Lazic

Hi Manu.

-- Original Message --
From: "Emmanuel Hocdet" 
To: "Aleksandar Lazic" 
Cc: "haproxy" 
Sent: 05.02.2018 14:58:20
Subject: Re: [PATCH] MINOR: introduce proxy-v2-options for send-proxy-v2



Hi Aleks,

On 2 February 2018 at 20:46, Aleksandar Lazic wrote:


Hi Manu.

On 02-02-2018 10:49, Emmanuel Hocdet wrote:

Hi Aleks
On 1 February 2018 at 23:34, Aleksandar Lazic wrote:

Hi.
-- Original Message --
From: "Emmanuel Hocdet" 
To: "haproxy" 
Sent: 01.02.2018 17:54:46
Subject: [PATCH] MINOR: introduce proxy-v2-options for send-proxy-v2

Hi,
This patch introduces proxy-v2-options for send-proxy-v2.
The goal is to add more options from doc/proxy-protocol.txt, especially
all TLS information related to security.
Can this function then replace the current
`send-proxy-v2-ssl-cn` && `send-proxy-v2-ssl`?

yes and no,  you must add send-proxy-v2 to activate proxy-v2
Let's say when the option is 'ssl-cn' then add all three flags as in 
the current `srv_parse_send_proxy_cn` function?

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/ssl_sock.c;hb=497959290789002b814b9021a737a3c5f14e7407#l7788
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/ssl_sock.c;hb=497959290789002b814b9021a737a3c5f14e7407#l7796
With this suggested solution we offer backward compatibility while
the new function is in use.

you must use "send-proxy-v2 proxy-v2-options ssl" for the current
send-proxy-v2-ssl
you must use "send-proxy-v2 proxy-v2-options cert-cn" for the current
send-proxy-v2-ssl-cn
next options should be authority, cert-key, cert-sig, ssl-cipher
Maybe in a next step there could be a 'tlv' option which can
decode custom TLVs?

http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/connection.c;hb=497959290789002b814b9021a737a3c5f14e7407#l606
Just some brainstorming ;-)
What do you mean?

Haproxy is naturally a producer for ‘tlv’ options (for sure when
related to ssl). I don’t know how ‘tlv’ options (other than netns)
could be really useful to consume; passthru could be more useful.


How about this example.

https://www.mail-archive.com/haproxy@formilux.org/msg28647.html

How to parse custom PROXY protocol v2 header for custom routing in 
HAProxy configuration?


This describes a case for AWS's own header in PP2,
PP2_SUBTYPE_AWS_VPCE_ID.
I know it's not easy, but it may be worth discussing how to use the free
fields in PP2 for some ACLs.




Consuming and producing pp-v2 TLVs are two different things.
For TLV consumption, I work with Varnish and the problem is the same: where
to store them and how to use them.
I do not know of a generic solution, especially in the case of custom
TLVs.

Thanks for explanation.
I also have no idea for now.


++
Manu

Best regards
aleks




Re[2]: [PATCH] Minor : Add a sampler to extract the microsecond information of the hit date

2018-01-17 Thread Aleksandar Lazic

Thanks for your answer.

Interesting use case.

Regards
aleks
-- Original Message --
From: "Etienne Carrière" 
To: "Aleksandar Lazic" 
Cc: haproxy@formilux.org
Sent: 17.01.2018 23:28:40
Subject: Re: [PATCH] Minor : Add a sampler to extract the microsecond 
information of the hit date



Hi,

2018-01-17 20:47 GMT+01:00 Aleksandar Lazic :
(...)

Sounds interesting.
What use case have you in mind for this fetcher?


The use case is the following: we are using SPOE
(http://www.haproxy.org/download/1.8/doc/SPOE.txt) + the SPOP protocol in a
SaaS setup: HAProxy is in the customer's office and our API server (speaking
the SPOP protocol) is in our datacenter. As latency is critical, we
want to have a heuristic to measure the network latency, so we want a
precise timestamp in the SPOP message.


I vote for the attached patch, because it's small and does not look too
complicated.


(...)




Regards,

Etienne Carrière






Re[2]: How to parse custom PROXY protocol v2 header for custom routing in HAProxy configuration?

2018-01-15 Thread Aleksandar Lazic

Hi.

Follow up question to proxy protocol

Is it possible to handle the Type-Length-Value (TLV) fields from pp2
in the haproxy config or in Lua?


I refer to
2.2.7. Reserved type ranges
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

from the question on StackOverflow:
https://stackoverflow.com/questions/48195311/how-to-parse-custom-proxy-protocol-v2-header-for-custom-routing-in-haproxy-confi
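
Just to illustrate the wire format from section 2.2.7: each TLV in the PROXY protocol v2 header is a 1-byte type, a 2-byte big-endian length, and the value. A hypothetical sketch (in Python here only for readability; the open question remains how to do the same from the haproxy config or Lua) could walk the TLV area like this:

```
# Hypothetical sketch: walk the TLV area that follows the address block of a
# PROXY protocol v2 header (see proxy-protocol.txt for the exact layout).
import struct

def parse_pp2_tlvs(buf: bytes) -> dict:
    tlvs = {}
    offset = 0
    while offset + 3 <= len(buf):
        tlv_type, tlv_len = struct.unpack_from("!BH", buf, offset)
        offset += 3
        tlvs[tlv_type] = buf[offset:offset + tlv_len]
        offset += tlv_len
    return tlvs
```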


Regards
aleks

-- Original Message --
From: "Aleksandar Lazic" 
To: "Adam Sherwood" ; haproxy@formilux.org
Sent: 11.01.2018 12:24:46
Subject: Re: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?



Hi.

-- Original Message --
From: "Adam Sherwood" 
To: haproxy@formilux.org
Sent: 10.01.2018 23:40:25
Subject: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?


I have written this up as a StackOverflow question here: 
https://stackoverflow.com/q/48195311/2081835.


When adding PROXY v2 with AWS VPC PrivateLink connected to a Network 
Load Balancer, the endpoint ID of the connecting account is added as a 
TLV. I need to use this for routing frontend to backend, but I cannot 
sort out how.


Is there a way to call a custom matcher that could do the parsing 
logic, or is this already built-in and I'm just not finding the 
documentation?


Any ideas on the topic would be super helpful. Thank you.
Looks like AWS uses the "2.2.7. Reserved type ranges" as described in 
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt, therefore 
you will need to parse this part on your own.


This could be possible in Lua, maybe. I'm not an expert in Lua yet ;-)

There are Java examples in the doc link ( 
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol 
) which you have added in the StackOverflow question.


Regards
Aleks







Re[2]: [BUG] 100% cpu on each threads

2018-01-12 Thread Aleksandar Lazic


-- Original Message --
From: "Willy Tarreau" 
To: "Emmanuel Hocdet" 
Cc: "haproxy" 
Sent: 12.01.2018 13:04:02
Subject: Re: [BUG] 100% cpu on each threads


On Fri, Jan 12, 2018 at 12:01:15PM +0100, Emmanuel Hocdet wrote:

When syndrome appear, i see such line on syslog:
(for one or all servers)

Server tls/L7_1 is DOWN, reason: Layer4 connection problem, info: "Bad 
file
descriptor", check duration: 2018ms. 0 active and 1 backup servers 
left.
Running on backup. 0 sessions active, 0 requeued, 0 remaining in 
queue.


Wow, that's scary! This means we have a problem with server-side 
connections

and I really have no idea what it's about at the moment :-(
@Emmanuel: Wild guess, is this a Meltdown/Spectre-patched server, and 
have you seen these errors since the patch?



Willy


Aleks




Re: Re[2]: Makefile:813: recipe for target 'haproxy' failed

2018-01-07 Thread Milenko Markovic
Hi Aleks

I have solved the problem, OpenSSL was not recognized

TARGET=linux24 USE_OPENSSL=1.1.0g SSL_INC=$STATICLIBSSL/include
SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl

All the best

Milenko

On 7 January 2018 at 20:25, Aleksandar Lazic  wrote:

> Hello Milenko.
>
> Please keep the list in the communication, thank you very much.
>
> -- Originalnachricht --
> Von: "Milenko Markovic" 
> An: "Aleksandar Lazic" 
> Gesendet: 07.01.2018 09:02:56
> Betreff: Re: Makefile:813: recipe for target 'haproxy' failed
>
> locate libpthread.so
>
> /lib/x86_64-linux-gnu/libpthread.so.0
> /lib32/libpthread.so.0
> /usr/lib/x86_64-linux-gnu/libpthread.so
>
> On 7 January 2018 at 08:46, Milenko Markovic 
> wrote:
>
>> Zdravo Aleks
>>
>> OS-Ubuntu 16.04
>> ldd --version
>> ldd (Ubuntu GLIBC 2.23-0ubuntu9) 2.23
>> gcc --version
>> gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
>> OpenSSL-libssl Version: 1.1.0g
>>
>
> I assume that  you should add -lpthread to ADDLIB
>
> You should have seen such a lib at compile output of openssl, maybe you
> will see it also in the output of
>
> /tmp/staticlibssl/bin/openssl version -a
>
> Greetings
> Aleks
>
> Pozdrav,
>> M.
>>
>
>> On 7 January 2018 at 08:27, Aleksandar Lazic  wrote:
>>
>>> Hi.
>>>
>>> -- Originalnachricht --
>>> Von: "Milenko Markovic" 
>>> An: haproxy@formilux.org
>>> Gesendet: 07.01.2018 07:53:44
>>> Betreff: Makefile:813: recipe for target 'haproxy' failed
>>>
>>> Dear Sir or Madam

 When I run
 make TARGET=linux24 USE_OPENSSL=1 SSL_INC=$STATICLIBSSL/include
 SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl
 this appears on screen
 Makefile:813: recipe for target 'haproxy' failed

 I have attached the whole output as txt file. It would be nice if
 someone could help me.

>>> Looks like you have built the 'staticlibssl' with thread support
>>> but gcc is missing the '-lpthread' or a similar library.
>>>
>>>
>>> ```
>>> gcc -g -o haproxy src/haproxy.o src/base64.o src/protocol.o
>>> src/uri_auth.o src/standard.o src/buffer.o src/log.o src/task.o src/chunk.o
>>> src/channel.o src/listener.o src/lru.o src/xxhash.o src/time.o src/fd.o
>>> src/pipe.o src/regex.o src/cfgparse.o src/server.o src/checks.o src/queue.o
>>> src/frontend.o src/proxy.o src/peers.o src/arg.o src/stick_table.o
>>> src/proto_uxst.o src/connection.o src/proto_http.o src/raw_sock.o
>>> src/backend.o src/tcp_rules.o src/lb_chash.o src/lb_fwlc.o src/lb_fwrr.o
>>> src/lb_map.o src/lb_fas.o src/stream_interface.o src/stats.o
>>> src/proto_tcp.o src/applet.o src/session.o src/stream.o src/hdr_idx.o
>>> src/ev_select.o src/signal.o src/acl.o src/sample.o src/memory.o
>>> src/freq_ctr.o src/auth.o src/proto_udp.o src/compression.o src/payload.o
>>> src/hash.o src/pattern.o src/map.o src/namespace.o src/mailers.o src/dns.o
>>> src/vars.o src/filters.o src/flt_http_comp.o src/flt_trace.o src/flt_spoe.o
>>> src/cli.o src/ev_poll.o src/ssl_sock.o src/shctx.o ebtree/ebtree.o
>>> ebtree/eb32tree.o ebtree/eb64tree.o ebtree/ebmbtree.o ebtree/ebsttree.o
>>> ebtree/ebimtree.o ebtree/ebistree.o -lcrypt -ldl -L/tmp/staticlibssl/lib
>>> -lssl -lcrypto -ldl -ldl
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_lock_new':
>>> threads_pthread.c:(.text+0x25): undefined reference to
>>> `pthread_rwlock_init'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_read_lock':
>>> threads_pthread.c:(.text+0x65): undefined reference to
>>> `pthread_rwlock_rdlock'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_write_lock':
>>> threads_pthread.c:(.text+0x85): undefined reference to
>>> `pthread_rwlock_wrlock'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_unlock':
>>> threads_pthread.c:(.text+0xa5): undefined reference to
>>> `pthread_rwlock_unlock'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_lock_free':
>>> threads_pthread.c:(.text+0xca): undefined reference to
>>> `pthread_rwlock_destroy'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_run_once':
>>> threads_pthread.c:(.text+0xf5): undefined reference to `pthread_once'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_init_local':
>>> threads_pthread.c:(.text+0x115): undefined reference to
>>> `pthread_key_create'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_set_local':
>>> threads_pthread.c:(.text+0x147): undefined reference to
>>> `pthread_setspecific'
>>> /tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function
>>> `CRYPTO_THREAD_cleanup_local':
>>> threads_pthread.c:(.text+0x167): undefined reference to
>>> `pthread_key_delete'
>>> 

Re[2]: Makefile:813: recipe for target 'haproxy' failed

2018-01-07 Thread Aleksandar Lazic

Hello Milenko.

Please keep the list in the communication, thank you very much.

-- Original Message --
From: "Milenko Markovic" 
To: "Aleksandar Lazic" 
Sent: 07.01.2018 09:02:56
Subject: Re: Makefile:813: recipe for target 'haproxy' failed


locate libpthread.so

/lib/x86_64-linux-gnu/libpthread.so.0
/lib32/libpthread.so.0
/usr/lib/x86_64-linux-gnu/libpthread.so

On 7 January 2018 at 08:46, Milenko Markovic 
 wrote:

Hello Aleks

OS-Ubuntu 16.04
ldd --version
ldd (Ubuntu GLIBC 2.23-0ubuntu9) 2.23
gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
OpenSSL-libssl Version: 1.1.0g


I assume that  you should add -lpthread to ADDLIB

You should have seen such a lib at compile output of openssl, maybe you 
will see it also in the output of


/tmp/staticlibssl/bin/openssl version -a

Greetings
Aleks

Regards,
M.

On 7 January 2018 at 08:27, Aleksandar Lazic  
wrote:

Hi.

-- Original Message --
From: "Milenko Markovic" 
To: haproxy@formilux.org
Sent: 07.01.2018 07:53:44
Subject: Makefile:813: recipe for target 'haproxy' failed


Dear Sir or Madam

When I run
make TARGET=linux24 USE_OPENSSL=1 SSL_INC=$STATICLIBSSL/include 
SSL_LIB=$STATICLIBSSL/lib ADDLIB=-ldl

this appears on screen
Makefile:813: recipe for target 'haproxy' failed

I have attached the whole output as txt file. It would be nice if 
someone could help me.
Looks like you have built the 'staticlibssl' with thread support 
but gcc is missing the '-lpthread' or a similar library.



```
gcc -g -o haproxy src/haproxy.o src/base64.o src/protocol.o 
src/uri_auth.o src/standard.o src/buffer.o src/log.o src/task.o 
src/chunk.o src/channel.o src/listener.o src/lru.o src/xxhash.o 
src/time.o src/fd.o src/pipe.o src/regex.o src/cfgparse.o 
src/server.o src/checks.o src/queue.o src/frontend.o src/proxy.o 
src/peers.o src/arg.o src/stick_table.o src/proto_uxst.o 
src/connection.o src/proto_http.o src/raw_sock.o src/backend.o 
src/tcp_rules.o src/lb_chash.o src/lb_fwlc.o src/lb_fwrr.o 
src/lb_map.o src/lb_fas.o src/stream_interface.o src/stats.o 
src/proto_tcp.o src/applet.o src/session.o src/stream.o src/hdr_idx.o 
src/ev_select.o src/signal.o src/acl.o src/sample.o src/memory.o 
src/freq_ctr.o src/auth.o src/proto_udp.o src/compression.o 
src/payload.o src/hash.o src/pattern.o src/map.o src/namespace.o 
src/mailers.o src/dns.o src/vars.o src/filters.o src/flt_http_comp.o 
src/flt_trace.o src/flt_spoe.o src/cli.o src/ev_poll.o src/ssl_sock.o 
src/shctx.o ebtree/ebtree.o ebtree/eb32tree.o ebtree/eb64tree.o 
ebtree/ebmbtree.o ebtree/ebsttree.o ebtree/ebimtree.o 
ebtree/ebistree.o -lcrypt -ldl -L/tmp/staticlibssl/lib -lssl -lcrypto 
-ldl -ldl
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_lock_new':
threads_pthread.c:(.text+0x25): undefined reference to 
`pthread_rwlock_init'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_read_lock':
threads_pthread.c:(.text+0x65): undefined reference to 
`pthread_rwlock_rdlock'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_write_lock':
threads_pthread.c:(.text+0x85): undefined reference to 
`pthread_rwlock_wrlock'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_unlock':
threads_pthread.c:(.text+0xa5): undefined reference to 
`pthread_rwlock_unlock'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_lock_free':
threads_pthread.c:(.text+0xca): undefined reference to 
`pthread_rwlock_destroy'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_run_once':

threads_pthread.c:(.text+0xf5): undefined reference to `pthread_once'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_init_local':
threads_pthread.c:(.text+0x115): undefined reference to 
`pthread_key_create'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_set_local':
threads_pthread.c:(.text+0x147): undefined reference to 
`pthread_setspecific'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_cleanup_local':
threads_pthread.c:(.text+0x167): undefined reference to 
`pthread_key_delete'
/tmp/staticlibssl/lib/libcrypto.a(threads_pthread.o): In function 
`CRYPTO_THREAD_get_local':
threads_pthread.c:(.text+0x133): undefined reference to 
`pthread_getspecific'

collect2: error: ld returned 1 exit status
Makefile:813: recipe for target 'haproxy' failed
```

As usual, can you please tell us more about your system?

Which OS?
Which glibc?
Which dev libs?
Which gcc?
Which libssl?


All the best

Milenko


Regards
Aleks

Re[2]: haproxy without balancing

2018-01-06 Thread Aleksandar Lazic

Hi Angelo.

-- Original Message --
From: "Angelo Hongens" 
To: "Aleksandar Lazic" ; haproxy@formilux.org
Sent: 06.01.2018 18:20:47
Subject: Re: haproxy without balancing


Hey Aleksandar,

On 05-01-2018 22:05, Aleksandar Lazic wrote:
We run a lot of balancers with varnish+hitch+haproxy+corosync for 
high-available loadbalancing. Perhaps high-availability is not a 
requirement, but it's also nice to be able to do maintenance during 
the day and have your standby node take over.
Just out of curiosity, why hitch and not only haproxy for SSL 
termination?


I use varnish as a single point of entry for requests and for caching. 
I guess because it's a really good product, and we've been using it for 
a long time. It has some custom business logic built in our vcl as 
well, and allows for a lot of http magic. I got training on varnish 
tuning and monitoring, and all of our scripts revolve around varnish 
and its logs. And they have very cool real-time analysis tools like 
varnishlog, varnishhist, varnishstat, etc.


Varnish passes all requests to a local haproxy instance, which passes 
requests to the right backends based on hostname. So we use haproxy for 
balancing to backends.


When the time came we needed ssl termination, I wanted a simple 
solution that does that one thing well, and I still wanted varnish as 
entry point. We played around with different products (squid, nginx), 
but then the varnish team forked stud and called it hitch. And the nice 
thing is almost all varnish users use hitch for ssl termination, and 
the varnish team is willing to offer commercial support for both.


I've been thinking about different setups as well, such as running one 
haproxy instance for ssl termination, passing requests to varnish and 
then pass it to another instance of haproxy that sends requests to the 
backends, but I think my current setup serves us best and we use the 
best tool for the jobs at hand. I think hitch is a great ssl 
terminator, varnish is a great cache/spoonfeeder, and haproxy is the 
best balancer.


--
met vriendelijke groet,
Angelo Höngens

Thank you very much for your detailed answer.
I fully agree with you, especially as you have a working and supported 
setup.


It would be interesting to see whether hitch can be replaced with haproxy 
without any issues.


I plan to use haproxy in front of varnish and I would be very 
appreciative for any hints, maybe off-list so that we don't upset the 
haproxy list members.


Best regards
Aleks




Re[2]: haproxy-1.8 in Fedora

2018-01-05 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Ryan O'Hara" 
To: "Aleksandar Lazic" 
Cc: haproxy@formilux.org
Sent: 05.01.2018 23:35:10
Subject: Re: haproxy-1.8 in Fedora




On Fri, Jan 5, 2018 at 3:12 PM, Aleksandar Lazic  
wrote:

Hi Ryan.

-- Original Message --
From: "Ryan O'Hara" 
To: haproxy@formilux.org
Sent: 05.01.2018 17:19:15
Subject: haproxy-1.8 in Fedora

Just wanted to inform Fedora users that haproxy-1.8.3 is now in the 
master branch and built for Rawhide. I will not be updating haproxy 
to 1.8 in current stable releases of Fedora since I received some 
complaints about doing major updates (e.g. 1.6 to 1.7) in previous 
stable releases. That said, the source rpm will build on Fedora 27. 
If there is enough interest, I can build haproxy-1.8 in copr and 
provide a repository for current stable Fedora releases.
I don't know what 'copr' is, but how about adding haproxy 1.8 to 
the software collection, similar to nginx 1.8 and apache httpd 2.4?


The customer then is able to use haproxy 1.8 with the software 
collection subscription.


​Which software collection are you referring to? Fedora? CentOS? RHEL? 
Either way, it is something that we have discussed and are planning to 
do for the next release of RHSCL, but we've not had any requests for 
other collections.

Uff so much 8-O?

I just know and use the RHSCL on customer setups. This is for the RHEL 
subscriptions, afaik.


What's the naming for the others?

You can learn more about copr here [1] and here [2]. Basically I can 
take my package and build for specific releases, create a repo for the 
built package(s), etc. Useful for builds that aren't included in a 
certain release.


Ryan

[1] https://copr.fedorainfracloud.org/
[2] https://developer.fedoraproject.org/deployment/copr/about.html

Best Regards
Aleks

Re[2]: haproxy without balancing

2018-01-05 Thread Aleksandar Lazic

Hi Angelo.

-- Original Message --
From: "Angelo Hongens" 
To: haproxy@formilux.org
Sent: 05.01.2018 11:49:55
Subject: Re: haproxy without balancing


On 05-01-2018 11:28, Johan Hendriks wrote:

Secondly, we could use a single IP and use ACLs to route the traffic to
the right backend server.
The problem with the second option is that we have around 2000 different
subdomains and this number is still growing. So my haproxy config will
then consist of over 4000 lines of ACL rules,
and I do not know if haproxy can deal with that or if it will slow down
requests too much.

Maybe there are other options I did not think about?
For me the second config is the best option because of the single IP,
but I do not know if haproxy can handle 2000 ACL rules.


I would choose the second option. I don't think the 2000 acls is a 
problem. I've been running with more than that without any problems.


A single point of entry is easiest.

We run a lot of balancers with varnish+hitch+haproxy+corosync for 
high-available loadbalancing. Perhaps high-availability is not a 
requirement, but it's also nice to be able to do maintenance during the 
day and have your standby node take over.
Just out of curiosity, why hitch and not only haproxy for SSL 
termination?



--

met vriendelijke groet,
Angelo Höngens


Regards
Aleks




Re[2]: Poll: haproxy 1.4 support ?

2018-01-03 Thread Aleksandar Lazic



-- Original Message --
From: "Marco Corte" 
To: haproxy@formilux.org
Sent: 03.01.2018 13:20:31
Subject: Re: Poll: haproxy 1.4 support ?


Hello.

My vote to drop support for version 1.4

+1



.marcoc


regards
Aleks




Re: Re[2]: CI/CD HAProxy

2017-12-15 Thread Илья Шипицин
2017-12-15 13:07 GMT+05:00 Aleksandar Lazic :

> Hi
>
> -- Originalnachricht --
> Von: "Илья Шипицин" 
> An: "Aleksandar Lazic" 
> Cc: "Olivier Doucet" ; "HAProxy" <
> haproxy@formilux.org>
> Gesendet: 14.12.2017 14:57:29
> Betreff: Re: CI/CD HAProxy
>
>
>>
>> 2017-09-16 20:01 GMT+05:00 Aleksandar Lazic :
>>
>>> Hi Olivie.
>>>
>>> Olivier Doucet wrote on 15.09.2017:
>>>
>>> > Hi,
>>> >
>>> > I wanted to open a new thread, as "cppcheck finding" was hijacked with
>>> this CICD / testing ;)
>>>
>>> +1
>>>
>>> > I think the best is the enemy of the good : why not start with a few
>>> > easy tests ? For example just a mix of tiny / big config files to test
>>> the parser.
>>> >
>>> > I understand the difficult part of the test is to setup complicated
>>> > infrastructures with many softwares to test edge cases, but that can
>>> be done later.
>>> >
>>> > Willy, there is no need to setup and maintain a buildfarm (no one has
>>> > time for this) : as this is an open project, we can use platform like
>>> > Travis-CI : all tests are described in a yaml file incorporated in the
>>> > project. It works out of the box for any project hosted at github. We
>>> can use a mirror for this.
>>>
>>> I like the idea.
>>>
>>> Does anyone know who own https://github.com/haproxy/haproxy <
>>> https://github.com/haproxy/haproxy> ?
>>>
>>> I can start with some easy checks, the question is should we have a git
>>> hook when a commit is done on http://git.haproxy.org ?
>>>
>>> > What is great about these is that you can easily plug syntax check
>>> softwares ;)
>>>
>>> +1 ;-)
>>>
>>> > Olivier
>>>
>>> --
>>> Best Regards
>>> Aleks
>>>
>>>
>>>
>>
>> Hello,
>>
>> I made weird thing
>>
>>
>> https://gitlab.com/chipitsine/haproxy/blob/master/.gitlab-ci.yml
>>
>> repo itself is merged hourly with upstream repo (by using external syncer)
>>
>> https://gitlab.com/chipitsine/haproxy/-/jobs
>>
>
> I can't read it, is it a public repo?
>

I set it to "internal", that means any authenticated user can access it
probably, I can make repo itself public, but pipelines are not available
publicly


>
> Best regards
> aleks
>
>


Re[2]: CI/CD HAProxy

2017-12-15 Thread Aleksandar Lazic

Hi

-- Original Message --
From: "Илья Шипицин" 
To: "Aleksandar Lazic" 
Cc: "Olivier Doucet" ; "HAProxy" 

Sent: 14.12.2017 14:57:29
Subject: Re: CI/CD HAProxy




2017-09-16 20:01 GMT+05:00 Aleksandar Lazic :

Hi Olivie.

Olivier Doucet wrote on 15.09.2017:

> Hi,
>
> I wanted to open a new thread, as "cppcheck finding" was hijacked 
with this CICD / testing ;)


+1

> I think the best is the enemy of the good : why not start with a few
> easy tests ? For example just a mix of tiny / big config files to 
test the parser.

>
> I understand the difficult part of the test is to setup complicated
> infrastructures with many softwares to test edge cases, but that can 
be done later.

>
> Willy, there is no need to setup and maintain a buildfarm (no one 
has
> time for this) : as this is an open project, we can use platform 
like
> Travis-CI : all tests are described in a yaml file incorporated in 
the
> project. It works out of the box for any project hosted at github. 
We can use a mirror for this.


I like the idea.

Does anyone know who own https://github.com/haproxy/haproxy 
 ?


I can start with some easy checks, the question is should we have a 
git

hook when a commit is done on http://git.haproxy.org ?

> What is great about these is that you can easily plug syntax check 
softwares ;)


+1 ;-)

> Olivier

--
Best Regards
Aleks





Hello,

I made a weird thing:


https://gitlab.com/chipitsine/haproxy/blob/master/.gitlab-ci.yml

The repo itself is merged hourly with the upstream repo (by using an external 
syncer).


https://gitlab.com/chipitsine/haproxy/-/jobs


I can't read it, is it a public repo?

Best regards
aleks




Re[2]: Use haproxy 1.8.x to balance web applications only reachable through Internet proxy

2017-12-11 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Moemen MHEDHBI" 
To: "Gbg" ; haproxy@formilux.org
Sent: 11.12.2017 18:45:16
Subject: Re: Use haproxy 1.8.x to balance web applications only 
reachable through Internet proxy



On 11/12/2017 17:21, Gbg wrote:

Hello  Moemen,

 Unless I got this wrong, this isn't the setup I'm searching for. I 
don't need haproxy to *be* a proxy but rather to *use* a proxy while 
serving content over HTTP as a reverse proxy.


 Perhaps I should have given the thread this name


I get your point: HAProxy does not support using an "http/socks
proxy". HAProxy intends to be "an HTTP reverse-proxy", retrieving resources
from backend servers, and the requests sent to a server are different
from the ones sent to a "forward proxy".
That being said, HAProxy can still "pass" proxy requests to
http/socks proxies if the client is configured to use a proxy.
How about using delegate (http://delegate.org/documents/) for this, 
just as an idea?


haproxy -> delegate FORWARD=... CONNECT=proxy:

http://delegate.org/delegate/Manual.shtml?FORWARD
http://delegate.org/delegate/Manual.shtml?CONNECT

I know it's not that elegant but it could work.



   ++


On 11 December 2017 at 16:56:12 CET, Moemen MHEDHBI wrote:

On 11/12/2017 15:02, Gbg wrote:


 I need to contact applications through a socks or http proxy.

 My current setup looks like this but only works when the Computer
 haproxy runs on has direct Internet connection (which is not the case
 in our datacenter, I tried this at home)

 frontend main
 bind *:8000
 acl is_extweb1 path_beg -i /policies
 acl is_extweb2 path_beg -i /produkte
 use_backend externalweb1 if is_extweb1
 use_backend externalweb2 if is_extweb2

 backend externalweb1
 server static www.google.com:80 check

 backend externalweb2
 server static www.gmx.net:80 check

 There is an SO post which addresses the same question and provides
 some more details:
 
https://stackoverflow.com/questions/47605766/use-haproxy-as-an-reverse-proxy-with-an-application-behind-internet-proxy





Hi Gbg,

For this to work you need the client (browser for example) to be aware
of the forward proxy.
So first you need to configure the client to use HAProxy as a forward
proxy, then in the HAProxy conf you need to use the forward proxy in the
backend and the configuration may look like this:

frontend main
bind *:8000
acl is_extweb path_beg -i /policies /produkte
use_backend forward_proxy if is_extweb
default_backend another_backend

backend forward_proxy
  server static < IP-of-the-forward-proxy > : < Port > check


++

Moemen MHEDHBI






 --
 This message was sent from my Android device with K-9 Mail.


--
Moemen MHEDHBI







Re[2]: [ANNOUNCE] haproxy-1.8.0

2017-11-27 Thread Aleksandar Lazic

Hi Willy.

-- Original Message --
From: "Willy Tarreau" 
To: "Aleksandar Lazic" 
Cc: haproxy@formilux.org
Sent: 27.11.2017 23:54:31
Subject: Re: [ANNOUNCE] haproxy-1.8.0


Hi Aleks,

On Mon, Nov 27, 2017 at 09:18:35PM +, Aleksandar Lazic wrote:
> I'm pleased to announce that haproxy 1.8.0 is now officially 
released!

Amazing ;-)


So after 15 years working on this project you still manage to be 
amazed,

I'm impressed ;-)

You are right I wanted to say "great".

Note to myself: I should not write mails when I'm in passing mood.

Hm time flies, 15 years and still happy to be part of this great 
project.


Thanks to all of us community members and the company behind the project 
;-)


I hope I'm not too sentimental.


As usual the docker image is also updated.

https://hub.docker.com/r/me2digital/haproxy18/


Thank you for maintaining this!
Willy

Regards
Aleks




Re[2]: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-01 Thread Aleksandar Lazic

Hi.

-- Original Message --
From: "Willy Tarreau" 
To: "Cyril Bonté" 
Cc: haproxy@formilux.org
Sent: 01.11.2017 07:44:23
Subject: Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile


Hi Cyril,

On Wed, Nov 01, 2017 at 01:03:42AM +0100, Cyril Bonté wrote:
This announcement was exciting enough to take some time to regenerate an 
up-to-date HTML documentation! 1.8-rc1 is now available:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html


Cool, thank you!


> Have fun,
> Willy -- feeling exhausted like a marathoner :-)

Great job! Now it's time to test and track nasty bugs before the final 1.8 
release ;-)


Yep. And we know certain points already have to be fixed. The real great
thing is to be allowed to sleep a full night for the first time in a few
months ;-)

Have a good and refreshing sleep ;-)

Thanks for the hard work ;-)

There is now a shiny new docker image with the rc1.

docker run --rm --entrypoint /usr/local/sbin/haproxy 
me2digital/haproxy18 -vv



HA-Proxy version 1.8-rc1-901f75c 2017/10/31
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET = linux2628
  CPU = generic
  CC = gcc
  CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_GETADDRINFO=1 USE_ZLIB=1 
USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1 
USE_TFO=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 
200


Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with network namespace support.
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Encrypted password support via crypt(3): yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes

Available polling systems :
  epoll : pref=300, test result OK
   poll : pref=200, test result OK
 select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
 [SPOE] spoe
 [COMP] compression
 [TRACE] trace




Cheers,
Willy


Regards
Aleks




Re[2]: Tcp logging in haproxy

2017-10-27 Thread Aleksandar Lazic

Hi

-- Originalnachricht --
Von: "kushal bhattacharya" 
An: haproxy@formilux.org
Gesendet: 27.10.2017 12:47:37
Betreff: Re: Tcp logging in haproxy

Sorry if this gets created as a new topic; I am attaching my 
configuration so far below.
The syslog server must listen on UDP 127.0.0.1:514, and the facility 
"local0" must be configured to write to a logfile.

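A minimal sketch of the syslog side, assuming rsyslog is the syslog daemon
(file path and exact directives vary by distribution):

  # /etc/rsyslog.d/haproxy.conf -- illustrative only
  $ModLoad imudp
  $UDPServerAddress 127.0.0.1
  $UDPServerRun 514

  # write everything haproxy sends on facility local0 to its own file
  local0.*    /var/log/haproxy.log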

When you add this line to the haproxy config, you should get some logs from 
haproxy in the logfile.


http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#log

global
  log 127.0.0.1:514 local0


defaults

  log global


mode tcp
timeout connect 22m
timeout client 22m
timeout server 22m

frontend localnodes
bind *:9875
option tcplog
log global
default_backend nodes


backend nodes
mode tcp
balance roundrobin
server web01 192.168.0.5:9878  maxconn 2000
server web02 192.168.0.5:9877  maxconn 2000
server web03 192.168.0.5:9876  maxconn 2000


But now I am confused as to how I receive the logs.

On Thu, Oct 26, 2017 at 2:50 PM, kushal bhattacharya 
 wrote:
I have included tcp logging in the configuration of haproxy. But I want 
to know how it will be logged and where the logging will be done. My 
main motive is to dump the log output to some custom file and watch the logs 
dumped into it.

Thanks,
Kushal







Re[2]: Multiple Monitor-net

2015-10-16 Thread Bryan Rodriguez

Thank you!

Worked perfectly!


[Bryan]



-- Original Message --
From: "Willy Tarreau" 
To: "Bryan Rodriguez" 
Cc: haproxy@formilux.org
Sent: 10/16/2015 10:28:13 AM
Subject: Re: Multiple Monitor-net


On Fri, Oct 16, 2015 at 05:18:24PM +, Bryan Rodriguez wrote:
 AWS health check monitoring comes from the following networks. Logging
 is going crazy. I read that only the last monitor-net is read. Is
 there a way to filter from the logs all the following requests?

monitor-net 54.183.255.128/26
monitor-net 54.228.16.0/26
monitor-net 54.232.40.64/26
monitor-net 54.241.32.64/26
monitor-net 54.243.31.192/26
monitor-net 54.244.52.192/26
monitor-net 54.245.168.0/26
monitor-net 54.248.220.0/26
monitor-net 54.250.253.192/26
monitor-net 54.251.31.128/26
monitor-net 54.252.254.192/26
monitor-net 54.252.79.128/26
monitor-net 54.255.254.192/26
monitor-net 107.23.255.0/26
monitor-net 176.34.159.192/26
monitor-net 177.71.207.128/26


Yes, instead of using monitor-net, you can use a redirect (if the checker
accepts it) or go to a specific backend instead, and use the "silent"
log-level:

  http-request set-log-level silent if { src -f aws-checks.list }
  http-request redirect location /  if { src -f aws-checks.list }

Or :

  use-backend aws-checks if { src -f aws-checks.list }

  backend aws-checks
     http-request set-log-level silent
     errorfile 503 /path/to/forged/response.http

Then you put all those networks (one per line) in a file called
"aws-checks.list" and that will be easier.

Hoping this helps,
Willy






Re[2]: Multiple Monitor-net

2015-10-16 Thread Bryan Rodriguez
What about TCP requests or non-HTTP traffic? It seems TCP traffic is 
still logged when using:


http-request set-log-level silent if { src -f aws-checks.list }



[Bryan]



-- Original Message --
From: "Willy Tarreau" 
To: "Bryan Rodriguez" 
Cc: haproxy@formilux.org
Sent: 10/16/2015 10:28:13 AM
Subject: Re: Multiple Monitor-net


On Fri, Oct 16, 2015 at 05:18:24PM +, Bryan Rodriguez wrote:
 AWS health check monitoring comes from the following networks. Logging
 is going crazy. I read that only the last monitor-net is read. Is
 there a way to filter from the logs all the following requests?

monitor-net 54.183.255.128/26
monitor-net 54.228.16.0/26
monitor-net 54.232.40.64/26
monitor-net 54.241.32.64/26
monitor-net 54.243.31.192/26
monitor-net 54.244.52.192/26
monitor-net 54.245.168.0/26
monitor-net 54.248.220.0/26
monitor-net 54.250.253.192/26
monitor-net 54.251.31.128/26
monitor-net 54.252.254.192/26
monitor-net 54.252.79.128/26
monitor-net 54.255.254.192/26
monitor-net 107.23.255.0/26
monitor-net 176.34.159.192/26
monitor-net 177.71.207.128/26


Yes, instead of using monitor-net, you can use a redirect (if the checker
accepts it) or go to a specific backend instead, and use the "silent"
log-level:

  http-request set-log-level silent if { src -f aws-checks.list }
  http-request redirect location /  if { src -f aws-checks.list }

Or :

  use-backend aws-checks if { src -f aws-checks.list }

  backend aws-checks
     http-request set-log-level silent
     errorfile 503 /path/to/forged/response.http

Then you put all those networks (one per line) in a file called
"aws-checks.list" and that will be easier.

Hoping this helps,
Willy






Re: 2 services (frontend+backend), both with cookies, failure

2014-10-13 Thread Kari Mattsson
 Hi,
 
 On Sat, Oct 11, Kari Mattsson wrote:
  this got repeated for 50+ times when refreshing on Chrome browser.
  Then to Firefox..
  Oct 11 20:25:17 localhost haproxy[5179]: 10.6.159.238:4248
  [11/Oct/2014:20:25:14.300] service_1_outside_80
  service_1_inside/App_101 3264/0/0/1/+3265 200 +275 - - --NI
  1/1/1/1/0 0/0 {service1.example.com} {7|} GET / HTTP/1.1
  Oct 11 20:25:22 localhost haproxy[5179]: 10.6.159.238:4252
  [11/Oct/2014:20:25:22.854] service_2_outside_80
  service_2_inside/App_142 0/0/0/1/+1 200 +275 - - --NI 1/1/1/1/0
  0/0 {service1.example.com} {5|} GET / HTTP/1.1
 
 --NI = client provided no cookie, proxy inserted one

Above, Firefox was accessing service1 and got the web page from it correctly (first 
line above). Then 8 seconds later, when F5 was pressed again in Firefox, it 
landed on the service2 backend server erroneously and the proxy inserted a cookie from 
there.

The problem is that the second line is totally in error. Firefox had the service1 URL in the 
address bar, and it was showing the web page from service2.

If you want, I can provide you the full config and real URLs.

  Oct 11 20:25:27 localhost haproxy[5179]: 10.6.159.238:4254
  [11/Oct/2014:20:25:27.914] service_2_outside_80
  service_2_inside/App_142 0/0/0/1/+1 304 +120 SERVICE_2=app142 -
  --VN 1/1/1/1/0 0/0 {service1.example.com} {|} GET / HTTP/1.1
  Oct 11 20:27:31 localhost haproxy[5179]: 10.6.159.238:4283
  [11/Oct/2014:20:27:31.947] service_1_outside_80
  service_1_inside/App_101 0/0/0/1/+1 200 +237 SERVICE_1=app101 -
  --VN 1/1/1/1/0 0/0 {service1.example.com} {7|} GET / HTTP/1.1
  
  Looks like the browser will not receive a cookie for the first 2 page loads.
  On the third it received one... but a wrong cookie.
 
 --VN = client provided valid cookie, proxy didn't set cookie
 On the third log line what was supposed to happen ?
 Looks like haproxy received SERVICE_2=app142 cookie and the
 connection
 was send to service_2_inside/App_142

Above, on first line browser was accessing service1, and HAProxy provided 
content from service2 backend servers. Frontend and backend got mixed again.

  After 2 minutes a fourth reload, and it will receive the right cookie.
  Reloading the page from this point on keeps the browser on the right
  frontend/backend.
  Weird.
 
 What are those {service1.example.com} {7|} in logs ? 

I really don't know :-/  The part before it is the FQDN the browser is trying to 
access. The first line above is mixed/in error, the second line is correct (access 
service1, receive data from service1).

 I'm assuming that
 SERVICE_1=/SERVICE_2=... is capture cookie SERVICE_1
 or capture cookie SERVICE_2 ?
 
  Now back to Chrome again for one more page reload:
  Oct 11 20:29:28 localhost haproxy[5179]: 10.6.159.238:4311
  [11/Oct/2014:20:29:28.561] service_2_outside_80
  service_2_inside/App_141 0/0/1/0/+1 200 +237 SERVICE_2=app141 -
  --VN 1/1/1/1/0 0/0 {service1.example.com} {5|} GET / HTTP/1.1
  
  Damn. Chrome falls to wrong frontend/backend.
 
 Where the connection from chrome should have gone ?

Should have gone to: service1.example.com
It really went to: service_2_outside_80 and from there to 
service_2_inside/App_141

  One more. Firefox, 2 page re-loads for service1.example.com:
  Oct 11 20:31:52 localhost haproxy[5179]: 10.6.159.238:4350
  [11/Oct/2014:20:31:52.023] service_2_outside_80
  service_2_inside/App_142 0/0/0/1/+1 200 +237 SERVICE_2=app142 -
  --VN 1/1/1/1/0 0/0 {service1.example.com} {5|} GET / HTTP/1.1
  Oct 11 20:31:55 localhost haproxy[5179]: 10.6.159.238:4352
  [11/Oct/2014:20:31:55.419] service_1_outside_80
  service_1_inside/App_101 0/0/0/1/+1 200 +237 SERVICE_1=app101 -
  --VN 1/1/1/1/0 0/0 {service1.example.com} {7|} GET / HTTP/1.1
  
  ...first wrong, then right. So, it is flip-flopping.
 
 Is the {service1.example.com} captured host header ? 

Yes, as shown in haproxy.log. This is where the browser wants to go.

 The connection
 goes
 to two different frontends (first goes to service_2_outside_80 and
 second goes to service_1_outside_80). Should it have gone to the same
 frontend ?

Correct.

For service1.example.com there is frontend service_1_outside_80 
For service2.example.com there is frontend service_2_outside_80 

...and for
service_1_outside_80 there is backend service_1_inside
service_2_outside_80 there is backend service_2_inside

and...
service_1_inside has real web servers App_101 + App_102
service_2_inside has real web servers App_141 + App_142

 Do you have multiple ip addresses for service1.example.com in
 /etc/hosts or dns ? (one address for service_1_outside_80 and on one
 for
 service_2_outside_80 ?)

For frontend:
service1.example.com has A + PTR records properly defined and just a single IP, 
and
service2.example.com has A + PTR records properly defined and just a single IP.
Both server1 and server2 are in different public subnets.

For backend:
service1 backend is NATed. The App_101 and App_102 have private IP numbers.
service2 backend is public. The App_141 and App_142 have public IP numbers.

For service2 further..


Re: 2 services (frontend+backend), both with cookies, failure

2014-10-12 Thread Jarno Huuskonen
Hi,

On Sat, Oct 11, Kari Mattsson wrote:
 this got repeated for 50+ times when refreshing on Chrome browser. Then to 
 Firefox..
 Oct 11 20:25:17 localhost haproxy[5179]: 10.6.159.238:4248 
 [11/Oct/2014:20:25:14.300] service_1_outside_80 service_1_inside/App_101 
 3264/0/0/1/+3265 200 +275 - - --NI 1/1/1/1/0 0/0 {service1.example.com} {7|} 
 GET / HTTP/1.1
 Oct 11 20:25:22 localhost haproxy[5179]: 10.6.159.238:4252 
 [11/Oct/2014:20:25:22.854] service_2_outside_80 service_2_inside/App_142 
 0/0/0/1/+1 200 +275 - - --NI 1/1/1/1/0 0/0 {service1.example.com} {5|} GET / 
 HTTP/1.1

--NI = client provided no cookie, proxy inserted one

 Oct 11 20:25:27 localhost haproxy[5179]: 10.6.159.238:4254 
 [11/Oct/2014:20:25:27.914] service_2_outside_80 service_2_inside/App_142 
 0/0/0/1/+1 304 +120 SERVICE_2=app142 - --VN 1/1/1/1/0 0/0 
 {service1.example.com} {|} GET / HTTP/1.1
 Oct 11 20:27:31 localhost haproxy[5179]: 10.6.159.238:4283 
 [11/Oct/2014:20:27:31.947] service_1_outside_80 service_1_inside/App_101 
 0/0/0/1/+1 200 +237 SERVICE_1=app101 - --VN 1/1/1/1/0 0/0 
 {service1.example.com} {7|} GET / HTTP/1.1
 
 Looks like the browser will not receive a cookie for the first 2 page loads.
 On the third it received one... but a wrong cookie.

--VN = client provided valid cookie, proxy didn't set cookie
On the third log line what was supposed to happen ?
Looks like haproxy received SERVICE_2=app142 cookie and the connection
was send to service_2_inside/App_142

 After 2 minutes a fourth reload, and it will receive the right cookie.
 Reloading the page from this point on keeps the browser on the right frontend/backend.
 Weird.

What are those {service1.example.com} {7|} in logs ? I'm assuming that
SERVICE_1=/SERVICE_2=... is capture cookie SERVICE_1
or capture cookie SERVICE_2 ? 

 Now back to Chrome again for one more page reload:
 Oct 11 20:29:28 localhost haproxy[5179]: 10.6.159.238:4311 
 [11/Oct/2014:20:29:28.561] service_2_outside_80 service_2_inside/App_141 
 0/0/1/0/+1 200 +237 SERVICE_2=app141 - --VN 1/1/1/1/0 0/0 
 {service1.example.com} {5|} GET / HTTP/1.1
 
 Damn. Chrome falls to wrong frontend/backend.

Where the connection from chrome should have gone ? 

 One more. Firefox, 2 page re-loads for service1.example.com:
 Oct 11 20:31:52 localhost haproxy[5179]: 10.6.159.238:4350 
 [11/Oct/2014:20:31:52.023] service_2_outside_80 service_2_inside/App_142 
 0/0/0/1/+1 200 +237 SERVICE_2=app142 - --VN 1/1/1/1/0 0/0 
 {service1.example.com} {5|} GET / HTTP/1.1
 Oct 11 20:31:55 localhost haproxy[5179]: 10.6.159.238:4352 
 [11/Oct/2014:20:31:55.419] service_1_outside_80 service_1_inside/App_101 
 0/0/0/1/+1 200 +237 SERVICE_1=app101 - --VN 1/1/1/1/0 0/0 
 {service1.example.com} {7|} GET / HTTP/1.1
 
 ...first wrong, then right. So, it is flip-flopping.

Is the {service1.example.com} captured host header ? The connection goes
to two different frontends (first goes to service_2_outside_80 and
second goes to service_1_outside_80). Should it have gone to the same
frontend ?

Do you have multiple ip addresses for service1.example.com in
/etc/hosts or dns ? (one address for service_1_outside_80 and on one for
service_2_outside_80 ?) 
 
   - you could also use tcpdump to see what cookies firefox <-> haproxy
 send/receive ?

Sorry, what I had in mind was to use tcpdump/wireshark to see what
cookies the client (browser) receives/sends to haproxy (for example "follow
tcp stream" in wireshark).
You can probably use chrome developer tools (ctrl+shift+i) (network)
to see the request/response headers. (Or firebug with firefox).
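For example, a tcpdump invocation along these lines would show the HTTP
headers (and thus the Cookie/Set-Cookie lines) in ASCII; the interface and
address are only examples:

  tcpdump -n -i eth0 -A -s 0 'tcp port 80 and host 10.6.159.238'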

 Changing from cookie stickiness to source ip...
   stick-table type ip
   stick on src
  ...also makes no difference. Same erroneous behaviour.

Do you see any entries in the stick table ? Something like
echo show table service_2_inside | nc -U /path/to/stats.socket
(or with socat instead of nc -U).
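For example, the socat variant might look like this (same socket path as above):

  echo "show table service_2_inside" | socat stdio /path/to/stats.socket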

-Jarno

-- 
Jarno Huuskonen



Re: 2 services (frontend+backend), both with cookies, failure

2014-10-11 Thread Kari Mattsson

 Hi,

Hi, and thanks for your reply!

 On Mon, Oct 06, Kari Mattsson wrote:
  (IP numbers are imaginary, not real.)
  When I go to http://200.200.200.111 and http://200.200.200.222, and
  press F5 (refresh) on Firefox a few times, I end up with 4
  cookies instead of 2.
 
 For example when you go to .111 and hit refresh few times do the
 requests go the same (backend)server or to both servers ?

A few times, max 10, traffic goes to the same backend server. Then it suddenly 
switches to the backend of the other frontend, which is clearly an error. When 
I keep refreshing in the browser, it usually comes back to the original correct 
backend... and then to the wrong one again later...


 Couple of things to check:
 - what do you get in haproxy log (option httplog) when you do:
   firefox refresh test ?
   your logs should show when haproxy inserts the cookie:
   http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5

correct example log entry:
Oct 11 20:19:10 localhost haproxy[5179]: 10.6.159.238:4153 
[11/Oct/2014:20:19:10.671] service_1_outside_80 service_1_inside/App_101 
0/0/0/1/+1 200 +237 SERVICE_1=app101 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {7|} GET / HTTP/1.1

this got repeated for 50+ times when refreshing on Chrome browser. Then to 
Firefox..
Oct 11 20:25:17 localhost haproxy[5179]: 10.6.159.238:4248 
[11/Oct/2014:20:25:14.300] service_1_outside_80 service_1_inside/App_101 
3264/0/0/1/+3265 200 +275 - - --NI 1/1/1/1/0 0/0 {service1.example.com} {7|} 
GET / HTTP/1.1
Oct 11 20:25:22 localhost haproxy[5179]: 10.6.159.238:4252 
[11/Oct/2014:20:25:22.854] service_2_outside_80 service_2_inside/App_142 
0/0/0/1/+1 200 +275 - - --NI 1/1/1/1/0 0/0 {service1.example.com} {5|} GET / 
HTTP/1.1
Oct 11 20:25:27 localhost haproxy[5179]: 10.6.159.238:4254 
[11/Oct/2014:20:25:27.914] service_2_outside_80 service_2_inside/App_142 
0/0/0/1/+1 304 +120 SERVICE_2=app142 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {|} GET / HTTP/1.1
Oct 11 20:27:31 localhost haproxy[5179]: 10.6.159.238:4283 
[11/Oct/2014:20:27:31.947] service_1_outside_80 service_1_inside/App_101 
0/0/0/1/+1 200 +237 SERVICE_1=app101 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {7|} GET / HTTP/1.1

Looks like the browser will not receive a cookie for the first 2 page loads.
On the third it received one... but a wrong cookie.
After 2 minutes a fourth reload, and it will receive the right cookie.
Reloading the page from this point on keeps the browser on the right frontend/backend.
Weird.

Now back to Chrome again for one more page reload:
Oct 11 20:29:28 localhost haproxy[5179]: 10.6.159.238:4311 
[11/Oct/2014:20:29:28.561] service_2_outside_80 service_2_inside/App_141 
0/0/1/0/+1 200 +237 SERVICE_2=app141 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {5|} GET / HTTP/1.1

Damn. Chrome falls to wrong frontend/backend.

One more. Firefox, 2 page re-loads for service1.example.com:
Oct 11 20:31:52 localhost haproxy[5179]: 10.6.159.238:4350 
[11/Oct/2014:20:31:52.023] service_2_outside_80 service_2_inside/App_142 
0/0/0/1/+1 200 +237 SERVICE_2=app142 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {5|} GET / HTTP/1.1
Oct 11 20:31:55 localhost haproxy[5179]: 10.6.159.238:4352 
[11/Oct/2014:20:31:55.419] service_1_outside_80 service_1_inside/App_101 
0/0/0/1/+1 200 +237 SERVICE_1=app101 - --VN 1/1/1/1/0 0/0 
{service1.example.com} {7|} GET / HTTP/1.1

...first wrong, then right. So, it is flip-flopping.

 - you could also use tcpdump to see what cookies firefox <-> haproxy
   send/receive ?

With 'tcpdump -n -i eth0 src 10.6.159.238 and dst 194.1.1.15' I got:

20:35:29.083506 IP 86.60.159.238.ds-mail > 194.100.100.150.http: Flags [S], seq 
1217634156, win 8192, options [mss 1260,nop,wscale 2,nop,nop,sackOK], length 0
20:35:29.090671 IP 10.6.159.238.ds-mail > 194.1.1.15.http: Flags [.], ack 
267776592, win 16695, length 0
20:35:29.090862 IP 10.6.159.238.ds-mail > 194.1.1.15.http: Flags [P.], seq 
0:449, ack 1, win 16695, length 449
20:35:29.293289 IP 10.6.159.238.ds-mail > 194.1.1.15.http: Flags [.], ack 245, 
win 16634, length 0
20:35:32.102248 IP 10.6.159.238.ds-mail > 194.1.1.15.http: Flags [.], ack 246, 
win 16634, length 0
20:35:32.102338 IP 10.6.159.238.ds-mail > 194.1.1.15.http: Flags [F.], seq 449, 
ack 246, win 16634, length 0

 - have you tried testing w/out using stick table / stick on cookie ?
 (For
   debugging purposes?) I think just the cookie SERVICE_1 insert and
   cookie app* on server lines should be enough to get session
   persistence.

Commenting out lines
  stick-table type string
  stick on cookie
makes zero difference.

When running with just one frontend service, SERVICE_1 or SERVICE_2, everything 
works as advertised, i.e. perfectly.

Changing from cookie stickiness to source ip...
  stick-table type ip
  stick on src
...also makes no difference. Same erroneous behaviour.

 - what are you trying to store with the stick table ? I think you are
   going to have only two entries in the stick table:
   key=appl01 and key=appl02 ?

There are 2 to N backend 

Re: 2 services (frontend+backend), both with cookies, failure

2014-10-09 Thread Jarno Huuskonen
Hi,

On Mon, Oct 06, Kari Mattsson wrote:
 (IP numbers are imaginary, not real.)
 When I go to http://200.200.200.111 and http://200.200.200.222, and press F5 
 (refresh) on Firefox a few times, I end up with 4 cookies instead of 2.

For example when you go to .111 and hit refresh few times do the
requests go the same (backend)server or to both servers ?

Couple of things to check:
- what do you get in haproxy log (option httplog) when you do:
  firefox refresh test ?
  your logs should show when haproxy inserts the cookie:
  http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5

- you could also use tcpdump to see what cookies firefox <-> haproxy
  send/receive ?

- have you tried testing w/out using stick table / stick on cookie ? (For
  debugging purposes?) I think just the cookie SERVICE_1 insert and
  cookie app* on server lines should be enough to get session
  persistence (a minimal sketch follows after the next point).

- what are you trying to store with the stick table ? I think you are
  going to have only two entries in the stick table:
  key=appl01 and key=appl02 ?
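A minimal cookie-only sketch of that idea, with the stick-table lines dropped
(names and addresses taken from the config quoted at the bottom):

  backend service_1_inside
    mode http
    balance roundrobin
    # cookie insertion alone should be enough for session persistence
    cookie SERVICE_1 insert indirect maxlife 1h
    server App_101 10.10.10.101:80 cookie app101 check
    server App_102 10.10.10.102:80 cookie app102 check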

-Jarno

 backend service_1_inside
   mode http
   balance roundrobin   # source roundrobin leastconn ...
 
   stick-table type string len 32 size 100k expire 1h store 
 conn_cur,conn_rate(60s)
   stick on cookie(SERVICE_1)
   cookie SERVICE_1 insert indirect maxlife 1h
 
   default-server maxconn 1000 weight 100 inter 2s fastinter 700ms downinter 
 10s fall 3 rise 2
   server App_101 10.10.10.101:80 cookie app101 check
   server App_102 10.10.10.102:80 cookie app102 check

-- 
Jarno Huuskonen



Re[2]:

2011-03-19 Thread Antony
Hi all,

Actually I asked this question because I have often seen systems that had 
more than 10 GB of free physical memory and still used the swap 
partition (about 1-5 MB). I saw that happen on FreeBSD and on Linux, so I 
thought it's possible to see it again when I run HAProxy.
And I doubt that a userspace application can control the memory management 
process itself, i.e. you can't say in your program "give me memory and never use 
paging/swapping for it". I suppose that is the OS's responsibility. (I might be wrong 
of course, and maybe HAProxy deals with it in exactly such a way.) And as Ben 
said, there's an option to tune /proc/sys/vm/swappiness. And as far as I can 
understand now, it's the only option to prevent swapping...
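For reference, a minimal sketch of checking and lowering that knob on Linux
(the value 1 is only illustrative):

  # current value (0-100; higher means the kernel swaps more willingly)
  cat /proc/sys/vm/swappiness

  # lower it at runtime
  sysctl -w vm.swappiness=1

  # persist it across reboots
  echo "vm.swappiness = 1" >> /etc/sysctl.conf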

Sat, 19 Mar 2011 08:59:14 +0100, message from Baptiste bed...@gmail.com:

 Hey,
 
  You can also play with /proc/sys/vm/swappiness to avoid / limit swapping...
  But as explained, it's a bad idea to let a load balancer swap. It's
  supposed to introduce a very very low delay and swapping would
  increase that delay.
  Just ensure you have enough memory to handle the load you need/want.
 
 cheers
 
 
 On Fri, Mar 18, 2011 at 7:33 PM, Ben Timby bti...@gmail.com wrote:
  On Fri, Mar 18, 2011 at 2:00 PM, Antony ddj...@mail.ru wrote:
  Hi guys!
 
  I'm new to HAProxy and currently I'm testing it.
  So I've read this on the main page of the web site:
  The reliability can significantly decrease when the system is pushed to
 its limits. This is why finely tuning the sysctls is important. There is no
 general rule, every system and every application will be specific. However, it
 is important to ensure that the system will never run out of memory and that
 it will never swap. A correctly tuned system must be able to run for years at
 full load without slowing down nor crashing.
  And now have the question.
 
  How do you usually prevent system to swap? I use Linux but solutions for
 any other OSes are interesting for me too.
 
   I think it isn't just a matter of swapoff -a and deleting the appropriate line in
  /etc/fstab, because some people say that isn't a good choice..
 
  Prevent swapping by ensuring your resource limits (max connections)
  etc. keep the application from exceeding the amount of physical
  memory.
 
  Or conversely by ensuring that your physical memory is sufficient to
  handle the load you will be seeing.
 
  This is what is referred to in the documentation, you need to tune
  your limits and available memory for the workload you are seeing. Of
  course simple things like not running other memory hungry applications
  on the same machine apply as well. This is an iterative process
  whereby you observe the application, make adjustments and repeat. You
  must generate test load within the range of normal operations for this
  tweaking to be true-to-life. Of course once you go into production the
  tweaking will continue, no simulation is a replacement for production
  usage.
 
  The reason running without swap is bad is because if you hit the limit
  of your physical memory, the OOM killer is invoked. Any process is
  subject to termination by the OOM killer, so in most cases decreased
  performance is more acceptable than loss of a critical system process.
 
 




Re: Re[2]:

2011-03-19 Thread Malcolm Turnbull
On 19 March 2011 10:58, Antony ddj...@mail.ru wrote:
 Hi all,

 Actually I asked this question because I saw a lot of times systems that had 
 more than 10Gb of free physical memory and they anyway used swap 
 partition(about 1-5 Mb). I saw that happened on FreeBSD and on Linux, so I 
 thought it's possible to see that again when I'll run HAProxy.

Antony,

The argument has come up on the kernel mailing list a few times;
people tend to get religious about it.
Personally I never have a swap partition on a server (and it's always
worked well for me).
Yes, if something goes hideously wrong then the OOM killer will be
invoked (but swap will only slow down the system even more before it
dies; at least the OOM killer has a small chance to take out the offending
process..)


-- 
Regards,

Malcolm Turnbull.

Loadbalancer.org Ltd.
Phone: +44 (0)870 443 8779
http://www.loadbalancer.org/



Re[2]: HEADSUP: Freebsd port net/haproxy-devel to update from v1.3.25 to v1.5-dev1

2010-09-08 Thread Ross West
(Ugh, sorry Willy for the direct email.)

And to close out the thread: The new version is now live with version
v1.5-dev2 (port version : v1.5.d2) in the ports tree. Do a portsnap
update to see it and install it.

WT That's not expected since the version is hard-coded in the VERSION file.
WT Either you got the wrong source package, or your build scripts have the
WT version explicitly forced on the makefile. I have just rechecked to ensure
WT the source package is correct, and it is, so it would be worth finding
WT what went wrong with the port.

I just realized I sent my reply to Willy directly, and not to the list - I
screwed up and double-installed haproxy on my dev/build environment
while playing around, so it's not a bug at all.

-- 

-- 




Re[2]: HEADSUP: Freebsd port net/haproxy-devel to update from v1.3.25 to v1.5-dev1

2010-08-31 Thread Ross West

 With the recent announcement of haproxy v1.5-dev1 with its new
 features that I'm sure people will want to test, I'll be bringing the
 port net/haproxy-devel in line with its name, i.e. development.

WT That's great news ! However please upgrade to 1.5-dev2 to limit bug
WT reports.

The FreeBSD PR has been submitted to do the upgrade to v1.5-dev2
(ports version # is 1.5.d2). People should see it in the next few
days.

Side note: in my testing, the haproxy stats page/socket reports v1.4.8 as the
version number instead of v1.5-dev2.

Cheers,
  Ross.

-- 




Re[2]: [ANNOUNCE] haproxy-1.4.2

2010-03-18 Thread Ross West

HJ While compiling Haproxy 1.4.2 on OpenSolaris b134 I noticed a warning in
HJ dumpstats.c. I don't know for sure if this is a problem, but I thought I'd
HJ let you know.

Got the same on FreeBSD - but with a touch more descriptive message.

-= start
gcc -Iinclude -Iebtree -Wall -O2 -g -DTPROXY -DCONFIG_HAP_CRYPT
-DENABLE_POLL -DENABLE_KQUEUE -DUSE_PCRE -I/usr/local/include
-DCONFIG_HAPROXY_VERSION=\"1.4.2\"
-DCONFIG_HAPROXY_DATE=\"2010/03/17\" -c -o src/dumpstats.o
src/dumpstats.c

src/dumpstats.c: In function 'stats_dump_full_sess_to_buffer':
src/dumpstats.c:2469: warning: format '%d' expects type 'int', but argument 5 
has type 'long int'
src/dumpstats.c:2469: warning: format '%d' expects type 'int', but argument 6 
has type 'long int'
src/dumpstats.c:2469: warning: format '%d' expects type 'int', but argument 7 
has type 'long int'
src/dumpstats.c:2499: warning: format '%d' expects type 'int', but argument 5 
has type 'long int'
src/dumpstats.c:2499: warning: format '%d' expects type 'int', but argument 6 
has type 'long int'
src/dumpstats.c:2499: warning: format '%d' expects type 'int', but argument 7 
has type 'long int'
-= end
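The usual fix for this class of warning is to make the format specifier match
the argument type; a generic sketch (not the actual dumpstats.c code):

  #include <stdio.h>

  int main(void)
  {
      long age = 12345;  /* e.g. a counter or timeout stored as a long */

      /* printf("age=%d\n", age);    <- format '%d' expects int: warns  */

      printf("age=%ld\n", age);      /* use the matching length modifier */
      printf("age=%d\n", (int)age);  /* or cast to the expected type     */
      return 0;
  }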


-- 




Re[2]: [ANNOUNCE] haproxy-1.4.0

2010-03-04 Thread Ross West

WT Using the following patch I can build it everywhere here without a
WT warning. Could you please test on your various FreeBSD versions, I
WT see no reason why it should change anything, it's just for the sake
WT of completeness.

Compiles without error on FB 7+8!

Cheers,
  Ross.

-- 




Re[2]: [ANNOUNCE] haproxy-1.4.0

2010-03-03 Thread Ross West
Good morning everyone,

WT I hope you don't mind that I CC the list and Krzysztof who needed
WT the line which caused the problem on your side.

No problem, I just hit reply, rather than reply-all out of habit.

 However, instead of using _XOPEN_SOURCE we may use something less
 invasive (I hope), like for example _GNU_SOURCE.
 
 Could you please check, if it helps? Also, there is no point in 
 including unistd.c and adding one of the above defines for crypt() if 
 CONFIG_HAP_CRYPT is not defined. So, the final fix may look like this:

WT Let's wait for Ross to test on FreeBSD and if that's OK, I apply
WT the patch and release 1.4.1.

Applying Krzysztof's patch worked on both FB 7.0 and FB 8.0 for me!

Cheers,
  Ross.

-- 




Re: Re[2]: FreeBSD Ports: bumping haproxy from v1.2.18 - v1.4.x

2010-02-26 Thread joris dedieu
 Also, changing -devel right now at the same time will cause all sorts of
 support issues as people deal with the migration - not everyone reads
 the UPDATING file before issuing portupgrade -a.

A solution could even be to mark haproxy-devel as Moved (see
/usr/ports/MOVED).
I see in portupgrade's man page that there is an --ignore-moved
switch, so we can suppose
that portupgrade reads the MOVED file.

So maybe :
moving haproxy-devel to haproxy13
creating haproxy14 and a haproxy15-devel (when the time comes)
should be a solution.

For now, I think the best idea is to open a PR and see
what the FreeBSD ports team thinks about it.


Cheers,

Joris



Re[2]: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-13 Thread Ross West

 That's more of an issue with the site than a (proxy based) load
 balancer - the LB would be doing the exact same thing as the client.

WT Precisely not, and that's the problem. The proxy cannot ask the user
WT if he wants to retry on sensitive requests, and it cannot precisely
WT know what is at risk and what is not. The client knows a lot more about
WT that. For instance, I think that the client will not necessarily repost
WT a GET form without asking the user, but it might automatically repost a
WT request for an image.

I can see a small confusion here because I've used the wrong
terminology. Proxy is not the correct term, as there are actual proxy
devices out there (eg: Squid) which are generally visible to the
client/server and shouldn't be intentionally resending requests upon
failure.

What I mean is that the load balancer would keep a copy
(silently) of the client's request until a server gave a valid
response. So should the connection drop unexpectedly with server A
after the request, the load balancer would assume something went wrong
with that server and then resend the request to server B.
Throughout this, the end client would have only sent 1 request to the
load balancer (as it sees the LB as the end server).

Obviously this also allowed the loadbalancer to manipulate the headers
and route requests as required.

 WT So probably that a reasonable balance can be found but it is
 WT clear that from time to time a user will get an error.
 
 That sounds like the mantra of the internet in general.  :-)

WT I don't 100% agree.

Sorry, I meant out of the context of this conversation - there are many
times that your statement has applied in other
conversations about internet connectivity in general and admins' views
on it (usually ending up with "it's good enough" - especially with
DPI-manipulated telco connectivity).

WT Oh I precisely see how it works, it has several names from vendor to vendor,
WT often they call it connection pooling. Doing a naive implementation is not
WT hard at all if you don't want to care about errors. The problems start when
WT you want to add the expected reliability in the process...

I will mention the vendor's software we used has since then been
completely re-written from ground up, probably to cover some of those
issues and get much better performance at higher speeds.

WT In practice, instead of doing an expensive copy, I think that 1) configuring
WT a maximum number of times a connection can be used, 2) configuring the maximum
WT duration of a connection and 3) configuring a small idle timeout on a connection
WT can prevent most of the issues. Then we could also tag some requests at risk
WT and other ones riskless and have an option for always renewing a connection
WT on risked requests. In practice on a web site, most of the requests are images
WT and a few ones are transactions. You can already lower the load by keeping 95%
WT of the requests on keep-alive connections.

That does sound very logical.

WT I believe you that it worked fine. But my concern is not to confirm
WT after some tests that finally it works fine, but rather to design it
WT so that it works fine. Unfortunately HTTP doesn't permit it, so there
WT are tradeoffs to make, and that causes me a real problem you see.

Yes, the more I re-read the RFC, the more I feel your pain when they
specify SHOULD/MAY rather than MUST/MUST NOT, allowing those
corner cases to occur in the first place.

WT Indeed. Not to mention that applications today use more and more resources
WT because they're written by stacking up piles of crap and sometimes the
WT network has absolutely no impact at all due to the amount of crap being
WT executed during a request.

I don't want to get started on the [non-]quality of the ASP programmers'
code of that project.  I still have nightmares.



Cheers,
  Ross.

-- 




Re[2]: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-12 Thread Ross West

I'll enter in this conversation as I've used (successfully) a load
balancer which did server-side keep-alive a while ago.

WT Hmmm that's different. There are issues with the HTTP protocol
WT itself making this extremely difficult. When you're keeping a
WT connection alive in order to send a second request, you never
WT know if the server will suddenly close or not. If it does, then
WT the client must retransmit the request because only the client
WT knows if it takes a risk to resend or not. An intermediate
WT equipemnt is not allowed to do so because it might send two
WT orders for one request.

This might be an architecture based issue and probably depends on the
amount of caching/proxying of the request that the load balancer does
(ie: holds the full request until server side completes successfully).

WT So by doing what you describe, your clients would regularly get some
WT random server errors when a server closes a connection it does not
WT want to sustain anymore before haproxy has a chance to detect it.

Never had any complaints of random server issues that could be
attributed to connection issues.  But that's probably attributable to
the above architectural comment.

WT Another issue is that there are (still) some buggy applications which
WT believe that all the requests from a same session were initiated by
WT the same client. So such a feature must be used with extreme care.

We found the biggest culprit is Microsoft's NTLM authentication
system. It actually breaks the http spec by authenticating the tcp
session, not the individual http requests (except the first one in the
tcp session). Last time I looked into it, the squid people had made
some progress on it, but hadn't gotten it to proxy successfully.

WT Last, I'd say there is in my opinion little benefit to do that. Where
WT the most time is elapsed is between the client and haproxy. Haproxy
WT and the server are on the same LAN, so a connection setup/teardown
WT here is extremely cheap, as it's where we manage to run at more than
WT 40000 connections per second (including connection setup, send request,
WT receive response and close). That means only 25 microseconds for the
WT whole process which isn't measurable at all by the client and is
WT extremely cheap for the server.

When we placed the load balancer in front of our IIS-based cluster, we
got around an 80-100% (!!) performance improvement immediately.  We
had estimated only around a 25% increase based on our experience with
Microsoft's tcp stack.

Running against a unix-based stack (Solaris & BSD) got us a much more
realistic 5-10% improvement.

nb: Improvement mainly being defined as a reduction in server side
processing/load.  Actual request speed was about the same.

Obviously over the years OS vendors have improved their systems'
stacks greatly, but server side keep-alives did work quite well for
us in saving server resources, as have the better integration of
network stacks and the hardware (chipsets) they use.  I doubt that
you'd get the same kind of performance improvements we did.

Cheers,
  Ross.

-- 




Re[2]: [ANNOUNCE] haproxy 1.4-dev5 with keep-alive :-)

2010-01-12 Thread Ross West

WT It's not only a matter of caching the request to replay it, it is that
WT you're simply not allowed to. I know a guy who ordered a book at a
WT large well-known site. His order was processed twice. Maybe there is
WT something on this site which grants itself the right to replay a user's
WT request when a server connection suddenly closes on keep-alive timeout
WT or count.

That's more of an issue with the site than a (proxy based) load
balancer - the LB would be doing the exact same thing as the client.

According to the rfc, if a connection is prematurely closed, then the
client would (silently) retry the request. In our case the LB just
emulated the client's behavior towards the servers.

Unfortunately for your friend, it could mean the code on the site
didn't do any duplicate order checking.  A corner case taken care of
by their support department I guess.

WT So probably that a reasonable balance can be found but it is
WT clear that from time to time a user will get an error.

That sounds like the mantra of the internet in general.  :-)

WT Maybe your LB was regularly sending dummy requests on the connections
WT to keep them alive, but since there is no NOP instruction in HTTP, you
WT have to send real work anyway.

Well, the site was busy enough that it didn't require doing the
equivalent of a NOP to keep connections open. :-) And the need for NOPs
can be mitigated by adjusting timeouts on stale connections.

My understanding was that the loadbalancer actually just used a pool
of open tcp sessions, and would send the next request (from any of
its clients) down the next open tcp connection that wasn't busy. If
none were free, a new connection was established, which would
eventually time out and close naturally. I don't believe it was
pipelining the requests.

This would mean that multiple requests from clients A, B, C may go
down tcp connections X, Y, Z in a 'random' order. (eg: tcp connection
X may have requests from A, B, A, A, C, B)

Sounds rather chaotic, but actually worked fine.

 Last time I looked into it, the squid people had made some progress into
 it, but hadn't gotten it to successfully proxy.

After checking, I stand corrected - it looks like Squid has a
working proxy helper application to make ntlm authentication work.

WT Was it really just an issue with the TCP stack ? maybe there was a firewall
WT loaded on the machine ? Maybe IIS was logging connections and not requests,
WT so that it almost stopped logging ?

There were additional security measures on the machines, so yes, I
should say the stack wasn't fully the issue, but once they got
disabled in testing, we definitely still had better performance than
before.

WT It depends a lot on what the server does behind. File serving will not
WT change, it's generally I/O bound. However if the server was CPU-bound,
WT you might have won something, especially if there was a firewall on
WT the server.

CPU was our main issue - as this was quite a while ago, things have
since dramatically improved with better offload support in drivers and
on network cards, plus much profiling been done by OS vendors in their
kernels with regards to network performance.  So I doubt people would
get the same level of performance increase these days that we saw back
then.

Cheers,
  Ross.




-- 




Re[2]: Geographic loadbalancing

2009-01-26 Thread Ross West

JL I would like to hear anyone using anycast with TCP.  What if two servers are
JL equal distance.  Wouldn't you have a fair chance of equal 50% packets going
JL each way, killing tcp state connections.  The more servers out there
JL advertising the same IP, the more likely you will have cases of equal
JL distance...  or will packets typically go to the same server each time (or
JL at least for several minutes) even if the costs are the same?

It works surprisingly well - quite a few sites are using it.

True packet balancing between pipes (ie: equal path) is generally
going between the same source/destination routing gear anyways, so it
arrives at the same place.  The main routing decisions are done on a
geographic basis in bgp, and it's very rare to have redundant
connections to different geographic destinations in the same router.

There are a few corner cases to deal with, but those usually only show
up when your gear is very close together at the final network point,
at which point you should probably be using a load balancer [like haproxy!]
anyway.

Often enough, to get around corner cases with people who keep long
sessions going (i.e. sites you log into), you use anycast to reach
redirecting load balancers, which redirect the client to the actual
endpoint server, which stays active. E.g. hit www.example.com (anycasted load
balancer), which will redirect you to server01.site02.example.com (physical
server/site).

Cheers,
  Ross.

--