Re: Haproxy 2.2.3 source

2020-09-09 Thread Alex Evonosky
Thank you, Willy!

A

On Wed, Sep 9, 2020 at 1:31 PM Willy Tarreau  wrote:

> On Wed, Sep 09, 2020 at 07:20:17PM +0200, Willy Tarreau wrote:
> > Feel free to pick this patch if it helps with your builds; I'm going
> > to backport it to 2.2 once all platforms are happy.
>
> All builds are OK now, the commit was backported to 2.2 and the patch
> can be retrieved here:
>
>   http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff_plain;h=10c627ab
>
> Sorry for the mess :-/
>
> Willy
>


Re: Haproxy 2.2.3 source

2020-09-08 Thread Alex Evonosky
Correct.  This is ARM-based on my side as well.



Sent from my Pixel 3XL


On Tue, Sep 8, 2020, 5:47 PM Vincent Bernat  wrote:

>  ❦  8 September 2020 16:13 -04, Alex Evonosky:
>
> > I am compiling 2.2.3 and getting this undefined reference:
> >
> >
> > /haproxy-2.2.3/src/thread.c:212: undefined reference to `_Unwind_Find_FDE'
>
> I am getting the same issue, but only on armhf; other platforms build
> fine. On this platform, we only get:
>
>   w   DF *UND*    GLIBC_2.4   __gnu_Unwind_Find_exidx
> 000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
> d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
> 00016e1c gDF .text  0022  GCC_4.3.0   _Unwind_Backtrace
> 00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
> 00016dd4 gDF .text  0022  GCC_3.3 _Unwind_Resume_or_Rethrow
> d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
> 0001662c gDF .text  0036  GCC_3.5 _Unwind_VRS_Set
> 00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
> 000169d8 gDF .text  02ba  GCC_3.5 _Unwind_VRS_Pop
> 00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
> 000165cc gDF .text  0002  GCC_3.5 _Unwind_Complete
> 00017184 gDF .text  0012  GCC_3.0 _Unwind_GetLanguageSpecificData
> 000165dc gDF .text  0036  GCC_3.5 _Unwind_VRS_Get
> 000164f0 gDF .text  0004  GCC_3.3 _Unwind_GetCFA
> 00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException
>
> So, the older symbols are:
>
> 000165d0 gDF .text  000c  GCC_3.0 _Unwind_DeleteException
> d1f6 gDF .text  0002  GCC_3.0 _Unwind_GetTextRelBase
> 00016df8 gDF .text  0022  GCC_3.0 _Unwind_ForcedUnwind
> d1f0 gDF .text  0006  GCC_3.0 _Unwind_GetDataRelBase
> 00016db0 gDF .text  0022  GCC_3.0 _Unwind_Resume
> 00017178 gDF .text  000a  GCC_3.0 _Unwind_GetRegionStart
> 00017184 gDF .text  0012  GCC_3.0 _Unwind_GetLanguageSpecificData
> 00016d8c gDF .text  0022  GCC_3.0 _Unwind_RaiseException
>
> Moreover, the comment says _Unwind_Find_FDE doesn't take arguments, but the
> signature I have in glibc is:
>
> fde *
> _Unwind_Find_FDE (void *pc, struct dwarf_eh_bases *bases)
> --
> Don't sacrifice clarity for small gains in "efficiency".
> - The Elements of Programming Style (Kernighan & Plauger)
>
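
A minimal illustration of the weak-symbol pattern that avoids this kind
of link error (a sketch only, not necessarily the fix applied in the
commit referenced earlier in the thread): declaring the unwinder entry
point weak lets the binary link even on platforms whose libgcc does not
export the symbol, and the address can then be tested at run time.

  /* weak_fde.c -- sketch; void* stands in for glibc's private fde type */
  #include <stdio.h>

  struct dwarf_eh_bases;
  extern void *_Unwind_Find_FDE(void *pc, struct dwarf_eh_bases *bases)
      __attribute__((weak));

  int main(void)
  {
      /* An undefined weak reference resolves to NULL instead of
       * producing "undefined reference" at link time. */
      if (_Unwind_Find_FDE)
          printf("_Unwind_Find_FDE is available\n");
      else
          printf("_Unwind_Find_FDE is not provided on this platform\n");
      return 0;
  }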


Haproxy 2.2.3 source

2020-09-08 Thread Alex Evonosky
Hello Haproxy group-

I am compiling 2.2.3 and getting this undefined reference:


/haproxy-2.2.3/src/thread.c:212: undefined reference to `_Unwind_Find_FDE'


Is there a new lib that's required?
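
One quick way to check whether the toolchain's unwinder library exports
the symbol at all is objdump -T (the library path below is an assumption
for Debian armhf; adjust for your platform):

  $ objdump -T /lib/arm-linux-gnueabihf/libgcc_s.so.1 | grep _Unwind_Find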


Thank you!


Re: CORS support

2019-12-17 Thread Alex Evonosky
Thank you, Willy.  That did it!
Great work!



Sent from my Pixel 3XL


On Mon, Dec 16, 2019, 11:50 PM Willy Tarreau  wrote:

> Hello Alex,
>
> On Mon, Dec 16, 2019 at 01:22:42PM -0500, Alex Evonosky wrote:
> > Hello Haproxy group-
> >
> > Migrating from HAProxy 2.0 to 2.1, I noticed some directives changed:
> >
> > === 2.0.10 ===
> >
> > capture request header origin len 128
> > http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m end aiqwest.com }
> > http-response add-header Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
> >
> >
> > === 2.1 ===
> >
> > capture request header origin len 128
> > http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m end aiqwest.com }
> > rspadd add-header Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
> >
> >
> > I get:
> >
> > The 'rspadd' directive is not supported anymore since HAProxy 2.1. Use
> > 'http-response add-header' instead.
> >
> > When switching to 'http-response add-header' as suggested, I get an error:
> >
> > 'http-response add-header' expects exactly 2 arguments.
>
> You almost had it right; it's exactly the same as in your first rule:
> the first arg is the header name, the second one is the value:
>
>    http-response add-header Access-Control-Allow-Headers Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
>
> Please let us know if anything is not clear enough in the message or
> in the doc so that we can improve it.
>
> Willy
>


CORS support

2019-12-16 Thread Alex Evonosky
Hello Haproxy group-

Migrating from HAProxy 2.0 to 2.1, I noticed some directives changed:

=== 2.0.10 ===

capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m end aiqwest.com }
http-response add-header Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }


=== 2.1 ===

capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m end aiqwest.com }
rspadd add-header Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }


I get:

The 'rspadd' directive is not supported anymore since HAProxy 2.1. Use
'http-response add-header' instead.

When switching to 'http-response add-header' as suggested, I get an error:

'http-response add-header' expects exactly 2 arguments.


The main question is: how can I convert the rspadd statement to work with
the new 2.1?
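
For reference, the working conversion from Willy's reply above keeps
'http-response add-header' and simply drops the colon and the escaped
space after the header name:

capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m end aiqwest.com }
http-response add-header Access-Control-Allow-Headers Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }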


Thank you!


Re: [ANNOUNCE] haproxy-2.0.1

2019-06-27 Thread Alex Evonosky
After compiling the new 2.0.1, it seems the HTTP/2 issue *we were seeing* on
2.0 but not on 1.9.8 is fixed.

Thank you.

On Thu, Jun 27, 2019 at 7:19 AM Aleksandar Lazic  wrote:

> On 26.06.2019 at 19:28, Christopher Faulet wrote:
> > Hi,
> >
> > HAProxy 2.0.1 was released on 2019/06/26. It added 27 new commits
> > after version 2.0.0.
> >
> > This new version fixes several annoying bugs with various visible
> > effects. Among others, two major bugs have been fixed. The first one
> > is a regression on stick-tables: HAProxy was unable to start when a
> > stick-table was used in an "if/unless" ACL condition, with an error
> > claiming the stick-table name was missing. The second major bug is in
> > the H1 multiplexer: the area of a trash chunk could mistakenly be
> > released while an outgoing HTTP message was being formatted. It is a
> > pretty old bug, and it is strange we never spotted it before, but it
> > led to memory corruption and thus to a wide variety of bugs.
> >
> > Several bugs in the HTX were fixed. One of them concerned H2: when
> > cookie headers were grouped during the conversion of an H2 request
> > into an HTX message, the HTX message was not fully updated, and when
> > that happened, most of the time the connection hung. Another bug
> > concerned the way 1xx informational messages were emitted by HAProxy.
> > An EOM block was mistakenly added to these HTX messages. That was
> > totally valid on HAProxy 1.9, but in 2.0 these messages are part of
> > the response and must never have an EOM block. This unexpected error
> > was not correctly caught, blocking the connection. Now, when HAProxy
> > generates such transitional responses, it does not emit an EOM block,
> > and if an unexpected error happens during H1 output formatting, a
> > fatal error is triggered and the connection is closed.
> >
> > In the H1 multiplexer, parsing errors raised when an overly large
> > message was received were not correctly caught, blocking connections;
> > this was due to an optimization allowing zero-copy transfers. In the
> > H2 multiplexer, frame padding was mishandled in two ways, leading in
> > both cases to protocol errors.
> >
> > Olivier fixed a bug in the connection layer when the PROXY protocol
> > was used: the xprt handshake was not always present to send the PROXY
> > protocol header, leading to an infinite loop. He also fixed a bug in
> > the SSL code that could crash HAProxy: in the function
> > ssl_subscribe(), before doing anything, we must be sure to have an
> > xprt context. Finally, he fixed a bug in stream-interfaces: the flag
> > SI_FL_ERR was unconditionally set when an error was detected on the
> > connection or on the conn-stream, but it must only be set when the
> > stream-interface is connected or attempting a connection.
> >
> > A segfault was fixed in the leastconn LB algorithm, caused by an
> > unsafe test outside the LB lock. Thanks to Tim Duesterhus, HAProxy
> > now sets the "Vary" header in compressed responses. William fixed two
> > bugs in the master-worker: the first was a segfault when the master
> > switched to wait mode, because the thread and fdtab deinit functions
> > were called; the second was that the master CLI was unable to send
> > commands to several workers.
> >
> > Finally, as always, some other small bugs were fixed here and there.
> > Thanks to everyone who reported and/or fixed bugs, or just tested
> > this new major release. Of course, we encourage everyone to upgrade.
> > Several bugs considered fixed are a bit hard or a bit long to
> > reproduce, so we hope this release is better than the last one. But
> > please continue to report any issue you meet!
> >
> >
> > Please find the usual URLs below :
> >Site index   : http://www.haproxy.org/
> >Discourse: http://discourse.haproxy.org/
> >Slack channel: https://slack.haproxy.org/
> >Issue tracker: https://github.com/haproxy/haproxy/issues
> >Sources  : http://www.haproxy.org/download/2.0/src/
> >Git repository   : http://git.haproxy.org/git/haproxy-2.0.git/
> >Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git
> >Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG
> >Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
>
> TLS 1.3 Image ready: https://hub.docker.com/r/me2digital/haproxy20-centos
>
> ```
> HA-Proxy version 2.0.1 2019/06/26 - https://haproxy.org/
> Build options :
>   TARGET  = linux-glibc
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
> -fwrapv
> -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
> -Wno-missing-field-initializers -Wtype-limits
>   OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_PTHREAD_PSHARED=1 USE_REGPARM=1
> USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1
>
> Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE
> 

Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
I (personally) think this is a matter of preference and load, and may be
unique to each situation.  In my instance I have two sets of pods:
internal and external.


Internal is for CockroachDB, MariaDB, and Redis connections.

External is for Let's Encrypt SSL termination and front-facing Docker
containers exposed to the internet.


So any container can use the internal pods for database connections, and
the external pods for end users...



On Mon, May 20, 2019 at 11:54 AM Jeff Abrahamson  wrote:

> Ah, cool, thanks very much, that seems to go a long way to filling the
> holes in my knowledge.  (And thanks, Илья, too.)
>
> This leaves only a second piece of my question:  am I being reasonable
> running multiple services through one (pod of) haproxies and letting the
> haproxies (all with the same config) tease them apart based on host name
> and maybe part of path?
>
> Jeff
>
>
> On 20/05/2019 17:48, Alex Evonosky wrote:
>
> example:
>
> pod1:
>
> primary: 1.1.1.2
> secondary: 1.1.1.3
> virtual: 1.1.1.1
>
>
> pod2:
>
> primary: 1.1.1.5
> secondary: 1.1.1.6
> virtual: 1.1.1.4
>
>
> The mechanism to utilize the virtual IP is VRRP (apps like keepalived).
>
>
> Then on the DNS server, you can use A records for 1.1.1.1 and 1.1.1.4
>
>
> On Mon, May 20, 2019 at 11:37 AM Jeff Abrahamson  wrote:
>
>> Thanks, Alex.
>>
>> I'd understood that, but not the mechanism.  Each host has an A record.
>> Did I miss a DNS mapping type for virtual addresses?  Or do the two hosts
>> run a protocol between them and some other party?  (But if one of my
>> haproxies dies, what is the mechanism of notification?)
>>
>> Said differently, I'm a client and I want to send a packet to
>> service.example.com (a CNAME).  I do a DNS lookup, I get an IP address,
>> 1.2.3.4.  (Did the CNAME map only to 1.2.3.4?)  I establish an https
>> connection to 1.2.3.4.  Who/what on the network decides that that
>> connection terminates at service2.example.com and not at
>> service1.example.com?
>>
>> Does this mean that letsencrypt is incapable of issuing SSL certs because
>> my IP resolves to different hosts at different moments?
>>
>> Sorry if my questions are overly basic.  I'm just trying to get a grip on
>> what this means and how to do it.
>>
>> Jeff
>>
>>
>> On 20/05/2019 17:12, Alex Evonosky wrote:
>>
>> Jeff-
>>
>> VIP - Virtual IP.  This is a shared IP between nodes.  One node is
>> primary and the other is hot-standby.  If the heartbeat fails between the
>> two, then the secondary becomes primary.
>>
>> The end application/user only needs to know about the virtual IP.  So in
>> DNS, you can publish any number of these pods to distribute the load
>> among them.
>>
>>
>> And we run this setup in Apache Mesos with about 100 containers and 4
>> HAProxy pods.
>>
>>
>>
>>
>> On Mon, May 20, 2019 at 10:49 AM Jeff Abrahamson  wrote:
>>
>>> Thanks.  Have you tried that, bringing down an haproxy during some high
>>> load period and watching traffic to see how long it takes for traffic all
>>> to migrate to the remaining haproxy?  My fear (see below) is that that time
>>> is quite long and still exposes you to quite a lot of failed clients.  (It's
>>> better than losing one's sole haproxy, to be sure.)
>>>
>>> In any case, and more concretely, that raises a few additional questions
>>> for me, mostly due to my specialty not being networking.
>>>
>>> *1.  VIP addresses.*  I've not managed to fully understand how VIP
>>> addresses work.  Everything I've read either (1) seems to be using the term
>>> incorrectly, with a sort of short TTL DNS resolution and a manual
>>> fail-over, or (2) requires that the relevant servers act as routers (
>>> OSPF <https://en.wikipedia.org/wiki/Open_Shortest_Path_First>, etc.) if
>>> not outright playing link-level tricks.  On (1), we try to engineer our
>>> infra so that our troubles will be handled automatically or by machines
>>> before being handled by us.  I worry that (2) is a long rabbit hole, but
>>> I'd still like to understand what that rabbit hole is, either in case I'm
>>> wrong or so that I understand when it's the right time.
>>>
>>> *2.  RR DNS.  *People talk about RR DNS for availability, but I've seen
>>> no evidence that it's applicable beyond load balancing.  Indeed, RFC
>>> 1794 <https://tools.ietf.org/html/rfc1794> (1995) only talks about load
>>> balancing.  As long as the haproxy hosts are all up, clients pick an
>

Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
example:

pod1:

primary: 1.1.1.2
secondary: 1.1.1.3
virtual: 1.1.1.1


pod2:

primary: 1.1.1.5
secondary: 1.1.1.6
virtual: 1.1.1.4


The mechanism to utilize the virtual IP is VRRP (apps like keepalived).


Then on the DNS server, you can use A records for 1.1.1.1 and 1.1.1.4
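
A minimal keepalived sketch for pod1 (the interface name, router id, and
priorities are assumptions; the secondary runs the same file with state
BACKUP and a lower priority):

  # /etc/keepalived/keepalived.conf on the primary (1.1.1.2)
  vrrp_instance pod1 {
      state MASTER              # BACKUP on 1.1.1.3
      interface eth0            # assumed NIC name
      virtual_router_id 51      # must match on both nodes
      priority 150              # e.g. 100 on the secondary
      advert_int 1
      virtual_ipaddress {
          1.1.1.1               # the VIP published in DNS
      }
  }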


On Mon, May 20, 2019 at 11:37 AM Jeff Abrahamson  wrote:

> Thanks, Alex.
>
> I'd understood that, but not the mechanism.  Each host has an A record.
> Did I miss a DNS mapping type for virtual addresses?  Or do the two hosts
> run a protocol between them and some other party?  (But if one of my
> haproxies dies, what is the mechanism of notification?)
>
> Said differently, I'm a client and I want to send a packet to
> service.example.com (a CNAME).  I do a DNS lookup, I get an IP address,
> 1.2.3.4.  (Did the CNAME map only to 1.2.3.4?)  I establish an https
> connection to 1.2.3.4.  Who/what on the network decides that that
> connection terminates at service2.example.com and not at
> service1.example.com?
>
> Does this mean that letsencrypt is incapable of issuing SSL certs because
> my IP resolves to different hosts at different moments?
>
> Sorry if my questions are overly basic.  I'm just trying to get a grip on
> what this means and how to do it.
>
> Jeff
>
>
> On 20/05/2019 17:12, Alex Evonosky wrote:
>
> Jeff-
>
> VIP - Virtual IP.  This is a shared IP between nodes.  One node is primary
> and the other is hot-standby.  If the heartbeat fails between the two, then
> the secondary becomes primary.
>
> The end application/user only needs to know about the virtual IP.  So in
> DNS, you can publish any number of these pods to distribute the load among
> them.
>
>
> And we run this setup in Apache Mesos with about 100 containers and 4
> HAProxy pods.
>
>
>
>
> On Mon, May 20, 2019 at 10:49 AM Jeff Abrahamson  wrote:
>
>> Thanks.  Have you tried that, bringing down an haproxy during some high
>> load period and watching traffic to see how long it takes for traffic all
>> to migrate to the remaining haproxy?  My fear (see below) is that that time
>> is quite long and still exposes you to quite a lot of failed clients.  (It's
>> better than losing one's sole haproxy, to be sure.)
>>
>> In any case, and more concretely, that raises a few additional questions
>> for me, mostly due to my specialty not being networking.
>>
>> *1.  VIP addresses.*  I've not managed to fully understand how VIP
>> addresses work.  Everything I've read either (1) seems to be using the term
>> incorrectly, with a sort of short TTL DNS resolution and a manual
>> fail-over, or (2) requires that the relevant servers act as routers (OSPF
>> <https://en.wikipedia.org/wiki/Open_Shortest_Path_First>, etc.) if not
>> outright playing link-level tricks.  On (1), we try to engineer our infra
>> so that our troubles will be handled automatically or by machines before
>> being handled by us.  I worry that (2) is a long rabbit hole, but I'd still
>> like to understand what that rabbit hole is, either in case I'm wrong or so
>> that I understand when it's the right time.
>>
>> *2.  RR DNS.  *People talk about RR DNS for availability, but I've seen
>> no evidence that it's applicable beyond load balancing.  Indeed, RFC 1794
>> <https://tools.ietf.org/html/rfc1794> (1995) only talks about load
>> balancing.  As long as the haproxy hosts are all up, clients pick an
>> address at random (I think, I haven't found written evidence of that as a
>> client requirement.)  But if an haproxy goes down, every client has to time
>> out and try again independently, which doesn't make me happy.  It might
>> still be the best I can do.
>>
>> I'm very open to pointers or insights.  And I'm quite aware that the
>> relationship between availability and cost is super-linear.  My goal is to
>> engineer the best solutions we can with the constraints we have and to
>> understand why we do what we do.
>>
>> Anecdotally, I noticed a while back that Google and others, which used to
>> have DNS resolutions from one name to multiple IP's, now resolve to a
>> single IP.
>>
>> Jeff Abrahamson
>> http://p27.eu/jeff/
>> http://transport-nantes.com/
>>
>>
>> On 20/05/2019 15:04, Alex Evonosky wrote:
>>
>> You could make it a bit more agile and scale it:
>>
>> you can run them in "pods", such as two haproxy instances running
>> keepalived between them, and use the VIP as the DNS record, so if an
>> HAProxy instance were to die, the alternate HAProxy instance can take over.
>> Set more pods up and use DNS round robin.
>>
>>
>>
>

Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
Jeff-

VIP - Virtual IP.  This is a shared IP between nodes.  One node is primary
and the other is hot-standby.  If the heartbeat fails between the two, then
the secondary becomes primary.

The end application/user only needs to know about the virtual IP.  So in
DNS, you can publish any number of these pods to distribute the load among
them.


And we run this setup in Apache Mesos with about 100 containers and 4
HAProxy pods.
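
To see which node currently holds a VIP, list the addresses on each node;
the VIP appears only on the active one (assuming the interface is eth0):

  $ ip -br addr show dev eth0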




On Mon, May 20, 2019 at 10:49 AM Jeff Abrahamson  wrote:

> Thanks.  Have you tried that, bringing down an haproxy during some high
> load period and watching traffic to see how long it takes for traffic all
> to migrate to the remaining haproxy?  My fear (see below) is that that time
> is quite long and still exposes you to quite a lot of failed clients.  (It's
> better than losing one's sole haproxy, to be sure.)
>
> In any case, and more concretely, that raises a few additional questions
> for me, mostly due to my specialty not being networking.
>
> *1.  VIP addresses.*  I've not managed to fully understand how VIP
> addresses work.  Everything I've read either (1) seems to be using the term
> incorrectly, with a sort of short TTL DNS resolution and a manual
> fail-over, or (2) requires that the relevant servers act as routers (OSPF
> <https://en.wikipedia.org/wiki/Open_Shortest_Path_First>, etc.) if not
> outright playing link-level tricks.  On (1), we try to engineer our infra
> so that our troubles will be handled automatically or by machines before
> being handled by us.  I worry that (2) is a long rabbit hole, but I'd still
> like to understand what that rabbit hole is, either in case I'm wrong or so
> that I understand when it's the right time.
>
> *2.  RR DNS.  *People talk about RR DNS for availability, but I've seen
> no evidence that it's applicable beyond load balancing.  Indeed, RFC 1794
> <https://tools.ietf.org/html/rfc1794> (1995) only talks about load
> balancing.  As long as the haproxy hosts are all up, clients pick an
> address at random (I think, I haven't found written evidence of that as a
> client requirement.)  But if an haproxy goes down, every client has to time
> out and try again independently, which doesn't make me happy.  It might
> still be the best I can do.
>
> I'm very open to pointers or insights.  And I'm quite aware that the
> relationship between availability and cost is super-linear.  My goal is to
> engineer the best solutions we can with the constraints we have and to
> understand why we do what we do.
>
> Anecdotally, I noticed a while back that Google and others, which used to
> have DNS resolutions from one name to multiple IP's, now resolve to a
> single IP.
>
> Jeff Abrahamson
> http://p27.eu/jeff/
> http://transport-nantes.com/
>
>
> On 20/05/2019 15:04, Alex Evonosky wrote:
>
> You could make it a bit more agile and scale it:
>
> you can run them in "pods", such as two haproxy instances running
> keepalived between them, and use the VIP as the DNS record, so if an
> HAProxy instance were to die, the alternate HAProxy instance can take over.
> Set more pods up and use DNS round robin.
>
>
>
> On Mon, May 20, 2019 at 8:59 AM Jeff Abrahamson  wrote:
>
>> We set up an haproxy instance to front several rails servers.  It's
>> working well, so we're quickly wanting to use it for other services.
>>
>> Since the load on the haproxy host is low (even minuscule), we're
>> tempted to push everything through a single haproxy instance and to let
>> haproxy decide, based on the requested hostname, which backend should
>> receive each request.
>>
>> Is there any good wisdom here on how much to pile onto a single haproxy
>> instance or when to stop?
>>
>> --
>>
>> Jeff Abrahamson
>> http://p27.eu/jeff/
>> http://transport-nantes.com/
>>
>>
>>
>>
>> --
>
> Jeff Abrahamson
> +33 6 24 40 01 57
> +44 7920 594 255
> http://p27.eu/jeff/
> http://transport-nantes.com/
>
>


Re: haproxy architecture

2019-05-20 Thread Alex Evonosky
You could make it a bit more agile and scale it:

You can run them in "pods", such as two haproxy instances running
keepalived between them, and use the VIP as the DNS record, so if an
HAProxy instance were to die, the alternate HAProxy instance can take over.
Set more pods up and use DNS round robin.
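
A BIND-style zone sketch of that round robin, reusing the example VIPs
from elsewhere in this thread (the owner name and TTL are assumptions):

  service.example.com.  300  IN  A  1.1.1.1
  service.example.com.  300  IN  A  1.1.1.4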



On Mon, May 20, 2019 at 8:59 AM Jeff Abrahamson  wrote:

> We set up an haproxy instance to front several rails servers.  It's
> working well, so we're quickly wanting to use it for other services.
>
> Since the load on the haproxy host is low (even minuscule), we're
> tempted to push everything through a single haproxy instance and to let
> haproxy decide, based on the requested hostname, which backend should
> receive each request.
>
> Is there any good wisdom here on how much to pile onto a single haproxy
> instance or when to stop?
>
> --
>
> Jeff Abrahamson
> http://p27.eu/jeff/
> http://transport-nantes.com/
>
>
>
>
>
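
A sketch of the hostname-based dispatch Jeff describes (hostnames,
certificate path, and backend targets are assumptions):

  frontend fe_main
      bind :443 ssl crt /etc/haproxy/certs/
      acl host_app hdr(host) -i app.example.com
      acl host_api hdr(host) -i api.example.com
      use_backend be_app if host_app
      use_backend be_api if host_api
      default_backend be_app

  backend be_app
      server rails1 10.0.0.11:8080 check

  backend be_api
      server api1 10.0.0.21:8080 check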


Re: What to look out for when going from 1.6 to 1.8?

2018-07-16 Thread Alex Evonosky
Tim-

I can speak from a production point of view: we had HAProxy on the 1.6
branch inside Docker containers for Mesos load balancing, with pretty much
the same requirements as you describe.  After compiling HAProxy on the
1.8.x branch, the same config worked without issues.
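
One low-risk sanity check before switching binaries is to run the new
build in config-check mode against the existing config (paths are
assumptions):

  $ /path/to/haproxy-1.8/haproxy -c -f /etc/haproxy/haproxy.cfg
  Configuration file is valid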

-Alex


On Mon, Jul 16, 2018 at 9:39 AM, Tim Verhoeven 
wrote:

> Hello all,
>
> We have been running the 1.6 branch of HAProxy, without any issues, for a
> while now. And reading the updates around 1.8 here in the mailing list it
> looks like it's time to upgrade to this branch.
>
> So I was wondering if there are any things I need to look out for when
> doing this upgrade? We are not doing anything special with HAProxy (I
> think). We run it as a single process, we use SSL/TLS termination, some
> ACL's and a bunch of backends. We only use HTTP 1.1 and TCP connections.
>
> From what I've been able to gather, my current config will work just as
> well with 1.8. But some extra input from all the experts here is always
> appreciated.
>
> Thanks,
> Tim
>


Re: [ANNOUNCE] haproxy-1.8.0

2017-11-27 Thread Alex Evonosky
Congratulations!

On Mon, Nov 27, 2017 at 8:41 AM, Arnall  wrote:

> On 26/11/2017 at 19:57, Willy Tarreau wrote:
>
>> Hi all,
>>
>> After one year of intense development and almost one month of debugging,
>> polishing, and cross-review work trying to prevent our respective
>> coworkers from winning the first bug award, I'm pleased to announce that
>> haproxy 1.8.0 is now officially released!
>>
>
> Congratulations to everyone involved!
>
> HAProxy is truly a great product.
>
>
>