Hey Aleks,

Thank you for the reply, I should have included my version. I am currently
using HAProxy 1.8, but moving up a version is a possibility. I understand
what your example is doing, but I think it has the same issue as my original
example: I still need one unix socket per cluster. In my case, a "cluster"
is just a small collection of servers running the same service, but there
could be dozens of these clusters.

In our setup, an inbound request gets routed to the correct backend based
on the Host header (in my original example, this backend would be
"haproxy-test"). But then I effectively want to A/B test between the two
clusters that can serve this backend. I could put every server in this one
backend with the proper weights, but that isn't exactly what I am looking
for. Ideally, if one cluster goes out completely, 503s would be returned
for any requests that would normally be round-robined to that cluster. The
only way I could find to actually enforce weighting between two clusters
was to forward the request through a unix socket to a new "frontend"
(functionally this is the same as running one proxy instance per cluster).
This seems to work, but I am looking for a way to do it without opening a
large number of unix sockets.
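For what it's worth, the closest socket-free approximation I have sketched
so far is a plain percentage split using the rand() sample fetch. This is
untested, and the frontend name and bind port below are just placeholders:

frontend fe_main
    bind :8080
    mode http
    # rand(100) returns 0-99, so this sends roughly 90% of
    # requests to cluster1 and the rest to cluster2
    use_backend cluster1 if { rand(100) lt 90 }
    use_backend cluster2

backend cluster1
    balance roundrobin
    server s1 127.0.0.1:8081

backend cluster2
    balance roundrobin
    server s1 127.0.0.1:8082
    server s2 127.0.0.1:8083

If all servers in cluster1 go down, the ~90% of requests dealt to it would
still get 503s rather than spilling over to cluster2, which is the behavior
I want. The downside is that the 90/10 split is baked into the use_backend
rule rather than expressed as server weights, so it can't be adjusted
through the runtime API the way weights can.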


On Wed, Feb 6, 2019 at 11:43 AM Aleksandar Lazic <al-hapr...@none.at> wrote:

> Hi James.
>
> Am 06.02.2019 um 16:16 schrieb James Root:
> > Hi All,
> >
> > I am doing some research and have not really found a great way to
> > configure HAProxy to get the desired results. The problem I face is that
> > I have a service backed by two separate collections of servers. I would
> > like to split traffic between these two clusters (either using
> > percentages or weights). Normally, I would configure a single backend
> > and calculate my weights to get the desired effect. However, for my use
> > case, the list of servers can be updated dynamically through the API. To
> > maintain correct weighting, I would then have to re-calculate the
> > weights of every entry to maintain a correct balance.
> >
> > An alternative I found was to do the following in my configuration file:
> >
> > backend haproxy-test
> >     balance roundrobin
> >     server cluster1 unix@cluster1.sock weight 90
> >     server cluster2 unix@cluster2.sock weight 10
> >
> > listen cluster1
> >     bind unix@cluster1.sock
> >     balance roundrobin
> >     server s1 127.0.0.1:8081
> >
> > listen cluster2
> >     bind unix@cluster2.sock
> >     balance roundrobin
> >     server s1 127.0.0.1:8082
> >     server s2 127.0.0.1:8083
> >
> > This works, but is a bit nasty because it has to take another round trip
> > through the kernel. Ideally, there would be a way to accomplish this
> > without having to open unix sockets, but I couldn't find any examples or
> > any leads in the haproxy docs.
> >
> > I was wondering if anyone on this list had any ideas to accomplish this
> > without using extra unix sockets? Or an entirely different way to get
> > the same effect?
>
> Well, as we don't know which version of HAProxy you use, I will suggest a
> solution based on 1.9.
>
> I would try to use the set-priority-* feature
>
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-class
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-http-request%20set-priority-offset
>
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_class
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.2-prio_offset
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7.3.3-src
>
> I would try the following, untested but I think you get the idea.
>
> frontend clusters
>
>   bind unix@cluster1.sock
>   bind unix@cluster2.sock
>
>   # I'm not sure if src works with unix sockets like this,
>   # maybe you need to remove the unix@ part.
>   acl src-cl1 src unix@cluster1.sock
>   acl src-cl2 src unix@cluster2.sock
>
>   http-request set-priority-class int(-10) if src-cl1
>   http-request set-priority-class int(10) if src-cl2
>
> #  http-request set-priority-offset 5s if LOGO
>
>   use_backend cluster1 if { prio_class lt 5 }
>   use_backend cluster2 if { prio_class gt 5 }
>
>
> backend cluster1
>     server s1 127.0.0.1:8081
>
> backend cluster2
>     server s1 127.0.0.1:8082
>     server s2 127.0.0.1:8083
>
> There are a lot of fetch functions, so maybe you can find a better
> solution with another fetch function, as I don't know your application.
>
> https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#7
>
> In case you haven't seen it, there is also a management interface for
> haproxy.
>
> https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3
> https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
>
> > Thanks,
> > James Root
>
> Regards
> Aleks
>
