On Feb 11, 2021 at 03:11:09AM +0530, Sachin Shetty wrote:
Hi,
We have a lua block that connects to memcache when a request arrives
"""
function get_from_gds(host, port, key)
    local sock = core.tcp()
    sock:settimeout(20)
    local result = DOMAIN_NOT_FOUND
    local status, error = sock:connect(host, port)
    if not status then
        core.Alert(GDS_LOG_PREFIX
Hi,
As per the documentation (
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-http-request)
stats socket "set map" should handle adding a new key as well as updating
an existing key, but my tests show otherwise
echo "set map /somemap.txt abcKey abcValue" | socat stdio
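For what it's worth, the runtime API distinguishes inserting from updating: "add map" creates a new entry, while "set map" only changes a key that already exists, which would explain the behaviour above. A sketch, with the socket path assumed:

```shell
# Socket path is an assumption; use whatever your "stats socket" line declares.
echo "add map /somemap.txt abcKey abcValue" | socat stdio /var/run/haproxy.sock
echo "set map /somemap.txt abcKey newValue" | socat stdio /var/run/haproxy.sock
echo "show map /somemap.txt" | socat stdio /var/run/haproxy.sock
```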
Hi,
We are using maps extensively in our architecture to map host headers to
backends. The maps are seeded dynamically with a lua handler to an external
service as requests arrive, there are no pre-seeded values in the map, the
physical map file is empty
On haproxy reload at peak traffic, the
wrote:
> Hi Sachin,
>
> On Wed, Jan 02, 2019 at 07:33:03PM +0530, Sachin Shetty wrote:
>
> Indeed it's not
Hi Willy,
It seems the http-send-name-header directive is not sent with health-check
and I need it in the health-check as well :)
is there a way to make it work with health-check as well?
Thanks
Sachin
On Tue, Dec 18, 2018 at 5:18 PM Sachin Shetty wrote:
> Thank you, Willy. http-send-n
deprecated:
> https://tools.ietf.org/html/rfc6648
>
> Norman Branitsky
>
> On Dec 16, 2018, at 03:50, Willy Tarreau wrote:
>
> Hi Sachin,
>
> On Sat, Dec 15, 2018 at 10:32:21PM +0530, Sachin Shetty wrote:
Hi,
We have a tricky requirement to set a different header value in the request
based on which server in a backend is picked.
backend pod0
...
server server1 server1:6180 check
server server2 server2:6180 check
server server3 server3:6180 check
so when request is forwarded to
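The requirement above maps onto the http-send-name-header directive, which injects the name of the server finally selected into the forwarded request; a sketch (the header name is an assumption):

```
backend pod0
    http-send-name-header X-Target-Server
    server server1 server1:6180 check
    server server2 server2:6180 check
    server server3 server3:6180 check
```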
Fri, Oct 05, 2018 at 12:38:15PM +0530, Sachin Shetty wrote:
Hi,
I see this in the documentation:
Compression is disabled when:
* ...
* response header "Transfer-Encoding" contains "chunked" (Temporary
Workaround)
*
Is this still accurate?
I have tested a lot of responses from Server with compression enabled
in backend
and server
end
result = s
end
sock:close()
core.Alert("Returning from GDS:" .. key .. ":" .. result)
return result
end
but the receive still does not timeout in 3 seconds.
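In case it helps later readers, a minimal sketch of the pattern, assuming haproxy's built-in core.tcp() socket class; settimeout() applies to the operations issued after it, so it must be in effect before receive() as well as connect() (host, port, key and the "*l" line framing below are assumptions):

```lua
-- Sketch only, not the original handler.
local sock = core.tcp()
sock:settimeout(3)                       -- seconds; governs connect and receive below
local ok, err = sock:connect(host, port)
if ok then
    sock:send("get " .. key .. "\r\n")
    local line, rerr = sock:receive("*l") -- returns nil, "timeout" when it expires
    if line == nil then
        core.Alert("GDS receive failed: " .. tostring(rerr))
    end
end
sock:close()
```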
On Mon, Aug 13, 2018 at 2:19 AM, Cyril Bonté wrote:
> On 12/08/2018 at 18:21, Sachin Shetty wrote:
in getting
key: get apache.qaus16march2017:execution timeout
Aug 12 16:16:21 l1ratelimit01 haproxy_l1_webui[23965]: Received
Response:qaus16march2017:Error<
Aug 12 16:16:21 l1ratelimit01 haproxy_l1_webui[23965]: 127.0.0.1:39304
[12/Aug/2018:16:16:12.164] http_l1_webui apache_l1/
9026/-1/-1/-1/
Hi Cyril,
Any idea how I can deterministically set the readtimeout as well?
Thanks
Sachin
On Fri, Jul 27, 2018 at 1:23 PM, Sachin Shetty wrote:
> Thank you, Cyril, your patch fixed the connect issue.
>
> Read timeout still seems a bit weird though, at settimeout(1), readtimeou
Hi,
We are using an http-req lua action to dynamically set some app specific
metadata headers. The lua handler connects to an upstream memcache-like
service over tcp to fetch additional metadata.
Here is a simplified config:
function get_from_gds(txn)
local key = txn.sf:req_fhdr("host")
'host')
>
> Other point: I’m not sure that the split() function exists.
>
> Thierry
>
>
> > On 27 Jul 2018, at 14:38, Sachin Shetty wrote:
Hi,
We are doing about 10K requests/minute on a single haproxy server, we have
enough CPUs and memory. Right now each requests looks up a map for backend
info. It works well.
Now we need to build some expire logic around the map, e.g. ignore some
entries in the map after some time. I
in
the same source file.
Thanks
Sachin
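Since the map API itself has no TTL, one hypothetical approach is to store a timestamp next to each value and treat stale entries as misses in the Lua handler. Everything below (field separator, TTL, map path, function name) is an assumption, not the poster's code:

```lua
-- Hypothetical sketch: values stored in the map file as "backend|epoch-seconds".
local TTL = 600  -- seconds; assumed expiry window
local pool_map = Map.new("/opt/haproxy/conf/proxy.map", Map.str)

function lookup_with_expiry(txn)
    local entry = pool_map:lookup(txn.sf:req_fhdr("host"))
    if entry == nil then return nil end
    local value, ts = entry:match("^(.*)|(%d+)$")
    if ts == nil or core.now().sec - tonumber(ts) > TTL then
        return nil  -- stale: treat as a miss so the entry gets re-seeded
    end
    return value
end
```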
On Fri, Jul 27, 2018 at 5:18 AM, Cyril Bonté wrote:
> Hi,
>
> On 26/07/2018 at 19:54, Sachin Shetty wrote:
Hi,
We are using an http-req lua action to dynamically set some app specific
metadata headers. The lua handler connects to an upstream memcache-like
service over tcp to fetch additional metadata.
Functionally everything works ok, but I am seeing that socket.settimeout
has no effect. Irrespective
Hi,
We have been using haproxy in our production systems for a long time.
Recently we spotted a slowdown in downloads in SSL compared to plain
http.
We are able to reproduce this in a test setup which has no other traffic.
We have nbproc set according to the number of CPUs
Haproxy has two front
Thanks a lot Thierry, that was it, changing to http-request solved my
issue.
I am now able to leverage a fully dynamic backend :)
Thanks
Sachin
On 7/21/16, 10:52 PM, "thierry.fourn...@arpalert.org"
<thierry.fourn...@arpalert.org> wrote:
>On Tue, 19 Jul 2016 15:28:25 +0530
&
Hi,
Any suggestions?
Basically if I call http-request lua.choose_backend which seeds a map, will
the map values be available in the subsequent map look up in the same
request?
Thanks
Sachin
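A sketch of the intended flow, with names assumed: an http-request lua action runs before the use_backend rule is evaluated, so values the action seeds into the map should be visible to the lookup later in the same request (dynamic use_backend expressions need 1.6+):

```
frontend fe
    http-request lua.choose_backend
    use_backend %[req.fhdr(host),map(/opt/haproxy/conf/proxy.map,default_be)]
```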
From: Sachin Shetty <sshe...@egnyte.com>
Date: Tuesday, July 19, 2016 at 3:28 PM
To: &q
Hi,
We always had a unique requirement of picking a backend based on response
from a external http service. In the past we have got this working by
routing requests via a modified apache and caching the headers in maps for
further request, but now I am trying to simplify our topology and trying
Thank you, Cyril. I could not get it to work with 5.3 either; I am now trying
to use built in sockets with core.tcp().
On 7/19/16, 4:00 AM, "Cyril Bonté" <cyril.bo...@free.fr> wrote:
>Hi Sachin,
>
>On 18/07/2016 at 16:16, Sachin Shetty wrote:
>> (...)
>>
Hi,
I am trying to load a luasocket script which would make a rest call to a
upstream service to determine the backend
The script is as follows:
http = require "socket.http"
function choose_backend(txn, arg1)
core.log(core.info, "Getting Info:" .. arg1)
result,
Hi,
As documented, capture response header "X-Via" will only log the last value
of header X-Via. We have backends that might inject more than one value for
the header X-Via.
Is there a way to log all the values?
Thanks
Sachin
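One hedged workaround, assuming a version with variables (1.6+) and a custom log-format: res.fhdr takes an occurrence number, so specific occurrences can be pulled into variables and logged individually. This sketch assumes at most two X-Via values matter:

```
http-response set-var(res.via1) res.fhdr(X-Via,1)
http-response set-var(res.via2) res.fhdr(X-Via,2)
log-format "%ci:%cp [%t] %ft %b/%s %ST via1=%[var(res.via1)] via2=%[var(res.via2)]"
```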
To close this thread out: we found the issue to be in 1.6.4-20160426 patch
that I was using. The issue is fixed in 1.6.5.
Thanks Willy and Lukas.
Thanks
Sachin
On 5/13/16, 8:14 PM, "Willy Tarreau" <w...@1wt.eu> wrote:
>On Fri, May 13, 2016 at 07:32:36PM +0530, Sachin She
In 24 hours all servers had connections growing, we have reverted the
patch for now.
I have the show sess all output if you would like to see.
Thanks
Sachin
On 5/12/16, 10:08 PM, "Sachin Shetty" <sshe...@egnyte.com> wrote:
>Hi Lukas,
>
>Attached output.
>
>Thanks
1wt.eu> wrote:
>On Tue, May 10, 2016 at 11:10:14AM +0530, Sachin Shetty wrote:
>
>So probably you were reaching the processing limits f
We deployed the latest and we saw throughput still dropped around peak
hours a bit, then we switched to nbproc 4 which is holding up ok. Note that
4 CPUs were not sufficient earlier, so I believe the latest version is
scaling better.
Thanks Lukas and Willy.
On 4/29/16, 11:09 AM, "Willy Tarreau"
Thanks Lukas and Willy. I am in the process of getting 1.6.4-20160426
deployed in our QA, I will keep you guys posted.
On 4/29/16, 11:09 AM, "Willy Tarreau" wrote:
>Hi guys,
>
>On Tue, Apr 26, 2016 at 08:46:37AM +0200, Lukas Tribus wrote:
>> Hi Sachin,
>>
>>
>> there is another
:57 PM, "Lukas Tribus" <lu...@gmx.net> wrote:
>Hi,
>
>
>On 21.04.2016 at 08:11, Sachin Shetty wrote:
Hi,
any hints to further isolate this - we have deferred the problem by adding
all the cores we had, but I have a feeling that our request rate is not
that high (7K per minute at peak) and it will show up again as traffic
increases.
Thanks
Sachin
On 4/18/16, 12:22 PM, "Sachin Shetty&q
me know if you need more of strace output.
Thanks
Sachin
On 4/7/16, 5:51 PM, "Lukas Tribus" <lu...@gmx.net> wrote:
>Hi,
>
>On 05.04.2016 at 09:38, Sachin Shetty wrote:
>> Hi Lukas, Pavlos,
>>
>> Thanks for your response, more info as requested.
&
Hi,
We have multiple haproxy servers receiving traffic from our firewall, we
want to apply some rate limiting that takes into account counters from all
the haproxy servers.
I am testing this with 1.6.4 and I tried the peers feature, but I am not
able to get it to work. I understand that counter
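For reference, a minimal peers sketch (names and addresses are placeholders). Each haproxy pushes its local stick-table updates to the others; note that peers synchronize entries rather than summing counters across nodes, and the local peer name must match the process name (hostname or the -L option):

```
peers mypeers
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

backend rate
    stick-table type ip size 1m expire 10m store http_req_rate(10s) peers mypeers
```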
agree to both the points.
Thanks
Sachin
On 4/7/16, 11:24 PM, "Willy Tarreau" <w...@1wt.eu> wrote:
>On Thu, Apr 07, 2016 at 10:59:24PM +0530, Sachin Shetty wrote:
>> Hi Willy,
>>
>> Sorry for the confusion. I wrote to you much before in my
>>inves
it.
Thanks
Sachin
On 4/7/16, 6:31 PM, "Willy Tarreau" <w...@1wt.eu> wrote:
>Hi Sachin,
>
>On Thu, Apr 07, 2016 at 02:21:16PM +0200, Lukas Tribus wrote:
>> Hi,
>>
>> On 05.04.2016 at 09:38, Sachin Shetty wrote:
>> >Hi Lukas, Pavlos,
>> >
>
Hi Lukas, Pavlos,
Thanks for your response, more info as requested.
1. Attached conf with some obfuscation
2. Haproxy -vv
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2
Hi,
I am chasing some weird capacity issues in our setup.
Haproxy which also does SSL is forwarding request to various other servers
upstream. I am seeing a simple 100MB file download from our upstream
components starts to slow down from time to time, hitting as low as 1 MBps;
usually it is
Hi,
We have started using HTTP trailers in chunked requests. HTTP trailers
are pretty well defined in the spec but seem not widely used. We
have haproxy forwarding the trailers to Apache Tomcat and it is all
working fine, I just wanted to confirm from the group that it is working
by
...@luffy.cx wrote:
On 22 July 2015 17:22 +0530, Sachin Shetty sshe...@egnyte.com wrote:
We have started using HTTP trailers in chunked requests. HTTP trailers
are pretty well defined in the spec but seem not widely used.
Are they supported by browsers? Last time I checked
Thanks Willy. Yeah trailers are rarely used and I am having a tough time
making it work in Apache web server. Thanks for taking care of it in
Haproxy from the start. :)
On 7/22/15, 6:22 PM, Willy Tarreau w...@1wt.eu wrote:
Hi Sachin,
On Wed, Jul 22, 2015 at 05:22:00PM +0530, Sachin Shetty wrote
, Sachin Shetty sshe...@egnyte.com wrote:
Thanks Baptiste - Will http-request set-header X-track %[url] help me
track URL with query parameters as well?
On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:
On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com
wrote:
Hi,
I am trying
-request.
Baptiste
On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote:
Tried it, I don't see the table populating at all.
stick-table type string size 1M expire 10m store conn_cur
acl is_range hdr_sub(Range) bytes=
acl is_path_throttled path_beg /public-api/v1/fs-content
Hi,
I am trying to write some throttles that would limit concurrent connections
for Range requests + specific urls. For example I want to allow only 2
concurrent range requests downloading a file
/public-api/v1/fs-content-download
I have a working rule:
stick-table type string size 1M expire
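A fuller sketch of such a rule, with the tracking key and limit assumed; conn_cur decrements automatically when a tracked connection closes, and http-request track-sc0 assumes a version where that rule is available:

```
backend content
    stick-table type string len 160 size 1m expire 10m store conn_cur
    acl is_range hdr_sub(Range) bytes=
    acl is_dl path_beg /public-api/v1/fs-content-download
    # Track by path (assumption: one entry per URL); deny a 3rd concurrent one.
    http-request track-sc0 path if is_range is_dl
    http-request deny if is_range is_dl { sc0_conn_cur gt 2 }
```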
Thanks Baptiste - Will http-request set-header X-track %[url] help me
track URL with query parameters as well?
On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:
On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com wrote:
Hi,
I am trying to write some throttles that would limit
Hi,
I see that we can set compression type on a frontend or backend. Due to
some application level complication we want haproxy to not compress specific
request path for example /api and compress the rest as usual.
Any idea on how this can be done?
One way would be to route the requests through
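Another hedged possibility, since compression is negotiated through Accept-Encoding: strip that header for the paths that must not be compressed, so haproxy (and the backend) leave those responses identity-encoded. A sketch:

```
frontend fe
    acl no_comp path_beg /api
    http-request del-header Accept-Encoding if no_comp
```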
is_a_v-10 hdr(host),map(/opt/haproxy/current/conf/proxy.map) a_v-10
is there a way I could lookup once and use the values in multiple acls?
Unfortunately I cannot refer to an acl in another acl conditions which
would have worked for me.
Thanks
Sachin
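With a version that has variables (1.6+), the lookup can be done once and the result tested in as many acls as needed; the map path is copied from the acl above, the variable name is an assumption:

```
http-request set-var(req.pool) hdr(host),map(/opt/haproxy/current/conf/proxy.map)
acl is_a_v-10 var(req.pool) -m str a_v-10
acl is_b_v-10 var(req.pool) -m str b_v-10
```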
On 12/2/14 2:15 PM, Sachin Shetty sshe
Thanks Willy, I need to do more than just pick a backend. So you feel even
with a map of 10K keys, multiple lookups should be ok?
Thanks
Sachin
On 12/8/14 6:15 PM, Willy Tarreau w...@1wt.eu wrote:
Hi Sachin,
On Mon, Dec 08, 2014 at 06:04:35PM +0530, Sachin Shetty wrote:
Hi Willy,
I need
, 2014 at 04:19:54PM +0530, Sachin Shetty wrote:
Hi,
In our architecture, we have thousands of host names resolving to a
single
haproxy, we dynamically decide a sticky backend based on our own custom
sharding. To determine the shard info, we let the request flow in to a
default apache proxy
Thanks Cyril, but no luck, I still see no connection reuse. For every new
connection from the same client, haproxy makes a new connection to the
server and terminates it right after.
Lukas, as per the documentation, the 1.5 dev version does support server
side pooling.
Thanks Cyril, appreciate your help on this. I will take this up internally
on how we could workaround it.
Thanks again.
Thanks
Sachin
On 11/30/14 5:47 PM, Cyril Bonté cyril.bo...@free.fr wrote:
Hi again Sachin,
On 30/11/2014 13:01, Sachin Shetty wrote:
Thanks Cyril, but no luck, I still
Hi,
In our architecture, we have thousands of host names resolving to a single
haproxy, we dynamically decide a sticky backend based on our own custom
sharding. To determine the shard info, we let the request flow in to a
default apache proxy that processes the requests and also responds with
Hi,
We have SSL backends which are remote, so we want to use http-keep-alive to
pool connections to the SSL backends, however it does not seem to be
working:
backend qa
option http-keep-alive
timeout http-keep-alive 30s
server qa IP:443 maxconn 100 ssl verify none
I am
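For comparison: option http-keep-alive on its own keeps a server connection private to its client session; sharing idle server-side connections across clients is controlled by http-reuse, which assumes 1.6 or later. A sketch on that assumption:

```
backend qa
    option http-keep-alive
    http-reuse safe              # 1.6+; allows idle server connections to be shared
    timeout http-keep-alive 30s
    server qa IP:443 maxconn 100 ssl verify none
```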
, Sachin Shetty wrote:
Thanks Lukas.
Yes, I was hoping to workaround by setting a smaller maxqueue limit and
queue timeout.
So what other options do we have, I need to:
1. Send all requests for a host (mytest.mydomain.com) to one backend as
long as it can serve.
2. If the backend is swamped
goal is to never send a 503 as long as we have other nodes
available which is always the case in our pool.
Thanks
Sachin
On 8/31/13 1:17 PM, Willy Tarreau w...@1wt.eu wrote:
On Sat, Aug 31, 2013 at 01:03:34PM +0530, Sachin Shetty wrote:
Thanks Willy.
I am precisely using it for caching. I need
.
Thanks
Sachin
On 8/31/13 2:17 PM, Willy Tarreau w...@1wt.eu wrote:
On Sat, Aug 31, 2013 at 01:27:41PM +0530, Sachin Shetty wrote:
We did try consistent hashing, but I found better distribution without
it.
That's known and normal.
We don't add or remove servers often so we should be ok
Thanks Lukas.
Yes, I was hoping to workaround by setting a smaller maxqueue limit and
queue timeout.
So what other options do we have, I need to:
1. Send all requests for a host (mytest.mydomain.com) to one backend as
long as it can serve.
2. If the backend is swamped, it should go to any other
Hi,
We want to maintain stickiness to a backend server based on the host header,
so balance hdr(host) works pretty well for us; however, as soon as the backend
hits max connections, requests pile up in the queue and eventually time out
with a 503 and sQ in the logs. Is there a way to redispatch the requests
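One pattern seen for this kind of spillover (a sketch; the fallback backend is hypothetical) is to test free connection slots before committing to the sticky backend:

```
frontend fe
    acl pod0_free connslots(pod0) gt 0
    use_backend pod0 if pod0_free
    default_backend spillover    # any other pool with capacity
```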
Thanks Emeric and Scott, I will try this out.
Thanks
Sachin
On 7/12/13 12:27 AM, Emeric BRUN eb...@exceliance.fr wrote:
original message-
De: Sachin Shetty sshe...@egnyte.com
A: haproxy@formilux.org
Date: Thu, 11 Jul 2013 23:57:40 +0530
Hi,
We need to add a header to every request that is being routed via haproxy;
we were able to achieve this with a simple add-header instruction:
reqadd X-Haproxy-L1:\ true
However it seems haproxy only adds this header to the first request in a
keep alive connection stream and this header is
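The per-request alternative, assuming a version with http-request rules: unlike reqadd, which in tunnel mode only sees the first request of a connection, http-request set-header is applied to every request on a keep-alive connection:

```
http-request set-header X-Haproxy-L1 true
```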
Hi,
We use RewriteMap extensively in Apache to look up an external service on
the header host to determine which downstream pool we want to use:
Something like this in apache:
RewriteMap d2u prg:/www/bin/dash2under.pl
RewriteRule - ${d2u:%{HOST}}
Is there a way to do this in haproxy? i.e lookup
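The closest haproxy equivalent is a map file consulted by the map converter; a sketch with assumed paths:

```
# /etc/haproxy/pools.map holds one "hostname backend_name" pair per line
frontend fe
    use_backend %[req.hdr(host),lower,map(/etc/haproxy/pools.map,default_pool)]
```

Unlike Apache's RewriteMap prg:, there is no external-program hook; the map file can be updated through the runtime API, and fully dynamic lookups would need Lua.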
Hi,
We have four web servers in a single backend. Physically these four servers
are on two different machines. A new session is made sticky by hashing on
one of the headers.
Regular flow is ok, but when one of the webservers is down for an in-flight
session, the request should be
We switched to rsyslog and since then seeing a huge increase in the log
volume. Thanks for all the help!
Thanks
Sachin
-Original Message-
From: Sachin Shetty [mailto:sshe...@egnyte.com]
Sent: Saturday, October 15, 2011 11:35 AM
To: 'Willy Tarreau'
Cc: 'haproxy@formilux.org'
Subject: RE
Thanks Willy - I will try these and let you know.
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Saturday, October 15, 2011 11:32 AM
To: Sachin Shetty
Cc: haproxy@formilux.org
Subject: Re: syslogd dropping requests from haproxy
On Sat, Oct 15, 2011 at 01:35:46AM +0530
Hi,
We have a pretty heavily loaded haproxy server - more than one instance is
running on a single machine. I am seeing that not all requests are being
logged to syslogd - I am sure this is not related to the httpclose parameter
since the same conf file works fine on another machine where the load is
Found some more info: when haproxy is configured to send logs to a remote
host instead of the local syslogd, it works fine. Definitely something to do
with the local syslogd under heavy load.
Thanks
Sachin
From: Sachin Shetty [mailto:sshe...@egnyte.com]
Sent: Saturday, October 15, 2011 12:57 AM
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, September 20, 2011 10:31 AM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org; 'Amrit Jassal'
Subject: Re: Apache translates 500 to 502 from haproxy
Hi Sachin,
On Mon, Sep 19, 2011 at 01:47:28PM +0530, Sachin
Message-
From: Sachin Shetty [mailto:sshe...@egnyte.com]
Sent: Wednesday, June 15, 2011 5:23 PM
To: 'Willy Tarreau'
Cc: 'Cassidy, Bryan'; 'haproxy@formilux.org'
Subject: RE: Apache translates 500 to 502 from haproxy
tried with option http-server-close instead of option httpclose - no luck it
does
...@1wt.eu]
Sent: Wednesday, June 15, 2011 11:08 AM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
Subject: Re: Apache translates 500 to 502 from haproxy
On Tue, Jun 14, 2011 at 06:16:23PM +0530, Sachin Shetty wrote:
man
We already had option httpclose in our config
Does disabling httpclose also mean that haproxy will not even log subsequent
requests on the same connection?
Thanks
Sachin
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Wednesday, June 15, 2011 12:40 PM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
tried with option http-server-close instead of option httpclose - no luck, it
does not work. The only way I can get it to work is without either.
Thanks
Sachin
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Wednesday, June 15, 2011 12:40 PM
To: Sachin Shetty
Cc
Hey Willy,
tcpdump would be fine?
Thanks
Sachin
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, June 14, 2011 12:14 PM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
Subject: Re: Apache translates 500 to 502 from haproxy
On Tue, Jun 14, 2011
Yeah, I understand.
So what could we do? I am really stuck with this and not able to figure out
any workaround either.
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, June 14, 2011 4:17 PM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
Subject
?
Thanks
Sachin
-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, June 14, 2011 5:15 PM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
Subject: Re: Apache translates 500 to 502 from haproxy
On Tue, Jun 14, 2011 at 04:27:00PM +0530, Sachin Shetty wrote
Willy Tarreau w at 1wt.eu writes:
On Fri, Jun 10, 2011 at 04:41:08PM +0530, Manoj Kumar wrote:
Hi,
We are forwarding specific requests from apache to haproxy which
internally forwards them to a pool of CherryPy servers. We have seen that
certain requests end up in 500 in haproxy and
Sachin Shetty sshetty@... writes:
I see a similar thread here, no solution though
https://forums.rightscale.com//showthread.php?t=210
posting smaller files, but get the bad
proxy error when posting a bigger file like 1.5MB+ file
Thanks
Sachin
-Original Message-
From: Cassidy, Bryan [mailto:bcass...@winnipeg.ca]
Sent: Tuesday, June 14, 2011 12:19 AM
To: Sachin Shetty; haproxy@formilux.org
Subject: RE: Apache translates
-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Tuesday, June 14, 2011 11:03 AM
To: Sachin Shetty
Cc: 'Cassidy, Bryan'; haproxy@formilux.org
Subject: Re: Apache translates 500 to 502 from haproxy
On Tue, Jun 14, 2011 at 12:28:55AM +0530, Sachin Shetty wrote:
Hey Bryan,
I did check Cherrypy