[PATCH] MINOR: lua: Add a flag to disable logging to stderr
By default, messages printed from LUA log functions are sent both to the
configured log target and additionally to stderr (in most cases). This
introduces tune.lua.also-log-to-stderr for disabling that second copy of
the message being sent to stderr.

Addresses https://github.com/haproxy/haproxy/issues/2316

This could be backported if wanted, since it preserves the behaviour that
existed prior to it.
---
 doc/configuration.txt |  6 ++
 doc/lua.txt           |  4 
 src/hlua.c            | 50 +--
 3 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 88a576795..771a569c0 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1195,6 +1195,7 @@ The following keywords are supported in the "global" section :
   - tune.lua.service-timeout
   - tune.lua.session-timeout
   - tune.lua.task-timeout
+  - tune.lua.also-log-to-stderr
   - tune.max-checks-per-thread
   - tune.maxaccept
   - tune.maxpollevents
@@ -3180,6 +3181,11 @@ tune.lua.task-timeout
   remain alive during of the lifetime of HAProxy. For example, a task used to
   check servers.
 
+tune.lua.also-log-to-stderr { on | off }
+  Enables ('on') or disables ('off') logging the output of lua log functions
+  to stderr in addition to the configured log target. To preserve historical
+  behaviour, this defaults to 'on'.
+
 tune.max-checks-per-thread
   Sets the number of active checks per thread above which a thread will
   actively try to search a less loaded thread to run the health check, or
diff --git a/doc/lua.txt b/doc/lua.txt
index 8d5561668..5e5712938 100644
--- a/doc/lua.txt
+++ b/doc/lua.txt
@@ -630,6 +630,10 @@
 It displays a log during the HAProxy startup:
 
   [alert] 285/083533 (14465) : Hello World !
 
+Note: By default, logs created from a LUA script are printed to the log target
+in your configuration and additionally to stderr, unless the flag
+tune.lua.also-log-to-stderr is set to 'off'.
+
 Default path and libraries
 --------------------------
diff --git a/src/hlua.c b/src/hlua.c
index c686f222a..261aee763 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -69,6 +69,12 @@
 #include
 #include
 
+/* Global LUA on/off flags */
+/* if on, LUA-originating logs are duplicated to stderr */
+#define HLUA_TUNE_ALSO_LOG_TO_STDERR (1<<0)
+
+static int hlua_tune_flags = HLUA_TUNE_ALSO_LOG_TO_STDERR;
+
 /* Lua uses longjmp to perform yield or throwing errors. This
  * macro is used only for identifying the function that can
  * not return because a longjmp is executed.
@@ -1366,8 +1372,9 @@ const char *hlua_show_current_location(const char *pfx)
 	return NULL;
 }
 
-/* This function is used to send logs. It try to send on screen (stderr)
- * and on the default syslog server.
+/* This function is used to send logs. It tries to send them to:
+ *  - the log target applicable in the current context, AND
+ *  - stderr if not in quiet mode or explicitly disabled
  */
 static inline void hlua_sendlog(struct proxy *px, int level, const char *msg)
 {
@@ -1394,6 +1401,9 @@ static inline void hlua_sendlog(struct proxy *px, int level, const char *msg)
 	send_log(px, level, "%s\n", trash.area);
 
 	if (!(global.mode & MODE_QUIET) || (global.mode & (MODE_VERBOSE | MODE_STARTING))) {
+		if (!(hlua_tune_flags & (HLUA_TUNE_ALSO_LOG_TO_STDERR)))
+			return;
+
 		if (level == LOG_DEBUG && !(global.mode & MODE_DEBUG))
 			return;
@@ -12433,6 +12443,23 @@
 	return 0;
 }
 
+static int hlua_also_log_to_stderr(char **args, int section_type, struct proxy *curpx,
+                                   const struct proxy *defpx, const char *file, int line,
+                                   char **err)
+{
+	if (too_many_args(1, args, err, NULL))
+		return -1;
+
+	if (strcmp(args[1], "on") == 0)
+		hlua_tune_flags |= HLUA_TUNE_ALSO_LOG_TO_STDERR;
+	else if (strcmp(args[1], "off") == 0)
+		hlua_tune_flags &= ~HLUA_TUNE_ALSO_LOG_TO_STDERR;
+	else {
+		memprintf(err, "'%s' expects either 'on' or 'off' but got '%s'.", args[0], args[1]);
+		return -1;
+	}
+
+	return 0;
+}
+
 /* This function is called by the main configuration key "lua-load". It loads and
  * execute an lua file during the parsing of the HAProxy configuration file. It is
@@ -12673,15 +12700,16 @@ static int hlua_config_prepend_path(char **args, int section_type, struct proxy
 
 /* configuration keywords declaration */
 static struct cfg_kw_list cfg_kws = {{ },{
-	{ CFG_GLOBAL, "lua-prepend-path",         hlua_config_prepend_path },
-	{ CFG_GLOBAL, "lua-load",                 hlua_load },
-	{ CFG_GLOBAL, "lua-load-per-thread",      hlua_load_per_thread },
-	{ CFG_GLOBAL, "tune.lua.session-timeout", hlua_session_timeout },
-
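For illustration, a minimal sketch of how the new keyword would be used once this patch is applied. The keyword name and its 'on' default come from the patch above; the file path and the rest of the configuration are hypothetical:

```
global
    # Lua logs normally go to the configured log target AND to stderr;
    # 'off' suppresses the extra stderr copy (the default 'on' keeps the
    # historical behaviour).
    tune.lua.also-log-to-stderr off
    lua-load /etc/haproxy/hello.lua

# /etc/haproxy/hello.lua could contain, for example:
#   core.log(core.info, "Hello World !")
```

With 'off', the message from core.log() would only reach the configured log target instead of also being duplicated on stderr.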
Re: [PATCH 0/4] Support server-side sending and forwarding of arbitrary PPv2 TLVs
Hi Alexander,

On Tue, Oct 17, 2023 at 05:38:45PM +, Stephan, Alexander wrote:
> Hi Willy,
>
> Do you know whether this can/will make it to the next release? It would
> be crucial for us to know.

I sincerely want it to, but the last annoyance around H2 etc derailed our
activities a bit and I'm still trying to catch up on plenty of things that
others depend on :-/ I'm still having your series in my todo-list and do
intend to review it. I also know that if tiny adaptations were needed you
don't mind, so we'd save a round trip anyway. I'll keep you updated, just
trying to do my best :-(

Willy
RE: [PATCH 0/4] Support server-side sending and forwarding of arbitrary PPv2 TLVs
Hi Willy,

Do you know whether this can/will make it to the next release? It would
be crucial for us to know.

Best,
Alexander

-----Original Message-----
From: Willy Tarreau
Sent: Thursday, October 5, 2023 2:42 PM
To: Stephan, Alexander
Cc: haproxy@formilux.org
Subject: Re: [PATCH 0/4] Support server-side sending and forwarding of arbitrary PPv2 TLVs

Hi Alexander,

On Thu, Oct 05, 2023 at 11:13:16AM +, Stephan, Alexander wrote:
> Hi Willy,
>
> Ah, what a pity. Anyway, I sent them again with you in CC. Does it look
> alright now?

Yep, received both ways this time, thank you!

Willy
How to limit client body/upload size?
Hi,

we are currently migrating servers and decided to drop NGINX in favour of
HAProxy. However, we had issues in the past where people would bomb us
with massive file uploads on some services. Is there an equivalent to
nginx's 'client_max_body_size' directive?

Thanks in advance,
Gilles Van Vlasselaer
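HAProxy has no single directive named like nginx's client_max_body_size, but a common pattern is to deny requests whose declared body size exceeds a limit. A hedged sketch (the frontend name, backend name, 10 MB limit, and 413 status are illustrative choices, not anything from this thread):

```
frontend fe_main
    bind :80
    # Reject requests whose advertised Content-Length exceeds ~10 MB.
    http-request deny deny_status 413 if { req.hdr_val(content-length) gt 10485760 }
    default_backend be_app
```

Note this only checks the advertised Content-Length header; chunked uploads carry no such header, so fully enforcing a limit on those requires buffering part of the body first (e.g. with 'http-request wait-for-body' and 'req.body_size' on versions that support them) or enforcement at the backend.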
Re: [ANNOUNCE] haproxy-2.9-dev7
Hi

On 10/11/23 16:05, Willy Tarreau wrote:
> No, I remember Tim raised this point a while ago basically saying "hey
> don't break the DNS I use it for my servers". For me simple server

For reference, you're probably thinking of this email:
https://www.mail-archive.com/haproxy@formilux.org/msg42026.html

Best regards
Tim Düsterhus
Re: Some filter discussion for the future
Hi Aleksandar,

That is a welcome follow-up to the tangent we went on in the announce
thread.

> As there was the discussion about the future of the SPOE filter, let me
> start a discussion about some possible filter options. [...] The
> question which I have is how difficult is it to add a http filter based
> on httpclient similar to SPOE or FCGI filter. Another option is to add
> some language specific filter like haproxy-rs-api shown in this comment
> https://github.com/khvzak/mlua/issues/320#issuecomment-1762027351 .

I personally find the latter much more appealing, if only because the
http client is "just" a much more restricted version of it. And since I
was the first (in that thread, certainly not everywhere) to complain
about the current language of choice for extending HAProxy (LUA), I have
to say again that a target "language" like WASM sounds like an ideal
selection:
- no need to pick/enforce/encourage a specific input language
- plenty of languages already compile to it, and likely to continue
  trending up since browsers support it

> The Idea to add the http filter is that there are so many http based
> tools out there and with that could HAProxy use such tools based on
> http.

That is true, but needing an HTTP API plus the loss in efficiency sounds
a bit painful, and very painful if the response isn't so easy to parse.
Think of cases where XML decoding becomes relevant, for example
SAML-related ones, which are still common for auth-related matters.

> Any opinion on that?

Well, on my end I certainly want to see this too. That said, Willy had a
few counterpoints of relevance in that other thread that are worth
addressing here:

> WASM on the other hand would provide more performance and compile-time
> checks but I fear that it could also bring new classes of issues such
> as higher memory usage, higher latencies, and would make it less
> convenient to deploy updates since these would require to be rebuilt.

I'd say first that there are interpreters (and JITs), so the rebuild is
not necessary. However, even if it was, I'm not sure the buildless
use-case has that much traction as long as the build doesn't have to
happen on the LBs directly. For example, I don't remember seeing
complaints that SPOEs essentially require a build step.

> Also we don't want to put too much of the application into the load
> balancer.

That's a much more fundamental question, however. This is your project,
not mine, so it's your call. But I have to emphasize that one reason I
use HAProxy is specifically because it's extremely configurable and
allows me to offload a lot of application-related logic directly at the
edge.

In a more impersonal way, that is also a direction many are interested
in in general. See things like
https://blog.cloudflare.com/cloudflare-snippets-alpha/ which are
essentially ACL-triggered filters in HAProxy terms.

One example case I see come up again and again is tee-ing a request, for
various reasons:
- for silent A/B testing between 2 backends (i.e. tee to 1 control and
  1 test)
- for routing the request that triggers a cached response both to the
  cache and to something interested in it for statistics; so users get a
  fast response and you still ALSO get to count those requests

And of course that has concerns related to memory used for buffering the
content if there are 2 targets and thus you can't purely stream through.
But in some places it has applicability, I think.

> But as I said I haven't had a look at the details so I don't know
> if we can yield like in Lua, implement our own bindings for internal
> functions, or limit the memory usage per request being processed.

That is much more difficult for me to answer, so to save you some time,
these seem to be the three main C-embeddable runtimes at the time of
writing:
- https://github.com/bytecodealliance/wasm-micro-runtime
- https://github.com/wasm3/wasm3
- https://github.com/wasmerio/wasmer

I had a look but didn't see a way to control memory or force yielding...
so it's not encouraging. But maybe I missed it.

> During the Lua integration we used to say that it would teach us
> new use cases that we're not aware of and that could ultimately
> end up as native actions/sample fetches/converters for some of them
> if they were popular.

I fully get that, and I think it wouldn't really change either way. For
example, if it didn't exist yet, a cache like the native one would be
something to show up for sure. But there are still quite a few more
site-specific things that have no chance of ever making it in mainline
(and that's a good thing) but also become more complex than "just a few
lines". As in, you COULD write it in 20 instructions, yes, but the
source being 150 would make it more readable even if in the end the
actual amount of executed instructions remains 20. And having a decent
developer experience while doing that is quite helpful, rather than
randomly tweaking things around until it doesn't 503.
Re: [PATCH] MINOR: support for http-response set-timeout
On Mon, Oct 16, 2023 at 05:09:13PM +0300, Vladimir Vdovin wrote:
> Added set-timeout action for http-response. Adapted reg-tests and
> documentation.

Now merged, thank you Vladimir!

Willy