Re: stdout logging makes syslog logging fail.. 1.9-dev10-6e0d8ae

2018-12-14 Thread Willy Tarreau
Hi Pieter,

On Sat, Dec 15, 2018 at 12:40:08AM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> stdout logging makes syslog logging fail.. regtest that reproduces the issue
> attached.
> 
> The attached test (a modification of /log/b0.vtc) fails just by adding a
> stdout logger: ***  h1    0.0 debug|[ALERT] 348/000831 (51048) :
> sendmsg()/writev() failed in logger #2: Socket operation on non-socket
> (errno=38), which apparently modifies the syslog behavior?
> 
> Tested with version 1.9-dev10-6e0d8ae, but I think it never worked since
> stdout logging was introduced.

I've been using both in my test configs, though only alternately, so I
may have missed something. I'll check, thank you.

Willy



Re: crash with regtest: /reg-tests/connection/h00001.vtc after commit f157384

2018-12-14 Thread Willy Tarreau
Hi Pieter,

On Sat, Dec 15, 2018 at 12:32:28AM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> Current 1.9-dev master ( 6e0d8ae ) crashes with regtest:
> /reg-tests/connection/h1.vtc stack below, it fails after commit f157384.
> 
> Can someone check? Thanks.

Thanks, I'll have a check. I don't understand how a server can be unset
here, but I see that we can have this case when it's declared as
dispatch, in which case the stats must not be incremented.

I'll fix it, thank you.

Willy



Re: Quick update on 1.9

2018-12-14 Thread Willy Tarreau
Hi Pieter,

On Sat, Dec 15, 2018 at 01:32:49AM +0100, PiBa-NL wrote:
> Hi Willy,
> 
> On 14-12-2018 at 22:32, Willy Tarreau wrote:
> > if we manage to get haproxy.org to work reasonably stable this weekend,
> > it will be a sign that we can release it.
> 
> There are still several known issues that should be addressed before
> 'release' imho.
> 
> - Compression corrupts data (Christopher is investigating):
> https://www.mail-archive.com/haproxy@formilux.org/msg32059.html

This one was fixed. He had to leave quickly last evening, so he
couldn't respond, but it was due to some of my changes to avoid
copies; I failed to grasp some corner cases of HTX.

> - Dispatch server crashes haproxy (i found it today):
> https://www.mail-archive.com/haproxy@formilux.org/msg32078.html

Ah thanks for this one, will check.

> - stdout logging makes syslog logging fail (I mentioned it before, but I
> thought let's 'officially' re-report it now):
> https://www.mail-archive.com/haproxy@formilux.org/msg32079.html

I missed this one, I can have a look as well.

> - As you mention, haproxy serving the haproxy.org website apparently crashed
> several times today when you tried a recent build.

Yep, and another bug last time. Some of these will ultimately result
in new VTC tests being created, but production is definitely more
creative than people when it comes to mixing features and corner cases.

> I think a week of
> running without a single crash would be a better indicator than a single
> weekend that a release could be imminent.

It should not change much. Currently, bugs show up within two hours at
most; this basically represents the interval during which ~100% of the
config's features are covered by regular traffic. Longer delays are only
useful to detect leaks, which are always possible in other conditions anyway.

> - Several of the '/checks/' regtests don't work. Might be a problem with
> varnishtest though, not sure.. But you already discovered that.

Yes, it's varnishtest that crashes in the libc; I suspect the regexes
used might be too heavy for some regex implementations.

> And that's just the things I am aware of at the moment.

That's it for me as well (your two issues are the extra ones I missed).

> I'm usually not 'scared' to run a -dev version on my production box for a
> while and try a few new experimental features that seem useful to me over a
> weekend, but I do need to have the idea that it will 'work' as well as the
> version I update from, and to me it just doesn't seem there yet. (I would
> really like the compression to be functional before I try again..)

I totally agree with you. As you know, a release doesn't mean it's
bug-free; it means that it's forbidden to bring any change that breaks
compatibility or adds a feature that is not strictly necessary,
thus the focus becomes 100% bug fixes. I will obviously never encourage
anyone to deploy a version when I don't run it myself. But at the same
time, I know that if it's good enough for me, many people will consider
it surely is good enough for their specific use case. It doesn't mean
they will deploy immediately; I generally recommend not to deploy
during the first weeks. But when you work on a project where you
have to choose a version, you need to know which one will be available
at your project's deadline, and having something that slips forever is
a big problem, as you can't be sure it will be available by then. This
is something I care about as well.

> So with still several known bugs to solve, imho it's not yet a good time to
> release it as a 'stable' version in just a few days.

Based on the nature of the changes, I don't expect to discover more
issues efficiently just by waiting longer; what it needs is more exposure.

> Or did I
> misunderstand the 'sign' to release; is it one of several signs that need
> to be checked? I think a -dev11, or perhaps an -RC if someone likes that
> term, would probably be more appropriate,

We can possibly do that, but nobody will deploy it during the next few
weeks anyway (Christmas and New Year vacation); however, some people might
want to play with it in their lab when there's less activity. And we've
been drifting a bit already.

> before distros start including
> the new release expecting stability while it actually brings a seemingly
> large potential of breaking some features that used to work.

Distros are used to not picking our first versions, so I'm not worried
about this. In addition, I already recommended that a few of them not replace
an existing version with any 1.9, since it will not be long-term maintained.

> Even
> current new commits are still introducing new breakage, while shortly before
> a release I would expect mostly small fixes to get committed. That 'new'
> features aren't 100% stable might not be a blocker, but existing
> features that used to work properly should imho not get released in a broken
> state..

I agree with this and 

Re: Quick update on 1.9

2018-12-14 Thread PiBa-NL

Hi Willy,

On 14-12-2018 at 22:32, Willy Tarreau wrote:

if we manage to get haproxy.org to work reasonably stable this weekend,
it will be a sign that we can release it.


There are still several known issues that should be addressed before 
'release' imho.


- Compression corrupts data (Christopher is investigating): 
https://www.mail-archive.com/haproxy@formilux.org/msg32059.html
- Dispatch server crashes haproxy (i found it today): 
https://www.mail-archive.com/haproxy@formilux.org/msg32078.html
- stdout logging makes syslog logging fail (I mentioned it before, but I 
thought let's 'officially' re-report it now): 
https://www.mail-archive.com/haproxy@formilux.org/msg32079.html
- As you mention, haproxy serving the haproxy.org website apparently 
crashed several times today when you tried a recent build. I think a 
week of running without a single crash would be a better indicator than 
a single weekend that a release could be imminent.
- Several of the '/checks/' regtests don't work. Might be a problem with 
varnishtest though, not sure.. But you already discovered that.

And that's just the things I am aware of at the moment.

I'm usually not 'scared' to run a -dev version on my production box for 
a while and try a few new experimental features that seem useful to me 
over a weekend, but I do need to have the idea that it will 'work' as 
well as the version I update from, and to me it just doesn't seem there 
yet. (I would really like the compression to be functional before I try 
again..)


So with still several known bugs to solve, imho it's not yet a good time 
to release it as a 'stable' version in just a few days. Or did I 
misunderstand the 'sign' to release; is it one of several signs that 
need to be checked? I think a -dev11, or perhaps an -RC if someone 
likes that term, would probably be more appropriate, before distros 
start including the new release expecting stability while it actually 
brings a seemingly large potential of breaking some features that 
used to work. Even current new commits are still introducing new 
breakage, while shortly before a release I would expect mostly small 
fixes to get committed. That 'new' features aren't 100% stable might 
not be a blocker, but existing features that used to work properly 
should imho not get released in a broken state..


My 2 cents.

Regards,
PiBa-NL (Pieter)




stdout logging makes syslog logging fail.. 1.9-dev10-6e0d8ae

2018-12-14 Thread PiBa-NL

Hi List, Willy,

stdout logging makes syslog logging fail.. a regtest that reproduces the 
issue is attached.


The attached test (a modification of /log/b0.vtc) fails just by adding 
a stdout logger: ***  h1    0.0 debug|[ALERT] 348/000831 (51048) : 
sendmsg()/writev() failed in logger #2: Socket operation on non-socket 
(errno=38), which apparently modifies the syslog behavior?


Tested with version 1.9-dev10-6e0d8ae, but I think it never worked since 
stdout logging was introduced.


Regards,

PiBa-NL (Pieter)

# commit d02286d
# BUG/MINOR: log: pin the front connection when front ip/ports are logged
#
# Mathias Weiersmueller reported an interesting issue with logs which Lukas
# diagnosed as dating back from commit 9b061e332 (1.5-dev9). When front
# connection information (ip, port) are logged in TCP mode and the log is
# emitted at the end of the connection (eg: because %B or any log tag
# requiring LW_BYTES is set), the log is emitted after the connection is
# closed, so the address and ports cannot be retrieved anymore.
#
# It could be argued that we'd make a special case of these to immediately
# retrieve the source and destination addresses from the connection, but it
# seems cleaner to simply pin the front connection, marking it "tracked" by
# adding the LW_XPRT flag to mention that we'll need some of these elements
# at the last moment. Only LW_FRTIP and LW_CLIP are affected. Note that after
# this change, LW_FRTIP could simply be removed as it's not used anywhere.
#
# Note that the problem doesn't happen when using %[src] or %[dst] since
# all sample expressions set LW_XPRT.

varnishtest "Wrong ip/port logging"
feature ignore_unknown_macro

server s1 {
rxreq
txresp
} -start

syslog Slg_1 -level notice {
recv
recv
recv info
expect ~ \"dip\":\"${h1_fe_1_addr}\",\"dport\":\"${h1_fe_1_port}.*\"ts\":\"[cC]D\",\"
} -start

haproxy h1 -conf {
global
log stdout format short daemon
log ${Slg_1_addr}:${Slg_1_port} local0

defaults
log global
timeout connect 3000
timeout client 1
timeout server  1

frontend fe1
bind "fd@${fe_1}"
mode tcp
log-format {\"dip\":\"%fi\",\"dport\":\"%fp\",\"c_ip\":\"%ci\",\"c_port\":\"%cp\",\"fe_name\":\"%ft\",\"be_name\":\"%b\",\"s_name\":\"%s\",\"ts\":\"%ts\",\"bytes_read\":\"%B\"}
default_backend be_app

backend be_app
server app1 ${s1_addr}:${s1_port}
} -start

client c1 -connect ${h1_fe_1_sock} {
txreq -url "/"
delay 0.02
} -run

syslog Slg_1 -wait



crash with regtest: /reg-tests/connection/h00001.vtc after commit f157384

2018-12-14 Thread PiBa-NL

Hi List, Willy,

Current 1.9-dev master ( 6e0d8ae ) crashes with regtest: 
/reg-tests/connection/h1.vtc stack below, it fails after commit f157384.


Can someone check? Thanks.

Regards,

PiBa-NL (Pieter)

Program terminated with signal 11, Segmentation fault.
#0  0x0057f34f in connect_server (s=0x802616500) at 
src/backend.c:1384

1384 HA_ATOMIC_ADD(&srv->counters.connect, 1);
(gdb) bt full
#0  0x0057f34f in connect_server (s=0x802616500) at 
src/backend.c:1384

    cli_conn = (struct connection *) 0x8026888c0
    srv_conn = (struct connection *) 0x802688a80
    old_conn = (struct connection *) 0x0
    srv_cs = (struct conn_stream *) 0x8027b8180
    srv = (struct server *) 0x0
    reuse = 0
    reuse_orphan = 0
    err = 0
    i = 5
#1  0x004a8acc in sess_update_stream_int (s=0x802616500) at 
src/stream.c:928

    conn_err = 8
    srv = (struct server *) 0x0
    si = (struct stream_interface *) 0x802616848
    req = (struct channel *) 0x802616510
#2  0x004a37c2 in process_stream (t=0x80265c320, 
context=0x802616500, state=257) at src/stream.c:2302

    srv = (struct server *) 0x0
    s = (struct stream *) 0x802616500
    sess = (struct session *) 0x8027be000
    rqf_last = 9469954
    rpf_last = 2147483648
    rq_prod_last = 7
    rq_cons_last = 0
    rp_cons_last = 7
    rp_prod_last = 0
    req_ana_back = 0
    req = (struct channel *) 0x802616510
    res = (struct channel *) 0x802616570
    si_f = (struct stream_interface *) 0x802616808
    si_b = (struct stream_interface *) 0x802616848
#3  0x005e9da7 in process_runnable_tasks () at src/task.c:432
    t = (struct task *) 0x80265c320
    state = 257
    ctx = (void *) 0x802616500
    process = (struct task *(*)(struct task *, void *, unsigned 
short)) 0x4a0480 

    t = (struct task *) 0x80265c320
    max_processed = 200
#4  0x00511592 in run_poll_loop () at src/haproxy.c:2620
    next = 0
    exp = 0
#5  0x0050dc00 in run_thread_poll_loop (data=0x802637080) at 
src/haproxy.c:2685

    start_lock = {lock = 0, info = {owner = 0, waiters = 0, 
last_location = {function = 0x0, file = 0x0, line = 0}}}

    ptif = (struct per_thread_init_fct *) 0x92ee30
    ptdf = (struct per_thread_deinit_fct *) 0x0
#6  0x0050a2b6 in main (argc=4, argv=0x7fffea48) at 
src/haproxy.c:3314

    tids = (unsigned int *) 0x802637080
    threads = (pthread_t *) 0x802637088
    i = 1
    old_sig = {__bits = 0x7fffe770}
    blocked_sig = {__bits = 0x7fffe780}
    err = 0
    retry = 200
    limit = {rlim_cur = 4042, rlim_max = 4042}
    errmsg = 0x7fffe950 ""
    pidfd = -1
Current language:  auto; currently minimal


haproxy -vv
HA-Proxy version 1.9-dev10-6e0d8ae 2018/12/14
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -DDEBUG_THREAD -DDEBUG_MEMORY -pipe -g -fstack-protector 
-fno-strict-aliasing -fno-strict-aliasing -Wdeclaration-after-statement 
-fwrapv -fno-strict-overflow -Wno-address-of-packed-member 
-Wno-null-dereference -Wno-unused-label -DFREEBSD_PORTS -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_ACCEPT4=1 USE_REGPARM=1 
USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
          h2 : mode=HTTP   side=FE
          h2 : mode=HTX    side=FE|BE
   <default> : mode=HTX    side=FE|BE
   <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
    [SPOE] spoe
    [COMP] compression
    [CACHE] cache
    [TRACE] trace


HTTP/2 to backend server fails health check when 'option httpchk' set

2018-12-14 Thread Nick Ramirez
This may be something very simple that I am missing. I am using the 
latest HAProxy Docker image, which is using HAProxy 1.9-dev10 
2018/12/08. It is using HTTP/2 to the backend web server (Caddy).


It fails its health check if I uncomment the "option httpchk" line:

backend webservers
  balance roundrobin
  #option httpchk
  server server1 web:443 check ssl verify none alpn h2


With that line commented out, it works.

The project is on Github: 
https://github.com/NickMRamirez/experiment-haproxy-http2


Am I doing something wrong? It also works if I remove "option 
http-use-htx" and "alpn h2" and uncomment "option httpchk".


Thanks,
Nick Ramirez
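One plausible explanation (a guess, not a confirmed diagnosis): the health check inherits the server's TLS settings, including "alpn h2", so the HTTP/1.1 request that "option httpchk" emits is sent over a connection negotiated for HTTP/2, which the server then rejects. If that is the cause, the "check-alpn" server keyword added in 1.9 should let the check negotiate its own protocol while regular traffic keeps h2; a hedged sketch:

```
backend webservers
  balance roundrobin
  option httpchk GET /
  # force HTTP/1.1 on the health-check connection only; traffic still uses h2
  server server1 web:443 check check-alpn http/1.1 ssl verify none alpn h2
```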

Quick update on 1.9

2018-12-14 Thread Willy Tarreau
Hi all,

things are getting way better. Today haproxy.org has been served by the
latest master all day, with all knobs turned on (HTX, H2, compression,
caching, server pools and connection reuse). Some of you very likely noticed
several outages. I'm sorry for this, but at some point, when it starts to
become impossible to trigger any bug, code has to go into production for
a better exposure and variety of issues.

It looks like the cause is always the same: the 15 core dumps indicate haproxy
died from a broken list in the HTX-specific cache code. The issue is rare;
the process kept working for up to two hours each time. For now I'm disabling
the cache to try again, as I want to stress HTX a bit further. If it's the only
issue left, it won't stop me from releasing; I'll just put a big yellow
and red sticker on it to warn about the shock hazard :-)

I still have a few minor items in my mbox to apply (some cosmetic updates
to the master-worker CLI, a few regtests, some option name adjustments and
some adjustments to the connection pools management). I still have a bit
of cleanup to do (rename some flags and callbacks that are confusing),
and if we manage to get haproxy.org to work reasonably stable this
weekend, it will be a sign that we can release it.

This week I discussed with a few people and came to the conclusion that
we should switch http-reuse on by default, but in safe mode, and server
pools as well. The reason these ones remained off by default till now
was mostly historical and due to the fact that 15 years ago it was
critical to close all the time and avoid keep-alive to save server
resources. Things have changed a lot nowadays, all servers support
keep-alive, some expect to get multiplexed connections, and I can't
imagine what users would think if they enable H2 to the server and see
it connect and disconnect all the time without multiplexing streams!

Given that at this point we're emitting a technical version which people
should be careful with at the beginning, it's probably the best moment
to switch reuse and pools on by default with very conservative levels.
This way all users who don't know what to tune will benefit from them,
and those who care about fine details will still be able to adjust the
behaviour using the various options.

OK enough talking, let's get back to the code, stay tuned!

Willy



[PATCH 1/1] REGTEST: Add a reg test for HTTP cookies.

2018-12-14 Thread flecaille
From: Frédéric Lécaille 

This script tests the "cookie <name> insert indirect" directive with
header checks on server and client side. Syslog messages are also
checked, especially the --II (invalid, insert) flags logging.

Signed-off-by: Frédéric Lécaille 
---
 reg-tests/http-cookies/h0.vtc | 58 +++
 1 file changed, 58 insertions(+)
 create mode 100644 reg-tests/http-cookies/h0.vtc

diff --git a/reg-tests/http-cookies/h0.vtc 
b/reg-tests/http-cookies/h0.vtc
new file mode 100644
index ..3ea16acc
--- /dev/null
+++ b/reg-tests/http-cookies/h0.vtc
@@ -0,0 +1,58 @@
+varnishtest "HTTP cookie basic test"
+feature ignore_unknown_macro
+
+# This script tests the "cookie <name> insert indirect" directive.
+# The client sends a wrong "SRVID=s2" cookie.
+# haproxy removes it.
+# The server replies with "SRVID=S1" after having checked that
+# no cookies were sent by haproxy.
+# haproxy replies "SRVID=server-one" to the client.
+# We log the HTTP request to a syslog server and check their "--II"
+# (invalid, insert) flags.
+
+syslog S1 -level notice {
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: Proxy (fe|be)1 started."
+recv info
+expect ~ "[^:\\[ ]\\[${h1_pid}\\]: .* fe1 be1/srv1 .* --II .* \"GET / HTTP/1\\.1\""
+} -start
+
+server s1 {
+   rxreq
+   expect req.http.cookie == <undef>
+txresp -hdr "Cookie: SRVID=S1"
+} -start
+
+haproxy h1 -conf {
+   global
+log ${S1_addr}:${S1_port} len 2048 local0 debug err
+
+defaults
+mode http
+   option httplog
+timeout client 1s
+timeout server 1s
+timeout connect 1s
+log global
+
+backend be1
+cookie SRVID insert indirect
+server srv1 ${s1_addr}:${s1_port} cookie server-one
+
+frontend fe1
+   option httplog
+bind "fd@${fe1}"
+use_backend be1
+} -start
+
+client c1 -connect ${h1_fe1_sock} {
+txreq -hdr "Cookie: SRVID=s2"
+rxresp
+expect resp.http.Set-Cookie ~ "^SRVID=server-one;.*"
+} -start
+
+
+client c1 -wait
+syslog S1 -wait
-- 
2.11.0




agent-check requires newline in response?

2018-12-14 Thread Nick Ramirez
In the documentation for agent-check 
(https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#agent-check) 
it says that the string returned by the agent may be optionally 
terminated by '\r' or '\n'. However, in my tests, it was mandatory to 
end the response with this. Should the word "optionally" be removed from 
the docs?


haproxy.cfg:

backend apiservers
   balance roundrobin
   server server1 192.168.50.3:80 check weight 100 agent-check agent-inter 5s agent-addr 192.168.50.3 agent-port 




My agent code in Golang, which updates the weight of servers if CPU Idle 
falls below a threshold:


package main

import (
   "fmt"
   "time"

   "github.com/firstrow/tcp_server"
   "github.com/mackerelio/go-osstat/cpu"
)

func main() {
   server := tcp_server.New(":")

   server.OnNewClient(func(c *tcp_server.Client) {
   fmt.Println("Client connected")
   cpuIdle, err := getIdleTime()

   if err != nil {
   fmt.Println(err)
   c.Close()
   return
   }

   if cpuIdle < 10 {
   // Set server weight to half
   c.Send("50%\n")
   } else {
   c.Send("100%\n")
   }

   c.Close()
   })

   server.Listen()
}

func getIdleTime() (float64, error) {
   before, err := cpu.Get()
   if err != nil {
   return 0, err
   }
   time.Sleep(time.Duration(1) * time.Second)
   after, err := cpu.Get()
   if err != nil {
   return 0, err
   }
   total := float64(after.Total - before.Total)
   cpuIdle := float64(after.Idle-before.Idle) / total * 100
   return cpuIdle, nil
}


Removing "\n" from the "c.Send()" lines causes the checks to hang.



Re: [PATCH] ssl: Fix compilation without deprecated OpenSSL 1.1 APIs

2018-12-14 Thread Willy Tarreau
On Fri, Dec 14, 2018 at 08:15:07AM -0800, Rosen Penev wrote:
> On Thu, Dec 13, 2018 at 8:41 PM Willy Tarreau  wrote:
> >
> > Hello,
> >
> > On Thu, Dec 13, 2018 at 02:20:06PM -0800, Rosen Penev wrote:
> > > Signed-off-by: Rosen Penev 
> >
> > Could you please provide a real commit message explaining what is the
> > problem you're trying to solve, how it manifests itself, and in what
> > condition it was tested as appropriate ?
> Will do so.

Thanks.

> >
> > In addition, do you know if it still works with libressl/boringssl ?
> This will break LibreSSL as they broke OPENSSL_VERSION_NUMBER.
> BoringSSL should be fine.

OK, do you know how not to break libressl ? Maybe adding an
"&& !defined(LIBRESSL)" or something like this at a few places ?

Thanks,
Willy



[PATCHv2] ssl: Fix compilation without deprecated OpenSSL 1.1 APIs

2018-12-14 Thread Rosen Penev
Removing deprecated APIs is an optional part of OpenWrt's build system to
save some space on embedded devices.

Also added compatibility for LibreSSL.

Signed-off-by: Rosen Penev 
---
 LibreSSL support is totally untested. I went based off the git repository
 src/ssl_sock.c | 35 ++-
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 5fd4f4e9..b08d8a68 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -39,6 +39,7 @@
 #include 
 #include 
 
+#include 
 #include 
 #include 
 #include 
@@ -60,6 +61,17 @@
 #include 
 #endif
 
+#ifndef OPENSSL_VERSION
+#define OPENSSL_VERSION        SSLEAY_VERSION
+#define OpenSSL_version(x)     SSLeay_version(x)
+#define OpenSSL_version_num    SSLeay
+#endif
+
+#if (OPENSSL_VERSION_NUMBER < 0x10100000L) || (LIBRESSL_VERSION_NUMBER < 0x20700000L)
+#define X509_getm_notBefore    X509_get_notBefore
+#define X509_getm_notAfter     X509_get_notAfter
+#endif
+
 #include 
 #include 
 
@@ -220,7 +232,7 @@ static struct {
.capture_cipherlist = 0,
 };
 
-#ifdef USE_THREAD
+#if defined(USE_THREAD) && ((OPENSSL_VERSION_NUMBER < 0x10100000L) || defined(LIBRESSL_VERSION_NUMBER))
 
 static HA_RWLOCK_T *ssl_rwlocks;
 
@@ -1735,8 +1747,8 @@ ssl_sock_do_create_cert(const char *servername, struct 
bind_conf *bind_conf, SSL
ASN1_INTEGER_set(X509_get_serialNumber(newcrt), 
HA_ATOMIC_ADD(&ssl_ctx_serial, 1));
 
/* Set duration for the certificate */
-   if (!X509_gmtime_adj(X509_get_notBefore(newcrt), (long)-60*60*24) ||
-   !X509_gmtime_adj(X509_get_notAfter(newcrt),(long)60*60*24*365))
+   if (!X509_gmtime_adj(X509_getm_notBefore(newcrt), (long)-60*60*24) ||
+   !X509_gmtime_adj(X509_getm_notAfter(newcrt),(long)60*60*24*365))
goto mkcert_error;
 
/* set public key in the certificate */
@@ -6420,7 +6432,7 @@ smp_fetch_ssl_x_notafter(const struct arg *args, struct 
sample *smp, const char
goto out;
 
smp_trash = get_trash_chunk();
-   if (ssl_sock_get_time(X509_get_notAfter(crt), smp_trash) <= 0)
+   if (ssl_sock_get_time(X509_getm_notAfter(crt), smp_trash) <= 0)
goto out;
 
smp->data.u.str = *smp_trash;
@@ -6520,7 +6532,7 @@ smp_fetch_ssl_x_notbefore(const struct arg *args, struct 
sample *smp, const char
goto out;
 
smp_trash = get_trash_chunk();
-   if (ssl_sock_get_time(X509_get_notBefore(crt), smp_trash) <= 0)
+   if (ssl_sock_get_time(X509_getm_notBefore(crt), smp_trash) <= 0)
goto out;
 
smp->data.u.str = *smp_trash;
@@ -9274,10 +9286,12 @@ static void __ssl_sock_init(void)
 #endif
 
xprt_register(XPRT_SSL, &ssl_sock);
+#if OPENSSL_VERSION_NUMBER < 0x10100000L
SSL_library_init();
+#endif
cm = SSL_COMP_get_compression_methods();
sk_SSL_COMP_zero(cm);
-#ifdef USE_THREAD
+#if defined(USE_THREAD) && ((OPENSSL_VERSION_NUMBER < 0x10100000L) || defined(LIBRESSL_VERSION_NUMBER))
ssl_locking_init();
 #endif
 #if (OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined OPENSSL_NO_TLSEXT && 
!defined OPENSSL_IS_BORINGSSL && !defined LIBRESSL_VERSION_NUMBER)
@@ -9320,8 +9334,8 @@ static void ssl_register_build_options()
 #else /* OPENSSL_IS_BORINGSSL */
OPENSSL_VERSION_TEXT
"\nRunning on OpenSSL version : %s%s",
-  SSLeay_version(SSLEAY_VERSION),
-  ((OPENSSL_VERSION_NUMBER ^ SSLeay()) >> 8) ? " (VERSIONS 
DIFFER!)" : "");
+  OpenSSL_version(OPENSSL_VERSION),
+  ((OPENSSL_VERSION_NUMBER ^ OpenSSL_version_num()) >> 8) ? " 
(VERSIONS DIFFER!)" : "");
 #endif
memprintf(&ptr, "%s\nOpenSSL library supports TLS extensions : "
 #if OPENSSL_VERSION_NUMBER < 0x00907000L
@@ -9400,12 +9414,15 @@ static void __ssl_sock_deinit(void)
}
 #endif
 
+#if (OPENSSL_VERSION_NUMBER < 0x10100000L) || defined(LIBRESSL_VERSION_NUMBER)
 ERR_remove_state(0);
 ERR_free_strings();
 
 EVP_cleanup();
+#endif
 
-#if OPENSSL_VERSION_NUMBER >= 0x00907000L
+#if ((OPENSSL_VERSION_NUMBER >= 0x00907000L) && (OPENSSL_VERSION_NUMBER < 0x10100000L)) \
+|| defined(LIBRESSL_VERSION_NUMBER)
 CRYPTO_cleanup_all_ex_data();
 #endif
 }
-- 
2.20.0




Re: [PATCH] ssl: Fix compilation without deprecated OpenSSL 1.1 APIs

2018-12-14 Thread Rosen Penev
On Thu, Dec 13, 2018 at 8:41 PM Willy Tarreau  wrote:
>
> Hello,
>
> On Thu, Dec 13, 2018 at 02:20:06PM -0800, Rosen Penev wrote:
> > Signed-off-by: Rosen Penev 
>
> Could you please provide a real commit message explaining what is the
> problem you're trying to solve, how it manifests itself, and in what
> condition it was tested as appropriate ?
Will do so.
>
> In addition, do you know if it still works with libressl/boringssl ?
This will break LibreSSL as they broke OPENSSL_VERSION_NUMBER.
BoringSSL should be fine.
> Some users rely on these forks and I know that we very easily break
> them once in a while when touching the API. I'm fine if you don't
> know since these forks are not our primary target, but it's good to
> know upfront what to expect (especially for those who might have to
> get back to this patch if some breakage is detected).
>
> Thanks,
> Willy