I did this too, because I have:

1) a single external IP
2) multiple internal HTTP-based services
3) a port-based firewall policy

This whole issue would disappear, and would remove a single point of failure (relayd), if my firewall directed inbound traffic based on URLs (for port 80) and SNI (for port 443). Alas, I'm not there yet, and this setup has been working okay for me for a couple of years now.

My biggest complaint is that, if one internal site is down (e.g., 'www'), `relayd` will direct traffic for that site to another (e.g., 'blog'). That may seem innocent, but it could be a real problem if, say, the external IP is shared by multiple domains and traffic to 'www.domain1.com' gets mapped to 'www.domain2.com'. I've never really looked into solving the issue, as it's easier to just restart the downed service, but I would be thrilled if someone could explain how to fix it.

A redacted version of my /etc/relayd.conf follows. Note that I also have `httpd` running on this machine, listening for inbound port 80 requests, in order to 1) handle ACME requests and 2) redirect all other port 80 requests to port 443. Both configs follow.

PS: there are many ways to skin this cat. For example, you're running different httpd instances on different ports, versus my running them on different VMs. We also differ in how we handle port 80 and ACME requests. Still, hopefully seeing my config helps.

K.

==== /etc/httpd.conf ====

# This rule is used to redirect all (except ACME) external
# HTTP/80 requests to the HTTPS/443 equivalent.
#
# Note that `relayd` (/etc/relayd.conf) terminates *all*
# external HTTPS/443 requests and forwards them to
# the appropriate HTTP/80 server

server "default" {
        listen on egress port 80

        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }

        location "*" {
                block return 301 "https://$HTTP_HOST$REQUEST_URI"
        }
}

==== /etc/relayd.conf ====

# this is the ONLY machine that accepts inbound connections to
# <external-ip>:443
#
# it uses the certificate maintained by Let's Encrypt (acme-client)
#
# it forwards the request to the correct <destination-server>:80
# by inspecting the "Host" HTTP header field's value

# define some variables
www_example_net="10.0.1.X"
blog_example_net="10.0.1.Y"
git_example_net="10.0.1.Z"

# make a table out of each
table <www_example_net_table> { $www_example_net }
table <blog_example_net_table> { $blog_example_net }
table <git_example_net_table> { $git_example_net }

# http protocol-specific rules
http protocol "my_http_protocol_config" {
        match request header "Host" value "www.example.net" \
            forward to <www_example_net_table>
        match request header "Host" value "blog.example.net" \
            forward to <blog_example_net_table>
        match request header "Host" value "git.example.net" \
            forward to <git_example_net_table>

        match response header remove "Server"

        # is this supposed to be "request" or "response"?
        # (I see both in the forums!)
        match request header set "Connection" value "close"
        match response header set "Connection" value "close"

        tcp { nodelay, sack }
        tls keypair example.net
}

# handle inbound port 443 traffic
relay "my_relay" {
        listen on egress port 443 tls
        protocol my_http_protocol_config

        forward to <www_example_net_table> port 80 check tcp
        forward to <blog_example_net_table> port 80 check tcp
        forward to <git_example_net_table> port 80 check tcp
}
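
PPS: one idea I've been meaning to try for the down-site fallback problem, but have NOT tested: relayd's protocol filter rules also support `block`, `pass`, the `quick` keyword, and `return error` (see relayd.conf(5); availability depends on your OpenBSD version). Blocking by default and only passing Host values I recognize should at least turn an unknown Host into an error page rather than silently handing it to the wrong backend. A sketch of what the protocol block might look like (keywords are real, but whether this actually prevents the empty-table fallback when a health check fails, I can't say):

```
http protocol "my_http_protocol_config" {
        # serve an HTML error page to the client instead of
        # just dropping or misrouting the connection
        return error

        # default-deny: anything not explicitly passed is blocked
        block

        # pass only the Host values we actually serve;
        # "quick" stops rule evaluation on first match
        pass request quick header "Host" value "www.example.net" \
            forward to <www_example_net_table>
        pass request quick header "Host" value "blog.example.net" \
            forward to <blog_example_net_table>
        pass request quick header "Host" value "git.example.net" \
            forward to <git_example_net_table>

        # ... rest of the protocol block as before ...
}
```

If anyone has tried this (or knows why it would or wouldn't stop traffic for a downed table from spilling into the next `forward to`), I'd love to hear it.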