The existing documentation [1] for setting up SSL with Fossil is pretty thin.  
“Just set up an SSL web proxy,” it says.  Yeah, “just.” :)

Other advice recently given on this list is to use stunnel, but that only 
helps when Fossil hosts the whole web site, so that Fossil itself is the only 
thing you need to proxy.  If Fossil is just a *part* of an existing web site — 
e.g. a typical open source project site, with docs, downloads, a forum, a 
blog, etc. all hosted outside Fossil as static files, a CMS, and so forth — 
you’re probably using a full web server or a proxy server of some kind, and 
need to enable SSL/TLS in *that* layer instead.

Another thing about the existing documentation is that it’s focused on 
self-signed certs, which require messing with the platform’s certificate 
trust store before Fossil will trust your certificate.  Here in 2016, we 
don’t need to mess with self-signed certs any more.

I’ve recently worked out solutions to all of the above problems, so I thought 
I’d document the process here.

The main thing that makes all of this relatively painless is Let’s Encrypt [2], 
which provides free globally-trusted TLS certificates for you, on demand.  It’s 
in a beta state right now, a fact which will cause us a bit of grief below, but 
even in its current state it’s still better than the bad old manual way, 
involving a lot of openssl command line gymnastics.

If you’re using Apache as your front-end proxy (e.g. with mod_proxy), you can 
use the letsencrypt-auto helper, which lets you skip a few of the steps below, 
as they’re taken care of by the helper program.

I prefer nginx for simple proxying, however, because it takes a lot less RAM 
than Apache, which directly affects how much I need to spend per month on 
hosting fees: VPS and cloud hosting providers charge for the RAM you use, so 
the less RAM you need, the lower your hosting fees.

Unfortunately, the automatic Let’s Encrypt certificate installer doesn’t yet 
know how to safely modify your nginx configuration files, so you have to do it 
by hand.  The rest of this article will assume you’re going down this 
semi-manual path.



STEP 1: Split your “server” configurations
------

Throughout this article, we’re going to assume that we’re setting up HTTPS as a 
simple alternative for most of the site, which will continue to be accessible 
over HTTP.  The only exception will be the Fossil sub-section, which we’ll 
force to HTTPS for security reasons.

You probably have your site’s nginx configuration written as a single server { 
} block at the moment, because it is only serving on port 80/HTTP.  The way 
nginx works, you need a separate block to serve the same content on port 
443/HTTPS.

Since most of the port 80 configuration will be the same as the port 443 
configuration, we’ll follow the DRY principle and extract the common bits to a 
separate file.  Thus, this:

   server {
       listen 80;
       server_name .example.com 1.2.3.4 "";
       location / {
           root /var/www/example.com;
           ...
       }
       ...
   }

…needs to become this:

   server {
       listen 80;
       include local/letsencrypt-challenge;
       include local/site;
   }

…plus a separate per-site file, which we’re calling local/site, stored 
relative to your nginx configuration directory:

       server_name .example.com 1.2.3.4 "";
       location / {
           root /var/www/example.com;
           ...
       }
       ...

We’ll write the second server { } block for the HTTPS case later, after we’ve 
generated the TLS keys.

If your nginx server has multiple name-based virtual hosts and you want this 
TLS cert to cover all of them, split those, too.  Each one needs to include the 
letsencrypt-challenge file, which we’ll create in the next step.
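
For example, with two name-based virtual hosts split this way, the port 80 
side might look something like this; the local/site-* file names here are 
just placeholders for whatever you call your per-site fragments:

    server {
        listen 80;
        include local/letsencrypt-challenge;
        include local/site-example;      # server_name .example.com ...
    }

    server {
        listen 80;
        include local/letsencrypt-challenge;
        include local/site-other;        # server_name .other-example.net ...
    }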



STEP 2: Prepare for the Let’s Encrypt challenge/response sequence
------

The server { } block above includes a file called local/letsencrypt-challenge, 
which contains this:

    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }

This simply declares that any URL beginning with the Let’s Encrypt ACME 
protocol challenge prefix is served from a directory somewhere under your OS’s 
web root.

The letsencrypt program writes temporary challenge/response files into that 
directory for remote access by the Let’s Encrypt ACME service, so it needs to 
be a) writeable by the user who runs the wrapper script in a later step; and 
b) readable by the web server, which on SELinux-protected machines often means 
it must live under a specific sub-tree of the filesystem, like /var/www.

(Because these challenge/response files are ephemeral, served only during the 
ACME negotiation, some tutorials you’ll find online will recommend using 
something in /tmp instead, but that doesn’t work under MAC systems like SELinux 
that restrict the web server to reading from only certain directories.  That’s 
why I recommend using /var/www/letsencrypt above.  If your host OS has a MAC 
system but it’s configured differently, you may need to adjust this path to a 
directory that the web server is allowed to read from.)
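
For example, on a typical Linux host with SELinux, creating that directory 
might look roughly like this; the ownership you pick depends on which user 
will run the wrapper script in the next step:

    # Create the challenge directory; make it writeable by the user who
    # will run the wrapper script, and world-readable so nginx can serve it.
    sudo mkdir -p /var/www/letsencrypt
    sudo chown $USER /var/www/letsencrypt
    sudo chmod 755 /var/www/letsencrypt

    # On SELinux systems, make sure it carries the standard web content label.
    sudo restorecon -Rv /var/www/letsencrypt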



STEP 3: Write the wrapper script
------ 

Save the following to a file called letsencrypt-wrapper:

    #!/bin/sh
    sudo ~/.local/share/letsencrypt/bin/letsencrypt certonly \
        --webroot-path=/var/www/letsencrypt -a webroot \
        --server https://acme-v01.api.letsencrypt.org/directory \
        -d example.com -d www.example.com

You will need to make certain changes:

1. If you chose a different webroot path in the nginx configuration fragment 
above, adjust the --webroot-path option’s value here to match.  The script 
won’t create this directory for you, because your OS probably needs it to have 
a particular set of permissions to allow nginx to read from it: readable by the 
nginx user, appropriate SELinux labels set for it, etc.

2. The path to the letsencrypt program may not be correct for your system.  It 
might be in /usr/bin if you installed it via your OS’s package manager, or it 
might be in /usr/local/bin if you installed it from source as root.  I 
installed it from source as a normal user, so it landed under my home 
directory, as you see here.  (See the install sketch after this list.)

3. Adjust the -d flag values to your site’s domain name, not forgetting all of 
the CNAMEs and other aliases you want this cert to be valid for.  At minimum, 
you probably need both the www. and “bare” versions of your domain name.  Maybe 
you have multiple domains aliased to the same server, such as the .com, .net, 
and .org variants; include all of those, with and without www. prefixes.  You 
can list up to 100 -d flags per certificate at this time, so don’t be shy about 
tossing all of the possibilities you might need in here.  There is a low limit 
on the number of times you can change the set of domains covered by your 
cert,[5] so it behooves you to think this through carefully up front.

4. If your system doesn’t use sudo, remove that, but remember to run the script 
as root in the next step.  (On success, the letsencrypt program will write some 
files in locations that only root typically has write access to.)
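
For reference, installing the client from source as a normal user (as 
described in item 2) amounted to roughly the following for me; the clone URL 
and the ~/.local/share/letsencrypt install location reflect the client as of 
this writing:

    # Fetch the Let's Encrypt client and let its bootstrap script install
    # its dependencies plus a private copy under ~/.local/share/letsencrypt.
    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt
    ./letsencrypt-auto --help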



STEP 4: Restart nginx and run the wrapper
------

We’re going to reload the nginx server configuration now, even though it is not 
complete, because part of the Let’s Encrypt process is a challenge/response 
sequence negotiated by the letsencrypt helper program over your site’s HTTP 
connection.  This sequence proves to the Let’s Encrypt ACME server that you 
control the web server for each domain you named with -d flags in the wrapper 
script.
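
On a systemd-based system, the configuration check and reload looks something 
like this; substitute your init system’s equivalent if it differs:

    # Check the new configuration for syntax errors, then reload nginx.
    sudo nginx -t
    sudo systemctl reload nginx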

With nginx running with the new configuration above, run the wrapper script.  
It should percolate for a few seconds, then report that it wrote your 
certificates out underneath /etc/letsencrypt.  (That’s why it runs the helper 
under sudo.)



STEP 5: Create the base SSL/TLS configuration
------

We extracted the site configuration from our server { } blocks above because we 
now need to create a second such block for each site that nginx serves.

    server {
        include local/ssl;
        include local/site;
    }

That is, instead of including the letsencrypt-challenge file — since we only 
serve the Let’s Encrypt challenge/response sequence via HTTP — we include the 
following SSL configuration file:

    listen 443 ssl;

    ssl on;
    ssl_stapling on;
    ssl_stapling_verify on;

    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

    ssl_dhparam /etc/ssl/private/dhparams.pem;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

    ssl_session_timeout 5m;

    ssl_certificate         /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

This file is purposely semi-generic, so you can include it into as many server 
{ } blocks as you need to.

It is possible to write a much shorter local/ssl file, but that will give you a 
C grade or worse in online SSL certificate tests.[3][4]  The stock nginx SSL 
configuration is vulnerable to a number of attacks, which we must fix by 
configuration.  The above configuration addresses several of these:

1. We enable OCSP stapling, which works around a weakness in the way 
certificate revocation was originally designed, a weakness that in practice 
results in revocations often being ignored.  If we ever have to revoke one of 
these certs, we don’t want browsers to keep accepting it merely because it 
hasn’t yet expired and they’re being lazy about revocation checks.  (Yes, 
browsers really do do that, trading security for speed!)

2. We restrict the default wide-open cipher suite set, disabling a bunch that 
are either known to be weak (e.g. RC4) or that allow downgrade attacks (e.g. 
export-grade encryption and null encryption).  This unfortunately means certain 
older browsers (e.g. IE6) won’t be able to connect to us, but scraping them off 
gives better security to the majority remaining.

You may want to pare the list down even further than this, depending on the 
client types you need to support.  One of the SSL testing services linked below 
tests a long list of browser profiles against your configuration and reports 
which ones will connect and which won’t.

3. The dhparams.pem file prevents the Logjam attack.  You need to generate it 
on the server, writing it to the path named by the ssl_dhparam directive 
above.  Run this as root, since /etc/ssl/private is normally not writeable by 
ordinary users:

    openssl dhparam -out /etc/ssl/private/dhparams.pem 2048

Beware, this will take a few minutes of CPU time.

4. We restrict the protocol suite to drop SSL 3.0 and older, as they’ve got 
known vulnerabilities.  (POODLE and such.)  

5. The *.pem files mentioned at the end will be named differently on your 
system, since you aren’t doing this for example.com, but for your own domain 
name(s).  If you gave several domains with -d in the letsencrypt-wrapper 
script, you still only get one set of certificate files, named after the first 
-d flag you gave.  That’s why the *.pem files are named in the generic 
local/ssl file, and not repeated in each per-site server { } block.

One thing I don’t do here, which would result in a marginally higher grade 
from the SSL testing services (e.g. “A” to “A+”), is enable HSTS: HTTP Strict 
Transport Security.  This lets a site declare that it must always be accessed 
via TLS.  Once a browser has accessed a server with HSTS enabled, it will 
forever more refuse to use HTTP with that site in order to avoid MITM attacks, 
even if you explicitly type “http://” in the browser’s location bar.

I don’t enable HSTS on my sites because it means that if I ever screw up and 
let a TLS cert lapse, all my users will be locked out of the site.  If you 
trust yourself to be more on-the-ball about keeping your cert up-to-date than 
I trust myself, you should enable HSTS, because it closes the redirect window: 
a MITM could sit on the HTTP port and, instead of passing the redirect through 
to the HTTPS side, keep talking to the victim over plain HTTP, thus extracting 
all of the victim’s secrets.
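
If you do decide you want HSTS, it’s a one-line addition to the local/ssl 
file.  A minimal sketch; the max-age value (here one year, in seconds) is up 
to you:

    # Tell browsers to require TLS for this site for the next year.
    add_header Strict-Transport-Security "max-age=31536000" always;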

Let’s Encrypt certs only last for 90 days, which means it’s an ongoing task to 
keep this up-to-date.  Until Let’s Encrypt learns about safe nginx 
configuration file modification, it’s a manual process.  (With Apache, 
letsencrypt-auto sets up a background auto-renewal process so you can’t forget 
to renew.  You could script this manually for nginx, if you wanted.)
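
One way to script it is a root cron job that reruns the wrapper script from 
step 3 and then reloads nginx.  A rough sketch, with the paths assumed; 
depending on your client version, you may also need a flag to make the 
renewal run non-interactively, so check its --help output first:

    # In root's crontab: attempt renewal at 03:00 on the 1st of every
    # second month, then reload nginx so it picks up the new cert files.
    0 3 1 */2 * /usr/local/sbin/letsencrypt-wrapper && /usr/sbin/nginx -s reload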

Another thing I don’t do here is make the HTTP site simply redirect to the 
HTTPS site.  That goes against the major premise of this article, however, 
which is that the Fossil repo service is only part of the overall web service 
from your site.  Presumably you, like me, have parts of the site you don’t need 
to protect, which are fine if served insecurely.

Serving everything via HTTPS also makes bootstrapping Let’s Encrypt impossible, 
because you can’t redirect all HTTP traffic to HTTPS until after you’ve got 
your cert set up.



STEP 6: Restart and test
------

At this point, restarting nginx again should bring up the same content on both 
ports 80 and 443.  You might want to run SSL tests against it at this 
point.[3][4]
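
Before reaching for the full test suites, a quick smoke test from the command 
line doesn’t hurt; adjust the domain to taste:

    # Both of these should answer with the same page headers; the second
    # one proves the certificate and chain are actually being served.
    curl -I  http://example.com/
    curl -I https://example.com/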



STEP 7: Proxy Fossil
------

Now that you have SSL up and running, you can add the following to the server { 
} blocks to proxy access to Fossil via TLS.

In the HTTP block, add this:

    location /code {
        rewrite ^ https://$host$request_uri permanent;
    }

That forces all accesses to http://example.com/code/* to be redirected to 
HTTPS, so that no sensitive data ever gets sent over HTTP.  You can change the 
/code part to anything else you prefer, like /repo.

Then in the HTTPS block, put this:

    location /code {
        include scgi_params;
        scgi_pass 127.0.0.1:48325;
        scgi_param SCRIPT_NAME "/code";
    }

If you changed /code above, make the same changes here.

Finally, start Fossil on the server with this script, which I call fslsrv:

    #!/bin/sh
    OLDPID=`pgrep fossil`
    if [ -n "$OLDPID" ]
    then
        echo "Killing old Fossil instance (PID $OLDPID) first..."
        kill $OLDPID
    fi
    fossil server --localhost --port 48325 --scgi \
        --baseurl https://example.com/code \
        /path/to/museum/repo.fossil > /dev/null &
    echo Fossil server running, PID $!.

Because it’s binding to a port > 1023 and it’s serving from a repo file owned 
by your user, it doesn’t have to (and shouldn’t!) run as root.  You might even 
want to run this under a purpose-created unprivileged user, and even put it in 
a chroot or jail.  Principle of least privilege and all that.
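
For example, to run it under a dedicated unprivileged account (the “fossil” 
user name here is just an assumption, not something the script above creates):

    # One-time setup: create a dedicated system account.
    sudo useradd --system --create-home fossil

    # Launch the server script as that user.  The repo file named in the
    # script must be readable and writeable by this account.
    sudo -u fossil /path/to/fslsrv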

You will need to change the --baseurl value to match your domain name.  If 
you changed /code in the nginx configuration above to something else, match 
the change here, too.  And of course, you need to give the path to your own 
Fossil repository file or directory.

The port number is arbitrary; feel free to choose a different value, and make 
the matching change in the nginx configuration above.  The important thing is 
that Fossil listens only on localhost, so that an outsider cannot access it 
except via the HTTPS front-end proxy service provided by nginx.



STEP 8: Re-sync
------

Now that you’re serving Fossil over HTTPS, you may need to change existing 
Fossil sync URLs.  The redirect we added above means you don’t absolutely need 
to do this, but it saves an HTTP round-trip if you do:

    $ cd ~/path/to/checkout
    $ fossil sync https://example.com/code



Above, I’ve assumed that you’re serving Fossil underneath /code within your 
site’s existing URL structure.  Another way to go would be to add a DNS 
record (a CNAME, or an A record pointing at your web server’s IP) for a 
subdomain like fossil.example.com.  In that case, you would write two 
separate server { } blocks for that subdomain, one redirecting 
unconditionally to the other.  You wouldn’t need the scgi_param SCRIPT_NAME 
"/code"; bit in the configuration in that case, either, since the top-level 
URL as seen by Fossil would be the same as the top-level URL as seen by nginx.
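
A minimal sketch of that subdomain arrangement, reusing the local/ssl and 
local/letsencrypt-challenge fragments from above; remember to add 
-d fossil.example.com to the wrapper script so the cert covers it:

    server {
        listen 80;
        server_name fossil.example.com;
        include local/letsencrypt-challenge;
        location / {
            rewrite ^ https://$host$request_uri permanent;
        }
    }

    server {
        include local/ssl;
        server_name fossil.example.com;
        location / {
            include scgi_params;
            scgi_pass 127.0.0.1:48325;
        }
    }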



[1]: http://fossil-scm.org/index.html/doc/trunk/www/ssl.wiki
[2]: https://letsencrypt.org/
[3]: https://www.ssllabs.com/ssltest/
[4]: https://www.htbridge.com/ssl/
[5]: https://community.letsencrypt.org/t/rate-limits-for-lets-encrypt/