Warning: This post is slightly off-topic, since it is about setting Fossil up not only to serve via HTTPS, but also to do many more things that make it a complete development-team collaboration system.

I recently set Fossil up (running on Windows and Linux-hosted Docker) to support HTTPS, serving multiple domains with different views of multiple repositories, with TH1 server-side scripts submittable via wiki pages for custom reports, database extensions, etc. It took me about a week to configure all that and make the code changes in Fossil to support the functionality.

P.S.: I have been a long-time user of Fossil and SQLite, and spent much of my professional career designing high-performance dynamic languages (including Smalltalk and JavaScript implementations that always had embedded database engines, and ultimately incorporated SQLite many years ago).

1. The Fossil website's information on using nginx with Fossil is not "helpful": SCGI had numerous problems for me, and ultimately serving via a plain proxy configuration was straightforward and reliable.

2. HTTPS handling via nginx is straightforward to configure and manage, and does not rely on Fossil doing anything. Mostly you need to understand how certificates work, how HTTPS works, and have some familiarity with openssl.

3. nginx makes it pretty easy to use one or more front-end servers to serve multiple domains and proxy each to an appropriate back-end server IP+port combination, with HTTPS encryption.

4. Utilize VHDX files, OS X disk images (sparse bundles), or equivalent qemu/kvm img files to create virtual drives to serve repos from. Create symbolic links on the drives to provide file/folder visibility by domain.

5. Set Fossil up to run as a service. Enable Fossil to serve multiple domains from a root directory by modifying it to take a new command-line parameter that includes the domain in the local file-path lookup. That allows a single Fossil server instance to respond to different domains with different directories, each of which is really a symbolic link to a given view of your master repos, which you probably have stored on a high-speed network NAS running Samba. (I have a 10Gb fiber network running a variety of Windows, OS X, and Linux machines; 10Gb fiber is much cheaper than Cat6/Cat7 copper.) A sketch of this layout follows the list.

6. Rewrote the Fossil wikiedit code to address some memory leaks (not really important, given the way Fossil spawns), and changed the UI/behavior model to integrate WYSIWYG editing with server-side preview, making it a one-liner to switch between the built-in editor and CKEditor or TinyMCE. Extended it to make the mimetype clearly visible and controllable during editing, so that it is properly tracked in the manifest card record and used throughout the rendering and permission system. Added the TH1 mimetype to the list, and fixed some Markdown rendering bugs.

7. Modified the TH1 interpreter with Th_RenderEx and enabled sendText to be redirected to a blob, rather than only to stdout or CGI output, thus making it re-entrantly callable to interpret arbitrary text. Modified the basic parser to support an alternate syntax that is friendly to server-side HTML source, i.e., $:var and $::var expansion along with <th1><!-- code here --></th1> filtering, so that TH1 can round-trip losslessly through HTML editors.

8. Modified Fossil to have more git and tag-branch awareness, for more seamless handling of git interop, interchange, and markers; specifically, this enables nodejs/grunt tasks to automatically mirror changes between fossil repos and git repos on our servers.

9. Modified --repolist handling to allow a custom TH1 server-side page to be run, which can customize the list of repo indexes served from a server root path and incorporate the requested domain info.

10. Modified Fossil so that .fslckout is the default workspace db name on all platforms (which lets Windows bind an app to that suffix), so that I could bind the .fossil and .fslckout extensions to SqLiteBrowser for instant local editing of the repos.

11. Extended the ticket system to support a richer model of path/area/tags, owners, assigned-to, dates, and work estimates.

12. Extended Directory Opus (DOpus) with modifications to directly support Fossil status in file-system view columns (JScript'ed columns) and other Fossil commands directly in the desktop shell UI (menus, toolbar, etc.). DOpus is an incredible Windows desktop-shell replacement tool.

13. Created a link to fossil.exe called fr.exe to support a shorter fast-typing command interface (not important, but convenient).

14. Currently experimenting with tagxref table extensions for chmod-like access permissions on various "owned" tag table items (like wiki pages and technical reports).

15. Creating numerous server-side reports directly within the wiki editing system, for a rich team-management experience with Fossil (much like a Smalltalk or Lisp style team-collaborative server system).
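
As a rough illustration of the per-domain symlink layout described in points 4 and 5 (a sketch with hypothetical paths and domain names, not my actual configuration):

    # Master repos live on the NAS; each served domain gets a directory
    # of symlinks exposing only the repos that domain should see.
    mkdir -p /srv/fossil/example.com /srv/fossil/partner.example.org

    ln -s /mnt/nas/repos/projA.fossil /srv/fossil/example.com/projA.fossil
    ln -s /mnt/nas/repos/projB.fossil /srv/fossil/example.com/projB.fossil
    ln -s /mnt/nas/repos/projB.fossil /srv/fossil/partner.example.org/projB.fossil

    # A single (modified) fossil instance then serves the root directory,
    # resolving repos under the subdirectory matching the requested domain.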

While this is still an ongoing work in progress, my main purpose was to enable myself and others to quickly and easily spin up a repository for an area of work or life; integrate seamlessly with the world of git systems and users; share (LDAP-style) login permissions between repos; support multiple domains on a single server with different sets of shared repos cross-linked for related projects; and manage entire team collaboration purely within the Fossil model. Right now it takes about a minute to spin up a new repo, configure users, customize the skin, integrate reports, lock it down, and share it on one or more domains via HTTPS.

Half of the work has actually been reading through sparse Fossil information and filtering through the nginx user community's misinformation on how to configure nginx. Using Wireshark, Fiddler, and Postman for Chrome really helped. One critical thing I did was pull down a local copy of the Fossil repo db and turn on full-text indexing so that I could search through it; that gave me access to lots of hard-to-find information. The second part was setting up a Visual Studio nmake environment where I could easily use all the rich debugging and refactoring tools to search, cross-reference, and navigate through the code.
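
If you want the same searchable local copy, the gist is (a sketch; the clone URL is the public Fossil self-hosting repository, and the exact location of the search settings may vary by Fossil version):

    # Clone the Fossil project's own repository...
    fossil clone https://www.fossil-scm.org/ fossil.fossil

    # ...then browse it locally and turn on full-text search
    # from the Admin -> Search page of the web UI.
    fossil ui fossil.fossil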

I'll be happy to share that work and my learnings over the coming months. If it's valuable, then I will make it available via the http://fossil.st website.

Fossil is an absolutely amazing foundation for solo users, open-source projects, and startup or SOHO businesses. For our partners whose networks are absolutely locked down from the outside world, it is the ONLY way to exchange rich information on source, documentation, project tickets, and wiki+RSS, with git interchange.

lol, ok. Back to work on our phone product launch and materials needed for our partner Foxconn.

-- David Simmons (thelightphone.com)

Warren Young <w...@etr-usa.com>
Thursday, April 14, 2016 3:38 PM
On Apr 14, 2016, at 3:50 PM, Joerg Sonnenberger <jo...@britannica.bec.de> wrote:
On Thu, Apr 14, 2016 at 03:32:48PM -0600, Warren Young wrote:
STEP 1: Split your “server” configurations
I don't think this is necessary at all.

It is.  We want the proxy server to redirect all Fossil-over-HTTP requests to 
HTTPS, which means URLs may be interpreted differently depending on the 
protocol you use.

It is only the other URLs that are interpreted the same irrespective of the protocol.

There’s also no point to serving Let’s Encrypt’s challenge/response files over 
HTTPS.  They’re only needed during the ACME negotiation, which always proceeds 
over HTTP, else you couldn’t bootstrap the system.

STEP 2: Prepare for the Let’s Encrypt challenge/response sequence
This part can just statically go into the same server block, no need for
a separate file.

My nginx server serves four different sites, and I wanted the generated SSL key 
to be good for all of them.  If you don’t extract this to an include file, you 
have to repeat it in each site’s server { } block.

For each domain you give to letsencrypt via the -d flag, Let’s Encrypt’s ACME 
service repeats the challenge/response process.  If any domain given via -d 
returns a 404 error when the ACME service attempts to access 
/.well-known/acme-challenge/* on that domain name, the cert generation process 
will fail.

Thus, you must repeat this block in each server { } block for which you give a 
-d flag, else the whole process fails.  Let’s Encrypt won’t just skip over them 
and give you a key for a subset of the domains you requested.  It’s 
all-or-nothing.

STEP 3: Write the wrapper script
Personally, I would recommend just using the SSH FUSE binding and doing
the dance from a separate machine. No need to have letsencrypt and all
dependencies running on a server.

Fossil only does user authentication and permission management over HTTP.  If 
you serve it over plain SSH, all users effectively get admin-level privileges 
on the repository, which is only acceptable when serving a private repo among 
trusted colleagues.

There are games you can play with OpenSSH configuration files to get 
Fossil-over-HTTP-over-SSH and thus still let Fossil do user/permission 
management, but that’s about as complex to set up as this nginx proxying 
scheme.  It’s an ongoing complexity, too: for every user you add, you must 
duplicate the SSH configuration hackery to give them access to the Fossil repo. 
 Whereas with HTTPS proxying, it’s a one-shot effort.

The only advantage SSH has here over HTTPS is that it doesn’t require you to 
use a centralized PKI for keys; it’s perfectly acceptable to use decentralized 
PSKs with SSH.  Now that we have Let’s Encrypt, the disadvantages of a 
centralized PKI in the TLS case disappear.

Let’s Encrypt certs only last for 90 days, which means it’s an ongoing
task to keep this up-to-date.  Until Let’s Encrypt learns about safe
nginx configuration file modification, it’s a manual process.  (With
Apache, letsencrypt-auto sets up a background auto-renewal process so
you can’t forget to renew.  You could script this manually for nginx,
if you wanted.)
Given that it is *not* supposed to change the configuration on renewal
at all, that's a non-issue.

The nginx configuration doesn’t change, but the contents of the *.pem files *do* 
change on each renewal.  If you don’t re-generate those files and reload the 
nginx configuration every 90 days at most, it continues to serve those 
now-expired certs until you do.  With HSTS enabled, that means clients that 
obey the HSTS demand will stop talking to your server, because they were told 
to only accept HTTPS, and they refuse to talk to an HTTPS server with an 
expired cert.

Something I forgot to mention in the article is that we created the 
letsencrypt-wrapper script, instead of just giving the command interactively, 
because the renewal process is basically “re-run the wrapper and restart 
nginx”.  So, you could cron that process to run every 80 days or so, avoiding 
the risk of locking your users out by forgetting to renew the cert.
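
A sketch of what that cron entry might look like (the schedule and script path are assumptions; cron cannot express “every 80 days” directly, so this fires on the first of every other month, comfortably inside the 90-day window):

    # min hour dom mon dow  command
    0 3 1 */2 * /usr/local/sbin/letsencrypt-wrapper && systemctl reload nginx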

I’m still wary of doing that, since I’ve had server upgrades empty the crontab 
files so that all the scheduled jobs don’t run any more.  You only discover 
this after you get the tech support call.
Joerg Sonnenberger <jo...@britannica.bec.de>
Thursday, April 14, 2016 2:50 PM
On Thu, Apr 14, 2016 at 03:32:48PM -0600, Warren Young wrote:
STEP 1: Split your “server” configurations

I don't think this is necessary at all.

STEP 2: Prepare for the Let’s Encrypt challenge/response sequence

This part can just statically go into the same server block, no need for
a separate file.

STEP 3: Write the wrapper script

Personally, I would recommend just using the SSH FUSE binding and doing
the dance from a separate machine. No need to have letsencrypt and all
dependencies running on a server.

STEP 5: Create the base SSL/TLS configuration
------

We extracted the site configuration from our server { } blocks above because we 
now need to create a second such block for each site that nginx serves.

     server {
         include local/ssl;
         include local/site;
     }

That is, instead of including the letsencrypt-challenge file — since we only 
serve the Let’s Encrypt challenge/response sequence via HTTP — we include the 
following SSL configuration file:

     listen 443 ssl;

     ssl on;

No need for "ssl on" in combination with the ssl parameter on the listen
line; dropping it means all the config can be shared with plain HTTP.
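
In other words (a minimal sketch of Joerg's suggestion, not from the article itself), dropping "ssl on" lets a single server block carry both protocols:

     server {
         listen 80;
         listen 443 ssl;
         include local/site;
     }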

Let’s Encrypt certs only last for 90 days, which means it’s an ongoing
task to keep this up-to-date.  Until Let’s Encrypt learns about safe
nginx configuration file modification, it’s a manual process.  (With
Apache, letsencrypt-auto sets up a background auto-renewal process so
you can’t forget to renew.  You could script this manually for nginx,
if you wanted.)

Given that it is *not* supposed to change the configuration on renewal
at all, that's a non-issue.

Joerg
Warren Young <w...@etr-usa.com>
Thursday, April 14, 2016 2:32 PM
The existing documentation [1] for setting up SSL with Fossil is pretty thin. “Just set up an SSL web proxy,” it says. Yeah, “just.” :)

Other advice recently given on this list is to use stunnel, but that’s only of use when Fossil hosts the whole web site, so that you only need to proxy Fossil. If Fossil is just a *part* of an existing web site — e.g. a typical open source project site, with docs, downloads, a forum, a blog, etc. all hosted outside Fossil as static files, a CMS, and the like — you’re probably using a full web server or a proxy server of some kind, and need to enable SSL/TLS in *that* layer instead.

Another thing about the existing documentation is that it’s focused on self-signed certs, which require messing with the platform’s certificate trust store before Fossil will trust your certificate. Here in 2016, we don’t need to mess with self-signed certs any more.

I’ve recently worked out solutions to all of the above problems, so I thought I’d document the process here.

The main thing that makes all of this relatively painless is Let’s Encrypt [2], which provides free globally-trusted TLS certificates for you, on demand. It’s in a beta state right now, a fact which will cause us a bit of grief below, but even in its current state it’s still better than the bad old manual way, involving a lot of openssl command line gymnastics.

If you’re using Apache as your front end proxy (e.g. mod_proxy) you can use the letsencrypt-auto helper, which will let you skip a few of the steps below as they’re taken care of by the helper program.

I prefer nginx for simple proxying, however, because it takes a lot less RAM than Apache, which directly affects how much I need to spend per month on hosting fees: VPS and cloud hosting providers charge for the RAM you use, so the less RAM you need, the lower your hosting fees.

Unfortunately, the automatic Let’s Encrypt certificate installer doesn’t yet know how to safely modify your nginx configuration files, so you have to do it by hand. The rest of this article will assume you’re going down this semi-manual path.



STEP 1: Split your “server” configurations
------

Throughout this article, we’re going to assume that we’re setting up HTTPS as a simple alternative for most of the site, which will continue to be accessible over HTTP. The only exception will be the Fossil sub-section, which we’ll force to HTTPS for security reasons.

You probably have your site’s nginx configuration written as a single server { } block at the moment, because it is only serving on port 80/HTTP. The way nginx works, you need a separate block to serve the same content on port 443/HTTPS.

Since most of the port 80 configuration will be the same as the port 443 configuration, we’ll follow the DRY principle and extract the common bits to a separate file. Thus, this:

    server {
        listen 80;
        server_name .example.com 1.2.3.4 "";
        location / {
            root /var/www/example.com;
            ...
        }
        ...
    }

…needs to become this:

    server {
        listen 80;
        include local/letsencrypt-challenge;
        include local/site;
    }

…plus a separate per-site file, which we’re calling local/site stored relative to your nginx configuration directory:

    server_name .example.com 1.2.3.4 "";
    location / {
        root /var/www/example.com;
        ...
    }
    ...

We’ll write the second server { } block for the HTTPS case later, after we’ve generated the TLS keys.

If your nginx server has multiple name-based virtual hosts and you want this TLS cert to cover all of them, split those, too. Each one needs to include the letsencrypt-challenge file, which we’ll create in the next step.



STEP 2: Prepare for the Let’s Encrypt challenge/response sequence
------

The server { } block above includes a file called local/letsencrypt-challenge, which contains this:

    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /var/www/letsencrypt;
    }

This simply declares that any URL beginning with the Let’s Encrypt ACME protocol challenge prefix is served from a directory somewhere under your OS’s web root.

The letsencrypt program writes temporary challenge/response files in that directory for remote access by the Let’s Encrypt ACME service, so it needs to be a) writeable by whoever runs the wrapper script in a later step; and b) readable by the web server, which on SELinux-protected machines often means a specific sub-tree of the filesystem, like /var/www.

(Because these challenge/response files are ephemeral, served only during the ACME negotiation, some tutorials you’ll find online will recommend using something in /tmp instead, but that doesn’t work under MAC systems like SELinux that restrict the web server to reading from only certain directories. That’s why I recommend using /var/www/letsencrypt above. If your host OS has a MAC system but it’s configured differently, you may need to adjust this path to a directory that the web server is allowed to read from.)
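
On a typical SELinux-enabled Linux host, preparing that directory might look like this (a sketch; adjust ownership to whoever will run the wrapper script):

    # Create the challenge webroot, writeable by the wrapper's user.
    sudo mkdir -p /var/www/letsencrypt/.well-known/acme-challenge
    sudo chown -R "$USER" /var/www/letsencrypt

    # On SELinux systems, apply the default web-content labels so
    # nginx is allowed to read from the tree.
    sudo restorecon -Rv /var/www/letsencrypt

    # Once nginx is reloaded (Step 4), a test file here should be
    # fetchable over plain HTTP:
    echo ok > /var/www/letsencrypt/.well-known/acme-challenge/test
    curl http://example.com/.well-known/acme-challenge/test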



STEP 3: Write the wrapper script
------

Save the following to a file called letsencrypt-wrapper:

    #!/bin/sh
    sudo ~/.local/share/letsencrypt/bin/letsencrypt certonly \
        --webroot-path=/var/www/letsencrypt -a webroot \
        --server https://acme-v01.api.letsencrypt.org/directory \
        -d example.com -d www.example.com

You will need to make certain changes:

1. If you chose a different webroot path in the nginx configuration fragment above, adjust the --webroot-path option’s value here to match. The script won’t create this directory for you, because your OS probably needs it to have a particular set of permissions to allow nginx to read from it: readable by the nginx user, appropriate SELinux labels set for it, etc.

2. The path to the letsencrypt program may not be correct for your system. It might be in /usr/bin if you installed it via your OS’s package manager, or it might be in /usr/local/bin if you installed it from source as root. I installed it from source as a normal user, so it landed under my home directory, as you see here.

3. Adjust the -d flag values to your site’s domain name, not forgetting all of the CNAMEs and other aliases you want this cert to be valid for. At minimum, you probably need both the www. and “bare” versions of your domain name. Maybe you have multiple domains aliased to the same server, such as the .com, .net, and .org variants; include all of those, with and without www. prefixes. You can list up to 100 -d flags per certificate at this time, so don’t be shy about tossing all of the possibilities you might need in here. There is a low limit on the number of times you can change the set of domains covered by your cert,[5] so it behooves you to think this through carefully up front.

4. If your system doesn’t use sudo, remove that, but remember to run the script as root in the next step. (On success, the letsencrypt program will write some files in locations that only root typically has write access to.)



STEP 4: Restart nginx and run the wrapper
------

We’re going to reload the nginx server configuration now, even though it is not complete, because part of the Let’s Encrypt process is a challenge/response sequence negotiated by the letsencrypt helper program over your site’s HTTP connection. This sequence proves to the Let’s Encrypt ACME server that you control the web server for each domain you named with -d flags in the wrapper script.

With nginx running with the new configuration above, run the wrapper script. It should percolate for a few seconds, then report that it wrote your certificates out underneath /etc/letsencrypt. (That’s why it runs the helper under sudo.)
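
On a systemd-based host, that sequence might look like this (a sketch; the script location is an assumption):

    # Check the new configuration before loading it.
    sudo nginx -t

    # Reload nginx so the challenge location goes live.
    sudo systemctl reload nginx

    # Run the wrapper; on success the certs land under /etc/letsencrypt.
    sh ./letsencrypt-wrapper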



STEP 5: Create the base SSL/TLS configuration
------

We extracted the site configuration from our server { } blocks above because we now need to create a second such block for each site that nginx serves.

    server {
        include local/ssl;
        include local/site;
    }

That is, instead of including the letsencrypt-challenge file — since we only serve the Let’s Encrypt challenge/response sequence via HTTP — we include the following SSL configuration file:

listen 443 ssl;

ssl on;
ssl_stapling on;
ssl_stapling_verify on;

ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

ssl_dhparam /etc/ssl/private/dhparams.pem;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

ssl_session_timeout 5m;

ssl_certificate /etc/letsencrypt/live/example.com/cert.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;

This file is purposely semi-generic, so you can include it into as many server { } blocks as you need to.

It is possible to write a much shorter local/ssl file, but that will give you a C grade or worse in online SSL certificate tests.[3][4] The stock nginx SSL configuration is vulnerable to a number of attacks, which we must fix by configuration. The above configuration addresses several of these:

1. We enable OCSP stapling, which works around a weakness in the way certificate revocation was originally designed to work, a weakness that in practice resulted in revocations often being ignored. If we ever have to revoke one of these keys, we don’t want browsers to continue to accept it merely because it isn’t yet expired and their revocation checks are lazy. (Yes, browsers really do do that, trading security for speed!)

2. We restrict the default wide-open cipher suite set, disabling a bunch that are either known to be weak (e.g. RC4) or that allow downgrade attacks (e.g. export-grade encryption and null encryption). This unfortunately means certain older browsers (e.g. IE6) won’t be able to connect to us, but scraping them off gives better security to the majority remaining.

You may want to pare the list down even further than this, depending on the client types you need to support. One of the SSL testing services linked below tests a long list of browser profiles against your configuration and reports which ones will connect and which won’t.

3. The dhparams.pem file prevents the Logjam attack. You need to generate it on the server with this command:

    sudo openssl dhparam -out /etc/ssl/private/dhparams.pem 2048

Beware, this will take a few minutes of CPU time.

4. We restrict the protocol suite to drop SSL 3.0 and older, as they’ve got known vulnerabilities. (POODLE and such.)

5. The *.pem files mentioned at the end will be named differently on your system, since you aren’t doing this for example.com, but for your own domain name(s). If you gave several domains with -d in the letsencrypt-wrapper script, you still only get one set of certificate files, named after the first -d flag you gave. That’s why the *.pem files are named in the generic local/ssl file, and not repeated in each per-site server { } block.

One thing I don’t do here, which will result in a marginally higher grade from the SSL testing services (e.g. “A” to “A+”), is enable HSTS: HTTP Strict Transport Security. This lets a site declare that it must always be accessed via TLS. Once a browser has accessed a server with HSTS enabled, it will forever more refuse to use HTTP with that site in order to avoid MITM attacks, even if you explicitly type “http://” in the browser’s location bar.

I don’t enable HSTS on my sites because it means if I ever screw up and let a TLS cert lapse, all my users will be locked out of the site. If you trust yourself to be more on-the-ball about keeping your cert up-to-date than I trust myself, you should enable HSTS because it closes the redirect window; a MITM could sit on the HTTP port and instead of passing the redirect to the HTTPS side, continue to talk to the victim over HTTP, thus extracting all of its secrets.
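
For reference, enabling HSTS is a single additional directive in the HTTPS configuration (a sketch; the one-year max-age is a common choice, not a requirement):

    # Tell HSTS-aware clients to insist on HTTPS for the next year.
    add_header Strict-Transport-Security "max-age=31536000" always;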

Let’s Encrypt certs only last for 90 days, which means it’s an ongoing task to keep this up-to-date. Until Let’s Encrypt learns about safe nginx configuration file modification, it’s a manual process. (With Apache, letsencrypt-auto sets up a background auto-renewal process so you can’t forget to renew. You could script this manually for nginx, if you wanted.)

Another thing I don’t do here is make the HTTP site simply redirect to the HTTPS site. That goes against the major premise of this article, however, which is that the Fossil repo service is only part of the overall web service from your site. Presumably you, like me, have parts of the site you don’t need to protect, which are fine if served insecurely.

Serving everything via HTTPS also makes bootstrapping Let’s Encrypt impossible, because you can’t redirect all HTTP traffic to HTTPS until after you’ve got your cert set up.



STEP 6: Restart and test
------

At this point, restarting nginx again should bring up the same content on both ports 80 and 443. You might want to run SSL tests against it at this point.[3][4]



STEP 7: Proxy Fossil
------

Now that you have SSL up and running, you can add the following to the server { } blocks to proxy access to Fossil via TLS.

In the HTTP block, add this:

    location /code {
        rewrite ^ https://$host$request_uri permanent;
    }

That forces all accesses to http://mydomain.com/code/* to be redirected to HTTPS, so that no sensitive data ever gets sent over HTTP. You can change the /code part to anything else you prefer, like /repo.

Then in the HTTPS block, put this:

    location /code {
        include scgi_params;
        scgi_pass 127.0.0.1:48325;
        scgi_param SCRIPT_NAME "/code";
    }

If you changed /code above, make the same changes here.

Finally, start Fossil on the server with this script, which I call fslsrv:

    #!/bin/sh
    # Kill any previous Fossil instance before starting a new one.
    OLDPID=`pgrep fossil`
    if [ -n "$OLDPID" ]
    then
        echo "Killing old Fossil instance (PID $OLDPID) first..."
        kill $OLDPID
    fi

    # Serve the repo over SCGI, bound to localhost only; nginx
    # provides the public HTTPS front end.
    fossil server --localhost --port 48325 --scgi \
        --baseurl https://example.com/code \
        /path/to/museum/repo.fossil > /dev/null &
    echo Fossil server running, PID $!.

Because it’s binding to a port > 1023 and it’s serving from a repo file owned by your user, it doesn’t have to (and shouldn’t!) run as root. You might even want to run this under a purpose-created unprivileged user, and even put it in a chroot or jail. Principle of least privilege and all that.

You will need to change the --baseurl value to match your domain name. If you changed /code in the nginx configuration above to something else, match the change here, too. And of course, you need to give the path to your own Fossil repository file or directory.

The port number is random. Feel free to choose a different value, and make the matching change in the nginx configuration above. The important thing is that Fossil is listening only on localhost, so that an outsider cannot access it except via the HTTPS front-end proxy service provided by nginx.
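
To double-check both properties (a sketch using standard Linux tools, not part of Fossil):

    # Fossil should be listening on the loopback interface only...
    ss -tlnp | grep 48325

    # ...while the outside world reaches it only via the nginx proxy.
    curl -I https://example.com/code/timeline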



STEP 8: Re-sync
------

Now that you’re serving Fossil over HTTPS, you may need to change existing Fossil sync URLs. The redirect we added above means you don’t absolutely need to do this, but it saves an HTTP round-trip if you do:

$ cd ~/path/to/checkout
$ fossil sync https://example.com/code
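
You can confirm which URL a checkout will now sync against with the remote-url command (output shown for the hypothetical example.com setup):

$ fossil remote-url
https://example.com/code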



Above, I’ve assumed that you’re serving Fossil underneath /code within your site’s existing URL structure. Another way to go would be to add a CNAME record pointing at your web host, like fossil.example.com. In that case, you would write two separate server { } blocks for that subdomain, one redirecting unconditionally to the other. You wouldn’t need the scgi_param SCRIPT_NAME "/code"; bit in the configuration in that case, either, since the top-level URL as seen by Fossil would be the same as the top-level URL as seen by nginx.
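
Here is what that subdomain variant might look like (a sketch, not from the steps above; fossil.example.com is a placeholder):

    server {
        listen 80;
        server_name fossil.example.com;
        include local/letsencrypt-challenge;
        location / {
            rewrite ^ https://$host$request_uri permanent;
        }
    }

    server {
        include local/ssl;
        server_name fossil.example.com;
        location / {
            include scgi_params;
            scgi_pass 127.0.0.1:48325;
            # No SCRIPT_NAME override needed: Fossil sees the same
            # top-level URL that nginx does.
        }
    }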



[1]: http://fossil-scm.org/index.html/doc/trunk/www/ssl.wiki
[2]: https://letsencrypt.org/
[3]: https://www.ssllabs.com/ssltest/
[4]: https://www.htbridge.com/ssl/
[5]: https://community.letsencrypt.org/t/rate-limits-for-lets-encrypt/

