As always thanks for the discussion, Chris. More replies and a new idea below:

On 10/6/2020 2:45 PM, Christopher Schultz wrote:
…
What if your Docker container would just run certbot on launch?

But then I'm back to being a sysadmin, because the Docker container is like a little OS and I have to set up the OS, update it, install certbot if needed (based on the OS version!), ensure it's the certbot that has the bugfixes I want and behaves as I expect, install Tomcat on the OS, etc. I have to set up certbot with systemd or whatever to run periodically. All that stuff goes into my Dockerfile. So I'm really doing the same thing as I would on a VM, except that Docker makes it easier to reproduce. But it's conceptually not much different than being a sysadmin on a VM, then freezing the VM and duplicating it.


If you build the keystore in-memory, I'm not exactly sure what you'd
need to do in order to get Tomcat to bounce the SSLSocketFactory in that
way.

OK, I'll look into it. But it boggles my mind that an SSLSocketFactory would have a harder time using the actual bytes of a certificate than it would loading it from disk. Because once it loads it from disk, it has the bytes in memory, no? So the only problem would be the API: whether it was built to unnecessarily require disk storage, or whether it allows a "hook" to provide the bytes at the point in the logic between loading-from-disk and using-the-cert.
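
Just to make concrete what I'm picturing, here is a rough sketch in plain JSSE terms (the class, method, and variable names are all made up; the certificate bytes are assumed to already be in memory, e.g. just pulled from S3):

import java.io.ByteArrayInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class InMemoryKeystoreSketch {

    //certBytes holds a PEM- or DER-encoded X.509 certificate, already in memory
    static KeyStore keystoreFromBytes(byte[] certBytes) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate cert = cf.generateCertificate(new ByteArrayInputStream(certBytes));

        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null); //an empty keystore that never touches the file system
        ks.setCertificateEntry("server-cert", cert);
        //for actually serving TLS I'd also need ks.setKeyEntry(...) with the private key
        return ks;
    }
}

So the bytes never need to be a file; the question is just whether Tomcat will accept a KeyStore like this in place of a path configured in server.xml.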


I have no idea what a PEM-encoded DER file is, but I'll certainly learn.
This:

-----BEGIN CERTIFICATE-----
stuff
-----END CERTIFICATE-----

Ooh, that stuff. Yeah, I've definitely used that. I just never knew what it was. I never really got into the deep murky depths of certificates, but I guess now I'll have to, seeing as no one else in the last 20 years has really made this easy.

I still don't get why files have to be involved. I pulled a DER file
from S3.
Sorry. I think of that as a "file".

Ah, there's an important distinction to be made here. As developers we often say "file" when we talk about a bunch of bytes that has some coherent format: a JPEG file, an Ogg Vorbis file, a JSON file. But (and the specs only really started making this clear around the time XML became a spec, at least as far as I know) from another view it's only a file if it's stored in a file system. (I'll explain why below.) So really we can talk about an "XML document", for example, that may never be stored in a file; we may construct an XML DOM tree completely in memory and pass it to some other method.
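
For example, in Java an "XML document" can live and die entirely in memory (standard JAXP, nothing exotic):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class InMemoryXmlSketch {

    //an XML document that is never a "file": it exists only as a DOM tree in memory
    static Document buildGreeting() throws ParserConfigurationException {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .newDocument();
        Element root = doc.createElement("greeting");
        root.setTextContent("hello");
        doc.appendChild(root);
        return doc; //hand it to some other method; no file system involved
    }
}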


So your web server is S3.

Ah, no!

S3 is a database. It could be PostgreSQL or Cassandra. It's a data store. In particular it's a key-value store. It stores things. It has a feature that allows you to set a configuration so that some web server will automatically serve the BLOBs from the S3 data store, along with some metadata. But is the web server "part of" S3? I'll bet not. Surely there is some pool of workers that pulls data from the S3 data store and serves it. I could probably write my own web server to do the same thing. But AWS makes it transparent; it's part of the infrastructure. I just provide the data and declaratively tell it what I want served. Then it is served. I don't have to configure a server. There is some server in the cloud.

But there's actually another step! Since I'm serving the site via CloudFront, the CloudFront layer (servers around the world in different countries!) actually connects to the web site serving the S3 data and /copies it to CloudFront/! So my data is actually being served to the end user by some CloudFront server, in some country closest to the user. Is this Tomcat? Is it NGINX? Is it Apache? I don't know. Do I care what it is? No. In fact, I'd rather not know, because I want to configure the distribution and serving /independent from any server implementation/.

So my web server is something on some CloudFront deployment in some country. For all I know different CloudFront local deployments use different server types. I don't know and I don't care.


ELB ~= CloudFront, right?

Sort of. It is equivalent in that 1) it is the direct endpoint connection with the user, and 2) I can configure it to use the AWS managed certificates.


If CloudFront handles your SSL for you, why
not let ELB do it for you in this context?


That's what I'm doing now! And that's what I would prefer doing. And that's what I would need to do for a large application!

But the problem (going back to the motivation for all this) is that the ELB costs a lot of money. I don't absolutely need an ELB for a small app that I'm testing, or an app that will have 10 users, or an app that I don't care about if it crashes and immediately gets restarted. Why should I pay $16 or more per month for an ELB if I don't need it? Especially when I have five little apps I want to deploy?

Think about the thousands of little apps people want to deploy around the world. Is their choice really between paying $5/month on Digital Ocean and completely managing a VM, or setting up some intricate set of replicated servers with load balancers and the like? Why can't they just drop a JAR somewhere and let it run with SSL, pay $3/month for it, and not have to become a CentOS sysadmin? That is what I'm getting at.

It doesn't have to be as hard as you are making it sound.

OK, I guess I just haven't found the guide for the simple drop-in-a-JAR approach that gets an application on a custom domain with SSL. Could you send me that URL? I'll read it.


If you are building a Docker container and it won't kill you to have
anything running on it besides "java -jar myapp.jar" then I would
suggest that you set things up using the process I detailed in that 2019
ACNA presentation you mentioned.

Could you provide me a link to that? Because I think I only have your 2018 Tomcat Let's Encrypt slides. The 2019 one is the one I was looking for.

It would require that you:

1. Fetch files from S3 on Docker deployment (this seeds the node with
any existing keys + certs)

2. Run certbot on Docker deployment (which may decide not to update if
the files are fresh enough)

3. Configure cron to call the script included with the ACNA 2019
presentation once per week

4. That cron script needs to be updated to push any new files generated
by certbot -> S3

Ah, if this is the "simple" you were talking about, then … that's not simple in my book. It requires you to know all sorts of intricacies about certbot. I've run into bugs with different versions, and different versions of CentOS come distributed with different certbot versions, for instance. So then am I going to have to build certbot from scratch? Then comes the nightmare of making sure I have the right libraries that match what certbot needs, and these things are always changing.

Oh, and cron? Sure, I'll bet you know that inside and out because you've been working on it for years. I have no doubt you know it 10 times better than I do. But it's another bunch of intricacies and gotchas someone has to learn.

But shouldn't we be moving on from cron anyway? https://trstringer.com/systemd-timer-vs-cronjob/

But why should I learn about any of this!!! I just want to drop a JAR and have it run on the web! All the stuff you're talking about is what the infrastructure should handle. It is completely orthogonal to the application functionality.

If I keep talking about this it will seem like I'm ranting and arguing, and I don't want to come across that way because I really want to produce something useful, and I'll very much need your help with the questions and problems I run into. So I was just getting excited explaining my thoughts. ;)

But anyway, let me tell you the idea I had this morning. In a way, you hinted at it in your reply. Why do I need to use S3 as a store if my application is running on AWS, and AWS already has the AWS Certificate Manager, which manages an SSL certificate with renewal? In essence the AWS Certificate Manager is the "data store/state" like S3, and I don't even need to call Let's Encrypt.

What I need to do is write code in my application that merely asks AWS Certificate Manager for the certificate at startup and then passes it to Tomcat.
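
Something like this is what I'm imagining for the "ask ACM" part (a sketch using the AWS SDK for Java v2; the class and method names around it are made up, the ARN would come from whatever my deployment tooling recorded, and I still need to check exactly what ACM will hand back; as far as I can tell this particular call returns the certificate and its chain as PEM text):

import software.amazon.awssdk.services.acm.AcmClient;
import software.amazon.awssdk.services.acm.model.GetCertificateRequest;
import software.amazon.awssdk.services.acm.model.GetCertificateResponse;

public class AcmFetchSketch {

    //certificateArn is whatever ARN was recorded when the certificate was set up in ACM
    static String fetchCertificatePem(String certificateArn) {
        try (AcmClient acm = AcmClient.create()) {
            GetCertificateResponse response = acm.getCertificate(
                    GetCertificateRequest.builder()
                            .certificateArn(certificateArn)
                            .build());
            return response.certificate() + "\n" + response.certificateChain();
        }
    }
}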

I have already written code for Guise that sets up a certificate in AWS Certificate Manager and configures the DNS in Route 53 so that the certificate gets updated. It's all a single command `guise deploy`. It's all written.

Now I just need to write the code to 1) get the certificate from AWS, and 2) give it to Tomcat. And #2 will be the hardest, probably, so I'll be asking lots of questions.
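
For #2, if I understand the embedded Tomcat API correctly (and this is exactly where I expect to have questions), the hand-off might look roughly like this. I'm assuming here that SSLHostConfigCertificate will take a KeyStore object directly in recent Tomcat versions; that assumption is the part I most need to verify:

import java.security.KeyStore;

import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.net.SSLHostConfig;
import org.apache.tomcat.util.net.SSLHostConfigCertificate;

public class TomcatHandoffSketch {

    //ks would be the in-memory keystore built earlier (certificate plus private key entry)
    static Connector httpsConnector(KeyStore ks, String keyPassword) {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setPort(8443);
        connector.setSecure(true);
        connector.setScheme("https");
        connector.setProperty("SSLEnabled", "true");

        SSLHostConfig sslHostConfig = new SSLHostConfig();
        SSLHostConfigCertificate certificate = new SSLHostConfigCertificate(
                sslHostConfig, SSLHostConfigCertificate.Type.RSA);
        certificate.setCertificateKeystore(ks); //the call I'm assuming exists
        certificate.setCertificateKeyPassword(keyPassword);
        sslHostConfig.addCertificate(certificate);
        connector.addSslHostConfig(sslHostConfig);
        return connector;
    }
}

Then presumably it's just tomcat.getService().addConnector(connector) at startup.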

In the future I can go back and add the automatic Let's Encrypt certificate request part, but for the first step I'll look into using the one configured in AWS Certificate Manager. I'll go do more research …

Garret
