On Thu, 9 Oct 2025, at 10:06, Mihai Popescu wrote:
> Looking at how fast snapshot files are changing over time, one could
> presume a CDN mirror of snapshots will never have valid files.
> Am I wrong?

IMO that is a simplistic view of things that ignores the features available in 
modern CDNs, like requesting extremely fast global invalidation on a per-URL 
basis via an uncomplicated API.

sthen@ has previously indicated that <unspecified complications> prevent 
improving the situation.

I’d like to know more about the complications.

At the point of uploading a new version of a file, the process knows the entire 
URL path portion of the final URL, and the ‘cdn.openbsd.org’ part is fixed, so 
it should be able to (a) know if the new file is different from the old file, 
and (b) request an invalidation if it is.
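That upload-time check could be sketched roughly as follows. Everything here is illustrative: the helper names, the hash choice, and the example path are my assumptions, and the actual purge call is CDN-specific (Fastly, for instance, accepts an HTTP PURGE of the URL itself), so it is left as a stub.

```python
import hashlib
from typing import Optional

CDN_HOST = "cdn.openbsd.org"  # fixed, per the mail

def digest(data: bytes) -> str:
    # SHA-256 is an arbitrary choice here; any stable digest works
    return hashlib.sha256(data).hexdigest()

def purge_url(path: str) -> str:
    # The full final URL is known at upload time
    return f"https://{CDN_HOST}{path}"

def needs_purge(old: Optional[bytes], new: bytes) -> bool:
    # (a) is the new file different from the old one?
    return old is None or digest(old) != digest(new)

def invalidate(path: str) -> str:
    # (b) would issue the CDN-specific purge request for this URL;
    # stubbed out -- just returns the URL that would be purged
    return purge_url(path)
```

The point is only that both inputs to the decision (old content, new content, final URL) are already in the uploader's hands, so no extra bookkeeping is needed.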

One complication is probably that the package tree may have a LOT of URLs to 
invalidate. They’d probably need to be batched to avoid CDN API rate limits, or 
simply not cached at all.
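A batching scheme like that could look something like the sketch below. The batch size and pacing are placeholders, not real limits of any particular CDN, and the per-batch purge call is again left as a stub. (Some CDNs sidestep the problem entirely with tag-based purging, e.g. Fastly's surrogate keys, where one API call invalidates a whole group of URLs.)

```python
import time
from itertools import islice
from typing import Iterable, Iterator, List

def batched(urls: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size chunks of URLs."""
    it = iter(urls)
    while chunk := list(islice(it, size)):
        yield chunk

def purge_all(urls: Iterable[str], size: int = 256, pause: float = 1.0) -> int:
    # Issue one purge call per batch, pausing between batches to stay
    # under a hypothetical API rate limit; returns URLs processed.
    sent = 0
    for chunk in batched(urls, size):
        # purge(chunk) would go here -- CDN-specific
        sent += len(chunk)
        if pause:
            time.sleep(pause)  # placeholder pacing, tune to the real limit
    return sent
```

This is crude back-pressure; a real implementation would honor whatever rate-limit headers the CDN API returns instead of a fixed sleep.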

This shouldn’t be a problem for the base sets — and unlike packages, they don’t 
have a long tail. It would be even less complicated if deraadt@ (I think?) 
followed thru on the threat^H^H^H^H^H^Hidea of making it one big set to 
eliminate the “I didn’t install all the sets because I’m extremely special” 
problems ;-)

When I started working with CDNs in 2014 at a media company, Akamai took (IIRC) 
double-digit minutes to do a global invalidation. In 2025 Fastly claim to be 
able to complete a global purge in 150ms.

It’s almost certainly fixable, but it needs backend work, and how many people 
have access to that infrastructure? Those people are likely busy with a lot of 
other, more important things.

John
