On 22 Oct 22:59, Daniel Jakots wrote:
> On Thu, 22 Oct 2020 21:49:20 -0500, "Rafael Possamai"
> <raf...@thinkpad.io> wrote:
> 
> > >Hi Bob, it was in the middle of the night and I got quite kinda
> > >stressed because all services depending on our ldap proxy stopped
> > >working after the upgrade and it took me a while to figure the
> > >problem out.  
> > 
> > Perhaps this is unsolicited advice, but maybe you can setup a test
> > system first, perform major upgrade on it to make sure everything
> > works. If so, then do it in production. 
> > 
> 
> Even better, try -current a few weeks before release (a possible hint
> is -beta). This way you can get any encountered bug fixed in time for
> -release. Your prod but also every one else will benefit from it.
> 
> Cheers,
> Daniel
> 

That's very good advice.

For most services I have a very similar setup at home (even with ldap). I always
run -current on my workstations - one workstation is updated more or less daily,
and if that works I upgrade the second one (important for ports too).

At home I regularly install snapshots (~ every 2nd week), because before I
implement something at work I usually try and test it at home first - often
with "cutting edge" features.

When upgrading at work I always upgrade dev first. And all infrastructure-
critical services are "carped", so even prod is upgraded node by node***. But
exactly in this ssl case that failed for me because of this bug. At home I use
letsencrypt certs, which means ssl used /etc/ssl/cert.pem. The same for my dev
landscape, where I had also stored the L2 ca in /etc/ssl/cert.pem (without
remembering that I had once done that). So unfortunately dev and prod were not
100% identical :(
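One way to catch that kind of drift is to check explicitly which CA file a
server cert actually validates against, and to compare the number of CAs in
the dev and prod bundles. A rough sketch (all paths and CN names here are
illustrative, not from my actual setup):

```shell
#!/bin/sh
# Sketch: build a throwaway CA and server cert, then show that
# "openssl verify -CAfile" only succeeds when the signing CA really is
# in the bundle you point it at. Names/paths are made up for the demo.
set -e
tmp=$(mktemp -d)

# Throwaway CA (stands in for the "L2 ca" that ended up in cert.pem):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=test-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null

# Server key + CSR, signed by that CA:
openssl req -newkey rsa:2048 -nodes -subj "/CN=ldap.example.com" \
    -keyout "$tmp/srv.key" -out "$tmp/srv.csr" 2>/dev/null
openssl x509 -req -in "$tmp/srv.csr" -CA "$tmp/ca.pem" -CAkey "$tmp/ca.key" \
    -CAcreateserial -days 1 -out "$tmp/srv.pem" 2>/dev/null

# Succeeds only if the CA is in the named bundle; run this against the
# dev and the prod bundle and compare:
result=$(openssl verify -CAfile "$tmp/ca.pem" "$tmp/srv.pem")
echo "$result"

# Quick bundle comparison: count the CA certs in each file, e.g.
#   grep -c 'BEGIN CERTIFICATE' /etc/ssl/cert.pem
cacount=$(grep -c 'BEGIN CERTIFICATE' "$tmp/ca.pem")
echo "$cacount"
```

Running that against both landscapes would have shown the extra CA in dev's
cert.pem immediately.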

But lesson learned. I have already done tons of automation (salt/git), so I
will focus more on that again (when I have the time ...).

***Also, the latest bug in carp load balancing couldn't be properly detected
this way, because in a mixed 6.7/6.8 setup it worked :/

-- 
wq: ~uw
