Gotcha.  Thanks for the input regardless.  I suppose I'll continue what I'm 
doing, and plan on doing an upgrade via quay.io in the near future.
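
In the meantime my tentative plan (a rough sketch, nothing I've actually run 
yet) is to mute the warning and then point the next upgrade at the quay.io 
image explicitly, along these lines:

    # mute the version-mismatch warning for a week
    # ("ceph health detail" shows the exact health code; I'm assuming it's
    #  DAEMON_OLD_VERSION here)
    ceph health mute DAEMON_OLD_VERSION 1w

    # pull the upgrade from quay.io (v16.2.6 used as the example target)
    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
    ceph orch upgrade status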

-----Original Message-----
From: Gregory Farnum <gfar...@redhat.com> 
Sent: Monday, October 4, 2021 7:14 PM
To: Edward R Huyer <erh...@rit.edu>
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Daemon Version Mismatch (But Not Really?) After 
Deleting/Recreating OSDs

On Mon, Oct 4, 2021 at 12:05 PM Edward R Huyer <erh...@rit.edu> wrote:
>
> Apparently the default value for container_image in the cluster configuration 
> is "docker.io/ceph/daemon-base:latest-pacific-devel".  I don't know where 
> that came from.  I didn't set it anywhere.  I'm not allowed to edit it, 
> either (from the dashboard, anyway).
>
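> From the CLI I'd guess the equivalent is something along these lines (untested 
> on my end, and the quay.io image below is just an example target):
>
>     # see what image setting the OSDs are actually picking up
>     ceph config get osd container_image
>
>     # override it explicitly, or drop the override to fall back to the default
>     ceph config set global container_image quay.io/ceph/ceph:v16.2.6
>     ceph config rm global container_image
>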
> The container_image_base for the cephadm module is "docker.io/ceph/ceph".
>
> Also, 16.2.6 is already out, so I'm not sure why I'd be getting 16.2.5 
> development releases.
>
> Is this possibly related to the issues with docker.io and move to quay.io?

A good guess, but like I said this whole area is way outside my wheelhouse. I 
just know how to decode Ceph's URL and git version conventions. ;)

>
> -----Original Message-----
> From: Gregory Farnum <gfar...@redhat.com>
> Sent: Monday, October 4, 2021 2:33 PM
> To: Edward R Huyer <erh...@rit.edu>
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Daemon Version Mismatch (But Not Really?) 
> After Deleting/Recreating OSDs
>
> On Mon, Oct 4, 2021 at 7:57 AM Edward R Huyer <erh...@rit.edu> wrote:
> >
> > Over the summer, I upgraded my cluster from Nautilus to Pacific, and 
> > converted it to use cephadm after doing so.  Over the past couple of weeks, 
> > I've been converting my OSDs to use NVMe drives for db+wal storage.  The 
> > process for each node: schedule that node's OSDs for removal, wait for the 
> > removal to finish, delete the PVs and zap the drives, then let the 
> > orchestrator recreate the OSDs.
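> >
> > In command form that looks roughly like this (the OSD IDs and device path 
> > are placeholders, so treat it as a sketch of the workflow rather than the 
> > exact invocations):
> >
> >     # drain and remove that node's OSDs, then watch progress
> >     ceph orch osd rm 12 13 14 15
> >     ceph orch osd rm status
> >
> >     # once removal finishes, wipe the drives so the orchestrator's
> >     # service spec picks them back up and recreates the OSDs
> >     ceph orch device zap <host> /dev/sdX --force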
> >
> > Over this past weekend, the cluster threw up a HEALTH_WARN due to 
> > mismatched daemon versions.  Apparently the recreated OSDs are reporting 
> > different version information from the old daemons.
> >
> > New OSDs:
> >
> > - Container Image Name: docker.io/ceph/daemon-base:latest-pacific-devel
> > - Container Image ID: d253896d959e
> > - Version: 16.2.5-226-g7c9eb137
>
> I haven't done any work with cephadm, but this container name and the version 
> tag look like you've installed the in-development next version of Pacific, 
> not the released 16.2.5. (That version string is git-describe output: 
> 16.2.5-226-g7c9eb137 means 226 commits past the v16.2.5 tag, at commit 
> 7c9eb137, i.e. a development build rather than a tagged release.) Did you 
> perhaps manage to put a phrase similar to "pacific-dev" somewhere instead of 
> "pacific"?
>
> >
> > Old OSDs and other daemons:
> >
> > - Container Image Name: docker.io/ceph/ceph:v16
> > - Container Image ID: 6933c2a0b7dd
> > - Version: 16.2.5
> >
> > I'm assuming this is not actually a problem and will go away when I next 
> > upgrade the cluster, but I figured I'd throw it out here in case someone 
> > with more knowledge than I thinks otherwise.  If it's not a problem, is 
> > there a way to silence it until I next run an upgrade?  Is there an 
> > explanation for why it happened?
> >
> > -----
> > Edward Huyer
> > Golisano College of Computing and Information Sciences
> > Rochester Institute of Technology
> > Golisano 70-2373
> > 152 Lomb Memorial Drive
> > Rochester, NY 14623
> > 585-475-6651
> > erh...@rit.edu<mailto:erh...@rit.edu>
> >
>

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
