[ceph: root@cn01 /]# ceph -W cephadm
cluster:
id: bfa2ad58-c049-11eb-9098-3c8cf8ed728d
health: HEALTH_OK

services:
mon: 5 daemons, quorum cn05,cn02,cn03,cn04,cn01 (age 111m)
mgr: cn06.rpkpwg(active, since 7h), standbys: cn02.arszct, cn03.elmwhu
mds: 2/2 daemons up, 2 standby
osd: 35 osds: 35 up (since 111m), 35 in (since 5h)

data:
volumes: 2/2 healthy
pools: 8 pools, 545 pgs
objects: 8.13M objects, 7.7 TiB
usage: 31 TiB used, 95 TiB / 126 TiB avail
pgs: 545 active+clean

io:
client: 4.1 MiB/s rd, 885 KiB/s wr, 128 op/s rd, 14 op/s wr

progress:
Upgrade to quay.io/ceph/ceph:v16.2.11 (0s)
[............................]

Cluster is healthy.

Is there an easy way to see if anything was upgraded through the orchestrator?

-jeremy
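(One way to check, sketched here assuming a cephadm-managed Pacific cluster, run from inside the cephadm shell as in the prompt above:)

```shell
# Show the running version of every daemon, grouped by type; a partially
# completed upgrade shows a mix of 16.2.7 and 16.2.11 here.
ceph versions

# Per-daemon view: the image/version columns show exactly which daemons
# the orchestrator has already moved to the new container image.
ceph orch ps

# The orchestrator's own view of the in-flight upgrade.
ceph orch upgrade status
```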

> On Monday, Feb 27, 2023 at 10:58 PM, Curt <light...@gmail.com> wrote:
> Did any of your cluster get a partial upgrade? What about ceph -W cephadm: 
> does that return anything, or just hang? Also, what about ceph health detail? 
> You can always try ceph orch upgrade pause and then ceph orch upgrade resume; 
> that might kick something loose, so to speak.
> On Tue, Feb 28, 2023, 10:39 Jeremy Hansen <jer...@skidrow.la> wrote:
> > {
> > "target_image": "quay.io/ceph/ceph:v16.2.11",
> > "in_progress": true,
> > "services_complete": [],
> > "progress": "",
> > "message": ""
> > }
> >
> > Hasn’t changed in the past two hours.
> >
> > -jeremy
> >
> >
> >
> > > On Monday, Feb 27, 2023 at 10:22 PM, Curt <light...@gmail.com> wrote:
> > > What does ceph orch upgrade status return?
> > > On Tue, Feb 28, 2023, 10:16 Jeremy Hansen <jer...@skidrow.la> wrote:
> > > > I’m trying to upgrade from 16.2.7 to 16.2.11. Reading the 
> > > > documentation, I cut and pasted the orchestrator command to begin the 
> > > > upgrade, but I mistakenly pasted directly from the docs and it 
> > > > initiated an “upgrade” to 16.2.6. I stopped the upgrade per the docs 
> > > > and reissued the command specifying 16.2.11, but now I see no progress 
> > > > in ceph -s. The cluster is healthy, but it feels like the upgrade 
> > > > process is just paused for some reason.
> > > >
> > > > Thanks!
> > > > -jeremy
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > To unsubscribe send an email to ceph-users-le...@ceph.io
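(For reference, the stop-and-restart sequence described in the thread can be sketched as follows; a rough outline, with version numbers taken from the messages above:)

```shell
# Inspect the current upgrade state (returns the JSON shown above).
ceph orch upgrade status

# Stop the mistakenly started 16.2.6 upgrade...
ceph orch upgrade stop

# ...then start the intended one.
ceph orch upgrade start --ceph-version 16.2.11

# If progress stalls afterwards, pausing and resuming can sometimes
# kick the orchestrator loose.
ceph orch upgrade pause
ceph orch upgrade resume
```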


