Thank you so much Adam. I will check into the older release being used and 
update the ticket.

Anantha


From: Adam King <adk...@redhat.com>
Sent: Friday, March 31, 2023 5:46 AM
To: Adiga, Anantha <anantha.ad...@intel.com>
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] ceph orch ps mon, mgr, osd shows <unknown> for version, image and container id

I can see the JSON output for the OSD you posted doesn't list any version:

{
        "style": "cephadm:v1",
        "name": "osd.61",
        "fsid": "8dbfcd81-fee3-49d2-ac0c-e988c8be7178",
        "systemd_unit": "ceph-8dbfcd81-fee3-49d2-ac0c-e988c8be7178@osd.61",
        "enabled": true,
        "state": "running",
        "memory_request": null,
        "memory_limit": null,
        "ports": null,
        "container_id": "bb7d491335323689dfc6dcb8ae1b6c022f93b3721d69d46c6ed6036bbdd68255",
        "container_image_name": "docker.io/ceph/daemon:latest-pacific",
        "container_image_id": "6e73176320aaccf3b3fb660b9945d0514222bd7a83e28b96e8440c630ba6891f",
        "container_image_digests": [
            "docker.io/ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586"
        ],

The way cephadm gathers the version for OSDs (and ceph daemons in general) is to
exec into the container and run "ceph -v". I'm not sure why that wouldn't be
working for the OSD here, but is for the mon. The one other thing I noted is the
use of the docker.io/ceph/daemon:latest-pacific image. We haven't been publishing
ceph images to docker.io for quite some time, so that's actually a fairly old
pacific version (over 2 years old when I checked). There are much more recent
pacific images on quay.io. Any reason for using that particular image? It's hard
to remember whether there was some bug in such an old version that could cause this.
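For illustration, here is a minimal sketch of the version-gathering step described above (this is a hypothetical helper, not cephadm's actual code; the container runtime call and function names are assumptions). It execs "ceph -v" in the daemon's container and parses the short version out of the banner, falling back to "<unknown>" on any failure, which is what ends up in the orch ps column:

```python
import re
import subprocess

def parse_ceph_version(banner: str) -> str:
    """Extract "16.2.5" from a banner like:
    "ceph version 16.2.5 (0883bdea...) pacific (stable)"."""
    m = re.search(r"ceph version (\S+)", banner)
    return m.group(1) if m else "<unknown>"

def daemon_version(container_id: str, runtime: str = "docker") -> str:
    """Exec `ceph -v` inside the daemon's container; any failure
    (container gone, exec error, unparsable output) yields "<unknown>"."""
    try:
        out = subprocess.run(
            [runtime, "exec", container_id, "ceph", "-v"],
            capture_output=True, text=True, timeout=30, check=True,
        ).stdout
    except Exception:
        return "<unknown>"
    return parse_ceph_version(out)
```

If "ceph -v" inside the OSD container hangs, errors out, or prints something unexpected, a fallback like this would explain <unknown> appearing for that daemon while the mon reports normally.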

On Thu, Mar 30, 2023 at 1:40 PM Adiga, Anantha <anantha.ad...@intel.com> wrote:
Hi Adam,


"cephadm ls" lists all the details:

NAME                 HOST             PORTS  STATUS   REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID   CONTAINER ID
osd.61               zp3110b001a0101         running  3m ago     8M   -        22.0G    <unknown>  <unknown>  <unknown>
mon.zp3110b001a0101  zp3110b001a0101         running  3m ago     8M   -        2048M    <unknown>  <unknown>  <unknown>
{
        "style": "cephadm:v1",
        "name": "osd.61",
        "fsid": "8dbfcd81-fee3-49d2-ac0c-e988c8be7178",
        "systemd_unit": "ceph-8dbfcd81-fee3-49d2-ac0c-e988c8be7178@osd.61",
        "enabled": true,
        "state": "running",
        "memory_request": null,
        "memory_limit": null,
        "ports": null,
        "container_id": "bb7d491335323689dfc6dcb8ae1b6c022f93b3721d69d46c6ed6036bbdd68255",
        "container_image_name": "docker.io/ceph/daemon:latest-pacific",
        "container_image_id": "6e73176320aaccf3b3fb660b9945d0514222bd7a83e28b96e8440c630ba6891f",
        "container_image_digests": [
            "docker.io/ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586"
        ],
{
        "style": "cephadm:v1",
        "name": "mon.zp3110b001a0101",
        "fsid": "8dbfcd81-fee3-49d2-ac0c-e988c8be7178",
        "systemd_unit": "ceph-8dbfcd81-fee3-49d2-ac0c-e988c8be7178@mon.zp3110b001a0101",
        "enabled": true,
        "state": "running",
        "memory_request": null,
        "memory_limit": null,
        "ports": null,
        "container_id": "32ba68d042c3dd7e7cf81a12b6b753cf12dfd8ed1faa8ffc0ecf9f55f4f26fe4",
        "container_image_name": "docker.io/ceph/daemon:latest-pacific",
        "container_image_id": "6e73176320aaccf3b3fb660b9945d0514222bd7a83e28b96e8440c630ba6891f",
        "container_image_digests": [
            "docker.io/ceph/daemon@sha256:261bbe628f4b438f5bf10de5a8ee05282f2697a5a2cb7ff7668f776b61b9d586"
        ],
        "memory_usage": 1104880336,
        "version": "16.2.5",
        "started": "2023-03-29T22:41:52.754971Z",
        "created": "2022-07-13T16:31:48.766907Z",
        "deployed": "2022-07-13T16:30:48.528809Z",
        "configured": "2022-07-13T16:31:48.766907Z"
    },


The <unknown> shows up only for the osd, mon and mgr services, and it is across all nodes.
Some other fields are also missing: PORTS, STATUS (time), and MEM USE.
NAME                                        HOST             PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID      CONTAINER ID
rgw.default.default.zp3110b001a0103.ftizjg  zp3110b001a0103  *:8080  running (12h)  5m ago     8M   145M     -        16.2.5     6e73176320aa  bd6c4d4262b3
alertmanager.zp3110b001a0101                zp3110b001a0101          running        3m ago     8M   -        -        <unknown>  <unknown>     <unknown>
mds.cephfs.zp3110b001a0102.sihibe           zp3110b001a0102          stopped        9m ago     4M   -        -        <unknown>  <unknown>     <unknown>
mgr.zp3110b001a0101                         zp3110b001a0101          running        3m ago     8M   -        -        <unknown>  <unknown>     <unknown>
mgr.zp3110b001a0102                         zp3110b001a0102          running        9m ago     8M   -        -        <unknown>  <unknown>     <unknown>
mon.zp3110b001a0101                         zp3110b001a0101          running        3m ago     8M   -        2048M    <unknown>  <unknown>     <unknown>
mon.zp3110b001a0102                         zp3110b001a0102          running        9m ago     8M   -        2048M    <unknown>  <unknown>     <unknown>

Thank you,
Anantha

From: Adam King <adk...@redhat.com>
Sent: Thursday, March 30, 2023 8:08 AM
To: Adiga, Anantha <anantha.ad...@intel.com>
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] ceph orch ps mon, mgr, osd shows <unknown> for version, image and container id

If you put a copy of the cephadm binary onto one of these hosts (e.g. a002s002)
and run "cephadm ls", what does it give for the OSDs? That's where the orch ps
information comes from.
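As a side note, a quick way to spot which "cephadm ls" entries lack a version is to filter the JSON it prints. This is a hypothetical helper (not part of cephadm), sketched against the JSON shape shown earlier in this thread:

```python
import json

def daemons_missing_version(ls_json: str) -> list[str]:
    """Given the JSON array emitted by `cephadm ls`, return the names of
    daemons with no "version" field -- the ones `ceph orch ps`
    renders as <unknown>."""
    return [d["name"] for d in json.loads(ls_json) if not d.get("version")]

# Abbreviated sample shaped like the output in this thread:
sample = json.dumps([
    {"name": "osd.61", "state": "running"},  # no "version" key at all
    {"name": "mon.zp3110b001a0101", "state": "running", "version": "16.2.5"},
])
print(daemons_missing_version(sample))  # ['osd.61']
```

On a real host this would be fed with the output of "cephadm ls", e.g. via subprocess or a shell pipe.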

On Thu, Mar 30, 2023 at 10:48 AM <anantha.ad...@intel.com> wrote:
Hi ,

Why is ceph orch ps showing <unknown> for version, image and container id?

root@a002s002:~# cephadm shell ceph mon versions
Inferring fsid 682863c2-812e-41c5-8d72-28fd3d228598
Using recent ceph image quay.io/ceph/daemon@sha256:9889075a79f425c2f5f5a59d03c8d5bf823856ab661113fa17a8a7572b16a997
{
    "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific 
(stable)": 3
}
root@a002s002:~# cephadm shell ceph mgr versions
Inferring fsid 682863c2-812e-41c5-8d72-28fd3d228598
Using recent ceph image quay.io/ceph/daemon@sha256:9889075a79f425c2f5f5a59d03c8d5bf823856ab661113fa17a8a7572b16a997
{
    "ceph version 16.2.5 (0883bdea7337b95e4b611c768c0279868462204a) pacific 
(stable)": 3
}

root@a002s002:~# cephadm shell ceph orch ps --daemon-type mgr
Inferring fsid 682863c2-812e-41c5-8d72-28fd3d228598
Using recent ceph image quay.io/ceph/daemon@sha256:9889075a79f425c2f5f5a59d03c8d5bf823856ab661113fa17a8a7572b16a997
NAME          HOST      PORTS  STATUS   REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID
mgr.a002s002  a002s002         running  4m ago     11M  -        -        <unknown>  <unknown>
mgr.a002s003  a002s003         running  87s ago    11M  -        -        <unknown>  <unknown>
mgr.a002s004  a002s004         running  4m ago     11M  -        -        <unknown>  <unknown>
root@a002s002:~# cephadm shell ceph orch ps --daemon-type mon
Inferring fsid 682863c2-812e-41c5-8d72-28fd3d228598
Using recent ceph image quay.io/ceph/daemon@sha256:9889075a79f425c2f5f5a59d03c8d5bf823856ab661113fa17a8a7572b16a997
NAME          HOST      PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID      CONTAINER ID
mon.a002s002  a002s002         running       4m ago     11M  -        2048M    <unknown>  <unknown>     <unknown>
mon.a002s003  a002s003         running       95s ago    11M  -        2048M    <unknown>  <unknown>     <unknown>
mon.a002s004  a002s004         running (4w)  4m ago     5M   1172M    2048M    16.2.5     6e73176320aa  d38b94e00d28
root@a002s002:~# cephadm shell ceph orch ps --daemon-type osd
Inferring fsid 682863c2-812e-41c5-8d72-28fd3d228598
Using recent ceph image quay.io/ceph/daemon@sha256:9889075a79f425c2f5f5a59d03c8d5bf823856ab661113fa17a8a7572b16a997
NAME    HOST      PORTS  STATUS   REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID
osd.0   a002s002         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.1   a002s003         running  5m ago     11M  -        10.9G    <unknown>  <unknown>
osd.10  a002s004         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.11  a002s003         running  5m ago     11M  -        10.9G    <unknown>  <unknown>
osd.12  a002s002         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.13  a002s004         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.14  a002s003         running  5m ago     11M  -        10.9G    <unknown>  <unknown>
osd.15  a002s002         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.16  a002s004         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.17  a002s003         running  5m ago     11M  -        10.9G    <unknown>  <unknown>
osd.18  a002s002         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.19  a002s004         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.2   a002s004         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
osd.20  a002s003         running  5m ago     11M  -        10.9G    <unknown>  <unknown>
osd.21  a002s002         running  8m ago     11M  -        10.9G    <unknown>  <unknown>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
