Hello,

Yes, I did, but I wasn't able to suggest anything further to get around it. However:

1/ There is currently an issue with 15.2.2, so I would advise holding off on any upgrade.
2/ Another mailing list user replied to one of your older emails in the thread asking for some manager logs; not sure if you have seen this.

Thanks

---- On Fri, 22 May 2020 01:21:26 +0800 gen...@gencgiyen.com wrote ----

Hi Ashley,

Have you seen my previous reply? If so, and there is no solution, does anyone have any idea how this can be done with 2 nodes?

Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç <gen...@gencgiyen.com> wrote:

This is a 2 node setup. I have no third node :(

I am planning to add more in the future, but currently it is 2 nodes only. At the moment, is there a --force command for such usage?
On 20.05.2020 16:32:15, Ashley Merrick <singap...@amerrick.co.uk> wrote:

Correct, however it will need to stop one of them to do the upgrade, leaving you with only one working MON (this is what I would suggest the error means, seeing I had the same thing when I only had a single MGR). Normally it is suggested to have 3 MONs due to quorum.

Do you not have a node you can run a mon on for the few minutes it takes to complete the upgrade?
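If you do have a spare box, something along these lines should work with cephadm (an untested sketch; the hostname node3 and IP 10.0.0.3 are placeholders for your own values):

# install the cluster's SSH key on the spare host
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
# add the host to the cephadm inventory
ceph orch host add node3
# stop cephadm from rescheduling mons on its own in the meantime
ceph orch apply mon --unmanaged
# deploy a temporary third mon
ceph orch daemon add mon node3:10.0.0.3
# ...run the upgrade, then remove the temporary mon again:
ceph orch daemon rm mon.node3 --force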
---- On Wed, 20 May 2020 21:28:19 +0800 Gencer W. Genç <gen...@gencgiyen.com> wrote ----

I have 2 mons and 2 mgrs.

  cluster:
    id:     7d308992-8899-11ea-8537-7d489fa7c193
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
    mgr: vx-rg23-rk65-u43-130.arnvag(active, since 28m), standbys: vx-rg23-rk65-u43-130-1.pxmyie
    mds: cephfs:1 {0=cephfs.vx-rg23-rk65-u43-130.kzjznt=up:active} 1 up:standby
    osd: 24 osds: 24 up (since 69m), 24 in (since 3w)

  task status:
    scrub status:
        mds.cephfs.vx-rg23-rk65-u43-130.kzjznt: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 1.38k objects, 4.8 GiB
    usage:   35 GiB used, 87 TiB / 87 TiB avail
    pgs:     97 active+clean

  io:
    client:   5.3 KiB/s wr, 0 op/s rd, 0 op/s wr

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (33s)
      [=...........................] (remaining: 9m)

Aren't both mons already up? I have no way to add a third mon, by the way.

Thanks,
Gencer.
On 20.05.2020 16:21:03, Ashley Merrick <singap...@amerrick.co.uk> wrote:

Yes, I think it's because you're only running two mons, so the script is halting at a check to stop you ending up in the position of having just one running (no backup). I had the same issue with a single MGR instance and had to add a second to allow the upgrade to continue. Can you bring up an extra MON?

Thanks
---- On Wed, 20 May 2020 21:18:09 +0800 Gencer W. Genç <gen...@gencgiyen.com> wrote ----

Hi Ashley,

I see this:

[INF] Upgrade: Target is docker.io/ceph/ceph:v15.2.2 with id 4569944bbW86c3f9b5286057a558a3f852156079f759c9734e54d4f64092be9fa
[INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130

Does this mean anything to you? I've also attached the full log; see especially after line #49. I stopped and restarted the upgrade there.

Thanks,
Gencer.
On 20.05.2020 16:13:00, Ashley Merrick <singap...@amerrick.co.uk> wrote:

ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug

See if anything stands out as an issue with the update; it seems it has completed only the two MGR instances. If not:

ceph orch upgrade stop
ceph orch upgrade start --ceph-version 15.2.2

and monitor the watch-debug log. Make sure at the end you run:

ceph config set mgr mgr/cephadm/log_to_cluster_level info
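If the watch output scrolls by too fast, I believe the recent cephadm entries in the cluster log can also be dumped in one go (from memory of the Octopus CLI, so double-check it on your version):

ceph log last cephadm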
---- On Wed, 20 May 2020 21:07:43 +0800 Gencer W. Genç <gen...@gencgiyen.com> wrote ----

Ah yes,

{
    "mon": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 24
    },
    "mds": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 2
    },
    "overall": {
        "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus (stable)": 28,
        "ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable)": 2
    }
}

How can I fix this?

Gencer.
On 20.05.2020 16:04:33, Ashley Merrick <singap...@amerrick.co.uk> wrote:

Does

ceph versions

show any services yet running on 15.2.2?
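For a per-daemon breakdown (which daemons are still on the old container image), the orchestrator's process listing should also help; it prints the running version of each daemon (again from memory, so treat it as a pointer):

ceph orch ps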
---- On Wed, 20 May 2020 21:01:12 +0800 Gencer W. Genç <gen...@gencgiyen.com> wrote ----

Hi Ashley,

$ ceph orch upgrade status
{
    "target_image": "docker.io/ceph/ceph:v15.2.2",
    "in_progress": true,
    "services_complete": [],
    "message": ""
}

Thanks,
Gencer.
On 20.05.2020 15:58:34, Ashley Merrick <singap...@amerrick.co.uk> wrote:

What does

ceph orch upgrade status

show?
---- On Wed, 20 May 2020 20:52:39 +0800 Gencer W. Genç <gen...@gencgiyen.com> wrote ----

Hi,

I have 15.2.1 installed on all machines. On the primary machine I executed the ceph upgrade command:

$ ceph orch upgrade start --ceph-version 15.2.2

When I check ceph -s I see this:

  progress:
    Upgrade to docker.io/ceph/ceph:v15.2.2 (30m)
      [=...........................] (remaining: 8h)

It says 8 hours, but it has already been running for 3 hours and no upgrade has been processed; it is stuck at this point.

Is there any way to find out why it is stuck?

Thanks,
Gencer.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
