Thanks Brad!  I completely forgot about that trick!  I copied the output, 
modified the command as suggested, and the monitor came up.  So at least that 
works; now I just need to figure out why the normal service setup is borked.  
I was quite concerned that it wouldn't come back at all and I would have to 
go back to a snapshot!

Regards,
Brent

-----Original Message-----
From: Brad Hubbard <bhubb...@redhat.com> 
Sent: Sunday, March 24, 2019 9:49 PM
To: Brent Kennedy <bkenn...@cfl.rr.com>
Cc: Ceph Users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] OS Upgrade now monitor wont start

Do a "ps auwwx" to see how a running monitor was started and use the equivalent 
command to try to start the MON that won't start. "ceph-mon --help" will show 
you what you need. Most important is to get the ID portion right and to add 
"-d" to get it to run in teh foreground and log to stdout. HTH and good luck!

On Mon, Mar 25, 2019 at 11:10 AM Brent Kennedy <bkenn...@cfl.rr.com> wrote:
>
> Upgraded all the OSes in the cluster from Ubuntu 12.04 LTS to Ubuntu 14.04 
> LTS, then finished the upgrade from Firefly to Luminous.
>
>
>
> I then tried to upgrade the first monitor to Ubuntu 16.04 LTS.  The OS 
> upgrade went fine, but afterwards the monitor and manager wouldn't start.  I 
> then used ceph-deploy to install over the existing install to ensure the new 
> packages were in place, but the monitor and manager still won't start.  
> Oddly enough, logging won't populate either.  I was trying to find the 
> command to run the monitor manually so I could read its output, since the 
> logs in /var/log/ceph aren't populating.  I did a file system search to see 
> if a log file was created in another directory, but that appears not to be 
> the case.  The monitor and cluster were healthy before I started the OS 
> upgrade.  There is nothing in "journalctl -xe" other than the services 
> starting up without errors, yet the cluster shows 1/3 monitors down in its 
> health status.
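>
> For reference, the reinstall and the checks looked roughly like this (the 
> host and mon ID are placeholders, and the unit name assumes the stock 
> systemd units shipped with the packages):
>
>   # reinstall the Luminous packages over the existing install
>   ceph-deploy install --release luminous <mon-host>
>   # check the unit state and its journal output
>   systemctl status ceph-mon@<mon-id>
>   journalctl -xe
>   journalctl -u ceph-mon@<mon-id> --no-pager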
>
>
>
> I hope to upgrade all the remaining monitors to 16.04; I already upgraded 
> the gateways to 16.04 without issue.  All the OSDs are being replaced with 
> newer hardware and will be moving to CentOS 7.6.
>
>
>
>
>
> Regards,
>
> -Brent
>
>
>
> Existing Clusters:
>
> Test: Luminous 12.2.11 with 3 osd servers, 1 mon/man, 1 gateway ( all 
> virtual on SSD )
>
> US Production(HDD): Jewel 10.2.11 with 5 osd servers, 3 mons, 3 
> gateways behind haproxy LB
>
> UK Production(HDD): Luminous 12.2.11 with 15 osd servers, 3 mons/man, 
> 3 gateways behind haproxy LB
>
> US Production(SSD): Luminous 12.2.11 with 6 osd servers, 3 mons/man, 3 
> gateways behind haproxy LB
>
>
>
>
>



--
Cheers,
Brad

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
