Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall
The "out" OSD was "out" before the crash and doesn't hold any data as it 
was weighted out prior.


Restarting the OSDs that 'ceph health detail' repeatedly named as
offenders has cleared the problems.
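
(A minimal sketch of that procedure for anyone replaying it; the OSD id
and the Upstart-style restart are placeholders for whatever your
deployment uses:)

   # show which PGs/OSDs keep being implicated
   ceph health detail

   # restart a repeat-offender OSD, e.g. osd.12 (hypothetical id)
   sudo restart ceph-osd id=12   # Upstart; use your init system's equivalent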


Thanks to all for the guidance and for suffering my panic,
--
Eric


On 4/12/16 12:38 PM, Eric Hall wrote:

Ok, mon2 and mon3 are happy together, but mon1 dies with
mon/MonitorDBStore.h: 287: FAILED assert(0 == "failed to write to db")

I take this to mean mon1:store.db is corrupt as I see no permission issues.

So... remove mon1 and add a mon?

Nothing special to worry about re-adding a mon on mon1, other than rm/mv
the current store.db path, correct?

Thanks again,
--
Eric

On 4/12/16 11:18 AM, Joao Eduardo Luis wrote:

On 04/12/2016 05:06 PM, Joao Eduardo Luis wrote:

On 04/12/2016 04:27 PM, Eric Hall wrote:

On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:


So this looks like the monitors didn't remove version 1, but this may
just be a red herring.

What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed happens to be '1',
then there may be a bug somewhere in the code, but I doubt that. This
seems just an artefact.

So, it would be nice if you could provide the value of both
'osdmap:first_committed' and 'osdmap:last_committed'.
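
(A minimal sketch of reading those keys, assuming the era-appropriate
ceph-kvstore-tool syntax used elsewhere in this thread; stop the mon
first, since the store cannot be opened twice:)

   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db get osdmap first_committed
   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db get osdmap last_committed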


mon1:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon2:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon3:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed)
 : b8 94 00 00 00 00 00 00
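
(Decoding those dumps: the values are 64-bit little-endian integers, so
last_committed is 1 on all three mons, while mon3's first_committed is
0x94b8 = 38072, matching the oldest incremental map in the store
listings later in this thread. A last_committed of 1 is below the
on-disk map epoch, which is exactly what trips the
assert(version >= osdmap.epoch) from the subject line. A quick check in
bash:)

   printf '%d\n' 0x01     # last_committed on all three mons -> 1
   printf '%d\n' 0x94b8   # first_committed on mon3 -> 38072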


Wow! This is unexpected, but fits the assertion just fine.

The solution, I think, will be rewriting first_committed and
last_committed on all monitors - except on mon1.


Let me clarify this a bit: the easy way out for mon1 would be to fix the
other two monitors and recreate mon1.

If you prefer to also fix mon1, you can simply follow the same steps on
the previous email for all the monitors, but ensuring osdmap:full_latest
on mon1 reflects the last available full_ version on its store.

   -Joao
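
(A minimal sketch of that rewrite against mon2, assuming
ceph-kvstore-tool's 'set ... ver N' subcommand, which stores N as the
same little-endian 64-bit value shown above; the epochs are the ones
reported elsewhere in this thread, the mon must be stopped, and backing
up store.db first is cheap insurance:)

   cp -a /var/lib/ceph/mon/ceph-mon2/store.db /root/store.db.mon2.bak
   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon2/store.db set osdmap first_committed ver 38072
   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon2/store.db set osdmap last_committed ver 38630
   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon2/store.db set osdmap full_latest ver 38630

(Per Joao's note, if mon1 were fixed rather than recreated, its
full_latest would instead need to point at full_38456, the newest full
map in its store.)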



Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall
Removed the mon on mon1 and re-added it via ceph-deploy. The mons now
have quorum.


I am left with:
   cluster 5ee52b50-838e-44c4-be3c-fc596dc46f4e
    health HEALTH_WARN 1086 pgs peering; 1086 pgs stuck inactive;
           1086 pgs stuck unclean; pool vms has too few pgs
    monmap e5: 3 mons at {cephsecurestore1=172.16.250.7:6789/0,cephsecurestore2=172.16.250.8:6789/0,cephsecurestore3=172.16.250.9:6789/0},
           election epoch 28, quorum 0,1,2 cephsecurestore1,cephsecurestore2,cephsecurestore3
    mdsmap e2: 0/0/1 up
    osdmap e38769: 67 osds: 67 up, 66 in
     pgmap v33886066: 7688 pgs, 24 pools, 4326 GB data, 892 kobjects
           11620 GB used, 8873 GB / 20493 GB avail
               3 active+clean+scrubbing+deep
            1086 peering
            6599 active+clean

All OSDs are up as reported (the one "out" OSD was weighted out
beforehand), but I see no recovery I/O for the PGs stuck
inactive/peering/unclean.
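
(For digging into PGs stuck like this, two standard read-only queries;
the pgid below is a hypothetical example:)

   ceph pg dump_stuck inactive   # list stuck PGs and the OSDs they map to
   ceph pg 2.3f query            # per-PG detail, including what blocks peering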


Thanks,
--
Eric

On 4/12/16 1:14 PM, Joao Eduardo Luis wrote:

On 04/12/2016 06:38 PM, Eric Hall wrote:

Ok, mon2 and mon3 are happy together, but mon1 dies with
mon/MonitorDBStore.h: 287: FAILED assert(0 == "failed to write to db")

I take this to mean mon1:store.db is corrupt as I see no permission
issues.

So... remove mon1 and add a mon?

Nothing special to worry about re-adding a mon on mon1, other than rm/mv
the current store.db path, correct?


You'll actually need to recreate the mon with 'ceph-mon --mkfs' for that
to work, and that will likely require you to rm/mv the mon data directory.

You *could* copy the mon dir from one of the other monitors and use that
instead. But given you have a functioning quorum, I don't think there's
any reason to resort to that.

Follow the docs on removing monitors[1] and recreate the monitor from
scratch, adding it to the cluster. It will sync up from scratch from the
other monitors. That'll make them happy.

   -Joao

[1]
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors
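
(A minimal sketch of that remove/recreate cycle, following the docs
above; the mon id and IP are taken from the monmap earlier in this
thread and may need adjusting:)

   # on a mon that is in quorum: drop the dead monitor from the map
   ceph mon remove cephsecurestore1

   # on mon1: set the corrupt store aside and rebuild from scratch
   mv /var/lib/ceph/mon/ceph-cephsecurestore1 /var/lib/ceph/mon/ceph-cephsecurestore1.bad
   mkdir /var/lib/ceph/mon/ceph-cephsecurestore1
   ceph auth get mon. -o /tmp/mon.keyring
   ceph mon getmap -o /tmp/monmap
   ceph-mon --mkfs -i cephsecurestore1 --monmap /tmp/monmap --keyring /tmp/mon.keyring
   ceph mon add cephsecurestore1 172.16.250.7:6789   # then start the ceph-mon daemon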




Thanks again,
--
Eric

On 4/12/16 11:18 AM, Joao Eduardo Luis wrote:

On 04/12/2016 05:06 PM, Joao Eduardo Luis wrote:

On 04/12/2016 04:27 PM, Eric Hall wrote:

On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:


So this looks like the monitors didn't remove version 1, but this may
just be a red herring.

What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed happens to be
'1',
then there may be a bug somewhere in the code, but I doubt that. This
seems just an artefact.

So, it would be nice if you could provide the value of both
'osdmap:first_committed' and 'osdmap:last_committed'.


mon1:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon2:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon3:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed)
 : b8 94 00 00 00 00 00 00


Wow! This is unexpected, but fits the assertion just fine.

The solution, I think, will be rewriting first_committed and
last_committed on all monitors - except on mon1.


Let me clarify this a bit: the easy way out for mon1 would be to fix the
other two monitors and recreate mon1.

If you prefer to also fix mon1, you can simply follow the same steps on
the previous email for all the monitors, but ensuring osdmap:full_latest
on mon1 reflects the last available full_ version on its store.

   -Joao





Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall
Ok, mon2 and mon3 are happy together, but mon1 dies with 
mon/MonitorDBStore.h: 287: FAILED assert(0 == "failed to write to db")


I take this to mean mon1:store.db is corrupt as I see no permission issues.

So... remove mon1 and add a mon?

Nothing special to worry about re-adding a mon on mon1, other than rm/mv 
the current store.db path, correct?


Thanks again,
--
Eric

On 4/12/16 11:18 AM, Joao Eduardo Luis wrote:

On 04/12/2016 05:06 PM, Joao Eduardo Luis wrote:

On 04/12/2016 04:27 PM, Eric Hall wrote:

On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:


So this looks like the monitors didn't remove version 1, but this may
just be a red herring.

What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed happens to be '1',
then there may be a bug somewhere in the code, but I doubt that. This
seems just an artefact.

So, it would be nice if you could provide the value of both
'osdmap:first_committed' and 'osdmap:last_committed'.


mon1:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon2:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon3:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed)
 : b8 94 00 00 00 00 00 00


Wow! This is unexpected, but fits the assertion just fine.

The solution, I think, will be rewriting first_committed and
last_committed on all monitors - except on mon1.


Let me clarify this a bit: the easy way out for mon1 would be to fix the
other two monitors and recreate mon1.

If you prefer to also fix mon1, you can simply follow the same steps on
the previous email for all the monitors, but ensuring osdmap:full_latest
on mon1 reflects the last available full_ version on its store.

   -Joao



Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall

On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:


So this looks like the monitors didn't remove version 1, but this may
just be a red herring.

What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed happens to be '1',
then there may be a bug somewhere in the code, but I doubt that. This
seems just an artefact.

So, it would be nice if you could provide the value of both
'osdmap:first_committed' and 'osdmap:last_committed'.


mon1:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon2:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed) does not exist

mon3:
(osdmap, last_committed)
 : 01 00 00 00 00 00 00 00 : 
(osdmap, first_committed)
 : b8 94 00 00 00 00 00 00


Furthermore, the code is asserting on a basic check on
OSDMonitor::update_from_paxos(), which is definitely unexpected to fail.
It would also be nice if you could point us to a mon log with
'--debug-mon 20' from start to hitting the assertion. Feel free to send
it directly to me if you don't want it sitting on the internet.
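
(One way to capture such a log, sketched with the usual config-override
flags; the mon id and log path are placeholders. Alternatively, set
'debug mon = 20' under [mon] in ceph.conf and restart the daemon:)

   ceph-mon -i mon2 -f --debug-mon 20 --log-file /tmp/mon2.debug.log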


Here is from mon2 (cephsecurestore2 IRL), which starts and dies with the 
assert:

http://www.isis.vanderbilt.edu/mon2.log

Here is from mon3 (cephsecurestore3 IRL), which starts and runs, but
can't form quorum and never gives up on mon1 and mon2. Removing mon1
and mon2 from mon3's monmap via monmap extract/rm/inject results in the
same FAILED assert as the others:

http://www.isis.vanderbilt.edu/mon3.log


My thought was that if I could resolve the last_committed problem on
mon3, then it might have a chance sans mon1 and mon2.


Thank you,
--
Eric



Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall

On 4/12/16 9:02 AM, Gregory Farnum wrote:

On Tue, Apr 12, 2016 at 4:41 AM, Eric Hall <eric.h...@vanderbilt.edu> wrote:

On 4/12/16 12:01 AM, Gregory Farnum wrote:

Exactly what values are you reading that's giving you those values?
The "real" OSDMap epoch is going to be at least 38630...if you're very
lucky it will be exactly 38630. But since it reset itself to 1 in the
monitor's store, I doubt you'll be lucky.


It's been my week...


I'm getting this from ceph-kvstore-tool list.


I meant the keys that it was outputting...I forgot we actually had one
called "osdmap".


From ceph-kvstore-tool /path/monN/store.db list | grep osd:

mon1:
osdmap:1
osdmap:38072
[...]
osdmap:38630
osdmap:first_committed
osdmap:full_38072
[...]
osdmap:full_38456
osdmap:last_committed

mon2:
osdmap:1
osdmap:38072
[...]
osdmap:38630
osdmap:first_committed
osdmap:full_38072
[...]
osdmap:full_38630
osdmap:full_latest
osdmap:last_committed

mon3:
osdmap:1
osdmap:38072
[...]
osdmap:38630
osdmap:first_committed
osdmap:full_38072
[...]
osdmap:full_38630
osdmap:full_latest
osdmap:last_committed


So in order to get your cluster back up, you need to find the largest
osdmap version in your cluster. You can do that, very tediously, by
looking at the OSDMap stores. Or you may have debug logs indicating it
more easily on the monitors.
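
(A low-tech way to pull the largest incremental epoch out of a mon
store, assuming 'list' output in the prefix:key form shown above; note
the listings above already end at osdmap:38630 on all three mons:)

   ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db list \
     | awk -F: '$1 == "osdmap" && $2 ~ /^[0-9]+$/ {print $2}' \
     | sort -n | tail -1    # -> 38630 here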



I don't see info like this in any logs.  How/where do I inspect this?


If you had debugging logs up high enough, it would tell you things
like each map commit. And every time the monitor subsystems (like the
OSD Monitor) print out any debugging info they include what
epoch/version they are on, so it's in the log output prefix.


I doubt I have debug high enough... example lines from mon3 log:
2016-04-11 02:59:27.534149 7fef19a86700  0 mon.mon3@2(peon) e1 
handle_command mon_command({"prefix": "status"} v 0) v1
2016-04-11 02:59:34.556487 7fef19a86700  1 mon.mon3@2(peon).log 
v32366957 check_sub sending message to client.6567304 
172.16.250.1:0/3381977473 with 1 entries (version 32366957)


Where is the OSDMap store if not in store.db?

Thank you,
--
Eric





Re: [ceph-users] mons die with mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)...

2016-04-12 Thread Eric Hall

On 4/12/16 12:01 AM, Gregory Farnum wrote:

On Mon, Apr 11, 2016 at 3:45 PM, Eric Hall <eric.h...@vanderbilt.edu> wrote:

Power failure in data center has left 3 mons unable to start with
mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)

Have found a similar problem discussed at
http://irclogs.ceph.widodh.nl/index.php?date=2015-05-29, but am unsure how
to proceed.

If I read
ceph-kvstore-tool /var/lib/ceph/mon/ceph-cephsecurestore1/store.db list
correctly, they believe the osdmap epoch is 1, but they also have
osdmap:full_38456 and osdmap:38630 in the store.


Exactly what values are you reading that's giving you those values?
The "real" OSDMap epoch is going to be at least 38630...if you're very
lucky it will be exactly 38630. But since it reset itself to 1 in the
monitor's store, I doubt you'll be lucky.


I'm getting this from ceph-kvstore-tool list.


So in order to get your cluster back up, you need to find the largest
osdmap version in your cluster. You can do that, very tediously, by
looking at the OSDMap stores. Or you may have debug logs indicating it
more easily on the monitors.


I don't see info like this in any logs.  How/where do I inspect this?

Thank you,
--
Eric


