Hello Tomasz,

I am observing a strange accumulation of inconsistencies in an RGW-only (+ multisite) setup, with errors just like those you reported. I collected some information and raised a bug ticket: https://tracker.ceph.com/issues/53663. Two more inconsistencies showed up just hours after repairing the previous ones, which adds to the impression that something really odd is going on.
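
For reference, the loop on my side has been roughly the following, with pool name and PG ID as placeholders:

  # find PGs flagged inconsistent after a deep scrub
  ceph health detail | grep inconsistent
  rados list-inconsistent-pg <pool-name>

  # inspect the per-object detail for a reported PG
  rados list-inconsistent-obj <pg-id> --format=json-pretty

  # ask the primary OSD to repair the PG
  ceph pg repair <pg-id>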



Did you upgrade to Octopus in the end? Have you seen any more such inconsistencies on your side, Tomasz?



Regards

Christian



On 20/10/2021 10:33, Tomasz Płaza wrote:
As the upgrade process states, the RGWs are the last to be upgraded, so they are still on Nautilus (CentOS 7). Those errors showed up after the upgrade of the first OSD host. It is a multisite setup, so I am a little afraid of upgrading the RGWs now.
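
(For reference, the version split is easy to confirm mid-upgrade; output omitted here:)

  # per-daemon-type version summary of the running cluster
  ceph versions

  # release/feature breakdown of connected daemons and clients
  ceph features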

Etienne:

Sorry for answering in this thread, but somehow I do not receive messages sent only to the ceph-users list. I ran "rados list-inconsistent-pg" and got many entries like:

{
  "object": {
    "name": ".dir.99a07ed8-2112-429b-9f94-81383220a95b.7104621.23.7",
    "nspace": "",
    "locator": "",
    "snap": "head",
    "version": 82561410
  },
  "errors": [
    "omap_digest_mismatch"
  ],
  "union_shard_errors": [],
  "selected_object_info": {
    "oid": {
      "oid": ".dir.99a07ed8-2112-429b-9f94-81383220a95b.7104621.23.7",
      "key": "",
      "snapid": -2,
      "hash": 3316145293,
      "max": 0,
      "pool": 230,
      "namespace": ""
    },
    "version": "107760'82561410",
    "prior_version": "106468'82554595",
    "last_reqid": "client.392341383.0:2027385771",
    "user_version": 82561410,
    "size": 0,
    "mtime": "2021-10-19T16:32:25.699134+0200",
    "local_mtime": "2021-10-19T16:32:25.699073+0200",
    "lost": 0,
    "flags": [
      "dirty",
      "omap",
      "data_digest"
    ],
    "truncate_seq": 0,
    "truncate_size": 0,
    "data_digest": "0xffffffff",
    "omap_digest": "0xffffffff",
    "expected_object_size": 0,
    "expected_write_size": 0,
    "alloc_hint_flags": 0,
    "manifest": {
      "type": 0
    },
    "watchers": {}
  },
  "shards": [
    {
      "osd": 56,
      "primary": true,
      "errors": [],
      "size": 0,
      "omap_digest": "0xf4cf0e1c",
      "data_digest": "0xffffffff"
    },
    {
      "osd": 58,
      "primary": false,
      "errors": [],
      "size": 0,
      "omap_digest": "0xf4cf0e1c",
      "data_digest": "0xffffffff"
    },
    {
      "osd": 62,
      "primary": false,
      "errors": [],
      "size": 0,
      "omap_digest": "0x4bd5703a",
      "data_digest": "0xffffffff"
    }
  ]
}
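
From what I can see, osd.56 and osd.58 agree on the omap digest while osd.62 differs, and the object is an RGW bucket index shard (.dir.*) in pool 230. A rough sketch of what I assume the repair would look like, with PG ID, pool and bucket names as placeholders:

  # confirm which PG and OSDs hold the affected index object
  ceph osd map <pool-name> .dir.99a07ed8-2112-429b-9f94-81383220a95b.7104621.23.7

  # let the primary OSD repair the inconsistent PG
  ceph pg repair <pg-id>

  # afterwards, sanity-check the affected bucket index on the RGW side
  radosgw-admin bucket check --bucket=<bucket-name>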

