Please include your promtail logs, loki logs, promtail configuration, and your
loki configuration.
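For anyone gathering that information, a minimal promtail scrape config for Ceph daemon log files might look like the sketch below. The paths and label names are assumptions, not taken from this thread; cephadm clusters typically log under /var/log/ceph/<fsid>/ by default, so adjust to your deployment.

```yaml
# Hedged sketch of a promtail scrape config for Ceph daemon logs.
scrape_configs:
  - job_name: ceph
    static_configs:
      - targets: [localhost]
        labels:
          job: ceph
          __path__: /var/log/ceph/**/*.log
```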
From: Peter van Heusden
Date: Wednesday, February 8, 2023 at 7:45 AM
To: ceph-users@ceph.io
Subject: [ceph-users] OSD logs missing from Centralised Logging
Good Afternoon,
I am experiencing an issue where east-1 is no longer able to replicate from
west-1; after a realm pull, however, west-1 is now able to replicate from
east-1.
In other words:
West <- Can Replicate <- East
West -> Cannot Replicate -> East
After confirming the access and secret ke
approach is that both sides are “active”, meaning the client
has been writing data to both endpoints. Will this cause an issue where “west”
will have data that the metadata has no record of, and then delete that
data?
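To narrow down which direction is failing, the per-zone sync status is usually the first thing to check. The dry-run sketch below only prints the radosgw-admin commands to run on each side; the zone names are taken from this thread, so adjust them to your realm.

```shell
# Print the sync-inspection commands for each zone in the thread.
# This only echoes the commands; run them on a host with cluster access.
for zone in east-1 west-1; do
  echo "radosgw-admin sync status --rgw-zone=${zone}"
  echo "radosgw-admin metadata sync status --rgw-zone=${zone}"
done
```

Comparing the output from both zones should show whether metadata sync, data sync, or both are stuck in one direction.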
Thanks
From: Tarrago, Eli (RIS-BCT)
Date: Thursday, April 20, 2023 at 3:13
rgw async rados processor:
store->fetch_remote_obj() returned r=-5
2023-05-09T15:46:21.069+ 7f20827ec700 0 rgw async rados processor:
store->fetch_remote_obj() returned r=-5
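For reference, Ceph logs negative errno values, so the r=-5 in these rgw lines is -EIO ("Input/output error"), which typically means the fetch from the peer zone failed. A quick way to decode such return codes:

```python
import errno
import os

rc = -5  # return code from the log: store->fetch_remote_obj() returned r=-5

# Negate the value to look up the symbolic errno name.
print(errno.errorcode[-rc])  # EIO
print(os.strerror(-rc))      # "Input/output error" on Linux
```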
From: Casey Bodley
Date: Thursday, April 27, 2023 at 12:37 PM
To: Tarrago, Eli (RIS-BCT)
Cc: Ceph Users
Adding a bit more context to this thread.
I added an additional radosgw to each cluster. Radosgw 1-3 are customer-facing.
Radosgw #4 is dedicated to syncing.
Radosgw 1-3 now have these additional lines:
rgw_enable_lc_threads = False
rgw_enable_gc_threads = False
Radosgw4 has the additional line:
rgw
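The line for radosgw4 is cut off above, so the sketch below is an assumption about a common way to dedicate one gateway to sync while keeping the others client-facing, not a quote of Eli's configuration. The option names are real Ceph settings; the section names are illustrative.

```ini
# Hedged sketch of a dedicated-sync-gateway split (assumed, not the thread's exact config).
# Customer-facing gateways: serve clients, skip lifecycle/GC work, no sync thread.
[client.rgw.1]
rgw_enable_lc_threads = false
rgw_enable_gc_threads = false
rgw_run_sync_thread = false

# Sync-dedicated gateway: runs the sync thread (the default).
[client.rgw.4]
rgw_run_sync_thread = true
```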
Additional names that could be considered:
Sea Monster
Sea Bear
Star (for Patrick Star)
And for the best underwater squirrel friend, Sandy.
From: Boris
Date: Thursday, August 15, 2024 at 9:35 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: squid release codename
Good Morning Ceph Users,
I’m currently troubleshooting an issue and wanted to post here for feedback.
If there is no response, or the feedback is that this looks like a bug, then
I’ll write up a bug report.
Cluster:
Reef 18.2.4
Ubuntu 20.04
ceph -s
cluster:
id: 93e49b2e-
Here is the backtrace from a ceph crash
ceph crash info
'2024-08-20T16:07:39.319197Z_8bcdf3df-f9b5-451a-b971-16f8190ab351'
{
"assert_condition": "!p",
"assert_file": "/build/ceph-18.2.4/src/mds/MDCache.cc",
"assert_func": "void MDCache::add_inode(CInode*)",
"assert_line": 251,