On Tue, Jun 18, 2024 at 5:49 Laura Flores wrote:
>
> Need to update the OS Recommendations doc to reflect the latest supported
> distros
> - https://docs.ceph.com/en/latest/start/os-recommendations/#platforms
> - PR from Zac to be reviewed by the CLT: https://github.com/ceph/ceph/pull/58092
>
> arm64 CI check ready to be
Hi Patrick, Xiubo and List,
Finally, we managed to get the filesystem repaired and running again!
YEAH, I'm so happy!!
Big thanks for your support, Patrick and Xiubo! (Would love to invite you
for a beer!)
Please see some comments and (important?) questions below:
On 6/25/24 03:14, Patrick
On 24/06/2024 21:18, Matthew Vernon wrote:
2024-06-24T17:33:26.880065+00:00 moss-be2001 ceph-mgr[129346]: [rgw
ERROR root] Non-zero return from ['radosgw-admin', '-k',
'/var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/keyring', '-n',
'mgr.moss-be2001.qvwcaq', 'realm', 'pull', '--url',
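The mgr rgw module shells out to `radosgw-admin`, so a non-zero return like the one above can usually be reproduced by running the same command by hand, which surfaces the full error text the mgr log truncates. A minimal sketch, assuming a multisite realm pull; the URL and system-user keys are placeholders, and the keyring/name are taken from the log line above:

```shell
# Re-run the realm pull the mgr module attempted, using the mgr's keyring.
# The --url endpoint and system-user credentials below are placeholders.
radosgw-admin -k /var/lib/ceph/mgr/ceph-moss-be2001.qvwcaq/keyring \
    -n mgr.moss-be2001.qvwcaq \
    realm pull \
    --url http://master-zone.example.org:8000 \
    --access-key <system-access-key> --secret <system-secret>
```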
On Tue, Jun 25, 2024 at 6:38 PM Ivan Clayson wrote:
> Hi Dhairya,
>
> Thank you for your rapid reply. I tried recovering the dentries for the
> file just before the crash I mentioned before and then splicing the
> transactions from the journal which seemed to remove that issue for that
> inode
On 24/06/2024 at 19:15, Anthony D'Atri wrote:
* Subscription is now moderated
* The three worst spammers (you know who they are) have been removed
* I’ve deleted tens of thousands of crufty mail messages from the queue
The list should work normally now. Working on the backlog of held
Hi Dhairya,
Thank you for your rapid reply. I tried recovering the dentries for the
file just before the crash I mentioned before and then splicing the
transactions from the journal which seemed to remove that issue for that
inode but resulted in the MDS crashing on the next inode in the
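For reference, the recover-then-splice steps described above map onto the `cephfs-journal-tool` disaster-recovery commands. A hedged sketch, assuming rank 0 of a filesystem named `cephfs` and a placeholder inode number; exporting a journal backup first is the usual precaution:

```shell
# Keep a backup of the journal before modifying anything.
cephfs-journal-tool --rank=cephfs:0 journal export backup.journal.bin

# Recover dentries from journal events into the backing store.
cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary

# Splice out the events for one problematic inode
# (the inode number here is a placeholder).
cephfs-journal-tool --rank=cephfs:0 event splice --inode=1099511627776 summary
```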
Hello all.
After a disk was replaced, we see that cephadm does not recreate the OSD.
Working back from the pvs command, I ended up at this issue:
https://tracker.ceph.com/issues/62862 and this PR:
https://github.com/ceph/ceph/pull/53500. The PR is unfortunately closed.
Is this a non-bug? I tried to
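One possible workaround worth checking (a sketch, not a confirmed fix; hostname and device path are placeholders): leftover LVM metadata on the replacement disk often prevents cephadm from considering it available, and zapping the device lets an existing OSD service spec pick it up again.

```shell
ceph orch device ls --wide                    # is the new disk listed, and "Available"?
ceph orch device zap myhost /dev/sdX --force  # clear leftover LVM/partition metadata
ceph orch ls osd                              # confirm an OSD spec covers this host/device
```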
Hello Wesley,
I couldn't find any tracker related to this and since min_size=1 has been
involved in many critical situations with data loss, I created this one:
https://tracker.ceph.com/issues/66641
Regards,
Frédéric.
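For readers following along: the commonly recommended baseline for replicated pools is size=3 with min_size=2, so that I/O stops before the last surviving copy can diverge. A quick check and fix (the pool name is a placeholder):

```shell
ceph osd pool get mypool min_size   # inspect the current value
ceph osd pool set mypool size 3     # keep three replicas
ceph osd pool set mypool min_size 2 # refuse I/O with fewer than two copies
```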
- On Jun 17, 24, at 19:14, Wesley Dillingham w...@wesdillingham.com wrote
Hi Ivan,
This looks to be similar to the issue [0] that we're already addressing at
[1]. Basically, some out-of-sync event led the client to make use of
inodes that the MDS wasn't aware of/isn't tracking, hence the crash.
It'd be really helpful if you could provide us with more logs.
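When gathering MDS logs for a report like this, raising the debug levels before reproducing the crash usually captures the needed detail. A common sketch (level 20 is very verbose, so revert afterwards):

```shell
ceph config set mds debug_mds 20   # verbose MDS internals
ceph config set mds debug_ms 1     # messenger-level traffic
# ...reproduce the crash, then collect /var/log/ceph/*mds*.log...
ceph config rm mds debug_mds       # restore defaults
ceph config rm mds debug_ms
```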
Hi Pablo,
> We are willing to work with a Ceph Consultant Specialist, because the data
> at stake is very critical, so if you're interested please let me know
> off-list, to discuss the details.
I totally understand that you want to communicate with potential consultants
off-list, but I, and