On 23/02/2023 05:56, Thomas Widhalm wrote:
> Ah, sorry. My bad.
> The MDS crashed and I restarted them. And I'm waiting for them to
> crash again.
> There's a tracker for this or a related issue:
> https://tracker.ceph.com/issues/58489
Is the call trace the same as the one in this tracker?
Thanks,
Hi,
I enabled debug logging and it's the same - it ends at 1500 keys. I also enabled
debug_filestore and ...
2023-02-23T00:02:34.876+0100 7f8ef26d1700 20 filestore.osr(0x55fb27780540)
_register_apply 0x55fb297e7920 already registered
2023-02-23T00:02:34.876+0100 7f8ef26d1700 5 filestore(/var/lib/ce
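For reference, a minimal sketch of how that extra logging could be turned up at runtime; the osd.0 target and the level 20 values are only placeholders for whichever daemon is actually being debugged:

# raise filestore/OSD logging on one daemon (osd.0 is a placeholder)
ceph tell osd.0 config set debug_filestore 20
ceph tell osd.0 config set debug_osd 20
# the extra output lands in the daemon's log under /var/log/ceph/ (exact path depends on the deployment)
# remember to lower the levels again afterwards so the log doesn't fill the disk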
Ah, sorry. My bad.
The MDS crashed and I restarted them. And I'm waiting for them to crash
again.
There's a tracker for this or a related issue:
https://tracker.ceph.com/issues/58489
Is there any place where I can upload something from the logs for you? I'm still a
bit new to Ceph, but I guess you'd
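In case it helps, a hedged sketch of how crash details are usually collected and shared; the crash ID and log path below are placeholders, and the ceph-post-file description text is an assumption:

# list recent daemon crashes recorded by the crash module
ceph crash ls
# dump the full backtrace and metadata of one crash (the ID is a placeholder)
ceph crash info 2023-02-23T04:56:00.123456Z_example-crash-id
# ceph-post-file can upload logs to the Ceph developers' drop point
ceph-post-file -d "MDS crash during up:replay" /var/log/ceph/ceph-mds.*.log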
On Wed, Feb 22, 2023 at 12:10 PM Thomas Widhalm wrote:
>
> Hi,
>
> Thanks for the idea!
>
> I tried it immediately, but the MDS are still in up:replay mode. So far they
> haven't crashed, but that usually takes a few minutes.
>
> So no effect so far. :-(
The commands I gave were for producing hopefull
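For context, a rough sketch of commands typically used to watch a stuck replay and to produce more verbose MDS logs; the filesystem name cephfs and the level 20 value are assumptions, not taken from this thread:

# check MDS states and which rank is replaying (cephfs is a placeholder name)
ceph fs status cephfs
ceph health detail
# crank up MDS logging while the replay/crash is reproduced
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1
# lower the levels again once the logs have been captured (defaults may differ per cluster)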
Hi,
Thanks for the idea!
I tried it immediately, but the MDS are still in up:replay mode. So far they
haven't crashed, but that usually takes a few minutes.
So no effect so far. :-(
Cheers,
Thomas
On 22.02.23 17:58, Patrick Donnelly wrote:
On Wed, Jan 25, 2023 at 3:36 PM Thomas Widhalm wrote:
Hello Satish,
On Thu, Feb 9, 2023 at 11:52 AM Satish Patel wrote:
>
> Folks,
>
> Any idea what is going on? I am running a 3-node Quincy cluster for OpenStack,
> and today I suddenly noticed the following error. I found a reference link,
> but I'm not sure whether that is my issue or not:
> https://tracker.ceph.
On Wed, Jan 25, 2023 at 3:36 PM Thomas Widhalm wrote:
>
> Hi,
>
> Sorry for the delay. As I told Venky directly, there seems to be a
> problem with the DMARC handling of the Ceph users list, so it was blocked
> by the company I work for.
>
> So I'm writing from my personal e-mail address now.
>
> Did
On Mon, Jan 16, 2023 at 11:43 AM wrote:
>
> Good morning everyone.
>
> On Thursday night we had an incident in which the .data pool of a file
> system was accidentally renamed, making it instantly inaccessible. After
> renaming it back to the correct name it was possible to mount and
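For readers following along, a small sketch of the commands involved in that kind of rename and in checking which pools a filesystem uses; the pool and filesystem names are made up for the example:

# see which data/metadata pools the filesystem references (names are placeholders)
ceph fs ls
ceph fs status cephfs
# a rename like the one described would look like this
ceph osd pool rename cephfs.cephfs.data cephfs.cephfs.data_old
# and renaming it back to the original name
ceph osd pool rename cephfs.cephfs.data_old cephfs.cephfs.data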
Everything you say is to be expected. I was not aware `reshard` could be run
when the prior shards are removed, but apparently it can, and it creates new
bucket index shards that are empty. Normally `reshard` reads entries from the
old shards and copies their data to the new shards but since the
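To make the behaviour described above concrete, a hedged sketch of how the index state could be inspected around such a reshard; the bucket name and shard count are placeholders:

# list what is currently in the bucket index (empty output once the old shards are gone)
radosgw-admin bi list --bucket=mybucket
# reshard creates a fresh set of (empty) index shards; 11 is just an example shard count
radosgw-admin bucket reshard --bucket=mybucket --num-shards=11
# bucket stats shows the new bucket instance id/marker afterwards
radosgw-admin bucket stats --bucket=mybucket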
Hello Robert,
It's probably an instance of this bug: https://tracker.ceph.com/issues/24403
We think we know the cause and a reproducer/fix is planned.
On Wed, Jan 18, 2023 at 4:14 AM Robert Sander
wrote:
>
> Hi,
>
> I have a healthy (test) cluster running 17.2.5:
>
> root@cephtest20:~# ceph sta
Hi Cephers,
These are the minutes of today's meeting (quicker than usual since some CLT
members were at Ceph Days NYC):
- *[Yuri] Upcoming Releases:*
  - Pending PRs for Quincy
  - Sepia Lab still absorbing the PR queue after the past issues
- [Ernesto] Github started sending depen
On 22.02.23 14:42, David Orman wrote:
> If it's a test cluster, you could try:
> root@ceph01:/# radosgw-admin bucket check -h |grep -A1 check-objects
> --check-objects    bucket check: rebuilds bucket index according to
>                    actual objects state
After a "bi purge"
If it's a test cluster, you could try:
root@ceph01:/# radosgw-admin bucket check -h |grep -A1 check-objects
--check-objects bucket check: rebuilds bucket index according to
actual objects state
On Wed, Feb 22, 2023, at 02:22, Robert Sander wrote:
> On 21
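As an illustration of that suggestion, a hedged sketch of how the rebuild might be invoked; the bucket name is a placeholder and --fix is only assumed to be wanted here:

# check the bucket index against the actual object state (mybucket is a placeholder)
radosgw-admin bucket check --bucket=mybucket --check-objects
# add --fix to actually rewrite the index entries
radosgw-admin bucket check --bucket=mybucket --check-objects --fix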
On 21.02.23 22:52, Richard Bade wrote:
> A colleague and I ran into this a few weeks ago. The way we managed to
> get access back to delete the bucket properly (using radosgw-admin
> bucket rm) was to reshard the bucket.
> This created a new bucket index and therefore it was then possible to delete i
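For completeness, a rough sketch of the sequence described above; the bucket name and shard count are placeholders, and --purge-objects is only an assumption about how the final removal was done:

# recreate a usable bucket index by resharding (mybucket and the shard count are placeholders)
radosgw-admin bucket reshard --bucket=mybucket --num-shards=1
# with a valid index in place the bucket can be removed
radosgw-admin bucket rm --bucket=mybucket --purge-objects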