[ceph-users] Re: 16.2.11 pacific QE validation status

2022-12-15 Thread Brad Hubbard
On Fri, Dec 16, 2022 at 3:15 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/58257#note-1 > Release Notes - TBD > > Seeking approvals for: > > rados - Neha (https://github.com/ceph/ceph/pull/49431 is still being > tested and will be mer

[ceph-users] Re: 16.2.11 pacific QE validation status

2022-12-22 Thread Brad Hubbard
On Fri, Dec 16, 2022 at 8:33 AM Brad Hubbard wrote: > > On Fri, Dec 16, 2022 at 3:15 AM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/58257#note-1 > > Release Notes - TBD > > >

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-22 Thread Brad Hubbard
On Sat, Jan 21, 2023 at 2:39 AM Yuri Weinstein wrote: > > The overall progress on this release is looking much better and if we > can approve it we can plan to publish it early next week. > > Still seeking approvals > > rados - Neha, Laura > rook - Sébastien Han > cephadm - Adam > dashboard - Erne

[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-26 Thread Brad Hubbard
On Sat, Mar 25, 2023 at 5:46 AM Yuri Weinstein wrote: > > Details of this release are updated here: > > https://tracker.ceph.com/issues/59070#note-1 > Release Notes - TBD > > The slowness we experienced seemed to be self-cured. > Neha, Radek, and Laura please provide any findings if you have them.

[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-01 Thread Brad Hubbard
On Fri, Apr 28, 2023 at 7:21 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/59542#note-1 > Release Notes - TBD > > Seeking approvals for: > > smoke - Radek, Laura > rados - Radek, Laura > rook - Sébastien Han > cephadm - Adam K >

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Brad Hubbard
On Mon, Jul 31, 2023 at 1:46 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/62231#note-1 > > Seeking approvals/reviews for: > > smoke - Laura, Radek > rados - Neha, Radek, Travis, Ernesto, Adam King > rgw - Casey > fs - Venky > orch -

[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-02 Thread Brad Hubbard
On Thu, Aug 3, 2023 at 8:31 AM Yuri Weinstein wrote: > Updates: > > 1. bookworm distro build support > We will not build bookworm until Debian bug > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1030129 is resolved > > 2. nfs ganesha > fixed (Thanks Guillaume and Kaleb) > > 3. powercycle fail

[ceph-users] Re: [RFC][UADK integration][Acceleration of zlib compressor]

2024-07-11 Thread Brad Hubbard
On Thu, Jul 11, 2024 at 10:42 PM Rongqi Sun wrote: > > Hi Ceph community, Hi Rongqi, Thanks for proposing this and for attending CDM to discuss it yesterday. I see we have received some good feedback in the PR and it's awaiting some suggested changes. I think this will be a useful and performant

[ceph-users] Re: squid 19.1.1 RC QE validation status

2024-08-14 Thread Brad Hubbard
On Tue, Aug 6, 2024 at 6:33 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/67340#note-1 > > Release Notes - N/A > LRC upgrade - N/A > Gibba upgrade -TBD > > Seeking approvals/reviews for: > > rados - Radek, Laura (https://github.com/ce

[ceph-users] Re: squid 19.1.1 RC QE validation status

2024-08-15 Thread Brad Hubbard
On Thu, Aug 15, 2024 at 11:50 AM Brad Hubbard wrote: > > On Tue, Aug 6, 2024 at 6:33 AM Yuri Weinstein wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/67340#note-1 > > > > Release Notes - N/A

[ceph-users] Re: radosgw process crashes multiple times an hour

2021-02-01 Thread Brad Hubbard
On Tue, Feb 2, 2021 at 9:20 AM Andrei Mikhailovsky wrote: > > bump Can you create a tracker for this? I'd suggest the first step would be working out what "NOTICE: invalid dest placement: default-placement/REDUCED_REDUNDANCY" is trying to tell you. Someone more familiar with rgw than I should be
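For anyone hitting the same NOTICE: it appears to indicate the request asked for a placement target or storage class the zonegroup does not define. A rough way to check what is actually configured (radosgw-admin subcommands; output fields vary by release):
$ radosgw-admin zonegroup get | grep -A10 placement_targets
$ radosgw-admin zonegroup placement list
If REDUCED_REDUNDANCY is not listed as a storage class under default-placement, clients requesting it could trigger exactly this kind of message.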

[ceph-users] Re: Nautilus 14.2.19 mon 100% CPU

2021-04-11 Thread Brad Hubbard
PSA. https://docs.ceph.com/en/latest/releases/general/#lifetime-of-stable-releases https://docs.ceph.com/en/latest/releases/#ceph-releases-index On Sat, Apr 10, 2021 at 10:11 AM Robert LeBlanc wrote: > > On Fri, Apr 9, 2021 at 4:04 PM Dan van der Ster wrote: > > > > Here's what you should look

[ceph-users] Re: Nautilus 14.2.19 mon 100% CPU

2021-04-12 Thread Brad Hubbard
On Mon, Apr 12, 2021 at 11:35 AM Robert LeBlanc wrote: > > On Sun, Apr 11, 2021 at 4:19 PM Brad Hubbard wrote: > > > > PSA. > > > > https://docs.ceph.com/en/latest/releases/general/#lifetime-of-stable-releases > > > > https://docs.ceph.com/en/latest/

[ceph-users] Re: Nautilus 14.2.19 mon 100% CPU

2021-04-12 Thread Brad Hubbard
On Tue, Apr 13, 2021 at 8:40 AM Robert LeBlanc wrote: > > Do you think it would be possible to build Nautilus FUSE or newer on > 14.04, or do you think the toolchain has evolved too much since then? > An interesting question.
# cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"

[ceph-users] Re: #ceph in Matrix [was: Re: we're living in 2005.]

2021-07-26 Thread Brad Hubbard
On Tue, Jul 27, 2021 at 5:53 AM Nico Schottelius wrote: > > > Good evening dear mailing list, > > while I do think we have a great mailing list (this is one of the most > helpful open source mailing lists I'm subscribed to), I do agree with > the ceph IRC channel not being so helpful. The join/lea

[ceph-users] Re: we're living in 2005.

2021-07-26 Thread Brad Hubbard
On Tue, Jul 27, 2021 at 3:49 AM Marc wrote: > > > > I feel like ceph is living in 2005. > > No it is just you. Why don't you start reading > https://docs.ceph.com/en/latest/ > > >It's quite hard to find help on > > issues related to ceph and it's almost impossible to get involved into > > helping

[ceph-users] Re: 16.2.8 pacific QE validation status, RC2 available for testing

2022-05-09 Thread Brad Hubbard
It's the current HEAD of the pacific branch or, alternatively, https://github.com/ceph/ceph-ci/tree/pacific-16.2.8_RC2.
$ git branch -r --contains 73636a1b00037ff974bcdc969b009c5ecec626cc
  ceph-ci/pacific-16.2.8_RC2
  upstream/pacific
HTH. On Mon, May 9, 2022 at 7:05 PM Benoît Knecht wrote: > >
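If you want to double-check that a local checkout corresponds to that RC, something along these lines should work (fetching straight from the ceph-ci repository quoted above):
$ git fetch https://github.com/ceph/ceph-ci.git pacific-16.2.8_RC2
$ git merge-base --is-ancestor 73636a1b00037ff974bcdc969b009c5ecec626cc FETCH_HEAD && echo "RC contains that commit"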

[ceph-users] Re: octopus v15.2.17 QE Validation status

2022-07-27 Thread Brad Hubbard
On Wed, Jul 27, 2022 at 12:40 AM Yuri Weinstein wrote: > > Ack > > We need to get all approvals and resolve the ceph-ansible issue. The primary cause of the issues with ca is that octopus was pinned to the stable_6.0 branch of ca; octopus should be using stable_5.0 according to https://docs.ceph.c
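For anyone needing to repoint an existing ceph-ansible checkout, the switch is roughly the following (branch name taken from ceph-ansible's usual stable branch naming; verify against the docs link above before relying on it):
$ cd ceph-ansible
$ git fetch origin
$ git checkout stable-5.0
$ pip install -r requirements.txt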

[ceph-users] Re: Updating Git Submodules -- a documentation question

2022-10-11 Thread Brad Hubbard
out of sync with the submodules in the upstream repository. > > In this example, my local working copy has fallen out of sync. This will be > obvious to adepts, but this procedure does not need to be communicated to > them. > > This procedure was given to me by Brad Hubbard.

[ceph-users] Re: Updating Git Submodules -- a documentation question

2022-10-17 Thread Brad Hubbard
git > submodule update --init --recursive" in order to clean out the offending > directory. This has to be done for each such directory. > > I do not know what causes the local working copy to get into this dirty > state. (That's what it's called in the git-scm documentati
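The full recovery sequence usually looks something like this, run from the top of the ceph clone (note it throws away any local changes inside the submodules):
$ git submodule foreach --recursive git clean -xfd
$ git submodule foreach --recursive git reset --hard
$ git submodule update --init --recursive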

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-14 Thread Brad Hubbard
1(tap33511c4d-2c) entered disabled state > May 13 09:35:33 compute5 kernel: [123074.838520] device tap33511c4d-2c left > promiscuous mode > May 13 09:35:33 compute5 kernel: [123074.838527] brqa72d845b-e9: port > 1(tap33511c4d-2c) entered disabled state > May 13 09:35:33 compute5 networkd-d

[ceph-users] Re: Unable to clarify error using vfs_ceph (Samba gateway for CephFS)

2020-11-12 Thread Brad Hubbard
I don't know much about the vfs plugin (nor cephfs for that matter) but I would suggest enabling client debug logging on the machine so you can see what the libcephfs code is doing since that's likely where the ENOENT is coming from. https://docs.ceph.com/en/latest/rados/troubleshooting/log-and-de
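As a sketch of what that client-side debug logging could look like, something like the following in the ceph.conf read by the Samba gateway should surface what libcephfs is doing (the log path is just an example; wind the levels back down once the ENOENT is captured):
[client]
    debug client = 20
    debug ms = 1
    log file = /var/log/ceph/client.samba.log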

[ceph-users] Re: bluestore rocksdb behavior

2019-12-05 Thread Brad Hubbard
There's some good information here which may assist in your understanding. https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw/search?query=bluestore On Thu, Dec 5, 2019 at 10:36 PM Igor Fedotov wrote: > > Unfortunately can't recall any > > On 12/4/2019 11:07 PM, Frank R wrote: > > Thanks.

[ceph-users] Re: continued warnings: Large omap object found

2020-02-27 Thread Brad Hubbard
Check the thread titled "[ceph-users] Frequest LARGE_OMAP_OBJECTS in cephfs metadata pool" from a few days ago. On Fri, Feb 28, 2020 at 9:03 AM Seth Galitzer wrote: > > I do not have a large ceph cluster, only 4 nodes plus a mon/mgr with 48 > OSDs. I have one data pool and one metadata pool with

[ceph-users] Re: RGW jaegerTracing

2020-03-08 Thread Brad Hubbard
+d...@ceph.io On Sun, Mar 8, 2020 at 5:16 PM Abhinav Singh wrote: > > I am trying to implement jaeger tracing in RGW, I need some advice > regarding on which functions should I actually tracing to get a good actual > performance status of clusters > > Till now I am able to deduce followings : > 1

[ceph-users] Re: Nautilus cluster damaged + crashing OSDs

2020-04-20 Thread Brad Hubbard
Wait for recovery to finish so you know whether any data from the down OSDs is required. If not, just reprovision them. If data is required from the down OSDs, you will need to run a query on the pg(s) to find out which OSDs have the required copies of the pg/object. You can then export the
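As a rough illustration of the query/export steps (the pg id, OSD ids and file paths below are placeholders, and ceph-objectstore-tool must be run with the OSD process stopped):
$ ceph pg 2.1a query | less
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 2.1a --op export --file /tmp/pg2.1a.export
$ ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 --pgid 2.1a --op import --file /tmp/pg2.1a.export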

[ceph-users] Re: PG deep-scrub does not finish

2020-04-20 Thread Brad Hubbard
On Mon, Apr 20, 2020 at 11:01 PM Andras Pataki wrote: > > On a cluster running Nautilus (14.2.8), we are getting a complaint about > a PG not being deep-scrubbed on time. Looking at the primary OSD's > logs, it looks like it tries to deep-scrub the PG every hour or so, > emits some complaints tha

[ceph-users] Re: Nautilus cluster damaged + crashing OSDs

2020-04-21 Thread Brad Hubbard
On Tue, Apr 21, 2020 at 6:35 PM Paul Emmerich wrote: > > On Tue, Apr 21, 2020 at 3:20 AM Brad Hubbard wrote: > > > > Wait for recovery to finish so you know whether any data from the down > > OSDs is required. If not just reprovision them. > > Recovery will not fin

[ceph-users] Re: PG deep-scrub does not finish

2020-04-21 Thread Brad Hubbard
y()+0x10) [0x560bfb70] > Apr 19 03:39:17 popeye-oss-3-03 ceph-osd: 14: (()+0x7e65) [0x75025e65] > Apr 19 03:39:17 popeye-oss-3-03 ceph-osd: 15: (clone()+0x6d) > [0x73ee988d] > > I ended up recreating the OSD (and thus overwriting all data) to fix the > issue.

[ceph-users] Re: Sporadic mgr segmentation fault

2020-04-22 Thread Brad Hubbard
On Tue, Apr 21, 2020 at 11:39 PM XuYun wrote: > > Dear ceph users, > > We are experiencing sporadic mgr crash in all three ceph clusters (version > 14.2.6 and version 14.2.8), the crash log is: > > 2020-04-17 23:10:08.986 7fed7fe07700 -1 > /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_

[ceph-users] Re: Sporadic mgr segmentation fault

2020-04-26 Thread Brad Hubbard
0x80f12f) [0x7f8cfa19812f] > 9: (()+0x7e65) [0x7f8cf74cce65] > 10: (clone()+0x6d) > [0x7f8cf617a88d] > NOTE: a copy of the executable, or `objdump -rdS ` is needed to > interpret this. > > Is there an issue opened for it? > > BR, > Xu Yun > > On Apr 23, 2020, at 10:28 AM, XuYu

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-07 Thread Brad Hubbard
On Fri, May 8, 2020 at 3:42 AM Erwin Lubbers wrote: > > Hi, > > Did anyone found a way to resolve the problem? I'm seeing the same on a clean > Octopus Ceph installation on Ubuntu 18 with an Octopus compiled KVM server > running on CentOS 7.8. The KVM machine shows: > > [ 7682.233684] fn-radoscl

[ceph-users] Re: ceph-mgr high CPU utilization

2020-05-07 Thread Brad Hubbard
Could you create a tracker for this and attach an osdmap as well as some recent balancer output (perhaps at a higher debug level if possible)? There are some improvements awaiting backport to nautilus for the C++/python interface, just FYI [0]. You might also look at gathering output using somethin
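A rough sketch of gathering those artifacts (file names are arbitrary; the debug level can be dropped again once the logs are captured):
$ ceph osd getmap -o /tmp/osdmap.bin
$ ceph balancer status > /tmp/balancer-status.txt
$ ceph config set mgr debug_mgr 10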

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-07 Thread Brad Hubbard
wn: > 2020-05-07T13:02:28.706+0300 7f88d4ff9700 10 librbd::image::CloseRequest: > 0x7f88c8175fd0 handle_shut_down_object_dispatcher: r=0 > 2020-05-07T13:02:28.706+0300 7f88d4ff9700 10 librbd::image::CloseRequest: > 0x7f88c8175fd0 send_flush_op_work_queue > 2020-05-07T13:02:28.706+

[ceph-users] Re: Cluster rename procedure

2020-05-08 Thread Brad Hubbard
Are they LVM based? The keyring files should be just the filenames, yes. Here's a recent list I saw which was missing the keyring step but is reported to be complete otherwise.
- Stop RGW services
- Set the flags (noout,norecover,norebalance,nobackfill,nodown,pause)
- Stop OSD/MGR/MON services -
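Setting and later clearing those flags is, for example, just:
$ for f in noout norecover norebalance nobackfill nodown pause; do ceph osd set $f; done
... perform the rename steps ...
$ for f in noout norecover norebalance nobackfill nodown pause; do ceph osd unset $f; done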

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-02 Thread Brad Hubbard
On Sat, Aug 31, 2024 at 12:43 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/67779#note-1 > > Release Notes - TBD > Gibba upgrade -TBD > LRC upgrade - TBD > > It was decided and agreed upon that there would be limited testing for > thi