On Fri, Dec 16, 2022 at 3:15 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/58257#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> rados - Neha (https://github.com/ceph/ceph/pull/49431 is still being
> tested and will be mer
On Fri, Dec 16, 2022 at 8:33 AM Brad Hubbard wrote:
>
> On Fri, Dec 16, 2022 at 3:15 AM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/58257#note-1
> > Release Notes - TBD
> >
>
On Sat, Jan 21, 2023 at 2:39 AM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Erne
On Sat, Mar 25, 2023 at 5:46 AM Yuri Weinstein wrote:
>
> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced appears to have resolved itself.
> Neha, Radek, and Laura please provide any findings if you have them.
On Fri, Apr 28, 2023 at 7:21 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Mon, Jul 31, 2023 at 1:46 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> orch -
On Thu, Aug 3, 2023 at 8:31 AM Yuri Weinstein wrote:
> Updates:
>
> 1. bookworm distro build support
> We will not build bookworm until Debian bug
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1030129 is resolved
>
> 2. nfs ganesha
> fixed (Thanks Guillaume and Kaleb)
>
> 3. powercycle fail
On Thu, Jul 11, 2024 at 10:42 PM Rongqi Sun wrote:
>
> Hi Ceph community,
Hi Rongqi,
Thanks for proposing this and for attending CDM to discuss it
yesterday. I see we have received some good feedback in the PR and
it's awaiting some suggested changes. I think this will be a useful
and performant
On Tue, Aug 6, 2024 at 6:33 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67340#note-1
>
> Release Notes - N/A
> LRC upgrade - N/A
> Gibba upgrade - TBD
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura (https://github.com/ce
On Thu, Aug 15, 2024 at 11:50 AM Brad Hubbard wrote:
>
> On Tue, Aug 6, 2024 at 6:33 AM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/67340#note-1
> >
> > Release Notes - N/A
>
On Tue, Feb 2, 2021 at 9:20 AM Andrei Mikhailovsky wrote:
>
> bump
Can you create a tracker for this?
I'd suggest the first step would be working out what "NOTICE: invalid
dest placement: default-placement/REDUCED_REDUNDANCY" is trying to
tell you. Someone more familiar with rgw than I should be
PSA.
https://docs.ceph.com/en/latest/releases/general/#lifetime-of-stable-releases
https://docs.ceph.com/en/latest/releases/#ceph-releases-index
On Sat, Apr 10, 2021 at 10:11 AM Robert LeBlanc wrote:
>
> On Fri, Apr 9, 2021 at 4:04 PM Dan van der Ster wrote:
> >
> > Here's what you should look
On Mon, Apr 12, 2021 at 11:35 AM Robert LeBlanc wrote:
>
> On Sun, Apr 11, 2021 at 4:19 PM Brad Hubbard wrote:
> >
> > PSA.
> >
> > https://docs.ceph.com/en/latest/releases/general/#lifetime-of-stable-releases
> >
> > https://docs.ceph.com/en/latest/
On Tue, Apr 13, 2021 at 8:40 AM Robert LeBlanc wrote:
>
> Do you think it would be possible to build Nautilus FUSE or newer on
> 14.04, or do you think the toolchain has evolved too much since then?
>
An interesting question.
# cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
On Tue, Jul 27, 2021 at 5:53 AM Nico Schottelius wrote:
>
>
> Good evening dear mailing list,
>
> while I do think we have a great mailing list (this is one of the most
> helpful open source mailing lists I'm subscribed to), I do agree with
> the ceph IRC channel not being so helpful. The join/lea
On Tue, Jul 27, 2021 at 3:49 AM Marc wrote:
>
>
> > I feel like ceph is living in 2005.
>
> No it is just you. Why don't you start reading
> https://docs.ceph.com/en/latest/
>
> >It's quite hard to find help on
> > issues related to ceph and it's almost impossible to get involved into
> > helping
It's the current HEAD of the pacific branch or, alternatively,
https://github.com/ceph/ceph-ci/tree/pacific-16.2.8_RC2.
$ git branch -r --contains 73636a1b00037ff974bcdc969b009c5ecec626cc
ceph-ci/pacific-16.2.8_RC2
upstream/pacific
HTH.
On Mon, May 9, 2022 at 7:05 PM Benoît Knecht wrote:
>
>
On Wed, Jul 27, 2022 at 12:40 AM Yuri Weinstein wrote:
>
> Ack
>
> We need to get all approvals and resolve the ceph-ansible issue.
The primary cause of the issues with ceph-ansible is that octopus was
pinned to its stable_6.0 branch; octopus should be using stable_5.0
according to https://docs.ceph.c
out of sync with the submodules in the upstream repository.
>
> In this example, my local working copy has fallen out of sync. This will be
> obvious to adepts, but this procedure does not need to be communicated to
> them.
>
> This procedure was given to me by Brad Hubbard.
> "git submodule update --init --recursive" in order to clean out the offending
> directory. This has to be done for each such directory.
>
> I do not know what causes the local working copy to get into this dirty
> state. (That's what it's called in the git-scm documentati
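The recovery step quoted above can be demonstrated end to end with two throwaway local repos rather than a real ceph checkout; every path and name below is invented for illustration, and `protocol.file.allow=always` is only needed because newer git blocks file-transport submodules by default.

```shell
set -e
tmp=$(mktemp -d)

# Build a small repo to act as the submodule.
git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init

# Build the superproject and register the submodule.
git init -q "$tmp/super"
cd "$tmp/super"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git -c protocol.file.allow=always submodule add -q "$tmp/sub" sub
git -c user.email=a@b -c user.name=a commit -q -m 'add submodule'

# Simulate the dirty state: the submodule working tree gets clobbered.
rm -rf sub
mkdir sub && touch sub/stray-file

# The fix from the procedure: clear the offending directory, then re-run
# the submodule update so git repopulates it from .git/modules.
rm -rf sub
git -c protocol.file.allow=always submodule update --init --recursive
git submodule status sub
```

Repeating the last two commands once per offending directory matches the "for each such directory" note in the procedure.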
1(tap33511c4d-2c) entered disabled state
> May 13 09:35:33 compute5 kernel: [123074.838520] device tap33511c4d-2c left
> promiscuous mode
> May 13 09:35:33 compute5 kernel: [123074.838527] brqa72d845b-e9: port
> 1(tap33511c4d-2c) entered disabled state
> May 13 09:35:33 compute5 networkd-d
I don't know much about the vfs plugin (nor cephfs for that matter)
but I would suggest enabling client debug logging on the machine so
you can see what the libcephfs code is doing since that's likely where
the ENOENT is coming from.
https://docs.ceph.com/en/latest/rados/troubleshooting/log-and-de
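Turning up client-side debug logging as suggested usually means a `[client]` section like the sketch below in ceph.conf on the machine where libcephfs runs; the levels shown are illustrative examples, not prescriptions.

```ini
# Illustrative ceph.conf fragment for the client host; levels are examples.
[client]
    debug client = 20          ; libcephfs / client-side request flow
    debug ms = 1               ; messenger traffic to the MDS/OSDs
    log file = /var/log/ceph/$name.$pid.log
```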
There's some good information here which may assist in your understanding.
https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw/search?query=bluestore
On Thu, Dec 5, 2019 at 10:36 PM Igor Fedotov wrote:
>
> Unfortunately can't recall any
>
> On 12/4/2019 11:07 PM, Frank R wrote:
>
> Thanks.
Check the thread titled "[ceph-users] Frequest LARGE_OMAP_OBJECTS in
cephfs metadata pool" from a few days ago.
On Fri, Feb 28, 2020 at 9:03 AM Seth Galitzer wrote:
>
> I do not have a large ceph cluster, only 4 nodes plus a mon/mgr with 48
> OSDs. I have one data pool and one metadata pool with
+d...@ceph.io
On Sun, Mar 8, 2020 at 5:16 PM Abhinav Singh wrote:
>
> I am trying to implement jaeger tracing in RGW, I need some advice
> regarding on which functions should I actually tracing to get a good actual
> performance status of clusters
>
> Till now I am able to deduce followings :
> 1
Wait for recovery to finish so you know whether any data from the down
OSDs is required. If not, just reprovision them.
If data is required from the down OSDs, you will need to run a query on
the PG(s) to find out which OSDs hold the required copies of the
PG/object. You can then export the
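As a sketch of the query step: the JSON below is a hypothetical, heavily trimmed `ceph pg <pgid> query` result (a real one is far larger; field names follow the real structure, values are invented), and jq pulls out the OSDs the PG wants to probe for copies.

```shell
# Hypothetical, trimmed-down `ceph pg <pgid> query` output.
cat > /tmp/pg_query.json <<'EOF'
{
  "state": "down",
  "acting": [3, 7],
  "recovery_state": [
    { "name": "Started/Primary/Peering",
      "probing_osds": ["3", "7", "12"] }
  ]
}
EOF

# OSDs that may hold the required copies of the PG/object; the '?'
# skips recovery_state entries that have no probing_osds field.
osds=$(jq -r '.recovery_state[].probing_osds[]?' /tmp/pg_query.json | xargs)
echo "$osds"
```

Any OSD listed there that is down but still has intact disks is a candidate for exporting the PG with ceph-objectstore-tool.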
On Mon, Apr 20, 2020 at 11:01 PM Andras Pataki wrote:
>
> On a cluster running Nautilus (14.2.8), we are getting a complaint about
> a PG not being deep-scrubbed on time. Looking at the primary OSD's
> logs, it looks like it tries to deep-scrub the PG every hour or so,
> emits some complaints tha
On Tue, Apr 21, 2020 at 6:35 PM Paul Emmerich wrote:
>
> On Tue, Apr 21, 2020 at 3:20 AM Brad Hubbard wrote:
> >
> > Wait for recovery to finish so you know whether any data from the down
> > OSDs is required. If not just reprovision them.
>
> Recovery will not fin
y()+0x10) [0x560bfb70]
> Apr 19 03:39:17 popeye-oss-3-03 ceph-osd: 14: (()+0x7e65) [0x75025e65]
> Apr 19 03:39:17 popeye-oss-3-03 ceph-osd: 15: (clone()+0x6d)
> [0x73ee988d]
>
> I ended up recreating the OSD (and thus overwriting all data) to fix the
> issue.
>
>
On Tue, Apr 21, 2020 at 11:39 PM XuYun wrote:
>
> Dear ceph users,
>
> We are experiencing sporadic mgr crash in all three ceph clusters (version
> 14.2.6 and version 14.2.8), the crash log is:
>
> 2020-04-17 23:10:08.986 7fed7fe07700 -1
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_
0x80f12f) [0x7f8cfa19812f]
> 9: (()+0x7e65) [0x7f8cf74cce65]
> 10: (clone()+0x6d) [0x7f8cf617a88d]
> NOTE: a copy of the executable, or `objdump -rdS ` is needed to
> interpret this.
>
> Is there an issue opened for it?
>
> BR,
> Xu Yun
>
> On Apr 23, 2020, at 10:28 AM, XuYu
On Fri, May 8, 2020 at 3:42 AM Erwin Lubbers wrote:
>
> Hi,
>
> Did anyone found a way to resolve the problem? I'm seeing the same on a clean
> Octopus Ceph installation on Ubuntu 18 with an Octopus compiled KVM server
> running on CentOS 7.8. The KVM machine shows:
>
> [ 7682.233684] fn-radoscl
Could you create a tracker for this and attach an osdmap as well as
some recent balancer output (perhaps at a higher debug level if
possible)?
There are some improvements awaiting backport to nautilus for the
C++/python interface just FYI [0]
You might also look at gathering output using somethin
wn:
> 2020-05-07T13:02:28.706+0300 7f88d4ff9700 10 librbd::image::CloseRequest:
> 0x7f88c8175fd0 handle_shut_down_object_dispatcher: r=0
> 2020-05-07T13:02:28.706+0300 7f88d4ff9700 10 librbd::image::CloseRequest:
> 0x7f88c8175fd0 send_flush_op_work_queue
> 2020-05-07T13:02:28.706+
Are they LVM based?
The keyring files should be just the filenames, yes.
Here's a recent list I saw which was missing the keyring step but is
reported to be complete otherwise.
- Stop RGW services
- Set the flags (noout,norecover,norebalance,nobackfill,nodown,pause)
- Stop OSD/MGR/MON services
-
On Sat, Aug 31, 2024 at 12:43 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/67779#note-1
>
> Release Notes - TBD
> Gibba upgrade - TBD
> LRC upgrade - TBD
>
> It was decided and agreed upon that there would be limited testing for
> thi