On Thu, May 23, 2024 at 4:48 AM Yuma Ogami wrote:
>
> Hello.
>
> I'm currently verifying the behavior of RBD on failure. I'm wondering
> about the consistency of RBD images after network failures. As a
> result of my investigation, I found that RBD sets a watcher on an RBD
> image if a client mounts
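For reference, a minimal sketch of how such a watcher can be observed, assuming a test image rbd/test-img (names are placeholders):

  rbd status rbd/test-img                     # lists watchers, e.g. "watcher=10.0.0.1:0/123 client.4567 ..."
  rados -p rbd listwatchers rbd_header.<id>   # <id> from the block_name_prefix shown by "rbd info"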
On Tue, Apr 23, 2024 at 8:28 PM Stefan Kooman wrote:
>
> On 23-04-2024 17:44, Ilya Dryomov wrote:
> > On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote:
> >>
> >> Hi,
> >>
> We are testing rbd-mirroring. There seems to be a permission error
On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman wrote:
>
> Hi,
>
> We are testing rbd-mirroring. There seems to be a permission error with
> the rbd-mirror user. Using this user to query the mirror pool status gives:
>
> failed to query services: (13) Permission denied
>
> And results in the
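For reference, the caps documented for an rbd-mirror daemon user look roughly like the sketch below (the user name is a placeholder); a user created without these profiles can typically access images but may be denied when querying service state:

  ceph auth get-or-create client.rbd-mirror.site-a \
      mon 'profile rbd-mirror' osd 'profile rbd'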
On Fri, Apr 12, 2024 at 8:38 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/65393#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - infra issues, still trying, Laura PTL
>
> rados - Radek,
On Mon, Apr 8, 2024 at 10:22 AM Marc wrote:
> I have a guaranteed crash + reboot with el7 - nautilus accessing a snapshot.
>
> rbd snap ls vps-xxx -p rbd
> rbd map vps-xxx@vps-xxx.bak1 -p rbd
>
> some lvm stuff like this (pvscan --cache; pvs; lvchange -a y VGxxx/LVyyy)
>
> mount -o ro
On Sat, Mar 9, 2024 at 4:42 AM Nathan Morrison wrote:
>
> This was asked in reddit and was requested to post here:
>
> So in RBD, say I want to make an image that's got an object size of 1M
> instead of the default 4M (if it will be a VM say, and likely not have
> too many big files in it, just
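For reference, a hedged example of creating such an image (name and size are placeholders); the object size must be a power of two between 4 KiB and 32 MiB:

  rbd create rbd/vm-disk --size 100G --object-size 1M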
On Wed, Mar 6, 2024 at 7:41 AM Feng, Hualong wrote:
>
> Hi Dongchuan
>
> Could I know which version or which commit you are building, and your
> environment: system, CPU, kernel?
>
> ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo this command should be OK
> without QAT.
Hi Hualong,
I
On Tue, Feb 20, 2024 at 4:59 PM Yuri Weinstein wrote:
>
> We have restarted QE validation after fixing issues and merging several PRs.
> The new Build 3 (rebase of pacific) tests are summarized in the same
> note (see Build 3 runs) https://tracker.ceph.com/issues/64151#note-1
>
> Seeking
On Thu, Feb 1, 2024 at 5:23 PM Yuri Weinstein wrote:
>
> Update.
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Adam King (see Laura's comments below)
> rgw - Casey approved
> fs - Venky approved
> rbd - Ilya
No issues in RBD; formal approval is pending on [1], which also
On Tue, Jan 30, 2024 at 9:24 PM Yuri Weinstein wrote:
>
> Update.
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> rbd - Ilya
Hi Yuri,
rbd looks good overall but we are missing iSCSI coverage due to
On Sat, Nov 25, 2023 at 7:01 PM Tony Liu wrote:
>
> Thank you Eugen! "rbd du" is it.
> The used_size from "rbd du" is object count times object size.
> That's the actual storage taken by the image in the backend.
Somebody just quoted this sentence out of context, so I feel like
I need to elaborate.
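To illustrate the arithmetic with made-up numbers: for an image with the default 4 MiB object size in which 100 objects have been allocated, "rbd du" would report roughly:

  $ rbd du rbd/img
  NAME  PROVISIONED  USED
  img        10 GiB  400 MiB   <- 100 objects x 4 MiB

USED counts whole allocated objects rather than bytes written, and it does not include replication overhead, so it only approximates application-level usage.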
On Wed, Jan 24, 2024 at 8:52 PM Ilya Dryomov wrote:
>
> On Wed, Jan 24, 2024 at 7:31 PM Eugen Block wrote:
> >
> > We do like the separation of nova pools as well, and we also heavily
> > use ephemeral disks instead of boot-from-volume instances. One of the
> >
On Wed, Jan 24, 2024 at 7:31 PM Eugen Block wrote:
>
> We do like the separation of nova pools as well, and we also heavily
> use ephemeral disks instead of boot-from-volume instances. One of the
> reasons being that you can't detach a root volume from an instance.
> It helps in specific
On Fri, Jan 19, 2024 at 2:38 PM Marc wrote:
>
> Am I doing something weird when I do on a ceph node (nautilus, el7):
>
> rbd snap ls vps-test -p rbd
> rbd map vps-test@vps-test.snap1 -p rbd
>
> mount -o ro /dev/mapper/VGnew-LVnew /mnt/disk <--- reset/reboot ceph node
Hi Marc,
It's not clear
On Mon, Jan 8, 2024 at 10:43 PM Peter wrote:
>
> rbd --version
> ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus
> (stable)
Hi Peter,
The PWL cache was introduced in Pacific (16.2.z).
Thanks,
Ilya
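For reference, on Pacific or later the PWL cache is enabled roughly as follows; this is a sketch with a placeholder pool name and path, not a verified recipe:

  rbd config pool set rbd rbd_plugins pwl_cache
  rbd config pool set rbd rbd_persistent_cache_mode ssd
  rbd config pool set rbd rbd_persistent_cache_path /mnt/nvme/rbd-pwl
  rbd config pool set rbd rbd_persistent_cache_size 1G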
On Sat, Jan 6, 2024 at 12:02 AM Peter wrote:
>
> Thanks for the response! Yes, it is in use
>
> "watcher=10.1.254.51:0/1544956346 client.39553300 cookie=140244238214096"
> this indicates the client is connected to the image.
> I am using fio to perform a write task on it.
>
> I guess it is the feature
On Thu, Jan 4, 2024 at 4:41 PM Peter wrote:
>
> I followed the document below to set up image-level rbd persistent cache;
> however, I get error output while using the commands provided by the document.
> I have put my commands and descriptions below.
> Can anyone give some instructions? thanks in
On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote:
>
> Hi,
>
> I've been searching and trying things but to no avail yet.
> This is uncritical because it's a test cluster only, but I'd still
> like to have a solution in case this somehow will make it into our
> production clusters.
> It's an
On Wed, Dec 13, 2023 at 12:48 AM Satoru Takeuchi
wrote:
>
> Hi Ilya,
>
> On Tue, Dec 12, 2023 at 21:23 Ilya Dryomov wrote:
> > Not at the moment. Mykola has an old work-in-progress PR which extends
> > "rbd import-diff" command to make this possible [1].
>
> I didn'
On Tue, Dec 12, 2023 at 1:03 AM Satoru Takeuchi
wrote:
>
> Hi,
>
> I'm developing a backup system for RBD images. In my case, backup data
> must be stored for at least two weeks. To meet this requirement, I'd like
> to take backups as follows:
>
> 1. Take a full backup by rbd export first.
> 2. Take a
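A hedged sketch of that scheme with placeholder names:

  rbd snap create rbd/img@base
  rbd export rbd/img@base /backup/img-base.raw                         # full backup
  rbd snap create rbd/img@day1
  rbd export-diff --from-snap base rbd/img@day1 /backup/img-day1.diff  # incremental

The diffs can later be replayed onto a restored image with "rbd import-diff".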
On Tue, Nov 28, 2023 at 8:18 AM Tony Liu wrote:
>
> Hi,
>
> I have an image with a snapshot and some changes after snapshot.
> ```
> $ rbd du backup/f0408e1e-06b6-437b-a2b5-70e3751d0a26
> NAME
> PROVISIONED USED
>
> From: Ilya Dryomov
> Sent: Thursday, November 30, 2023 6:27 PM
> To: Szabo, Istvan (Agoda)
> Cc: Ceph Users
> Subject: Re: [ceph-users] Spac
On Thu, Nov 30, 2023 at 8:25 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> Is there any config in Ceph that blocks or prevents space reclaim?
> I tested on one pool which has only one image with 1.8 TiB in use.
>
>
> rbd $p du im/root
> warning: fast-diff map is not enabled for root. operation may be
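As an aside, the quoted warning goes away once the fast-diff feature is enabled and the object map rebuilt; a sketch, not verified on this cluster:

  rbd feature enable im/root object-map fast-diff
  rbd object-map rebuild im/root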
On Sat, Nov 25, 2023 at 4:19 AM Tony Liu wrote:
>
> Hi,
>
> The context is RBD on BlueStore. I did check "extent" on the wiki.
> I see "extent" when talking about snapshots and export/import.
> For example, when creating a snapshot, we mark extents. When
> there is a write to marked extents, we will make a
On Thu, Nov 16, 2023 at 5:26 PM Matt Larson wrote:
>
> Ilya,
>
> Thank you for providing these discussion threads on the kernel fixes,
> where there was a change, and details on how this affects the clients.
>
> What is the expected behavior in CephFS client when there are multiple data
> pools
On Thu, Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
>
> Hi Matt,
>
> On 11/15/23 02:40, Matt Larson wrote:
> > On CentOS 7 systems with the CephFS kernel client, if the data pool has a
> > `nearfull` status there is a slight reduction in write speeds (possibly
> > 20-50% fewer IOPS).
> >
> > On a
On Wed, Nov 15, 2023 at 5:57 PM Wesley Dillingham
wrote:
>
> Looking into how to limit snapshots at the Ceph level for RBD.
> Ideally Ceph would enforce an arbitrary limit on the number of snapshots
> allowed per RBD image.
>
> Reading the man page for rbd command I see this option:
>
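The option being quoted is cut off above; for what it's worth, recent rbd releases expose a per-image snapshot limit along these lines (image spec is a placeholder):

  rbd snap limit set rbd/vm-disk --limit 3
  rbd snap limit clear rbd/vm-disk   # remove the limit

Once the limit is reached, further "rbd snap create" calls should fail.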
On Mon, Nov 6, 2023 at 10:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis,
On Mon, Oct 23, 2023 at 5:15 PM Yuri Weinstein wrote:
>
> If no one has anything else left, we have all issues resolved and
> ready for the 17.2.7 release
A last-minute issue with the exporter daemon [1][2] necessitated a revert
[3]. 17.2.7 builds would need to be respinned: since the tag created
On Mon, Oct 16, 2023 at 8:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this
On Fri, Sep 22, 2023 at 8:40 AM Dominique Ramaekers
wrote:
>
> Hi,
>
> A question, to avoid using a too-elaborate method for finding the most
> recent snapshot of an RBD image.
>
> So, what would be the preferred way to find the latest snapshot of this image?
>
> root@hvs001:/# rbd snap ls
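One hedged approach, assuming a recent rbd CLI with JSON output and jq installed: snapshot IDs increase monotonically, so the highest ID is the newest snapshot.

  rbd snap ls <pool>/<image> --format json | jq -r 'max_by(.id).name'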
On Wed, Sep 13, 2023 at 4:49 PM Stefan Kooman wrote:
>
> On 13-09-2023 14:58, Ilya Dryomov wrote:
> > On Wed, Sep 13, 2023 at 9:20 AM Stefan Kooman wrote:
> >>
> >> Hi,
> >>
> >> Since the 6.5 kernel addressed the issue with regards to regression
On Wed, Sep 13, 2023 at 9:20 AM Stefan Kooman wrote:
>
> Hi,
>
> Since the 6.5 kernel addressed the issue with regards to regression in
> the readahead handling code... we went ahead and installed this kernel
> for a couple of mail / web clusters (Ubuntu 6.5.1-060501-generic
> #202309020842 SMP
On Fri, Aug 25, 2023 at 5:26 PM Laura Flores wrote:
>
> All known issues in pacific p2p and smoke. @Ilya Dryomov
> and @Casey Bodley may want to
> double-check that the two for pacific p2p are acceptable, but they are
> known.
>
> pacific p2p:
> - TestClsRbd.mirror_sna
On Wed, Aug 23, 2023 at 4:41 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Fri, Aug 4, 2023 at 7:49 AM Tony Liu wrote:
>
> Hi,
>
> We know a snapshot is of a point in time. Is this point in time tracked
> internally by
> some sort of sequence number, the timestamp shown by "snap ls", or
> something else?
Hi Tony,
The timestamp in "rbd snap ls" output is the
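Illustrative, made-up output; note the monotonically increasing SNAPID column alongside the timestamp:

  $ rbd snap ls rbd/vm-disk
  SNAPID  NAME  SIZE    PROTECTED  TIMESTAMP
       4  day1  10 GiB             Thu Aug  3 10:00:00 2023
       6  day2  10 GiB             Fri Aug  4 10:00:00 2023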
On Sun, Jul 30, 2023 at 5:46 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> orch -
ds.
Thanks,
Ilya
>
> All the best,
> Florian
>
>
> From: Ilya Dryomov
> Sent: Wednesday, July 19, 2023 3:16:20 PM
> To: Engelmann Florian
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] RBD image QoS rbd_qos_write_bps_limit
On Wed, Jul 19, 2023 at 11:01 AM Engelmann Florian
wrote:
>
> Hi,
>
> I noticed an incredibly high performance drop with mkfs.ext4 (as well as
> mkfs.xfs) when setting (almost) "any" value for rbd_qos_write_bps_limit (or
> rbd_qos_bps_limit).
>
> Baseline: 4TB rbd volume
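For reference, the limit in question is typically applied per image like this (image name and value are placeholders):

  rbd config image set rbd/volume rbd_qos_write_bps_limit 104857600   # 100 MiB/s
  rbd config image remove rbd/volume rbd_qos_write_bps_limit          # clear it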
On Mon, Jul 17, 2023 at 6:26 PM David Orman wrote:
>
> I'm hoping to see at least one more, if not more than that, but I have no
> crystal ball. I definitely support this idea, and strongly suggest it's given
> some thought. There have been a lot of delays/missed releases due to all of
> the
On Thu, Jul 13, 2023 at 10:23 PM Ilya Dryomov wrote:
>
> On Thu, Jul 13, 2023 at 6:16 PM Tony Liu wrote:
> >
> > Hi,
> >
> > How does RBD mirror track the mirroring progress? Is it on local storage?
> > Say RBD mirror is running on host-1; when host-1 goes down,
> >
On Thu, Jul 13, 2023 at 6:16 PM Tony Liu wrote:
>
> Hi,
>
> How does RBD mirror track the mirroring progress? Is it on local storage?
> Say RBD mirror is running on host-1; when host-1 goes down, we
> start RBD mirror on host-2. In that case, is RBD mirror on host-2
> going to continue the mirroring?
Hi Tony,
On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote:
>
>
> On 7/3/23 04:53, Matthew Booth wrote:
> > On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote:
> > This container runs:
> > fio --rw=write --ioengine=sync --fdatasync=1
> > --directory=/var/lib/etcd --size=100m --bs=8000
On Tue, May 30, 2023 at 6:54 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/61515#note-1
> Release Notes - TBD
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to
> merge
On Thu, May 11, 2023 at 7:13 AM Szabo, Istvan (Agoda)
wrote:
>
> I can answer my own question: even in the official Ubuntu repo they use
> the Octopus version by default, so it certainly works with kernel 5.
>
> https://packages.ubuntu.com/focal/allpackages
>
>
> -Original Message-
>
On Thu, May 4, 2023 at 11:27 AM Kamil Madac wrote:
>
> Thanks for the info.
>
> As a solution we used rbd-nbd, which works fine without any issues. If we
> have time, we will also try disabling IPv4 on the cluster and try kernel
> rbd mapping again. Are there any disadvantages when
On Wed, May 3, 2023 at 11:24 AM Kamil Madac wrote:
>
> Hi,
>
> We deployed pacific cluster 16.2.12 with cephadm. We experience following
> error during rbd map:
>
> [Wed May 3 08:59:11 2023] libceph: mon2 (1)[2a00:da8:ffef:1433::]:6789
> session established
> [Wed May 3 08:59:11 2023] libceph:
On Fri, Apr 29, 2023 at 7:52 AM Will Gorman wrote:
>
> Is there a way to enable the LUKS encryption format on a snapshot that was
> created from an unencrypted image without losing data? I've seen in
> https://docs.ceph.com/en/quincy/rbd/rbd-encryption/ that "Any data written to
> the image
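For reference, the format operation itself looks like this (image and passphrase file are placeholders); whether pre-existing plaintext data remains readable is exactly the question above, so treat this as a sketch only:

  rbd encryption format rbd/cloned-img luks2 /tmp/passphrase.bin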
On Thu, Apr 27, 2023 at 11:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi wrote:
>
>
> Hi,
>
> On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov wrote:
>>
>> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
>> >
>> > yes, I used the same ecpool_hdd also for cephfs file systems
On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi wrote:
>
> yes, I used the same ecpool_hdd for cephfs file systems as well. The new pool
> ecpool_test I created for a test; I also created it with application
> profile 'cephfs', but there isn't any cephfs filesystem attached to it.
This is not
On Tue, Apr 18, 2023 at 11:34 PM Reto Gysi wrote:
>
> Ah, yes indeed I had disabled log-to-stderr in cluster wide config.
> root@zephir:~# rbd -p rbd snap create ceph-dev@backup --id admin --debug-ms 1
> --debug-rbd 20 --log-to-stderr=true >/home/rgysi/log.txt 2>&1
Hi Reto,
So "rbd snap
On Tue, Apr 18, 2023 at 5:45 PM Reto Gysi wrote:
>
> Hi Ilya
>
> Sure.
>
> root@zephir:~# rbd snap create ceph-dev@backup --id admin --debug-ms 1
> --debug-rbd 20 >/home/rgysi/log.txt 2>&1
You probably have custom log settings in the cluster-wide config. Please
append "--log-to-stderr true"
On Tue, Apr 18, 2023 at 3:21 PM Reto Gysi wrote:
>
> Hi,
>
> Yes both snap create commands were executed as user admin:
> client.admin
>caps: [mds] allow *
>caps: [mgr] allow *
>caps: [mon] allow *
>caps: [osd] allow *
>
> deep scrubbing+repair of ecpool_hdd is
On Mon, Apr 17, 2023 at 6:37 PM Reto Gysi wrote:
>
> Hi Ilya,
>
> Thanks for the reply. Here's is the output:
>
> root@zephir:~# rbd status ceph-dev
> Watchers:
>watcher=192.168.1.1:0/338620854 client.19264246
> cookie=18446462598732840969
>
> root@zephir:~# rbd snap create
On Mon, Apr 17, 2023 at 2:01 PM Reto Gysi wrote:
>
> Dear Ceph Users,
>
> After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able
> to create snapshots
> of images that have an erasure coded datapool.
>
> root@zephir:~# rbd snap create ceph-dev@backup_20230417
> Creating snap:
On Sat, Apr 15, 2023 at 4:58 PM Max Boone wrote:
>
>
> After a critical node failure on my lab cluster, which won't come
> back up and is still down, the RBD objects are still being watched
> / mounted according to Ceph. I can't shell into the node to unmap
> them as the node is down. I am
On Wed, Mar 22, 2023 at 10:51 PM Tony Liu wrote:
>
> Hi,
>
> I want to
> 1) copy a snapshot to an image,
> 2) not copy snapshots along with it,
> 3) have no dependency after the copy,
> 4) keep everything image format 2.
> In that case, is rbd cp the same as rbd clone + rbd flatten?
> I ran some tests, and it seems like it, but
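A hedged sketch of the two paths being compared (names are placeholders; on older clusters the snapshot may need to be protected before cloning):

  rbd cp rbd/src@snap1 rbd/dst1       # one-step copy

  rbd clone rbd/src@snap1 rbd/dst2    # copy-on-write clone, then...
  rbd flatten rbd/dst2                # ...sever the dependency on the parent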
On Tue, Mar 21, 2023 at 9:06 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The reruns were in the queue for 4 days because of some slowness issues.
> The core team (Neha, Radek, Laura, and
On Sun, Feb 26, 2023 at 2:15 PM Patrick Schlangen wrote:
>
> Hi Ilya,
>
> > On 26.02.2023 at 14:05, Ilya Dryomov wrote:
> >
> > Isn't OpenSSL 1.0 long out of support? I'm not sure if extending
> > librados API to support a workaround for something that
On Sat, Feb 25, 2023 at 12:43 PM Patrick Schlangen wrote:
>
> Hi,
>
> > On 24.02.2023 at 16:55, Patrick Schlangen wrote:
> > I observe that using PHP's libcurl integration and other features which
> > rely on OpenSSL randomly fail when opening a TLS connection. I suspect that
> > librados
On Fri, Feb 24, 2023 at 9:05 AM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Actually I didn't try other caps.
>
> The setup of RBD images and authorizations is automated with a bash
> script that worked in the past without issues.
> I need to understand the root cause in order to adapt the script
On Thu, Feb 23, 2023 at 3:53 PM Thomas Schneider <74cmo...@gmail.com> wrote:
>
> Hm... I'm not sure about the correct rbd command syntax, but I thought
> it's correct.
>
> Anyway, using a different ID fails, too:
> # rbd map hdb_backup/VCT --id client.VCT --keyring
>
On Thu, Feb 23, 2023 at 3:31 PM Kuhring, Mathias
wrote:
>
> Hey Ilya,
>
> I'm not sure if the things I find in the logs are actually related
> or useful.
> But I'm not really sure I'm looking in the right places.
>
> I enabled "debug_ms 1" for the OSDs as suggested above.
> But this
On Tue, Feb 21, 2023 at 1:01 AM Xiubo Li wrote:
>
>
> On 20/02/2023 22:28, Kuhring, Mathias wrote:
> > Hey Dan, hey Ilya
> >
> > I know this issue is two years old already, but we are having similar
> > issues.
> >
> > Do you know, if the fixes got ever backported to RHEL kernels?
>
> It's
to investigate on
> upgrading.
>
> Best,
>
> Jiatong Shen
>
>
>
> On Sun, Jan 29, 2023 at 6:55 PM Ilya Dryomov wrote:
>>
>> On Sun, Jan 29, 2023 at 11:29 AM Jiatong Shen wrote:
>> >
>> > Hello community experts,
>> >
>> >I w
On Sun, Jan 29, 2023 at 11:29 AM Jiatong Shen wrote:
>
> Hello community experts,
>
>I would like to know the status of rbd image sparsify. From the website,
> it should have been added in Nautilus (
> https://docs.ceph.com/en/latest/releases/nautilus/ from pr (26226
>
On Fri, Jan 27, 2023 at 4:09 PM Frank Schilder wrote:
>
> Hi Ilya,
>
> yes, it has race conditions. However, it seems to address the specific case
> that is causing us headaches.
>
> About possible improvements. I tried to understand the documentation about
> rbd image locks, but probably
On Fri, Jan 27, 2023 at 11:21 AM Frank Schilder wrote:
>
> Hi Mark,
>
> thanks a lot! This seems to address the issue we observe, at least to a large
> degree.
>
> I believe we had 2 VMs running after a failed live-migration as well and in
> this case it doesn't seem like it will help. Maybe
On Mon, Jan 23, 2023 at 6:51 PM Yuri Weinstein wrote:
>
> Ilya, Venky
>
> rbd, krbd, fs reruns are almost ready, pls review/approve
rbd and krbd approved.
Thanks,
Ilya
On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard -
On Wed, Jan 18, 2023 at 3:25 PM Frank Schilder wrote:
>
> Hi Ilya,
>
> thanks a lot for the information. Yes, I was talking about the exclusive lock
> feature and was under the impression that only one rbd client can get write
> access on connect and will keep it until disconnect. The problem
On Wed, Jan 18, 2023 at 1:19 PM Frank Schilder wrote:
>
> Hi all,
>
> we are observing a problem on a libvirt virtualisation cluster that might
> come from ceph rbd clients. Something went wrong during execution of a
> live-migration operation and as a result we have two instances of the same
On Tue, Jan 17, 2023 at 4:46 PM Yuri Weinstein wrote:
>
> Please see the test results on the rebased RC 6.6 in this comment:
>
> https://tracker.ceph.com/issues/58257#note-2
>
> We're still having infrastructure issues making testing difficult.
> Therefore all reruns were done excluding the rhel
On Thu, Dec 15, 2022 at 11:56 PM Laura Flores wrote:
>
> I reviewed the upgrade runs:
>
> https://pulpito.ceph.com/yuriw-2022-12-13_15:57:57-upgrade:nautilus-x-pacific_16.2.11_RC-distro-default-smithi/
>
On Thu, Dec 15, 2022 at 6:15 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/58257#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> rados - Neha (https://github.com/ceph/ceph/pull/49431 is still being
> tested and will be
Ilya
>
> Regards
> Marcus
>
>
> > On 21.11.2022 at 22:22, Ilya Dryomov wrote:
> >
> > On Mon, Nov 21, 2022 at 5:48 PM Marcus Müller
> > wrote:
> >>
> >> Hi all,
> >>
> >> we created a RBD image for usage in a K8s cl
On Mon, Nov 21, 2022 at 5:48 PM Marcus Müller wrote:
>
> Hi all,
>
> we created an RBD image for use in a K8s cluster. We use our own user and
> namespace for that RBD image.
>
> If we want to use this RBD image as a volume in k8s, it won’t work as k8s
> can’t find the image - without a
On Fri, Nov 18, 2022 at 3:46 PM Tobias Bossert wrote:
>
> Dear List
>
> I'm searching for a way to automate the snapshot creation/cleanup of RBD
> volumes. Ideally, there would be something like the "Snapshot Scheduler for
> cephfs"[1] but I understand
> this is not as "easy" with RBD devices
On Tue, Nov 8, 2022 at 1:25 PM Stefan Kooman wrote:
>
> On 11/3/22 14:05, Mike Perez wrote:
> > Hi everyone,
> >
> > Today is the first of our series in Ceph Virtual 2022! Our agenda will
> > include a Ceph project update, community update, and telemetry talk by
> > Yaarit Hatuka. Join us today
On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote:
>
> >
> > lab issues blocking centos container builds and teuthology testing:
> > * https://tracker.ceph.com/issues/57914
> > * delays testing for 16.2.11
>
>
> quay.ceph.io has been down for some days now. Not sure who is actively
>
On Fri, Oct 21, 2022 at 12:48 PM Konstantin Shalygin wrote:
>
> CC'ed David
Hi Konstantin,
David has decided to pursue something else and is no longer working on
Ceph [1].
>
> Maybe Ilya can tag someone from DevOps additionally
I think Dan answered this question yesterday [2]:
> there are no
On Wed, Oct 12, 2022 at 9:37 AM 郑亮 wrote:
>
> Hi all,
> I have created a pod using an rbd image as backend storage, then mapped the
> rbd image to a local block device, and mounted it with an ext4 filesystem. `df`
> displays the disk usage as much larger than the available space displayed
> after disabling ext4
On Fri, Sep 30, 2022 at 7:36 PM Filipe Mendes wrote:
>
> Hello!
>
>
> I'm considering switching my current storage solution to CEPH. Today we use
> iscsi as a communication protocol and we use several different hypervisors:
> VMware, hyper-v, xcp-ng, etc.
Hi Filipe,
Ceph's main hypervisor
On Thu, Sep 15, 2022 at 3:33 PM Arthur Outhenin-Chalandre
wrote:
>
> Hi Ronny,
>
> > On 15/09/2022 14:32 ronny.lippold wrote:
> > hi arthur, some time went ...
> >
> > i would like to know, if there are some news of your setup.
> > do you have replication active running?
>
> No, there was no
On Wed, Sep 14, 2022 at 11:11 AM Ilya Dryomov wrote:
>
> On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > Release Notes - https://github.com/
On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs
On Sun, Sep 11, 2022 at 2:52 AM Angelo Hongens wrote:
>
> Does that Windows driver even support IPv6?
Hi Angelo,
Adding Lucian who would know more, but there is a recent fix for IPv6
on Windows:
https://tracker.ceph.com/issues/53281
Thanks,
Ilya
>
> I remember I could not
On Thu, Sep 1, 2022 at 8:19 PM Yuri Weinstein wrote:
>
> I have several PRs that are ready for merge but failing "make check"
>
> https://github.com/ceph/ceph/pull/47650 (main related to quincy)
> https://github.com/ceph/ceph/pull/47057
> https://github.com/ceph/ceph/pull/47621
>
On Fri, Aug 19, 2022 at 1:21 PM Martin Traxl wrote:
>
> Hi Ilya,
>
> On Thu, 2022-08-18 at 13:27 +0200, Ilya Dryomov wrote:
> > On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl
> > wrote:
>
> [...]
>
> > Hi Martin,
On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl wrote:
>
> Hi,
>
> I am running a Ceph 16.2.9 cluster with wire encryption. From my ceph.conf:
> _
> ms client mode = secure
> ms cluster mode = secure
> ms mon client mode = secure
> ms mon cluster mode = secure
> ms mon service mode =
On Wed, Aug 10, 2022 at 3:03 AM Laura Flores wrote:
>
> Hey Satoru and others,
>
> Try this link:
> https://ceph.io/en/news/blog/2022/v15-2-17-octopus-released/
Note that this release also includes the fix for CVE-2022-0670 [1]
(same as in v16.2.10 and v17.2.2 hotfix releases). I have updated
On Tue, Jul 26, 2022 at 1:41 PM Peter Lieven wrote:
>
> On 21.07.22 at 17:50, Ilya Dryomov wrote:
> > On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
> >> On 19.07.22 at 17:57, Ilya Dryomov wrote:
> >>> On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven w
On Sat, Jul 23, 2022 at 12:16 PM Konstantin Shalygin wrote:
>
> Hi,
>
> Is this a hotfix-only release? No other patches targeted for 16.2.10
> landed here?
Hi Konstantin,
Correct, just fixes for CVE-2022-0670 and potential s3website
denial-of-service bug.
Thanks,
Ilya
On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
>
> > On 19.07.22 at 17:57, Ilya Dryomov wrote:
> > On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
> >> On 24.06.22 at 16:13, Peter Lieven wrote:
> >>> On 23.06.22 at 12:59, Ilya Dryomov wrote:
On Thu, Jul 21, 2022 at 4:24 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs,
On Tue, Jul 19, 2022 at 9:55 PM Wesley Dillingham
wrote:
>
>
> Thanks.
>
> Interestingly the older kernel did not have a problem with it but the newer
> kernel does.
The older kernel can't communicate via v2 protocol so it doesn't (need
to) distinguish v1 and v2 addresses.
Thanks,
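For context, a sketch of how the two address forms can be spelled out explicitly in ceph.conf, reusing the mon IP quoted in the next message:

  mon_host = [v2:10.26.42.172:3300,v1:10.26.42.172:6789]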
On Tue, Jul 19, 2022 at 9:12 PM Wesley Dillingham
wrote:
>
>
> from ceph.conf:
>
> mon_host = 10.26.42.172,10.26.42.173,10.26.42.174
>
> map command:
> rbd --id profilerbd device map win-rbd-test/originalrbdfromsnap
>
> [root@a2tlomon002 ~]# ceph mon dump
> dumped monmap epoch 44
> epoch 44
>
On Tue, Jul 19, 2022 at 5:01 PM Wesley Dillingham
wrote:
>
> I have a strange error when trying to map via krbd on a RH (alma8) release
> / kernel 4.18.0-372.13.1.el8_6.x86_64 using ceph client version 14.2.22
> (cluster is 14.2.16)
>
> the rbd map causes the following error in dmesg:
>
> [Tue
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
>
> On 24.06.22 at 16:13, Peter Lieven wrote:
> > On 23.06.22 at 12:59, Ilya Dryomov wrote:
> >> On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote:
> >>> On 22.06.22 at 15:46, Josh Baergen wrote: