lock owner alive. The question is how to find out the alive
> owner and what the root cause of this is. Why can't the lock be acquired from the
> owner?
> Thank you so much
>
> Jason Dillaman wrote on Wed, Mar 24, 2021 at 20:55:
>>
>> It sounds like this is a non-primary mirrored image, which
already promoted and not mirrored, a primary image
> I have run "rbd --debug-rbd=30" and collected a log file,
> which shows the lock owner is still alive and the lock cannot be acquired (returns -EAGAIN)
> I'll send you the log later
>
> Thank you so much
>
>
> Jason Dillaman wrote on Wed, Mar 24, 2021 at 20:55:
30 32 64 39 35 39 66 65 2d 61 36 39 33 2d
> |e-02d959fe-a693-|
> 0020 34 61 63 62 2d 39 35 65 32 2d 63 61 30 34 62 39
> |4acb-95e2-ca04b9|
> 0030 36 35 33 38 39 62 12 05 2a 60 09 c5 d4 16 12 05
> |65389b..*`..|
> 0040 2a 60 09 c5 d4 16 01 |*`.|
> 000
Can you provide the output from "rados -p volumes listomapvals rbd_trash"?
On Wed, Mar 10, 2021 at 8:03 AM Enrico Bocchi wrote:
>
> Hello everyone,
>
> We have an unpurgeable image living in the trash of one of our clusters:
> # rbd --pool volumes trash ls
> 5afa5e5a07b8bc
On Mon, Mar 1, 2021 at 3:07 PM Pawel S wrote:
>
> Hello Jason!
>
> On Mon, Mar 1, 2021, 19:48 Jason Dillaman wrote:
>
> > On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
> > >
> > > hello!
> > >
> > > I'm trying to understand how Bluesto
On Mon, Mar 1, 2021 at 1:35 PM Pawel S wrote:
>
> hello!
>
> I'm trying to understand how Bluestore cooperates with RBD image clones, so
> my test is simple
>
> 1. create an image (2G) and fill with data
> 2. create a snapshot
> 3. protect it
> 4. create a clone of the image
> 5. write a small
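The test steps listed above could be sketched with the rbd CLI as follows (pool and image names are placeholders):

```shell
POOL=testpool

# 1. Create a 2G image and fill it with data
rbd create --size 2G ${POOL}/parent
rbd bench --io-type write --io-size 4M --io-total 2G ${POOL}/parent

# 2.-3. Snapshot the image and protect the snapshot
rbd snap create ${POOL}/parent@snap1
rbd snap protect ${POOL}/parent@snap1

# 4. Create a clone of the image from the protected snapshot
rbd clone ${POOL}/parent@snap1 ${POOL}/child

# 5. Issue a small write to the clone
rbd bench --io-type write --io-size 4K --io-total 4K ${POOL}/child
```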
Verify you have correct values for "trusted_ip_list" [1].
[1] https://github.com/ceph/ceph-iscsi/blob/master/iscsi-gateway.cfg_sample#L29
On Mon, Mar 1, 2021 at 9:45 AM Várkonyi János
wrote:
>
> Hi All,
>
> I'd like to install Ceph Nautilus on Ubuntu 18.04 LTS and give the storage
> to 2
On Fri, Jan 29, 2021 at 9:34 AM Adam Boyhan wrote:
>
> This is an odd one. I don't hit it all the time, so I don't think it's expected
> behavior.
>
> Sometimes I have no issues enabling rbd-mirror snapshot mode on an rbd when
> it's in use by a KVM VM. Other times I hit the following error, the
On Thu, Jan 28, 2021 at 10:31 AM Jason Dillaman wrote:
>
> On Wed, Jan 27, 2021 at 7:27 AM Adam Boyhan wrote:
> >
> > Doing some more testing.
> >
> > I can demote the rbd image on the primary, promote on the secondary and the
> > image looks great. I can
a4bf793890f9d324c64183e5)
> pacific (rc)
>
> Unfortunately, I am hitting the same exact issues using a pacific client.
>
> Would this confirm that its something specific in 15.2.8 on the osd/mon nodes?
>
>
>
>
>
>
> From: "Jason Dillaman"
> To:
On Fri, Jan 22, 2021 at 3:29 PM Adam Boyhan wrote:
>
> I will have to do some looking into how that is done on Proxmox, but most
> definitely.
Thanks, appreciate it.
> ____
> From: "Jason Dillaman"
> To: "adamb"
> Cc:
>
> This is pretty straight forward, I don't know what I could be missing here.
>
>
>
> From: "Jason Dillaman"
> To: "adamb"
> Cc: "ceph-users" , "Matt Wilder"
> Sent: Friday, January 22, 2021 2:11:36 PM
> Subject: Re: [ceph-
missing
> codepage or helper program, or other error.
>
>
> Primary still looks good.
>
> root@Ccscephtest1:~# rbd clone CephTestPool1/vm-100-disk-1@TestSnapper1
> CephTestPool1/vm-100-disk-1-CLONE
> root@Ccscephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-1-CLONE
> /dev/n
On Thu, Jan 21, 2021 at 6:18 PM Chris Dunlop wrote:
>
> On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
> > Hi,
> >
> > Am 21.01.21 um 05:42 schrieb Chris Dunlop:
> >
> >> Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or
> >> it just "this is crazy large, if
on, bad superblock on /dev/nbd0, missing
> codepage or helper program, or other error.
>
> On the primary still no issues
>
> root@Ccscephtest1:/etc/pve/priv# rbd clone
> CephTestPool1/vm-100-disk-1@TestSnapper CephTestPool1/vm-100-disk-1-CLONE
> root@Ccscephtest1:/etc/pve/
On Thu, Jan 21, 2021 at 2:00 PM Adam Boyhan wrote:
>
> Looks like a script and cron will be a solid work around.
>
> Still interested to know if there are any options to make it so rbd-mirror
> can take more than 1 mirror snap per second.
>
>
>
> From: "adamb"
> To: "ceph-users"
> Sent:
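The script-plus-cron workaround mentioned above could be sketched as a crontab fragment (the image name is reused from elsewhere in the thread and should be treated as a placeholder; a loop inside a script would be needed for anything more frequent than once a minute):

```shell
# /etc/cron.d/rbd-mirror-snap - untested sketch
# Take a mirror snapshot of the image every minute.
* * * * * root rbd mirror image snapshot CephTestPool1/vm-100-disk-1
```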
synced as
a first step, but perhaps there are some extra guardrails we can put
on the system to prevent premature usage if the sync status doesn't
indicate that it's complete.
> ________
> From: "Jason Dillaman"
> To: "adamb"
> Cc: "Eugen Blo
We actually have a bunch of bug fixes for snapshot-based mirroring
pending for the next Octopus release. I think this stuck snapshot case
has been fixed, but I'll try to verify on the pacific branch to
ensure.
On Thu, Jan 21, 2021 at 9:11 AM Adam Boyhan wrote:
>
> Decided to request a resync to
hTestPool2/vm-100-disk-0-CLONE
> root@Bunkcephmon2:~# rbd ls CephTestPool2
> vm-100-disk-0-CLONE
>
> I am sure I will be back with more questions. Hoping to replace our Nimble
> storage with Ceph and NVMe.
>
> Appreciate it!
>
>
> From: "J
On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan wrote:
>
> That's what I thought as well, especially based on this.
>
>
>
> Note
>
> You may clone a snapshot from one pool to an image in another pool. For
> example, you may maintain read-only images and snapshots as templates in one
> pool, and
On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin
wrote:
>
> On 15/01/2021 at 15:39, Jason Dillaman wrote:
>
> 4. But the error is still here :
> 2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map:
> failed to load object
On Fri, Jan 15, 2021 at 4:36 AM Rafael Diaz Maurin
wrote:
>
> Hello cephers,
>
> I run Nautilus (14.2.15)
>
> Here is my context: each night a script takes a snapshot of each RBD volume
> in a pool (all the disks of the hosted VMs) on my ceph production cluster.
> Then each snapshot is
e volume, could result in
hundreds of thousands of ops to the cluster. That's a great way to
hang IO.
> Do you have more information about the NBD/XFS memory pressure issues?
See [1].
> Thanks
>
> -Original Message-
> From: Jason Dillaman
> Sent: Tuesday, January 5
You can try using the "--timeout X" option for "rbd-nbd" to increase
the timeout. Some kernels treat the default as infinity, but there
were some >=4.9 kernels that switched behavior and started defaulting
to 30 seconds. There are also known issues with attempting to place XFS
file systems on top
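For example, the timeout option described above might be used like this (pool and image names are placeholders, and the exact timeout value is only illustrative):

```shell
# Map the image with a 120-second NBD request timeout instead of
# relying on the kernel's default (which varies across versions).
rbd-nbd map --timeout 120 rbd/myimage
```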
cache = false
>
> in /etc/ceph/ceph.conf should work also.
>
> Except it doesn't.
> Even after fully shutting down every node in the ceph cluster and doing a
> cold startup.
>
> is that a bug?
Nope [1]. How would changing a random configuration file on a random
node affect t
> So, while I am happy to file a documentation pull request... I still need to
> find the specific command line that actually *works*, for the "rbd config"
> variant, etc.
>
>
>
> - Original Message -
> From: "Jason Dillaman"
> To: "Phili
false
> rbd: not rbd option: cache
... the configuration option is "rbd_cache" as documented here [2].
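With the correct key name, disabling the cache could be done either centrally or per image; a sketch (image name is a placeholder):

```shell
# Set the option cluster-wide via the MON config store; note the key
# is "rbd_cache", not "cache".
ceph config set global rbd_cache false

# Or override it for a single image via its image-meta config:
rbd config image set rbd/myimage rbd_cache false
```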
>
>
> Very frustrating.
>
>
>
> - Original Message -
> From: "Jason Dillaman"
> To: "Eugen Block"
> Cc: "ceph-users"
On Thu, Dec 17, 2020 at 7:22 AM Eugen Block wrote:
>
> Hi,
>
> > [client]
> > rbd cache = false
> > rbd cache writethrough until flush = false
>
> this is the rbd client's config, not the global MON config you're
> reading here:
>
> > # ceph --admin-daemon `find /var/run/ceph -name 'ceph-mon*'`
On Tue, Dec 15, 2020 at 12:24 PM Philip Brown wrote:
>
> It wont be on the same node...
> but since as you saw, the problem still shows up with iodepth=32 seems
> we're still in the same problem ball park
> also... there may be 100 client machines.. but each client can have anywhere
>
r=0KiB/s,w=53.9MiB/s][r=0,w=13.8k IOPS][eta
> 01m:14s]
Have you tried different kernel versions? Might also be worthwhile
testing using fio's "rados" engine [1] (vs your rados bench test)
since it might not have been comparing apples-to-apples given the
>400MiB/s throughput yo
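A closer apples-to-apples comparison, as suggested, would run fio directly against librbd; a minimal invocation might look like this (pool, image, and client names are placeholders, and fio must be built with rbd support):

```shell
# Random-write test through librbd, matching the 4k/iodepth=32 shape
# of the earlier kernel-rbd fio run.
fio --name=randwrite --ioengine=rbd --clientname=admin --pool=rbd \
    --rbdname=myimage --direct=1 --bs=4k --iodepth=32 --rw=randwrite \
    --runtime=60 --time_based
```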
Insightful question!
> running rados bench write to the same pool, does not exhibit any problems. It
> consistently shows around 480M/sec throughput, every second.
>
> So this would seem to be something to do with using rbd devices. Which we
> need to do.
>
> Fo
On Mon, Dec 14, 2020 at 11:28 AM Philip Brown wrote:
>
>
> I have a new 3 node octopus cluster, set up on SSDs.
>
> I'm running fio to benchmark the setup, with
>
> fio --filename=/dev/rbd0 --direct=1 --rw=randrw --bs=4k --ioengine=libaio
> --iodepth=256 --numjobs=1 --time_based
On Mon, Dec 14, 2020 at 9:39 AM Marc Boisis wrote:
>
>
> Hi,
>
> I would like to know if you support iser in gwcli like the traditional
> targetcli, or if this is planned in a future version of ceph?
We don't have the (HW) resources to test with iSER so it's not
something that anyone is looking
On Sun, Dec 13, 2020 at 6:03 AM mk wrote:
>
> rados ls -p ssdshop
> outputs 20MB of lines without any bench prefix
> ...
> rbd_data.d4993cc3c89825.74ec
> rbd_data.d4993cc3c89825.1634
> journal_data.83.d4993cc3c89825.333485
> journal_data.83.d4993cc3c89825.380648
>
On Tue, Nov 10, 2020 at 1:52 PM athreyavc wrote:
>
> Hi All,
>
> We have recently deployed a new CEPH cluster Octopus 15.2.4 which consists
> of
>
> 12 OSD Nodes(16 Core + 200GB RAM, 30x14TB disks, CentOS 8)
> 3 Mon Nodes (8 Cores + 15GB, CentOS 8)
>
> We use Erasure Coded Pool and RBD block
If the remove command is interrupted after it deletes the data and
image header but before it deletes the image listing in the directory,
this can occur. If you run "rbd rm " again (assuming it
was your intent), it should take care of removing the directory
listing entry.
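In other words, assuming the half-removed image is the one in question (pool and image names here are placeholders):

```shell
# A second "rbd rm" is expected to clean up the stale directory
# entry even though the data and header objects are already gone.
rbd rm volumes/myimage
```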
On Fri, Oct 30, 2020 at
This backport [1] looks suspicious as it was introduced in v14.2.12
and directly changes the initial MonMap code. If you revert it in a
dev build does it solve your problem?
[1] https://github.com/ceph/ceph/pull/36704
On Thu, Oct 22, 2020 at 12:39 PM Wido den Hollander wrote:
>
> Hi,
>
> I
On Wed, Sep 30, 2020 at 8:28 AM wrote:
>
> Hi all,
>
> I'm trying to troubleshoot an interesting problem with RBD performance for
> VMs. Tests were done using fio both outside and inside the VMs shows that
> random read/write is 20-30% slower than bulk read/write at QD=1. However, at
>
On Thu, Sep 24, 2020 at 9:53 AM Stefan Kooman wrote:
>
> On 2020-09-24 14:34, Eugen Block wrote:
> > Hi *,
> >
> > I'm curious if this idea [1] of quotas on namespace level for rbd will
> > be implemented. I couldn't find any existing commands in my lab Octopus
> > cluster so I guess it's still
On Tue, Sep 22, 2020 at 7:23 AM Eugen Block wrote:
>
> It just hit me when I pushed the "send" button: the (automatically
> created) first snapshot initiates the first full sync to catch up on
> the remote site, but from then it's either a manual process or the
> snapshot schedule. Is that it?
On Mon, Sep 14, 2020 at 5:13 AM Lomayani S. Laizer wrote:
>
> Hello,
> Last week i got time to try debug crashes of these vms
>
> Below log includes rados debug which i left last time
>
> https://storage.habari.co.tz/index.php/s/AQEJ7tQS7epC4Zn
>
> I have observed the following with these
rbd gives me a 404. ;-)
> This is better: https://tracker.ceph.com/projects/rbd/issues
Indeed -- thanks!
> Regards,
> Eugen
>
>
> Zitat von Jason Dillaman :
>
> > On Thu, Sep 10, 2020 at 7:36 AM Eugen Block wrote:
> >>
> >> Hi *,
> >>
On Thu, Sep 10, 2020 at 7:44 AM Eugen Block wrote:
>
> Hi *,
>
> I'm currently testing rbd-mirror on ceph version
> 15.2.4-864-g0f510cb110 (0f510cb1101879a5941dfa1fa824bf97db6c3d08)
> octopus (stable) and saw this during an rbd import of a fresh image on
> the primary site:
>
> ---snip---
>
On Fri, Sep 4, 2020 at 11:54 AM wrote:
>
> All;
>
> We've used iSCSI to support virtualization for a while, and have used
> multi-pathing almost the entire time. Now, I'm looking to move from our
> single box iSCSI hosts to iSCSI on Ceph.
>
> We have 2 independent, non-routed, subnets assigned
On Wed, Aug 26, 2020 at 10:33 AM Marc Roos wrote:
>
> >>
> >>
> >> I was wondering if anyone is using ceph csi plugins[1]? I would like
> to
> >> know how to configure credentials, that is not really described for
> >> testing on the console.
> >>
> >> I am running
> >> ./csiceph
On Wed, Aug 26, 2020 at 10:11 AM Marc Roos wrote:
>
>
>
> I was wondering if anyone is using ceph csi plugins[1]? I would like to
> know how to configure credentials, that is not really described for
> testing on the console.
>
> I am running
> ./csiceph --endpoint
On Wed, Aug 26, 2020 at 9:15 AM Willi Schiegel
wrote:
>
> Hello All,
>
> I have a Nautilus (14.2.11) cluster which is running fine on CentOS 7
> servers. 4 OSD nodes, 3 MON/MGR hosts. Now I wanted to enable iSCSI
> gateway functionality to be used by some Solaris and FreeBSD clients. I
> followed
On Tue, Aug 25, 2020 at 6:54 AM huxia...@horebdata.cn
wrote:
>
> Dear Ceph folks,
>
> I am running Openstack Queens to host a variety of Apps, with ceph backend
> storage Luminous 12.2.13.
>
> Is there a solution to support IOPS constraints on a specific rbd volume from
> Ceph side? I know
It's an effort to expose RBD to Windows via a native driver [1]. That
driver is basically a thin NBD shim to connect with the rbd-nbd daemon
running as a Windows service.
On Thu, Aug 20, 2020 at 6:07 AM Stolte, Felix wrote:
>
> Hey guys,
>
> it seems like there was a presentation called “ceph on
tten
> op_features:
> flags:
> create_timestamp: Thu Nov 29 13:56:28 2018
>
> On Mon, 10 Aug 2020 at 09:21, Jason Dillaman wrote:
>>
>> On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote:
>> >
>> > Hi,
>> > I would
On Fri, Aug 7, 2020 at 2:37 PM Steven Vacaroaia wrote:
>
> Hi,
> I would appreciate any help/hints to solve this issue
> iscsi (gwcli) cannot see the images anymore
>
> This configuration worked fine for many months
> What changed was that ceph is "nearly full"
>
> I am in the process of
On Mon, Aug 3, 2020 at 4:11 AM Georg Schönberger
wrote:
>
> Hey Ceph users,
>
> we are currently facing some serious problems on our Ceph Cluster with
> libvirt (KVM), RBD devices and FSTRIM running inside VMs.
>
> The problem is right after running the fstrim command inside the VM the
> ext4
On Fri, Jul 31, 2020 at 8:10 AM Torsten Ennenbach
wrote:
>
> Hi Jason
>
> > Am 31.07.2020 um 14:08 schrieb Jason Dillaman :
> >
> > rados
> > -p rbd listomapvals rbd_header.f907bc6b8b4567
>
> rados -p rbd listomapvals rbd_header.f907b
rados
-p rbd listomapvals rbd_header.f907bc6b8b4567" return?
> I tried to move this to trash as a solution, but this aint working also.
>
>
> Best regards
> Torsten
>
>
> > Am 31.07.2020 um 13:58 schrieb Jason Dillaman :
> >
> > On Fri, Jul 31, 2020 at 3:37 AM Torsten Ennenb
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote:
>
> On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
> >
> >
> >
> > On 29/07/2020 14:54, Jason Dillaman wrote:
> > > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> > &
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote:
>
>
>
> On 29/07/2020 14:54, Jason Dillaman wrote:
> > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
> >>
> >> Hi,
> >>
> >> I'm trying to have clients read the 'rbd_defau
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote:
>
> Hi,
>
> I'm trying to have clients read the 'rbd_default_data_pool' config
> option from the config store when creating a RBD image.
>
> This doesn't seem to work and I'm wondering if somebody knows why.
It looks like all string-based
On Tue, Jul 28, 2020 at 11:39 AM Jason Dillaman wrote:
>
> On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
> wrote:
> >
> > On 2020-07-28 15:52, Jason Dillaman wrote:
> > > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > > wrote:
> > >&g
On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
wrote:
>
> On 2020-07-28 15:52, Jason Dillaman wrote:
> > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > wrote:
> >>
> >> On 2020-07-28 14:49, Jason
On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
wrote:
>
> On 2020-07-28 14:49, Jason Dillaman wrote:
> >> VM in libvirt with:
> >>
> >>
> >>
> >>
> >>
> >>
> >>
>
On Tue, Jul 28, 2020 at 7:19 AM Johannes Naab
wrote:
>
> Hi,
>
> we observe crashes in librbd1 on specific workloads in virtual machines
> on Ubuntu 20.04 hosts with librbd1=15.2.4-1focal.
>
> The changes in
> https://github.com/ceph/ceph/commit/50694f790245ca90a3b8a644da7b128a7a148cc6
> could be
On Mon, Jul 27, 2020 at 3:08 PM Herbert Alexander Faleiros
wrote:
>
> Hi,
>
> On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote:
> > On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
> > wrote:
> > >
> > > On Fri, Jul 24, 2020
receive images
from "master") or "rx-tx" for bi-directional mirroring?
> 2020-07-24T21:46:25.978+0200 7f931dca9700 10 rbd::mirror::RemotePollPoller:
> 0x5628339d92b0 schedule_task:
>
>
> -Original Message-
> From: Jason Dillaman
> Sent: Friday, July 24
looks this way:
> > rbd mirror pool info testpool
> > Mode: image
> > Site Name: ceph
> > Peer Sites:
> > UUID: e68b09de-1d2c-4ec6-9350-a6ccad26e1b7
> > Name: ceph
> > Mirror UUID: 4d7f87f4-47be-46dd-85f1-79caa3fa23da
> > Direction: tx-o
On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
wrote:
>
> On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote:
> > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros
> > wrote:
> > >
> > > Hi,
> > >
> > > is there any way to fix it instead a reboot?
> > >
>
On Fri, Jul 24, 2020 at 9:11 AM Herbert Alexander Faleiros
wrote:
>
> Hi,
>
> is there any way to do that without disabling journaling?
Negative at this point. There are no versions of the Linux kernel that
support the journaling feature.
> # rbd map image@snap
> rbd: sysfs write failed
> RBD
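One known alternative to krbd here: rbd-nbd is built on librbd, so it can map images that have the journaling feature enabled. A sketch with placeholder names:

```shell
# krbd cannot map a journaled image, but rbd-nbd (librbd-based) can.
rbd-nbd map --read-only rbd/myimage@mysnap
```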
On Fri, Jul 24, 2020 at 8:02 AM wrote:
>
> Hi,
>
> this is the main site:
>
> rbd mirror pool info testpool
> Mode: image
> Site Name: ceph
>
> Peer Sites:
>
> UUID: 1f1877cb-5753-4a0e-8b8c-5e5547c0619e
> Name: backup
> Mirror UUID: e9e2c4a0-1900-4db6-b828-e655be5ed9d8
> Direction: tx-only
>
>
>
On Fri, Jul 24, 2020 at 7:49 AM wrote:
>
> Hi,
>
> i have a working journal based mirror setup initially created with nautilus.
> I recently upgraded to octopus (15.2.4) to use snapshot based mirroring.
> After that I disabled mirroring for the first image and reenabled it snapshot
> based.
>
>
On Thu, Jul 9, 2020 at 12:02 AM Void Star Nill wrote:
>
>
>
> On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman wrote:
>>
>> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill
>> wrote:
>> >
>> > Hello,
>> >
>> > My understanding is
On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill wrote:
>
> Hello,
>
> My understanding is that the time to format an RBD volume is not dependent
> on its size as the RBD volumes are thin provisioned. Is this correct?
>
> For example, formatting a 1G volume should take almost the same time as
>
ge backend. Do you
have a "1:storage" entry in your libvirtd.conf?
> Cheers
> - Original Message -
> > From: "Jason Dillaman"
> > To: "Andrei Mikhailovsky"
> > Cc: "ceph-users"
> > Sent: Tuesday, 7 July, 2020 16:33:25
&g
On Tue, Jul 7, 2020 at 11:07 AM Andrei Mikhailovsky wrote:
>
> I've left the virsh pool-list command 'hang' for a while and it did
> eventually get the results back. In about 4 hours!
Perhaps enable the debug logging of libvirt [1] to determine what it's
spending its time on?
>
On Wed, Jul 1, 2020 at 3:23 AM Daniel Stan - nav.ro wrote:
>
> Hi,
>
> We are experiencing a weird issue after upgrading our clusters from ceph
> luminous to nautilus 14.2.9 - I am not even sure if this is ceph related
> but this started to happen exactly after we upgraded, so, I am trying my
>
On Thu, Jun 25, 2020 at 7:51 PM Void Star Nill wrote:
>
> Hello,
>
> Is there a way to list all locks held by a client with the given IP address?
Negative -- you would need to check every image since the locks are
tied to the image.
> Also, I read somewhere that removing the lock with "rbd lock
at, so that you could non-force promote. How are you
writing to the original primary image? Are you flushing your data?
> Jason Dillaman wrote on Tue, Jun 9, 2020 at 7:19 PM:
>>
>> On Mon, Jun 8, 2020 at 11:42 PM Zhenshi Zhou wrote:
>> >
>> > I have just done a test on rbd-mirror. F
> Thanks for the follow-up though!
>
> Regards,
>
> Hans
>
> On Mon, Jun 8, 2020, 13:38 Jason Dillaman wrote:
>>
>> On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert
>> wrote:
>> >
>> > Hi list,
>> >
>> > I've aw
its time is
>> just before I demote
>> the primary image. I lost about 24 hours' data and I'm not sure whether
>> there is an interval
>> between the synchronization.
>>
>> I use version 14.2.9 and I deployed a one direction mirror.
>>
>> Zhenshi Zhou
On Sun, Jun 7, 2020 at 8:06 AM Hans van den Bogert wrote:
>
> Hi list,
>
> I've awaited octopus for a long time to be able to use mirroring with
> snapshotting, since my setup does not allow for journal based
> mirroring. (K8s/Rook 1.3.x with ceph 15.2.2)
>
> However, I seem to be stuck, i've come
On Thu, Jun 4, 2020 at 3:43 AM Zhenshi Zhou wrote:
>
> My condition is that the primary image being used while rbd-mirror sync.
> I want to get the period between the two times of rbd-mirror transfer the
> increased data.
> I will search those options you provided, thanks a lot :)
When using the
On Fri, May 29, 2020 at 12:09 PM Miguel Castillo
wrote:
> Happy New Year Ceph Community!
>
> I'm in the process of figuring out RBD mirroring with Ceph and having a
> really tough time with it. I'm trying to set up just one way mirroring
> right now on some test systems (baremetal servers, all
On Fri, May 29, 2020 at 11:38 AM Palanisamy wrote:
> Hello Team,
>
> Can I get any update on this request.
>
The Ceph team is not really involved in the out-of-tree rbd-provisioner.
Both the in-tree and this out-of-tree RBD provisioner are deprecated to the
ceph-csi [1][2] RBD provisioner. The
On Thu, May 28, 2020 at 8:44 AM Hans van den Bogert
wrote:
> Hi list,
>
> When reading the documentation for the new way of mirroring [1], some
> questions arose, especially with the following sentence:
>
> > Since this mode is not point-in-time consistent, the full snapshot
> delta will need
On Thu, May 14, 2020 at 12:47 PM Kees Meijs | Nefos wrote:
> Hi Anthony,
>
> A one-way mirror suits fine in my case (the old cluster will be
> dismantled in mean time) so I guess a single rbd-mirror daemon should
> suffice.
>
> The pool consists of OpenStack Cinder volumes containing a UUID
On Thu, May 14, 2020 at 3:12 AM Brad Hubbard wrote:
> On Wed, May 13, 2020 at 6:00 PM Lomayani S. Laizer
> wrote:
> >
> > Hello,
> >
> > Below is full debug log of 2 minutes before crash of virtual machine.
> Download from below url
> >
> >
I would also like to add that the OSDs can (and will) use redirect on write
techniques (not to mention the physical device hardware as well).
Therefore, your zeroing of the device might just cause the OSDs to allocate
new extents of zeros while the old extents remain intact (albeit
unreferenced
On Wed, Apr 29, 2020 at 9:27 AM Ron Gage wrote:
> Hi everyone!
>
> I have been working for the past week or so trying to get ceph-iscsi to
> work - Octopus release. Even just getting a single node working would be a
> major victory in this battle but so far, victory has proven elusive.
>
> My
On Mon, Apr 27, 2020 at 7:38 AM Marc Roos wrote:
>
> I guess this is not good for ssd (samsung sm863)? Or do I need to divide
> 14.8 by 40?
>
The 14.8 ms number is the average latency coming from the OSDs, so no need
to divide the number by anything. What is the size of your writes? At 40
On Mon, Apr 20, 2020 at 1:20 PM Void Star Nill
wrote:
> Thanks Ilya.
>
> The challenge is that, in our environment, we could have multiple
> containers using the same volume on the same host, so we map them multiple
> times and unmap them by device when one of the containers
> complete/terminate
On Mon, Apr 6, 2020 at 3:55 AM Lomayani S. Laizer wrote:
>
> Hello,
>
> After upgrade our ceph cluster to octopus few days ago we are seeing vms
> crashes with below error. We are using ceph with openstack(rocky).
> Everything running ubuntu 18.04 with kernel 5.3. We seeing this crashes in
>
On Tue, Mar 24, 2020 at 3:50 AM Ml Ml wrote:
>
> Hello List,
>
> i use rbd-mirror and i asynchronously mirror to my backup cluster.
> My backup cluster only has "spinning rust" and won't be able to always
> perform like the live cluster.
>
> That's fine for me, as long as it's not too far behind
12.2.13 luminous on my backup clouster.
> >
> > To be able to mount the mirrored rbd image (without a protected snapshot):
> > rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1
> > --cluster backup
> >
> > I just need to upgrade my backup cluster?
> >
> >
On Thu, Mar 19, 2020 at 6:19 AM Eugen Block wrote:
>
> Hi,
>
> one workaround would be to create a protected snapshot on the primary
> image which is also mirrored, and then clone that snapshot on the
> remote site. That clone can be accessed as required.
+1. This is the correct approach. If you
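Sketched with placeholder names, the workaround described above would be:

```shell
# On the primary site: snapshot and protect; the snapshot is
# mirrored along with the image.
rbd snap create rbd/myimage@backup1
rbd snap protect rbd/myimage@backup1

# On the remote site: clone the mirrored snapshot; the clone is a
# normal, writable image that can be accessed as required.
rbd --cluster backup clone rbd/myimage@backup1 rbd/myimage-clone
```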
On Fri, Mar 13, 2020 at 3:31 PM Jason Dillaman wrote:
>
> On Fri, Mar 13, 2020 at 2:48 PM Matt Dunavant
> wrote:
> >
> > Jason Dillaman wrote:
> > > On Fri, Mar 13, 2020 at 11:36 AM Matt Dunavant
> > > > > >
> > > > Jason Dill
On Fri, Mar 13, 2020 at 2:48 PM Matt Dunavant
wrote:
>
> Jason Dillaman wrote:
> > On Fri, Mar 13, 2020 at 11:36 AM Matt Dunavant
> > > >
> > > Jason Dillaman wrote:
> > > > On Fri, Mar 13, 2020 at 11:17 AM Matt Dunavant
> > > >
On Fri, Mar 13, 2020 at 11:36 AM Matt Dunavant
wrote:
>
> Jason Dillaman wrote:
> > On Fri, Mar 13, 2020 at 11:17 AM Matt Dunavant
> > > >
> > > I'm not sure of the last known good release of the rbd CLI where this
> > > worked. I just
> > >
On Fri, Mar 13, 2020 at 11:17 AM Matt Dunavant
wrote:
>
> I'm not sure of the last known good release of the rbd CLI where this worked.
> I just ran the sha1sum against the images and they always come up as
> different. Might be worth knowing, this is a volume that's provisioned at
> 512GB
On Wed, Mar 11, 2020 at 9:03 AM Matt Dunavant
wrote:
>
> Should have mentioned, the VM is always off. We are not using snapshots
> either.
Is there a last-known good release of the rbd CLI where it works as
expected? If you run "rbd export -c - |
sha1sum" against both sets of images after
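The suggested comparison could look something like this (image names and the remote cluster name are placeholders; exporting to stdout and hashing avoids writing a 512GB file to disk):

```shell
# Hash both images; matching digests indicate identical contents.
rbd export rbd/source-image - | sha1sum
rbd --cluster backup export rbd/mirrored-image - | sha1sum
```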
increasing a lot if I produce 20MB/sec traffic on that
> > replication image.
> >
> > The latency is like:
> > --- 10.10.50.1 ping statistics ---
> > 100 packets transmitted, 100 received, 0% packet loss, time 20199ms
> > rtt min/avg/max/mdev = 0.067/0.286/1.41
Snapshot-based mirroring hasn't been released yet (technically)
since it's new with Octopus. It might be better in such an
environment, however, since it has the potential to reduce the number
of IOs.
> Thanks,
> Michael
>
> On Tue, Mar 10, 2020 at 3:43 PM Jason Dillaman wrote:
> >
> > On Tue,
replay will replay the writes exactly as written in the journal.
> Thanks,
> Michael
>
>
>
> On Tue, Mar 10, 2020 at 2:19 PM Jason Dillaman wrote:
> >
> > On Tue, Mar 10, 2020 at 6:47 AM Ml Ml wrote:
> > >
> > > Hello List,
> > >
> >