>
>
> Original Message
> Subject: Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap
> From: Jason Dillaman <jdill...@redhat.com>
> To: Fulvio Galeazzi <fulvio.galea...@garr.it>
> CC: Ceph Users <ceph-users@lists.ceph.com>
>
targeted for v12.2.3?
>>>>
>>>> [1] http://tracker.ceph.com/issues/22172
>>>
>>> It should, the rbd-nbd version is 12.2.4
>>>
>>> root@lumd1:~# rbd-nbd -v
>>> ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous
>>> (stable)
Do you have a preliminary patch that we can test against?
On Wed, Apr 11, 2018 at 10:27 AM, Alex Gorbachev
wrote:
> On Wed, Apr 11, 2018 at 2:43 AM, Mykola Golub wrote:
>> On Tue, Apr 10, 2018 at 11:14:58PM -0400, Alex Gorbachev wrote:
>>
>>>
, 2018 at 12:06 PM, Jason Dillaman <jdill...@redhat.com> wrote:
> I'll give it a try locally and see if I can figure it out. Note that
> this commit [1] also dropped the call to "bd_set_size" within
> "nbd_size_update", which seems suspicious to me at initial gl
If you run "partprobe" after you resize in your second example, is the
change visible in "parted"?
On Wed, Apr 11, 2018 at 11:01 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Wed, Apr 11, 2018 at 2:13 PM, Jason Dillaman <jdill...@redhat.com> wrote:
Great, thanks for the update.
Jason
On Fri, Apr 13, 2018 at 11:06 PM, Alex Gorbachev <a...@iss-integration.com>
wrote:
> On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev <a...@iss-integration.com>
> wrote:
>> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman <
On Thu, Apr 19, 2018 at 11:32 AM, Sven Barczyk wrote:
> Hi,
>
>
>
> does anyone have experience in changing auth cap in production
> environments?
>
> I’m trying to add an additional pool with rwx to my client.libvirt
> (OpenNebula).
>
>
>
> ceph auth cap client.libvirt mon
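A minimal sketch of such a caps change, with placeholder pool names (not from
the original thread): note that "ceph auth caps" replaces the client's entire
cap set, so any existing pools must be listed again alongside the new one.

$ ceph auth get client.libvirt    # record the current caps first
$ ceph auth caps client.libvirt mon 'allow r' osd 'allow rwx pool=existing-pool, allow rwx pool=new-pool'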
I'd check your latency between your client and your cluster. On my
development machine w/ only a single OSD running and 200 clones, each
with 1 snapshot, "rbd -l" only takes a couple seconds for me:
$ time rbd ls -l --rbd_concurrent_management_ops=1 | wc -l
403
real 0m1.746s
user 0m1.136s
sys
BD-8b2cfe76-44b7-4393-b376-f675366831c3@BASE
> 2
> RBD-0192938e-cb4b-4ee1-9988-b8145704ac81@BASE 20480M
> RBD_XenStorage-07449252-bf96-4daa-b0a6-687b7f1c369c/RBD-8b2cfe76-44b7-4393-b376-f675366831c3@BASE
> 2 yes
> ...
> RBD-feb32ab0-a5ee-44e6-9089-486e91ee8af3 20480M
> RBD_XenSt
On Wed, Mar 28, 2018 at 7:33 AM, Brad Hubbard wrote:
> On Wed, Mar 28, 2018 at 6:53 PM, Max Cuttins wrote:
>> On 27/03/2018 13:46, Brad Hubbard wrote:
>>
>>
>>
>> On Tue, Mar 27, 2018 at 9:12 PM, Max Cuttins wrote:
>>>
>>> Hi
questions and more have already been answered
repeatedly on your other thread.
> On 28/03/2018 13:36, Jason Dillaman wrote:
>>
>> But I don't think that CentOS7.5 will use the kernel 4.16 ... so you are
>> telling me that new feature will be backported to the kernel 3.* ?
&g
You might want to take a look at the Zipkin tracing hooks that are
(semi)integrated into Ceph [1]. The hooks are disabled by default in
release builds so you would need to rebuild Ceph yourself and then
enable tracing via the 'rbd_blkin_trace_all = true' configuration
option [2].
[1]
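For reference, once you are running a build with the hooks compiled in,
enabling the option mentioned above is an ordinary client-side setting,
e.g. in ceph.conf:

[client]
rbd_blkin_trace_all = true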
On Fri, Mar 23, 2018 at 8:08 AM, Mike Cave wrote:
> Greetings all!
>
>
>
> I’m currently attempting to create an EC pool for my glance images, however
> when I save an image through the OpenStack command line, the data is not
> ending up in the EC pool.
>
> So a little information
On Mon, Mar 5, 2018 at 2:07 PM, Brady Deetz wrote:
> While preparing a risk assessment for a DR solution involving RBD, I'm
> increasingly unsure of a few things.
>
> 1) Does the failover from primary to secondary cluster occur automatically
> in the case that the primary
d for OpenStack. You could
> probably consider it in case of disaster recovery for single VMs, but not
> for a whole cloud environment where you would lose all relationships between
> base images and their clones.
>
> Regards,
> Eugen
>
>
> Quoting Eugen Block <ebl..
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini
wrote:
> I was building ceph in order to use with iSCSI.
> But I just see from the docs that need:
>
> CentOS 7.5
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
>
> Kernel 4.17
>
On Wed, Feb 28, 2018 at 10:06 AM, Max Cuttins <m...@phoenixweb.it> wrote:
>
>
> On 28/02/2018 15:19, Jason Dillaman wrote:
>>
>> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini <m...@phoenixweb.it>
>> wrote:
>>>
>>> I was building
On Wed, Feb 28, 2018 at 9:17 AM, Max Cuttins wrote:
> Sorry for being rude Ross,
>
> I follow Ceph since 2014 waiting for iSCSI support in order to use it with
> Xen.
What OS are you using in Dom0 that you cannot just directly use krbd?
iSCSI is going to add an extra hop so
On 28.06.2018 at 14:33, Jason Dillaman wrote:
> > You should have "/var/log/ansible-module-igw_config.log" on the target
> > machine that hopefully includes more information about why the RBD image
> > is missing from the TCM backend.
> I have the logfile, h
You should have "/var/log/ansible-module-igw_config.log" on the target
machine that hopefully includes more information about why the RBD image is
missing from the TCM backend. In the past, I've seen issues w/ image
features and image size mismatch causing the process to abort.
On Thu, Jun 28,
Hi,
>
> On 29.06.2018 at 17:04, Jason Dillaman wrote:
> > Is 'tcmu-runner' running on that node?
> yes it is running
>
> > Any errors in dmesg or
> there are no errors
> > /var/log/tcmu-runner.log?
> the following error is shown:
> [ERROR] add_device:436: coul
gwcli doesn't allow you to shrink images (it silently ignores you). Use
'rbd resize' and restart the GWs to pick up the new size.
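A rough sketch of that workflow (pool/image names are placeholders; shrinking
additionally requires the --allow-shrink flag):

$ rbd resize --size 2T --allow-shrink rbd/disk_1
# then restart the gateway services on each gateway node so they pick up the
# new size (e.g. the rbd-target-api / rbd-target-gw services, depending on
# your ceph-iscsi version)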
On Fri, Jun 29, 2018 at 11:36 AM Wladimir Mutel wrote:
> Wladimir Mutel wrote:
>
> > it back to gwcli/disks), I discover that its size is rounded up to 3
> > TiB,
a
"/usr/lib64/tcmu-runner/handler_rbd.so" file? Perhaps enable debug-level
logging in "/etc/tcmu/tcmu.conf" and see if that helps.
On Fri, Jun 29, 2018 at 11:40 AM Bernhard Dick wrote:
> On 29.06.2018 at 17:26, Jason Dillaman wrote:
> > OK, so your tcmu-runner
Is 'tcmu-runner' running on that node? Any errors in dmesg or
/var/log/tcmu-runner.log?
On Fri, Jun 29, 2018 at 10:43 AM Bernhard Dick wrote:
> Hi,
>
> On 28.06.2018 at 18:09, Jason Dillaman wrote:
> > Do you have the ansible backtrace from the "ceph-iscsi-gw : igw_lun
On Mon, Oct 8, 2018 at 9:24 AM wrote:
>
> Hi,
>
> I am running a Ceph cluster (Jewel, ceph version 10.2.10-17.el7cp).
>
>
> I also have 2 OpenStack clusters (Ocata (v12) and Pike (v13)).
>
> When I perform a "rbd ls -p --id openstack" on the OpenStack
> Ocata cluster it works fine, when I
> rbd: list: (1) Operation not permitted
> $
>
> Thanks!
> Sinan
>
> On 08-10-2018 15:37, Jason Dillaman wrote:
> > On Mon, Oct 8, 2018 at 9:24 AM wrote:
> >>
> >> Hi,
> >>
> >> I am running a Ceph cluster (Jewel, ceph version 10.2.
se you have a typo
on your "rwx" cap (you have "rxw" instead).
>
> On the problematic Openstack cluster:
> $ ceph auth get client.openstack --id openstack | grep caps
> Error EACCES: access denied
> $
>
>
> When I change "caps: [mon] allow r" t
d-13.2.2-0.el7.x86_64
> ceph-radosgw-13.2.2-0.el7.x86_64
> kernel-ml-4.18.12-1.el7.elrepo.x86_64
> python-cephfs-13.2.2-0.el7.x86_64
> ceph-selinux-13.2.2-0.el7.x86_64
>
>
>
> On Tue, Oct 9, 2018 at 3:51 PM Jason Dillaman wrote:
>>
>> On Tue, Oct 9, 2018 at 3:14 PM Brady
On Mon, Oct 15, 2018 at 4:04 PM Anthony D'Atri wrote:
>
>
> We turned on all the RBD v2 features while running Jewel; since then all
> clusters have been updated to Luminous 12.2.2 and additional clusters added
> that have never run Jewel.
>
> Today I find that a few percent of volumes in each
You can also use "rbd disk-usage <pool>/<image>@<snap>" to compute the
usage of a snapshot.
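For example (names are placeholders; "rbd du" is the short alias):

$ rbd du rbd/image1@snap1    # usage of a single snapshot
$ rbd du -p rbd              # every image and snapshot in the pool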
On Sun, Oct 28, 2018 at 4:39 PM Paul Emmerich wrote:
>
> "rbd diff" tells you what changed in an image since a snapshot:
>
> rbd diff --from-snap <snap> <pool>/<image>
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph
This feature is forthcoming with the Nautilus release of Ceph:
$ rbd info image1
rbd image 'image1':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 101f86439b20
block_name_prefix: rbd_data.101f86439b20
format: 2
features: layering, exclusive-lock, object-map, fast-diff,
On Thu, Nov 1, 2018 at 12:46 AM Ashley Merrick wrote:
>
> Hello,
>
> I have a small EC Pool I am using with RBD to store a bunch of large files
> attached to some VM's for personal storage use.
>
> Currently I have the EC Meta Data Pool on some SSD's, I have noticed even
> though the EC Pool
Your use of "sudo" for the rados CLI tool makes me wonder if perhaps
the "nstcc0" user cannot read "/etc/ceph/ceph.conf" or
"/etc/ceph/ceph.admin.keyring". If that's not the case, what version
of qemu-img are you using?
$ rpm -qa | grep qemu-img
qemu-img-2.11.2-4.fc28.x86_64
$ qemu-img create -f
(CCing Mike since he knows more than me)
On Sun, Oct 28, 2018 at 4:19 AM Frédéric Nass
wrote:
>
> Hello Mike, Jason,
>
> Assuming we adapt the current LIO configuration scripts and put QLogic HBAs
> in our SCSI targets, could we use FC instead of iSCSI as a SCSI transport
> protocol with LIO ?
On Mon, Oct 29, 2018 at 7:48 AM Wido den Hollander wrote:
> On 10/29/18 12:42 PM, kefu chai wrote:
> > + ceph-user for more inputs in hope to get more inputs from librados
> > and librbd 's C++ interfaces.
> >
> > On Wed, Oct 24, 2018 at 1:34 AM Jason Dillaman wrot
[--stripe-unit <stripe-unit>]
> >> [--stripe-count <stripe-count>] [--data-pool
> >> <data-pool>]
> >> [--journal-splay-width <journal-splay-width>]
> >> [--journal-object-size <journal-object-size>]
> >> [--journal-pool <journal-pool>]
> >> [--sparse-s
s appreciated.
>
> Thanks,
>
> Uwe
>
>
>
> > On 07.11.18 at 14:39, Uwe Sauter wrote:
> > I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.
> >
> > On 07.11.18 at 14:31, Jason Dillaman wrote:
> >> With the Mimic release, y
ress]
>[--export-format ] [--pool ]
> [--image ]
>
>
>
>
>
>
> On 07.11.18 at 20:41, Jason Dillaman wrote:
> > If your CLI supports "--export-format 2", you can just do "rbd export
> > --e
On Sun, Nov 4, 2018 at 11:59 PM Wei Jin wrote:
>
> Hi, Jason,
>
> I have a question about rbd mirroring. When enable mirroring, we observed
> that there are a lot of objects prefix with journal_data, thus it consumes a
> lot of disk space.
>
> When will these journal objects be deleted? And are
e
> > to face it.
> > Two osds in an arm board, two gb memory and 2*10T hdd disk on board, so one
> > osd has 1gb memory to support 10TB hdd disk, we must try to make cluster
> > works better as we can.
> >
> >
> > Thanks.
> >
> >> On Nov 2018
With the Mimic release, you can use "rbd deep-copy" to transfer the
images (and associated snapshots) to a new pool. Prior to that, you
could use "rbd export-diff" / "rbd import-diff" to manually transfer
an image and its associated snapshots.
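As a sketch of both approaches (pool, image and snapshot names are
placeholders; the Mimic command is spelled "rbd deep cp" in the CLI):

# Mimic or later:
$ rbd deep cp oldpool/image1 newpool/image1

# Pre-Mimic, per image: copy the base, then replay each snapshot delta:
$ rbd export oldpool/image1@snap1 - | rbd import - newpool/image1
$ rbd snap create newpool/image1@snap1
$ rbd export-diff --from-snap snap1 oldpool/image1@snap2 - | rbd import-diff - newpool/image1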
On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter wrote:
>
>
or that it's so far
behind that it's just not able to keep up with the IO workload of the
image. You can run "rbd journal disconnect --image <image-name>
--client-id=89024ad3-57a7-42cc-99d4-67f33b093704" to force-disconnect
the remote client and start the journal trimming process.
> > On Nov
Attempting to send 256 concurrent 4MiB writes via librbd will pretty
quickly hit the default "objecter_inflight_op_bytes = 100 MiB" limit,
which will drastically slow (stall) librados. I would recommend
re-testing librbd w/ a much higher throttle override.
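A sketch of what a higher override might look like for a test run (the values
are illustrative only; the defaults are 100 MiB and 1024 in-flight ops):

[client]
objecter_inflight_op_bytes = 1073741824   # 1 GiB
objecter_inflight_ops = 10240

The same settings can also be passed as command-line config overrides, like
the --rbd_concurrent_management_ops example earlier in this list.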
On Thu, Nov 15, 2018 at 11:34 AM 赵赵贺东
other.
>> >
>> >
>> > Do the OSD daemon from primary and secondary have to talk to each other?
>> > we have same non routed networks for OSD.
>>
>> The secondary site needs to be able to communicate with all MON and
>> OSD daemons in the primary site
8 2:26 pm, Andre Goree wrote:
> > On 2018/08/21 1:24 pm, Jason Dillaman wrote:
> >> Can you collect any librados / librbd debug logs and provide them via
> >> pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
> >> file's "[c
On Tue, Oct 2, 2018 at 1:25 PM Andre Goree wrote:
>
> On 2018/10/02 10:26 am, Andre Goree wrote:
> > On 2018/10/02 9:54 am, Jason Dillaman wrote:
> >> Perhaps that pastebin link has the wrong log pasted? The provided log
> >> looks like it's associated with the
On Tue, Oct 2, 2018 at 1:48 PM Andre Goree wrote:
>
> On 2018/10/02 1:29 pm, Jason Dillaman wrote:
> > On Tue, Oct 2, 2018 at 1:25 PM Andre Goree wrote:
> >>
> >>
> >> Unfortunately, it would appear that I'm not getting anything in the
> >> logs
On Tue, Oct 2, 2018 at 4:47 PM Vikas Rana wrote:
>
> Hi,
>
> We have a CEPH 3 node cluster at primary site. We created a RBD image and the
> image has about 100TB of data.
>
> Now we installed another 3 node cluster on secondary site. We want to
> replicate the image at primary site to this new
On Wed, Oct 10, 2018 at 11:57 AM Florian Florensa wrote:
>
> Hello everyone,
>
> I noticed sometime ago the namespaces appeared in RBD documentation,
> and by searching it looks like it was targeted for mimic, so i wanted
> to know if anyone had any experiences with it, and if it is going to
> be
The latest master branch version on shaman should be functional:
[1] https://shaman.ceph.com/repos/ceph-iscsi-config/
[2] https://shaman.ceph.com/repos/ceph-iscsi-cli
[3] https://shaman.ceph.com/repos/tcmu-runner/
On Wed, Oct 10, 2018 at 3:39 PM Brady Deetz wrote:
>
> Here's where we are now.
>
get-api log will show
>>
>> Does
>>
>> gwcli ls
>>
>> show it cannot reach the remote gateways?
>>
>>
>> >
>> > adding the disk to the hosts failed with "client masking update" error
>> >
>> > disk add rbd.dstest
>> > CMD: ../hosts/ disk action=add disk=rbd.dstest
&
uname -a
> Linux osd03.tor.medavail.net 4.18.11-1.el7.elrepo.x86_64
>
>
> On Wed, 10 Oct 2018 at 16:22, Jason Dillaman wrote:
>>
>> Are you running the same kernel version on both nodes?
>> On Wed, Oct 10, 2018 at 4:18 PM Steven Vacaroaia wrote:
>> >
>>
ur OSDs on that 192.168.3.x subnet? What daemons are running on
192.168.3.21?
> I could do ceph -s from both sides and they can see each other. Only rbd
> command is having issue.
>
> Thanks,
> -Vikas
>
>
>
>
> On Tue, Oct 2, 2018 at 5:14 PM Jason Dillaman wrote:
>&
able to communicate with all MON and
OSD daemons in the primary site.
> Thanks,
> -Vikas
>
> On Thu, Oct 4, 2018 at 10:13 AM Jason Dillaman wrote:
>>
>> On Thu, Oct 4, 2018 at 10:10 AM Vikas Rana wrote:
>> >
>> > Thanks Jason for great suggestions.
&g
OSD, and client) but I received the expected "Operation not
permitted" due to the corrupt OSD caps. Starting with Jewel v10.2.11,
the monitor will now at least prevent you from setting corrupt caps on
a user.
>
> On 08-10-2018 17:04, Jason Dillaman wrote:
> > On Mon, Oct 8, 2018
You can try adding "prometheus_exporter = false" in your
"/etc/ceph/iscsi-gateway.cfg"'s "config" section if you aren't using
"cephmetrics", or try setting "prometheus_host = 0.0.0.0" since it
sounds like you have the IPv6 stack disabled.
[1]
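i.e., something along these lines in the gateway config (restart the gateway
service afterwards):

# /etc/ceph/iscsi-gateway.cfg
[config]
prometheus_exporter = false
# or, to keep the exporter but bind it to IPv4 only:
prometheus_host = 0.0.0.0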
:16:08] "PUT
> /api/clientlun/iqn.1998-01.com.vmware:test-2d06960a HTTP/1.1" 500 -
>
> On Tue, 9 Oct 2018 at 15:42, Steven Vacaroaia wrote:
>>
>> It worked.
>>
>> many thanks
>> Steven
>>
>> On Tue, 9 Oct 2018 at 15:36, Jason Dillaman
On Tue, Oct 9, 2018 at 3:14 PM Brady Deetz wrote:
>
> I am attempting to migrate to the new tcmu iscsi gateway. Is there a way to
> configure gwcli to export an rbd that was created outside gwcli?
You should be able to just run "/disks create <pool>.<image>
<size>" from within "gwcli" to have it add an existing
e is
>
> "..rbd-target-gw: ValueError: invalid literal for int() with base 10:
> '0.0.0.0' "
>
> adding prometheus_exporter = false works
>
> However I'd like to use prometheus_exporter if possible
> Any suggestions will be appreciated
>
> Steven
>
On Fri, Sep 21, 2018 at 6:48 AM Glen Baars wrote:
>
> Hello Ceph Users,
>
>
>
> We have been using ceph-iscsi-cli for some time now with vmware and it is
> performing ok.
>
>
>
> We would like to use the same iscsi service to store our Hyper-v VMs via
> windows clustered shared volumes. When we
come down to your personal preferences re: baked-in time
for the release vs release EOL timing.
> On Mon, Sep 24, 2018 at 18:08, Jason Dillaman wrote:
>>
>> It *should* work against any recent upstream kernel (>=4.16) and
>> up-to-date dependencies [1]. If you encounter an
It *should* work against any recent upstream kernel (>=4.16) and
up-to-date dependencies [1]. If you encounter any distro-specific
issues (like the PR that Mike highlighted), we would love to get them
fixed.
[1] http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
On Mon, Sep
Thanks for reporting this -- it looks like we broke the parsing of
command-line config overrides. I've opened a tracker ticket against
the issue [1].
On Wed, Sep 19, 2018 at 2:49 PM Vikas Rana wrote:
>
> Hi there,
>
> With default cluster name "ceph" I can map
On Mon, Sep 24, 2018 at 19:11, Jason Dillaman wrote:
>>
>> On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa
>> wrote:
>> >
>> > Currently building 4.18.9 on ubuntu to try it out, also wondering if i
>> > should plan for xenial+luminous or directly t
Thanks for tracking this down. It appears that libvirt needs to check
whether or not the fast-diff map is invalid before attempting to use
it. However, assuming the map is valid, I don't immediately see a
difference between the libvirt and "rbd du" implementation. Can you
provide a pastebin "debug
thorized review, copy, use, disclosure, or distribution
> is STRICTLY prohibited. If you are not the intended recipient, please contact
> the sender by reply e-mail and destroy all copies of the original message.
>
> On Jan 12, 2019, at 4:01 PM, Jason Dillaman wrote:
>
> On Fri, Ja
fidential and privileged
> information. Any unauthorized review, copy, use, disclosure, or distribution
> is STRICTLY prohibited. If you are not the intended recipient, please contact
> the sender by reply e-mail and destroy all copies of the original message.
>
> On Jan 14, 2019, at 9:50 AM,
Your "mon" cap should be "profile rbd" instead of "allow r" [1].
[1]
http://docs.ceph.com/docs/master/rbd/rados-rbd-cmds/#create-a-block-device-user
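For example (the client name and pool are placeholders; remember that
"ceph auth caps" replaces the whole cap set, so include the osd caps as well):

$ ceph auth caps client.rbd-user mon 'profile rbd' osd 'profile rbd pool=rbd'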
On Mon, Jan 21, 2019 at 9:05 PM ST Wong (ITSC) wrote:
>
> Hi,
>
> > Is this an upgraded or a fresh cluster?
> It's a fresh cluster.
>
> > Does
On Fri, Dec 14, 2018 at 4:27 PM Vikas Rana wrote:
>
> Hi there,
>
> We are replicating a RBD image from Primary to DR site using RBD mirroring.
> We were using 10.2.10.
>
> We decided to upgrade the DR site to luminous and upgrade went fine and
> mirroring status also was good.
> We then
CRUSH map tunables support within the kernel is documented here [1]
and RBD feature support within the kernel is documented here [2].
[1]
http://docs.ceph.com/docs/master/rados/operations/crush-map/?highlight=tunables#tunables
[2] http://docs.ceph.com/docs/master/rbd/rbd-config-ref/#rbd-features
I would check to see if the images have an exclusive-lock still held
by a force-killed VM. librbd will generally automatically clear this
up unless it doesn't have the proper permissions to blacklist a dead
client from the Ceph cluster. Verify that your OpenStack Ceph user
caps are correct [1][2].
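A quick way to check for a stale lock (pool/image names are placeholders):

$ rbd lock list volumes/volume-1234
$ rbd status volumes/volume-1234     # shows any remaining watchers
# only if a dead client still holds the lock and librbd cannot blacklist it:
$ rbd lock remove volumes/volume-1234 <lock-id> <locker>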
FYI -- that "entries_behind_master=175226727" bit is telling you that
it has only mirrored about 80% of the recent changes from primary to
non-primary.
Was the filesystem already in place? Are there any partitions/LVM
volumes in-use on the device? Did you map the volume read-only?
On Tue, Nov 27,
The "osd_perf_query" mgr module is just a demo / testing framework.
However, the output was tweaked prior to merge to provide more
readable values instead of the "{value summation} / {count}" in the
original submission.
On Tue, Dec 4, 2018 at 1:56 PM Michael Green wrote:
>
> Interesting, thanks
On Thu, Jan 10, 2019 at 10:50 AM Oliver Freyermuth
wrote:
>
> Dear Jason and list,
>
> Am 10.01.19 um 16:28 schrieb Jason Dillaman:
> > On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
> > wrote:
> >>
> >> Dear Cephalopodians,
> >&
On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> I performed several consistency checks now:
> - Exporting an RBD snapshot before and after the object map rebuilding.
> - Exporting a backup as raw image, all backups (re)created before and after
> the object
I don't think libvirt has any facilities to list the snapshots of an
image for the purposes of display. It appears, after a quick scan of
the libvirt RBD backend [1] that it only internally lists image
snapshots for maintenance reasons.
[1]
I think Ilya recently looked into a bug that can occur when
CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
through the loopback interface (i.e. co-located OSDs and krbd).
Assuming that you have the same setup, you might be hitting the same
bug.
On Thu, Jan 10, 2019 at 6:46 PM
krbd doesn't yet support several RBD features, including journaling
[1]. The only current way to use object-map, fast-diff, deep-flatten,
and/or journaling features against a block device is to use "rbd
device map --device-type nbd " (or use a TCMU loopback
device to create an librbd-backed SCSI
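Concretely, the nbd mapping looks roughly like this (pool/image names are
placeholders; it requires the rbd-nbd package to be installed):

$ rbd device map --device-type nbd rbd/image1    # prints e.g. /dev/nbd0
$ rbd device unmap /dev/nbd0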
I had already created a ticket [1].
[1] http://tracker.ceph.com/issues/37876
On Sat, Jan 12, 2019 at 3:33 PM Oliver Freyermuth
wrote:
>
> > On 10.01.19 at 16:53, Jason Dillaman wrote:
> > On Thu, Jan 10, 2019 at 10:50 AM Oliver Freyermuth
> > wrote:
> >>
> >&g
On Fri, Jan 11, 2019 at 2:09 PM Kenneth Van Alstyne
wrote:
>
> Hello all (and maybe this would be better suited for the ceph devel mailing
> list):
> I’d like to use RBD mirroring between two sites (to each other), but I have
> the following limitations:
> - The clusters use the same name
With the current releases of Ceph, the only way to accomplish this is
by gathering the IO stats on each client node. However, with the
future Nautilus release, this data will now be available directly from
the OSDs.
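A sketch of the per-client-node approach on current releases: enable an admin
socket for the librbd clients and read the perf counters from it (the socket
path below is just an example template):

# ceph.conf on the client host:
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

# then, on that host:
$ ceph --admin-daemon /var/run/ceph/ceph-client.openstack.12345.asok perf dump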
On Fri, Dec 28, 2018 at 6:18 AM Sinan Polat wrote:
>
> Hi all,
>
> We have a
;
> We are running Red Hat Ceph 2.x which is based on Jewel, that means we cannot
> pinpoint who or what is causing the load on the cluster, am I right?
>
> Thanks!
> Sinan
>
> > On 28 Dec 2018 at 15:14, Jason Dillaman wrote the
> > following:
Any chance you know the LBA or byte offset of the corruption so I can
compare it against the log?
On Wed, Sep 12, 2018 at 8:32 PM wrote:
>
> Hi Jason,
>
> On 2018-09-10 11:15:45-07:00 ceph-users wrote:
>
> On 2018-09-10 11:04:20-07:00 Jason Dillaman wrote:
>
>
>
On Wed, Sep 12, 2018 at 10:15 PM wrote:
>
> On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
>
>
> Any chance you know the LBA or byte offset of the corruption so I can
> compare it against the log?
>
> The LBAs of the corruption are 0xA74F000 through 175435776
Are y
On Thu, Sep 13, 2018 at 1:54 PM wrote:
>
> On 2018-09-12 19:49:16-07:00 Jason Dillaman wrote:
>
>
> On Wed, Sep 12, 2018 at 10:15 PM patrick.mcl...@sony.com wrote:
> >
> > On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
> >
> >
> > Any chance y
root users running "systemctl stop
rbdmap" causing issues, there are tons of other ways a root user can
destroy the system.
>
> On 1/25/19, 9:35 PM, "Jason Dillaman" wrote:
>
> The "rbdmap" systemd unit file should take care of it [1].
>
>
On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
wrote:
>
> Hi,
>
> on one of my clusters, I'm getting error message which is getting
> me a bit nervous.. while listing contents of a pool I'm getting
> error for one of images:
>
> [root@node1 ~]# rbd ls -l nvme > /dev/null
> rbd: error processing
h version
> ceph version 14.1.0-559-gf1a72cff25
> (f1a72cff2522833d16ff057ed43eeaddfc17ea8a) nautilus (dev)
>
> Regards,
> Eugen
>
>
> Quoting Jason Dillaman:
>
> > On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
> > wrote:
> >>
> >>
For upstream, "deprecated" might be too strong of a word; however, it
is strongly cautioned against using [1]. There is ongoing work to
replace cache tiering with a new implementation that hopefully works
better and avoids lots of the internal edge cases that the cache
tiering v1 design required.
support is in-place, we can tweak the resync logic to only copy the
deltas by comparing hashes of the objects.
> I'm trying to estimate how long will it take to get a 200TB image in sync.
>
> Thanks,
> -Vikas
>
>
> -Original Message-
> From: Jason Dillaman
> Se
For better or worse, out of the box, librbd and rbd-mirror are
configured to conserve memory at the expense of performance to support
the potential case of thousands of images being mirrored and only a
single "rbd-mirror" daemon attempting to handle the load.
You can optimize writes by adding
What is the version of rbd-mirror daemon and your OSDs? It looks it
found two replicated images and got stuck on the "wait_for_deletion"
step. Since I suspect those images haven't been deleted, it should
have immediately proceeded to the next step of the image replay state
machine. Are there any
What happens when you run "rados -p rbd lock list gateway.conf"?
On Fri, Mar 29, 2019 at 12:19 PM Matthias Leopold
wrote:
>
> Hi,
>
> I upgraded my test Ceph iSCSI gateways to
> ceph-iscsi-3.0-6.g433bbaa.el7.noarch.
> I'm trying to use the new parameter "cluster_client_name", which - to me
> -
og file.
>
> We removed the pool to make sure there's no image left on DR site and
> recreated an empty pool.
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Friday, April 5, 2019 2:24 PM
> To: Vikas Rana
> Cc: ceph-users
&
so,
please use pastebin or similar service to avoid mailing the logs to
the list.
> Rbd-mirror is running as "rbd-mirror --cluster=cephdr"
>
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Monday, April 8, 2019 9:30 AM
> To
When using cache pools (which are essentially deprecated functionality
BTW), you should always reference the base tier pool. The fact that a
cache tier sits in front of a slower base tier is transparently
handled.
On Tue, Mar 26, 2019 at 5:41 PM Götz Reinicke
wrote:
>
> Hi,
>
> I have a rbd in
out the per-pool
configuration overrides, and that will be available in Nautilus via
the new "rbd config global/pool/image ..." commands.
>
> > On Feb 26, 2019, at 5:27 PM, Jason Dillaman wrote:
> >
> > On Tue, Feb 26, 2019 at 7:49 PM Anthony D'Atri
> > wrote:
>
On Tue, Feb 26, 2019 at 7:49 PM Anthony D'Atri wrote:
>
> Hello again.
>
> I have a couple of questions about rbd-mirror that I'm hoping you can help me
> with.
>
>
> 1) http://docs.ceph.com/docs/mimic/rbd/rbd-snapshot/ indicates that
> protecting is required for cloning. We somehow had the
On Tue, Mar 12, 2019 at 11:09 PM Vikas Rana wrote:
>
> Hi there,
>
>
>
> We are replicating a RBD image from Primary to DR site using RBD mirroring.
>
> On Primary, we were using 10.2.10.
Just a note that Jewel is end-of-life upstream.
> DR site is luminous and we promoted the DR copy to test
Looks like you have the IPv6 stack disabled. You will need to override
the bind address from "[::]" to "0.0.0.0" via the "api_host" setting
[1] in "/etc/ceph/iscsi-gateway.cfg"
[1]
https://github.com/ceph/ceph-iscsi/blob/master/ceph_iscsi_config/settings.py#L100
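i.e., something like:

# /etc/ceph/iscsi-gateway.cfg
[config]
api_host = 0.0.0.0

followed by a restart of the rbd-target-api (or rbd-target-gw, depending on
version) service on the gateway.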
On Mon, Mar 18, 2019 at 11:09 AM