Hi Budai,
Could you take a look at gwcli?
https://docs.ceph.com/en/latest/rbd/iscsi-target-cli/
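For example, a minimal session looks roughly like this (a sketch with placeholder names; the linked page has the full sequence, including gateways and host/LUN mapping):

    # gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=90G

Since gwcli is an interactive shell, the same steps should be scriptable as well.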
-- Original --
From: "Budai Laszlo"
Hi Gilles,
Did you ever figure this out? Also, your rados ls output indicates that the
prod cluster has fewer objects in the index pool than the backup cluster,
or am I misreading this?
David
On Wed, Dec 1, 2021 at 4:32 AM Gilles Mocellin <
gilles.mocel...@nuagelibre.org> wrote:
> Hello,
>
> We
Hi Nico,
No, 2 data centers.
- We use size=4.
- Our CRUSH map is configured with OSDs assigned to 2 separate data center
locations, so we end up with 2 OSDs in use in each DC.
- min_size=2.
- We have one monitor in each DC.
- We have a 3rd monitor that is in a 3rd DC and has a VPN connection t
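For what it's worth, a CRUSH rule for that layout can look like this (an illustrative sketch, assuming 'datacenter' buckets exist in the CRUSH hierarchy):

    rule replicated_2dc {
        id 1
        type replicated
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

With size=4 that puts two replicas in each DC, and min_size=2 keeps the pool available if a whole DC drops out.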
On Fri, Mar 25, 2022 at 4:11 PM Ilya Dryomov wrote:
>
> On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote:
> >
> > Hi Ilya,
> >
> > Thank you for your answer!
> >
> > On 3/24/22 14:09, Ilya Dryomov wrote:
> >
> >
> > How can we see whether a lock is exclusive or shared? The rbd lock ls
> > comm
Hi George,
We use 4/2 for our deployment and it works fine - but it's a huge waste of
space :)
Our reason is that we want to be able to lose a data center and still
have ceph running. You could accomplish that with size=1 on an emergency
basis, but we didn't like the redundancy loss.
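For reference, setting that on a pool is just (pool name is a placeholder):

    ceph osd pool set mypool size 4
    ceph osd pool set mypool min_size 2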
Cheer
Hello ceph-users,
I was wondering if it is good practice to have an even number of replicas in a
replicated pool. For example, have size=4 and min_size=2.
Thank you!
George
Do read this:
https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=toggle_view_desktop#Drive_cache_is_slowing_you_down
and see whether your drives perform better with the write cache on or
off. At least some Micron drives we have want it off for better performance in
Ceph.
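Checking and toggling the cache looks like this (a sketch; adjust the device name, and note hdparm applies to SATA drives):

    smartctl -g wcache /dev/sdX       # show current write cache setting
    smartctl -s wcache,off /dev/sdX   # turn the volatile write cache off
    hdparm -W 0 /dev/sdX              # SATA alternative

Benchmark both ways before settling on one.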
On Fri, Mar 25, 2022 at
On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote:
>
> Hi Ilya,
>
> Thank you for your answer!
>
> On 3/24/22 14:09, Ilya Dryomov wrote:
>
>
> How can we see whether a lock is exclusive or shared? The rbd lock ls command
> output looks identical for the two cases.
>
> You can't. The way --exclu
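For context, the two kinds of advisory lock being compared are taken like this (a sketch; image and lock names made up):

    rbd lock add rbd/myimage mylockid                 # exclusive advisory lock
    rbd lock add --shared mytag rbd/myimage mylockid  # shared advisory lock
    rbd lock ls rbd/myimage                           # lists lockers either way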
Hi,
I found more information in the OSD logs about this assertion; maybe it could
help:
ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
in thread 7f8002357700 thread_name:msgr-worker-2
*** Caught signal (Aborted) **
what(): buffer::end_of_buffer
terminate c
Hi,
This is because the default client id is "admin" -- you are trying to
connect to the cluster as admin with user3's key here.
That makes sense, of course.
This is a bit broader than perhaps needed. If the intention is to
allow user3 to create and use RBD images in namespace user3 of pool
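Something narrower along those lines should do it (a sketch; the pool name is a placeholder):

    ceph auth get-or-create client.user3 \
        mon 'profile rbd' \
        osd 'profile rbd pool=mypool namespace=user3'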
On Wed, Mar 23, 2022 at 07:14:22AM +0200, Budai Laszlo wrote:
> Hello all,
>
> what capabilities should a ceph user have in order to be able to create rbd
> images in one namespace only?
>
> I have tried the following:
>
> [root@ceph1 ~]# rbd namespace ls --format=json
> [{"name":"user1"},{"nam
On Fri, Mar 25, 2022 at 10:11 AM Eugen Block wrote:
>
> Hi,
>
> I was curious and tried the same with debug logs. One thing I noticed
> was that if I use the '-k <keyring>' option I get a different error
> message than with '--id user3'. So with '-k' the result is the same:
>
> ---snip---
> pacific:~ # rbd -k /etc/ceph/ceph.client.user3.keyring -p test2 --namespace
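In other words (a sketch using the names from the snippet above):

    # -k alone still authenticates as client.admin, just with user3's key:
    rbd -k /etc/ceph/ceph.client.user3.keyring -p test2 --namespace user3 ls
    # --id selects the user; the keyring under /etc/ceph is picked up automatically:
    rbd --id user3 -p test2 --namespace user3 ls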
Hello everybody,
Is there a way to create iSCSI targets from the command line with the ceph command?
(Or a series of commands that can be put in a script.) I have reviewed "ceph
-h" but I guess I'm missing something.
Thank you,
Laszlo
Can you add more information about your cluster, like the output of 'ceph -s'
and 'ceph osd df tree'? I haven't seen full OSD errors yet when they are
only around 75% full. It can't be a pool quota since it would report
the pool(s) as full, not the OSDs. Is there anything in the logs of
that OSD?
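For completeness, the quota angle is easy to rule out directly (standard commands; replace <pool> with the pool name):

    ceph df detail                  # per-pool usage, includes quota columns
    ceph osd pool get-quota <pool>  # shows max_objects / max_bytes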
Quoting