[ceph-users] OSD not starting after being mounted with ceph-objectstore-tool --op fuse

2023-09-21 Thread Budai Laszlo
Hello, I have a problem with an OSD not starting after being mounted offline using the ceph-objectstore-tool --op fuse command. The cephadm `ceph orch ps` now shows the OSD in an error state: osd.0   storage1   error 2m ago   5h    -    4096M  If I'm
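A minimal sketch of the offline FUSE-mount workflow this thread refers to, assuming a cephadm deployment; the `cephadm shell --name osd.0` entry point, the data path and the mountpoint are typical defaults rather than details taken from the original message:
```
# the daemon must be stopped before the store can be opened offline
ceph orch daemon stop osd.0

# enter the OSD's container on its host so the tool can see the data directory
cephadm shell --name osd.0

# inside the shell: serve the object store over FUSE (runs in the foreground)
mkdir -p /mnt/osd.0
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fuse --mountpoint /mnt/osd.0

# when done inspecting, unmount, leave the shell and restart the daemon
umount /mnt/osd.0
ceph orch daemon start osd.0
ceph orch ps --daemon-type osd
```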

[ceph-users] Error adding OSD

2023-09-20 Thread Budai Laszlo
Hi all, I am trying to add an OSD using cephadm but it fails with the message found below. Do you have any idea what may be wrong? The given device used to be in the cluster but it has been removed, and now the device appears as available in the `ceph orch device ls` output. Thank you, Laszlo
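Whatever the actual error below was, re-adding a device that used to carry an OSD usually involves wiping the leftover LVM/BlueStore metadata first; a short sketch, with the host name and device path as placeholders:
```
# confirm the device shows up as available on that host
ceph orch device ls storage1

# wipe leftover LVM / BlueStore metadata from the previous deployment
ceph orch device zap storage1 /dev/sdX --force

# then add it back explicitly (or let an existing OSD service spec pick it up)
ceph orch daemon add osd storage1:/dev/sdX
```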

[ceph-users] Re: same OSD in multiple CRUSH hierarchies

2023-06-19 Thread Budai Laszlo
hosts (given your failure domain would be "host"), which is already the default for the replicated_rule. Did I misunderstand something? Regards, Eugen Quoting Budai Laszlo: Hi there, I'm curious if there is anything against configuring an OSD to be part of multiple CRUSH

[ceph-users] ceph blocklist

2023-06-16 Thread Budai Laszlo
Hello everyone, can someone explain, or direct me to some documentation that explains, the role of the blocklists (formerly blacklists)? What are they useful for? How do they work? Thank you, Laszlo
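Not from the thread, but for orientation: a blocklist entry tells the cluster to ignore requests from a given client address, and is mainly used to fence a stale or unresponsive client (for example an RBD client whose exclusive lock is being taken over, or an evicted CephFS client). The entries can be inspected and managed from the CLI; the address below is a placeholder:
```
# list current blocklist entries
ceph osd blocklist ls

# add an entry manually, optionally with an expiry time in seconds
ceph osd blocklist add 192.168.1.10:0/3710147553 600

# remove it again
ceph osd blocklist rm 192.168.1.10:0/3710147553
```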

[ceph-users] same OSD in multiple CRUSH hierarchies

2023-06-13 Thread Budai Laszlo
Hi there, I'm curious if there is anything against configuring an OSD to be part of multiple CRUSH hierarchies. I'm thinking of the following scenario: I want to create pools that use distinct sets of OSDs. I want to make sure that a piece of data which is replicated at application level will
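The goal described here (pools on disjoint sets of OSDs) is more commonly reached with separate CRUSH roots, or device classes, and one rule per root, rather than by placing one OSD under several hierarchies; a sketch with all names as placeholders:
```
# a dedicated root per OSD group (hosts/OSDs then get linked under the matching root)
ceph osd crush add-bucket groupA-root root
ceph osd crush add-bucket groupB-root root

# one replicated rule per root, both with a host failure domain
ceph osd crush rule create-replicated groupA-rule groupA-root host
ceph osd crush rule create-replicated groupB-rule groupB-root host

# each pool then picks its rule
ceph osd pool set poolA crush_rule groupA-rule
ceph osd pool set poolB crush_rule groupB-rule
```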

[ceph-users] Ceph quincy cephadm orch daemon stop osd.X not working

2022-09-29 Thread Budai Laszlo
Dear All, I'm testing Ceph Quincy and I have problems using the cephadm orchestrator backend. When I try to use it to start/stop OSD daemons, nothing happens. I have a "brand new" cluster deployed with cephadm. So far everything else that I tried worked just like in Pacific, but the ceph
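For reference, a sketch of the commands being tested here and two ways to cross-check whether anything actually happened; osd.3 and the fsid are placeholders:
```
# ask the orchestrator to stop/start a specific OSD daemon
ceph orch daemon stop osd.3
ceph orch daemon start osd.3

# check from the orchestrator's point of view ...
ceph orch ps --daemon-type osd --refresh

# ... and directly on the OSD host via systemd
systemctl status ceph-<fsid>@osd.3.service
```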

[ceph-users] Create iscsi targets from CLI

2022-03-25 Thread Budai Laszlo
Hello everybody, Is there a way to create the iSCSI targets from the command line with the ceph command? (Or a series of commands that can be put in a script.) I have reviewed "ceph -h" but I guess I'm missing something. Thank you, Laszlo
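As far as I know there is no `ceph iscsi ...` subcommand; target management in this era goes through `gwcli` from the ceph-iscsi package, which can be scripted. A rough sketch, assuming the iSCSI gateways are already deployed; the IQNs, gateway name, IP, pool and image are placeholders, and whether a given gwcli build accepts one-shot path arguments like this should be verified:
```
gwcli /iscsi-targets create iqn.2003-01.com.example.iscsi-gw:ceph-igw
gwcli /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:ceph-igw/gateways create gw1 192.168.1.11
gwcli /disks create pool=rbd image=disk_1 size=10G
gwcli /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:ceph-igw/hosts create iqn.1994-05.com.example:client1
```
The Dashboard/mgr REST API is the other scriptable route for creating targets.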

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo
Hi Ilya, Thank you for your answer! On 3/24/22 14:09, Ilya Dryomov wrote: How can we see whether a lock is exclusive or shared? the rbd lock ls command output looks identical for the two cases. You can't. The way --exclusive is implemented is the client simply refuses to release the lock

[ceph-users] RBD Exclusive lock to shared lock

2022-03-24 Thread Budai Laszlo
Hi all, is there any possibility to turn an exclusive lock into a shared one? For instance, if I map a device with "rbd map testimg --exclusive", is there any way to switch that lock to a shared one so I can map the rbd image on another node as well? How can we see whether a lock is
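Per the reply above, the lock taken with --exclusive cannot be inspected as "exclusive" or downgraded in place; a small sketch of the two mapping modes, with the image name as in the thread and the workaround (re-mapping without --exclusive) being a suggestion rather than something from the thread:
```
# node1: map with the lock held exclusively; peers cannot take it over
rbd map testimg --exclusive

# the lock is listed, but the output looks the same as for a normal mapping
rbd lock ls testimg

# to allow another node to map the image, unmap and re-map without --exclusive
rbd unmap testimg
rbd map testimg
```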

[ceph-users] Re: RBD exclusive lock

2022-03-23 Thread Budai Laszlo
Regards, Laszlo On 3/23/22 23:18, Budai Laszlo wrote: Hello all! I am facing the following issue with ceph RBD: I can map the image on multiple hosts. After I map on the first host I can see its lock on the image. After that I was expecting the map to fail on the second node, but actually it

[ceph-users] RBD exclusive lock

2022-03-23 Thread Budai Laszlo
Hello all! I am facing the following issue with ceph RBD: I can map the image on multiple hosts. After I map on the first host I can see its lock on the image. After that I was expecting the map to fail on the second node, but actually it didn't. The second node was able to map the image and
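The behaviour described here matches how the exclusive-lock feature normally works: the lock is cooperative and is handed over transparently between clients, so a plain second map succeeds; only mapping with --exclusive refuses to give the lock up. A quick way to check the feature and the current holder, assuming the image sits in the default pool:
```
# confirm exclusive-lock is among the image features
rbd info testimg | grep features

# show the current lock holder and watchers
rbd lock ls testimg
rbd status testimg
```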

[ceph-users] Re: Ceph multitenancy

2022-03-22 Thread Budai Laszlo
in the mgr API to downgrade privileges. * Additionally, the Dashboard authentication is password-based, while cephx is key-based. Kind Regards, Ernesto On Tue, Mar 22, 2022 at 3:41 PM Budai Laszlo wrote: Dear all, is it possible to use standalone ceph for provisioning storage

[ceph-users] Ceph multitenancy

2022-03-22 Thread Budai Laszlo
Dear all, is it possible to use standalone ceph for provisioning storage resources for multiple tenants in a "Self service" way? (users log in to the dashboard, and they can manage their own resources). Any documentation link or other reference is highly appreciated. Thank you, Laszlo
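Independent of the Dashboard angle, tenant isolation at the RADOS level is usually done with cephx capabilities that confine each client key to its own pool or namespace; a sketch with tenant and pool names as placeholders:
```
# a key that can only use RBD images in the pool "tenant1"
ceph auth get-or-create client.tenant1 mon 'profile rbd' osd 'profile rbd pool=tenant1'

# or confine a tenant to a namespace inside a shared pool
ceph auth get-or-create client.tenant2 mon 'profile rbd' osd 'profile rbd pool=shared namespace=tenant2'
```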

[ceph-users] rbd info flags

2021-09-14 Thread Budai Laszlo
Hello all! What are the flags shown in the rbd info output? Where can I read more about them? Thank you, Laszlo
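For orientation: `features` in the rbd info output lists the static image features, while `flags` reports per-image state; in practice the values seen there are typically "object map invalid" and/or "fast diff invalid", which can be cleared by rebuilding the object map. Pool and image names below are placeholders:
```
rbd info testpool/testimg
# ...
# features: layering, exclusive-lock, object-map, fast-diff
# flags: object map invalid, fast diff invalid    <- usually empty

# if the object map is flagged invalid it can be rebuilt
rbd object-map rebuild testpool/testimg
```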

[ceph-users] Re: Edit crush rule

2021-09-07 Thread Budai Laszlo
_name} > Very easy. > This may kick off some backfill so I'd suggest setting norebalance > before doing this. > > Rich > > On Wed, 8 Sept 2021 at 07:51, Nathan Fish wrote: >> I believe you would create a new rule and switch? >> >> On Tue, Sep 7, 2021 at 3:46 PM
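A sketch of the "create a new rule and switch" approach described in this thread; the rule name, pool name and new failure domain are placeholders:
```
# create a replacement rule with the desired failure domain (e.g. chassis instead of host)
ceph osd crush rule create-replicated rep-chassis default chassis

# avoid a burst of data movement while switching
ceph osd set norebalance

# point the pool at the new rule
ceph osd pool set mypool crush_rule rep-chassis

# let backfill proceed when ready
ceph osd unset norebalance
```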

[ceph-users] Edit crush rule

2021-09-07 Thread Budai Laszlo
Dear all, is there a way to change the failure domain of a CRUSH rule using the CLI? I know I can do that by editing the crush map. I'm curious if there is a "CLI way"? Thank you, Laszlo

[ceph-users] rados -p pool_name ls shows deleted object when there is a snapshot

2021-09-03 Thread Budai Laszlo
Dear all, I am experimenting with ceph pacific, and I have noticed that if I remove an object from a pool that has a snapshot, the `rados -p poolname ls` command will keep showing the object.
```
root@node1:~# rados -p testpool ls
first
root@node1:~# rados -p testpool mksnap snap1
created pool
```
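The listing keeps showing the object presumably because the pool snapshot still preserves a clone of it even after the head object is deleted; a sketch of how to verify that, using the pool, object and snapshot names from the message:
```
# delete the object while snap1 exists
rados -p testpool rm first

# the listing still shows it ...
rados -p testpool ls

# ... but the head object is gone (stat should fail with ENOENT)
rados -p testpool stat first

# the copy preserved by the snapshot is still there and readable
rados -p testpool listsnaps first
rados -p testpool -s snap1 get first /tmp/first.snap1
```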

[ceph-users] Re: unbalanced pg/osd allocation

2020-07-30 Thread Budai Laszlo
er/rados/operations/control/ > > > or use the balancer module in newer releases *iff* all clients are new enough > to handle pg-upmap > > https://docs.ceph.com/docs/nautilus/rados/operations/balancer/ > > > > > > >> On Jul 30, 2020, at 9:21 AM, Budai Laszlo
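A sketch of the balancer-module route mentioned here, assuming all connected clients can indeed handle pg-upmap (Luminous or newer):
```
# upmap requires that no pre-Luminous clients are connected
ceph features
ceph osd set-require-min-compat-client luminous

# enable the automatic balancer in upmap mode
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```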

[ceph-users] unbalanced pg/osd allocation

2020-07-30 Thread Budai Laszlo
Dear all, We have a ceph cluster where we have configured two SSD-only pools in order to use them as cache tiers for the spinning disks. Altogether there are 27 SSDs organized on 9 hosts distributed across 3 chassis. The hierarchy looks like this: $ ceph osd df tree | grep -E 'ssd|ID' ID CLASS

[ceph-users] cache tier dirty status

2020-07-27 Thread Budai Laszlo
Hello all, is there a way to interrogate a cache tier pool about the number of dirty objects/bytes that it contains? Thank you, Laszlo
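If I remember correctly, releases of that era report a per-pool DIRTY column in `ceph df detail`, which for a cache-tier pool is the number of objects not yet flushed to the backing pool; flushing can also be driven explicitly. The cache pool name is a placeholder:
```
# per-pool stats, including the DIRTY object count for cache pools
ceph df detail

# flush dirty objects and evict clean ones from the cache pool
rados -p cachepool cache-flush-evict-all
```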

[ceph-users] activating cache tier while rbd is in use

2020-07-23 Thread Budai Laszlo
Hello all, is it allowed to configure and activate a cache tier for a pool that contains RBD images which are in use? The documentation (https://docs.ceph.com/docs/mimic/rados/operations/cache-tiering/) doesn't say anything about this, but we have experienced errors with our VMs consuming ceph rbd
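For reference, the standard sequence for putting a cache tier in front of an existing pool is below; whether it is safe to do this while RBD images in the pool are actively in use is exactly what the thread asks and is not answered here. Pool names and the sizing values are placeholders:
```
ceph osd tier add rbdpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay rbdpool cachepool

# the cache pool also needs hit-set and sizing parameters, for example:
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool target_max_bytes 100000000000
```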

[ceph-users] Re: client - monitor communication.

2020-07-15 Thread Budai Laszlo
have understood CRUSH, I am quite sure that will answer many of your questions. And feel free to ask about CRUSH. I would be glad to answer. BR On Wed, Jul 15, 2020 at 8:54 AM Budai Laszlo <laszlo.bu...@gmail.com>

[ceph-users] Re: client - monitor communication.

2020-07-15 Thread Budai Laszlo
? Thank you, Laszlo On 7/15/20 8:12 AM, Budai Laszlo wrote: > Hi Nghia, > > in the docs (https://docs.ceph.com/docs/master/architecture/#about-pools) > there is the statement "Ceph Clients retrieve a Cluster Map from a Ceph > Monitor, and write objects to pools."

[ceph-users] Re: client - monitor communication.

2020-07-14 Thread Budai Laszlo
client get the knowledge about a change in the cluster? Thank you, Laszlo On 7/15/20 7:57 AM, Nghia Viet Tran wrote: > Hi Laszlo, > > Which client are you talking about? > > On 7/15/20, 11:54, "Budai Laszlo" wrote: > > Hello everybody, > > I'm

[ceph-users] client - monitor communication.

2020-07-14 Thread Budai Laszlo
Hello everybody, I'm trying to figure out how often the ceph client contacts the monitors to update its own copy of the cluster map. Can anyone point me to a document describing this client <-> monitor communication? Thank you, Laszlo
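The short version, not from this thread: clients do not poll the monitors on a schedule; they subscribe to map updates and are pushed new epochs when something changes, on top of a periodic session keep-alive. The option names below are what I believe to be the relevant knobs and are worth verifying:
```
# keep-alive interval of the client's monitor session, its timeout, and how long the
# client waits before hunting for a different monitor
ceph config help mon_client_ping_interval
ceph config help mon_client_ping_timeout
ceph config help mon_client_hunt_interval
```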