[ceph-users] Forcing POSIX Permissions On New CephFS Files

2024-05-09 Thread duluxoz
Hi All, I've gone and gotten myself into a "can't see the forest for the trees" state, so I'm hoping someone can take pity on me and answer a really dumb Q. So I've got a CephFS system happily bubbling along and a bunch of (linux) workstations connected to a number of common shares/folders.
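A common way to get this behaviour is the setgid bit plus default POSIX ACLs on the shared directory, so that whatever a workstation creates inherits sane group ownership and permissions. A minimal sketch, assuming a hypothetical share path and that the CephFS mounts have ACL support enabled:
~~~
# New files/dirs inherit the directory's group
chmod g+s /mnt/cephfs/shared

# Default ACLs applied to anything created underneath
setfacl -d -m u::rwX,g::rwX,o::r-X /mnt/cephfs/shared

# Check the result
getfacl /mnt/cephfs/shared
~~~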

[ceph-users] Re: Mysterious Space-Eating Monster

2024-05-06 Thread duluxoz
Thanks Sake, That recovered just under 4 Gig of space for us. Sorry about the delay getting back to you (been *really* busy) :-) Cheers Dulux-Oz

[ceph-users] Re: ceph-users Digest, Vol 118, Issue 85

2024-04-24 Thread duluxoz
Hi Eugen, Thank you for a viable solution to our underlying issue - I'll attempt to implement it shortly.  :-) However, with all the respect in the world, I believe you are incorrect when you say the doco is correct (but I will be more than happy to be proven wrong).  :-) The relevant text

[ceph-users] Re: Latest Doco Out Of Date?

2024-04-23 Thread duluxoz
Hi Zac, Any movement on this? We really need to come up with an answer/solution - thanks Dulux-Oz On 19/04/2024 18:03, duluxoz wrote: Cool! Thanks for that  :-) On 19/04/2024 18:01, Zac Dover wrote: I think I understand, after more thought. The second command is expected to work after

[ceph-users] Mysterious Space-Eating Monster

2024-04-19 Thread duluxoz
Hi All, *Something* is chewing up a lot of space on our `/var` partition to the point where we're getting warnings about the Ceph monitor running out of space (ie > 70% full). I've been looking, but I can't find anything significant (ie log files aren't too big, etc) BUT there seem to be a
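A hedged first pass for hunting the space down - on a mon host the usual suspects are the monitor's RocksDB store and the logs (paths differ between package-based and cephadm deployments):
~~~
# Biggest directories under /var, staying on this filesystem
du -xh --max-depth=3 /var 2>/dev/null | sort -h | tail -20

# Monitor store and Ceph logs specifically
du -sh /var/lib/ceph/*/mon.* /var/lib/ceph/mon/* /var/log/ceph 2>/dev/null

# Compacting the mon store sometimes reclaims a surprising amount
ceph tell mon.$(hostname -s) compact
~~~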

[ceph-users] Re: Latest Doco Out Of Date?

2024-04-19 Thread duluxoz
Cool! Thanks for that  :-) On 19/04/2024 18:01, Zac Dover wrote: I think I understand, after more thought. The second command is expected to work after the first. I will ask the cephfs team when they wake up. Zac Dover Upstream Docs Ceph Foundation On Fri, Apr 19, 2024 at 17:51, duluxoz

[ceph-users] Re: Latest Doco Out Of Date?

2024-04-19 Thread duluxoz
before I can determine whether the documentation is wrong. Zac Dover Upstream Docs Ceph Foundation On Fri, Apr 19, 2024 at 17:51, duluxoz wrote: Hi All, In reference to this page from the Ceph documentation: https://docs.ceph.com/en/

[ceph-users] Latest Doco Out Of Date?

2024-04-19 Thread duluxoz
Hi All, In reference to this page from the Ceph documentation: https://docs.ceph.com/en/latest/cephfs/client-auth/, down the bottom of that page it says that you can run the following commands: ~~~ ceph fs authorize a client.x /dir1 rw ceph fs authorize a client.x /dir2 rw ~~~ This will
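For reference, recent releases accept multiple path/permission pairs in a single call, and the caps can also be rewritten directly; a sketch using the same filesystem and client names as above (the osd/mds cap strings are roughly what fs authorize generates, so treat them as an approximation):
~~~
# Both paths in one authorize call
ceph fs authorize a client.x /dir1 rw /dir2 rw

# Or edit the existing client's caps outright
ceph auth caps client.x \
  mon 'allow r' \
  osd 'allow rw tag cephfs data=a' \
  mds 'allow rw path=/dir1, allow rw path=/dir2'
~~~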

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-26 Thread duluxoz
I don't know Marc, I only know what I had to do to get the thing working  :-)

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-26 Thread duluxoz
Hi All, OK, an update for everyone, a note about some (what I believe to be) missing information in the Ceph Doco, a success story, and an admission on my part that I may have left out some important information. So to start with, I finally got everything working - I now have my 4T RBD

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-24 Thread duluxoz
Hi Alexander, Already set (and confirmed by running the command again) - no good, I'm afraid. So I just restarted with a brand new image and ran the following commands on the ceph cluster and the host respectively. Results are below: On the ceph cluster: [code] rbd create --size 4T

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-24 Thread duluxoz
Hi Curt, blockdev --getbsz: 4096. rbd info my_pool.meta/my_image:
~~~
rbd image 'my_image':
    size 4 TiB in 1048576 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 294519bf21a1af
    data_pool: my_pool.data
    block_name_prefix:
~~~

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-24 Thread duluxoz
Alwin Antreich wrote: Hi, March 24, 2024 at 8:19 AM, "duluxoz" wrote: Hi, Yeah, I've been testing various configurations since I sent my last email - all to no avail. So I'm back to the start with a brand new 4T image which is rbdmapped to /dev/rbd0. It's not formatted (yet) and so n

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-24 Thread duluxoz
Hi Curt, Nope, no dropped packets or errors - sorry, wrong tree  :-) Thanks for chiming in. On 24/03/2024 20:01, Curt wrote: I may be barking up the wrong tree, but if you run ip -s link show yourNicID on this server or your OSDs do you see any errors/dropped/missed?

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-24 Thread duluxoz
Hi, Yeah, I've been testing various configurations since I sent my last email - all to no avail. So I'm back to the start with a brand new 4T image which is rbdmapped to /dev/rbd0. It's not formatted (yet) and so not mounted. Every time I attempt a mkfs.xfs /dev/rbd0 (or mkfs.xfs
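One thing worth trying on a large, thin-provisioned RBD is skipping the discard pass that mkfs.xfs runs by default - over krbd it can take so long that it looks like a hang. A hedged sketch:
~~~
# -K: do not discard (TRIM) blocks at mkfs time
mkfs.xfs -K /dev/rbd0

# Watch for krbd/OSD errors while it runs
dmesg -wT
~~~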

[ceph-users] Re: Mounting A RBD Via Kernel Modules

2024-03-23 Thread duluxoz
Hi Alexander, DOH! Thanks for pointing out my typo - I missed it, and yes, it was my issue.  :-) New issue (sort of): The requirement of the new RBD Image is 2 TB in size (it's for a MariaDB Database/Data Warehouse). However, I'm getting the following errors: ~~~ mkfs.xfs: pwrite failed:

[ceph-users] Re: Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread duluxoz
On 23/03/2024 18:25, Konstantin Shalygin wrote: Hi, Yes, this is a generic solution for end users mounts - samba gateway k Thanks Konstantin, I really appreciate the help

[ceph-users] Re: Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread duluxoz
On 23/03/2024 18:22, Alexander E. Patrakov wrote: On Sat, Mar 23, 2024 at 3:08 PM duluxoz wrote: Almost right. Please set up a cluster of two SAMBA servers with CTDB, for high availability. Cool - thanks Alex, I really appreciate it :-)

[ceph-users] Re: Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread duluxoz
On 23/03/2024 18:00, Alexander E. Patrakov wrote: Hi Dulux-Oz, CephFS is not designed to deal with mobile clients such as laptops that can lose connectivity at any time. And I am not talking about the inconveniences on the laptop itself, but about problems that your laptop would cause to

[ceph-users] Mounting A RBD Via Kernel Modules

2024-03-23 Thread duluxoz
Hi All, I'm trying to mount a Ceph Reef (v18.2.2 - latest version) RBD Image as a 2nd HDD on a Rocky Linux v9.3 (latest version) host. The EC pool has been created and initialised and the image has been created. The ceph-common package has been installed on the host. The correct keyring
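For anyone following along, the general shape of what I'm attempting is below - a sketch only, with hypothetical pool/image/client names, and with the EC pool used as the data pool (RBD image metadata has to live in a replicated pool):
~~~
# On the cluster: image in a replicated pool, data objects in the EC pool
rbd create --size 4T --data-pool my_ec_pool my_rep_pool/my_image

# On the Rocky host: keyring in /etc/ceph, then map, format, mount
rbd map my_rep_pool/my_image --id my_client
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/my_image
~~~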

[ceph-users] Laptop Losing Connectivity To CephFS On Sleep/Hibernation

2024-03-23 Thread duluxoz
Hi All, I'm looking for some help/advice to solve the issue outlined in the heading. I'm running CephFS (name: cephfs) on a Ceph Reef (v18.2.2 - latest update) cluster, connecting from a laptop running Rocky Linux v9.3 (latest update) with KDE v5 (latest update). I've set up the laptop to
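For what it's worth, the laptop-side mount is along these lines - an illustrative fstab entry only, reusing the monitor addresses from my other post and a hypothetical client name; the systemd automount options at least stop a dead mount from hanging boot or login after a resume:
~~~
# /etc/fstab (one line)
192.168.1.10,192.168.1.11,192.168.1.12:/ /mnt/cephfs ceph name=laptop,secretfile=/etc/ceph/laptop.secret,noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev 0 0
~~~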

[ceph-users] Re: CephFS On Windows 10

2024-03-08 Thread duluxoz
that it’s using the default port and not a custom one, also be aware the v1 protocol uses 6789 by default. Increasing the messenger log level to 10 might also be useful: debug ms = 10. Regards, Lucian On 28 Feb 2024, at 11:05, duluxoz wrote: Hi All, I'm looking for some pointers/help

[ceph-users] Re: CephFS On Windows 10

2024-03-08 Thread duluxoz
sections Also note the port is the 6789 not 3300. -Original Message- From: duluxoz Sent: Wednesday, February 28, 2024 4:05 AM To:ceph-users@ceph.io Subject: [ceph-users] CephFS On Windows 10 Hi All,  I'm looking for some pointers/help as to why I can't get my Win10 PC to connect

[ceph-users] Re: Which RHEL/Fusion/CentOS/Rocky Package Contains cephfs-shell?

2024-03-08 Thread duluxoz
Thanks for the info Kefu - hmm, I wonder who I should raise this with? On 08/03/2024 19:57, kefu chai wrote: On Fri, Mar 8, 2024 at 3:54 PM duluxoz wrote: Hi All, The subject pretty much says it all: I need to use cephfs-shell and it's not installed on my Ceph Node, and I

[ceph-users] Which RHEL/Fusion/CentOS/Rocky Package Contains cephfs-shell?

2024-03-07 Thread duluxoz
Hi All, The subject pretty much says it all: I need to use cephfs-shell and it's not installed on my Ceph Node, and I can't seem to locate which package contains it - help please.  :-) Cheers Dulux-Oz
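For the record, this is how I went hunting for it - hedged, since package naming differs between the distro repos and the upstream ceph.com repos, and cephfs-shell may simply not be packaged for EL at all:
~~~
# Ask dnf which package (if any) ships the binary
dnf provides '*/cephfs-shell'

# If the upstream Ceph repo carries it as its own package
dnf install cephfs-shell
~~~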

[ceph-users] Ceph Cluster Config File Locations?

2024-03-05 Thread duluxoz
Hi All, I don't know how it's happened (bad backup/restore, bad config file somewhere, I don't know) but my (DEV) Ceph Cluster is in a very bad state, and I'm looking for pointers/help in getting it back running (unfortunately, a complete rebuild/restore is *not* an option). This is on Ceph
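In case it helps anyone searching later, these are the places worth checking - a hedged checklist, since the layout depends on whether the cluster is package-based or cephadm/containerised:
~~~
# Minimal client/bootstrap config on each node
cat /etc/ceph/ceph.conf

# Most runtime options live in the mons' config database, not in files
ceph config dump

# cephadm clusters keep per-daemon configs under the cluster fsid
ls /var/lib/ceph/*/          # e.g. /var/lib/ceph/<fsid>/mon.<host>/config
cephadm ls                   # the daemons cephadm knows about on this host
~~~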

[ceph-users] CephFS On Windows 10

2024-02-28 Thread duluxoz
Hi All,  I'm looking for some pointers/help as to why I can't get my Win10 PC to connect to our Ceph Cluster's CephFS Service. Details are as follows: Ceph Cluster: - IP Addresses: 192.168.1.10, 192.168.1.11, 192.168.1.12 - Each node above is a monitor & an MDS - Firewall Ports: open (ie
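For context, the Windows side is roughly this - a sketch based on my reading of the ceph-dokan docs, using the monitor addresses above and a hypothetical client name:
~~~
# C:\ProgramData\ceph\ceph.conf (same ini format as on Linux)
[global]
mon host = 192.168.1.10,192.168.1.11,192.168.1.12
keyring = C:\ProgramData\ceph\keyring
~~~
with the share then mapped from an elevated prompt with something like `ceph-dokan.exe -l x --id win10user` (worth double-checking the exact flags against `ceph-dokan.exe --help`).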

[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz
Out of curiosity, how are you mapping the rbd? Have you tried using guestmount? I'm just spitballing, I have no experience with your issue, so probably not much help. On Mon, 5 Feb 2024, 10:05 duluxoz, wrote: ~~~ Hello, I think that /dev/rbd* devices are filtered

[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz
~~~ Hello, I think that /dev/rbd* devices are filtered "out" or not filtered "in" by the filter option in the devices section of /etc/lvm/lvm.conf. So pvscan (pvs, vgs and lvs) doesn't look at your device. ~~~ Hi Gilles, So the lvm filter from the lvm.conf file is set to the default of `filter = [
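For anyone hitting the same thing, the knobs in /etc/lvm/lvm.conf are these - hedged, since defaults differ between distros and newer LVM releases may be using the devices file (lvmdevices) rather than the filter at all:
~~~
# /etc/lvm/lvm.conf, devices { } section
devices {
    # Let LVM treat /dev/rbd* as an acceptable block device type
    types = [ "rbd", 1024 ]
    # ...and make sure the filter doesn't exclude it
    filter = [ "a|^/dev/rbd.*|", "a|.*|" ]
}
~~~
after which pvscan, `vgchange -ay` and mounting the LV should behave as on any other PV.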

[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz
in this? Cheers On 04/02/2024 19:34, Jayanth Reddy wrote: Hi, Anything with "pvs" and "vgs" on the client machine where there is /dev/rbd0? Thanks

[ceph-users] Re: RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-04 Thread duluxoz
volume on the system? On Sun, Feb 4, 2024, 08:56 duluxoz wrote: Hi All, All of this is using the latest version of RL and Ceph Reef. I've got an existing RBD Image (with data on it - not "critical" as I've got a back up, but it's rather large so I was hopi

[ceph-users] RBD Image Returning 'Unknown Filesystem LVM2_member' On Mount - Help Please

2024-02-03 Thread duluxoz
Hi All, All of this is using the latest version of RL and Ceph Reef. I've got an existing RBD Image (with data on it - not "critical" as I've got a back up, but it's rather large so I was hoping to avoid the restore scenario). The RBD Image used to be served out via a (Ceph) iSCSI Gateway,
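For the archives: LVM2_member isn't a mountable filesystem - it means the image holds an LVM physical volume (plausible for something that used to sit behind an iSCSI gateway), so the actual filesystem lives on a logical volume inside it. A hedged sequence with hypothetical VG/LV names:
~~~
blkid /dev/rbd0               # shows TYPE="LVM2_member"
pvs && vgs && lvs             # find the VG/LV sitting on that PV
vgchange -ay my_vg            # activate the volume group
mount /dev/my_vg/my_lv /mnt/restore
~~~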

[ceph-users] Changing A Ceph Cluster's Front- And/Or Back-End Networks IP Address(es)

2024-01-30 Thread duluxoz
Hi All, Quick Q: How easy/hard is it to change the IP networks of: 1) A Ceph Cluster's "Front-End" Network? 2) A Ceph Cluster's "Back-End" Network? Is it a "simple" matter of: a) Placing the Nodes in maintenance mode b) Changing a config file (I assume it's /etc/ceph/ceph.conf) on each Node
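From what I can tell, the OSD-facing subnets are just config options, while the monitor addresses are the painful part because they're baked into the monmap. A rough sketch of the easy half (the subnets here are made up):
~~~
# Cluster-wide network settings (daemons pick these up on restart)
ceph config set global public_network  10.1.0.0/24
ceph config set global cluster_network 10.2.0.0/24

# The mon IPs themselves live in the monmap
ceph mon dump   # changing these means redeploying mons one at a time,
                # or editing the map offline with monmaptool
~~~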

[ceph-users] RFI: Prometheus, Etc, Services - Optimum Number To Run

2024-01-19 Thread duluxoz
Hi All, In regards to the monitoring services on a Ceph Cluster (ie Prometheus, Grafana, Alertmanager, Loki, Node-Exporter, Promtail, etc) how many instances should/can we run for fault tolerance purposes? I can't seem to recall that advice being in the doco anywhere (but of course, I
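In case it's useful, with cephadm the instance counts are just placement specs; a sketch bumping Prometheus and Alertmanager to two instances each (the counts are an assumption on my part, not an official recommendation):
~~~
# monitoring.yaml - applied with: ceph orch apply -i monitoring.yaml
service_type: prometheus
placement:
  count: 2
---
service_type: alertmanager
placement:
  count: 2
~~~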

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-06 Thread duluxoz
de-00 ~]# cephadm ls | grep "mgr."
        "name": "mgr.ceph-node-00.aoxbdg",
        "systemd_unit": "ceph-e877a630-abaa-11ee-b7ce-52540097c...@mgr.ceph-node-00.aoxbdg",
        "service_name": "mgr",
and you can use that name

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
erations/#watching-cephadm-log-messages On Fri, Jan 5, 2024 at 2:54 PM duluxoz wrote: Yeap, can do - are the relevant logs in the "usual" place or buried somewhere inside some sort of container (typically)?  :-) On 05/01/2024 20:14, Nizamudeen A wrote: no, the err

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
the error? It could have some tracebacks which can give more info to debug it further. Regards, On Fri, Jan 5, 2024 at 2:00 PM duluxoz wrote: Hi Nizam, Yeap, done all that - we're now at the point of creating the iSCSI Target(s) for the gateway (via the Dashboard and/or th

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
by ceph dashboard iscsi-gateway-add -i [] ceph dashboard iscsi-gateway-rm which you can find the documentation here: https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management Regards, Nizam On Fri, Jan 5, 2024 at 12:53 PM duluxoz wrote: Hi All, A little help please

[ceph-users] REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-04 Thread duluxoz
Hi All, A little help please. TL/DR: Please help with error message: ~~~ REST API failure, code : 500 Unable to access the configuration object Unable to contact the local API endpoint (https://localhost:5000/api) ~~~ The Issue I've been through the documentation and can't find
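For anyone landing on the same error, a hedged set of checks that narrows it down (mileage will vary with how the gateways were deployed):
~~~
# What the dashboard thinks the gateways are
ceph dashboard iscsi-gateway-list

# On each gateway: is the API service up and listening on port 5000?
systemctl status rbd-target-api
journalctl -u rbd-target-api -e
ss -tlnp | grep 5000
~~~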

[ceph-users] Re: ceph-iscsi on RL9

2023-12-27 Thread duluxoz
Hi All, A follow up: So, I've got all the Ceph Nodes running Reef v18.2.1 on RL9.3, and everything is working - YAH! Except... The Ceph Dashboard shows 0 of 3 iSCSI Gateways working, and when I click on that panel it returns a "Page not Found" message - so I *assume* those are the three

[ceph-users] ceph-iscsi on RL9

2023-12-23 Thread duluxoz
Hi All, Just successfully(?) completed a "live" update of the first node of a Ceph Quincy cluster from RL8 to RL9. Everything "seems" to be working - EXCEPT the iSCSI Gateway on that box. During the update the ceph-iscsi package was removed (ie `ceph-iscsi-3.6-2.g97f5b02.el8.noarch.rpm` -

[ceph-users] Pool Migration / Import/Export

2023-12-10 Thread duluxoz
Hi All, I find myself in the position of having to change the k/m values on an ec-pool. I've discovered that I simply can't change the ec-profile, but have to create a "new ec-profile" and a "new ec-pool" using the new values, then migrate the "old ec-pool" to the new (see:
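The shape of the migration as I understand it - a sketch with hypothetical names, and note that for RBD images `rbd migration` is usually a better fit than `rados cppool`, which doesn't cope well with EC or omap data:
~~~
# New profile and pool with the desired k/m
ceph osd erasure-code-profile set ec-new k=4 m=2 crush-failure-domain=host
ceph osd pool create my-pool-new erasure ec-new

# Per-image live migration onto the new data pool
rbd migration prepare --data-pool my-pool-new rbd/my-image
rbd migration execute rbd/my-image
rbd migration commit rbd/my-image
~~~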

[ceph-users] Re: ceph-users Digest, Vol 114, Issue 14

2023-12-05 Thread duluxoz
1. Re: EC Profiles & DR (David Rivera) 2. Re: EC Profiles & DR (duluxoz) 3. Re: EC Profiles & DR (Eugen Block)

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread duluxoz
crush-failure-domain=osd when you should use crush-failure-domain=host. With three hosts, you should use k=2, m=1; this is not recommended in a production environment. On Mon, Dec 4, 2023, 23:26 duluxoz wrote: Hi All, Looking for some help/explanation around erasure code pools, etc. I set up a 3
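Putting that advice into commands, for my own notes (a sketch; as noted, k=2/m=1 is only really acceptable on a test cluster):
~~~
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
ceph osd pool create my_ec_pool erasure ec-2-1
ceph osd pool get my_ec_pool crush_rule   # confirm the rule it generated
~~~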

[ceph-users] EC Profiles & DR

2023-12-04 Thread duluxoz
Hi All, Looking for some help/explanation around erasure code pools, etc. I set up a 3-node Ceph (Quincy) cluster with each box holding 7 OSDs (HDDs) and each box running Monitor, Manager, and iSCSI Gateway. For the record the cluster runs beautifully, without resource issues, etc. I

[ceph-users] Re: Ceph Quincy On Rocky 8.x - Upgrade To Rocky 9.1

2023-02-10 Thread duluxoz
Sorry, let me qualify things / try to make them simpler: When upgrading from a Rocky Linux 8.6 Server running Ceph-Quincy to Rocky Linux 9.1 Server running Ceph-Quincy (ie an in-place upgrade of a host-node in an existing cluster): - What is the update procedure? - Can we use the

[ceph-users] Re: Ceph Quincy On Rocky 8.x - Upgrade To Rocky 9.1

2023-02-10 Thread duluxoz
23 16:43, Konstantin Shalygin wrote: You are mentioned that your cluster is Quincy, the el9 package are also for Quincy. What exactly upgrade you mean? k Sent from my iPhone On 11 Feb 2023, at 12:29, duluxoz wrote:  That's great - thanks. Any idea if there are any upgrade instructions? An

[ceph-users] Re: Ceph Quincy On Rocky 8.x - Upgrade To Rocky 9.1

2023-02-10 Thread duluxoz
s packages el9_quincy are available [1] You can try k [1] https://download.ceph.com/rpm-quincy/el9/x86_64/ On 10 Feb 2023, at 13:23, duluxoz wrote: Sorry if this was mentioned previously (I obviously missed it if it was) but can we upgrade a Ceph Quincy Host/Cluster from Rocky Linux (RHEL

[ceph-users] Ceph Quincy On Rocky 8.x - Upgrade To Rocky 9.1

2023-02-09 Thread duluxoz
Hi All, Sorry if this was mentioned previously (I obviously missed it if it was) but can we upgrade a Ceph Quincy Host/Cluster from Rocky Linux (RHEL) v8.6/8.7 to v9.1 (yet), and if so, what is / where can I find the procedure to do this - ie is there anything "special" that needs to be done

[ceph-users] Re: Mysterious HDD-Space Eating Issue

2023-01-17 Thread duluxoz
Hi Eneko, Well, that's the thing: there are a whole bunch of ceph-guest-XX.log files in /var/log/ceph/; most of them are empty, a handful are up to 250 Kb in size, and this one () keeps on growing - and we're not sure where they're coming from (ie there's nothing that we can see in the conf

[ceph-users] Re: Mysterious HDD-Space Eating Issue

2023-01-16 Thread duluxoz
Hi All, Thanks to Eneko Lacunza, E Taka, and Anthony D'Atri for replying - all that advice was really helpful. So, we finally tracked down our "disk eating monster" (sort of). We've got a "runaway" ceph-guest-NN that is filling up its log file (/var/log/ceph/ceph-guest-NN.log) and

[ceph-users] Mysterious Disk-Space Eater

2023-01-11 Thread duluxoz
Hi All, Got a funny one, which I'm hoping someone can help us with. We've got three identical(?) Ceph Quincy Nodes running on Rocky Linux 8.7. Each Node has 4 OSDs, plus Monitor, Manager, and iSCSI G/W services running on them (we're only a small shop). Each Node has a separate 16 GiB

[ceph-users] Re: 2-Layer CRUSH Map Rule?

2022-09-27 Thread duluxoz
a new rule, you can set your pool to use the new pool id anytime you are ready. On Sun, Sep 25, 2022 at 12:49 AM duluxoz wrote: Hi Everybody (Hi Dr. Nick), TL/DR: Is it possible to have a "2-Layer" Crush Map? I think it is (although I'm not sure about how to set it up

[ceph-users] 2-Layer CRUSH Map Rule?

2022-09-25 Thread duluxoz
Hi Everybody (Hi Dr. Nick), TL/DR: Is it possible to have a "2-Layer" Crush Map? I think it is (although I'm not sure about how to set it up). My issue is that we're using 4-2 Erasure coding on our OSDs, with 7-OSDs per OSD-Node (yes, the Cluster is handling things AOK - we're running at
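For the record, the sort of thing I had in mind is a rule that picks hosts first and then OSDs within each host - a sketch only, with the counts (3 hosts x 2 OSDs for a 4+2 profile) being my assumption about the intent:
~~~
# Exported with: ceph osd getcrushmap -o map.bin && crushtool -d map.bin -o map.txt
rule ec_two_layer {
    type erasure
    step take default
    step choose indep 3 type host        # first layer: pick 3 hosts
    step chooseleaf indep 2 type osd     # second layer: 2 OSDs per host
    step emit
}
# Recompile and inject: crushtool -c map.txt -o new.bin && ceph osd setcrushmap -i new.bin
~~~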

[ceph-users] Ceph Quincy Not Enabling `diskprediction-local` - RESOLVED

2022-09-21 Thread duluxoz
Hi Everybody (Hi Dr. Nick), Ah - I've just figured it out - it should have been an underscore (`_`) not a dash (`-`) in `ceph mgr module enable diskprediction_local` "Sorry about that Chief" And sorry for the double-post (damn email client). Cheers Dulux-Oz

[ceph-users] Ceph Quincy Not Enabling `diskprediction-local` - Help Please

2022-09-21 Thread duluxoz
Hi Everybody (Hi Dr. Nick), So, I'm trying to get my Ceph Quincy Cluster to recognise/interact with the "diskprediction-local" manager module. I have the "SMARTMon Tools" and the "ceph-mgr-diskprediction-local" package installed on all of the relevant nodes. Whenever I attempt to enable
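As per the RESOLVED post above, the module name wants an underscore rather than a dash:
~~~
ceph mgr module enable diskprediction_local
ceph mgr module ls | grep diskprediction
~~~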

[ceph-users] Ceph iSCSI & oVirt

2022-09-21 Thread duluxoz
Hi Everybody (Hi Dr. Nick), I'm attacking this issue from both ends (ie from the Ceph-end and from the oVirt-end - I've posted questions on both mailing lists to ensure we capture the required knowledge-bearer(s)). We've got a Ceph Cluster set up with three iSCSI Gateways configured, and we

[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread duluxoz
I ports on ubuntu-gw02 1 gateway is inaccessible - updates will be disabled Querying ceph for state information Gathering pool stats for cluster 'ceph' Regards, Bailey

[ceph-users] Re: Ceph iSCSI rbd-target.api Failed to Load

2022-09-09 Thread duluxoz
The bare minimum settings are:
~~~
api_secure = False
# Optional settings related to the CLI/API service
api_user = admin
cluster_name = ceph
loop_delay = 1
pool = rbd
trusted_ip_list = X.X.X.X,X.X.X.X,X.X.X.X,X.X.X.X
~~~

[ceph-users] Ceph iSCSI rbd-target.api Failed to Load

2022-09-07 Thread duluxoz
Hi All, I've followed the instructions on the CEPH Doco website on Configuring the iSCSI Target. Everything went AOK up to the point where I try to start the rbd-target-api service, which fails (the rbd-target-gw service started OK). A `systemctl status rbd-target-api` gives: ~~~
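The things worth checking, for the record (hedged; the config keys are the ones Bailey lists further up the thread):
~~~
# Why did the service die?
journalctl -u rbd-target-api -e

# The API reads /etc/ceph/iscsi-gateway.cfg - trusted_ip_list in particular
grep -E 'trusted_ip_list|pool|cluster_name' /etc/ceph/iscsi-gateway.cfg

# The pool named in that config (default "rbd") has to exist
ceph osd pool ls | grep -w rbd
~~~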

[ceph-users] ceph -s command hangs with an authentication timeout

2022-08-06 Thread duluxoz
Hi All, So, I've been researching this for days (including reading this mailing-list), and I've had no luck what-so-ever in resolving my issue. I'm hoping someone here can point me in the correct direction. This is a brand new (physical) machine, and I've followed the Manual Deployment
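A rough checklist of what I've been poking at, in case it jogs something for someone (all hedged - this is a single, freshly deployed node):
~~~
# Is the monitor up and listening? (unit name assumes mon id = short hostname)
systemctl status ceph-mon@$(hostname -s)
ss -tlnp | grep -E '3300|6789'

# Can the client reach it, and is the admin keyring where ceph expects it?
ceph -s --connect-timeout 10
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring

# Firewall on the mon host
firewall-cmd --list-all
~~~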

[ceph-users] Re: New Issue - Mapping Block Devices

2021-03-23 Thread duluxoz
d. The mount command itself gives: mount: /my-rbd-bloc-device: special device /dev/rbd0p1 does not exist (same as before I updated my-id) Cheers Matthew J On 23/03/2021 17:34, Ilya Dryomov wrote: On Tue, Mar 23, 2021 at 6:13 AM duluxoz wrote: Hi All, I've got a new issue (hopefully this one w

[ceph-users] New Issue - Mapping Block Devices

2021-03-22 Thread duluxoz
Hi All, I've got a new issue (hopefully this one will be the last). I have a working Ceph (Octopus) cluster with a replicated pool (my-pool), an erasure-coded pool (my-pool-data), and an image (my-image) created - all *seems* to be working correctly. I also have the correct Keyring specified

[ceph-users] Re: Ceph Cluster Taking An Awful Long Time To Rebalance

2021-03-16 Thread duluxoz
Yeap - that was the issue: an incorrect CRUSH rule. Thanks for the help, Dulux-Oz

[ceph-users] Erasure-coded Block Device Image Creation With qemu-img - Help

2021-03-16 Thread duluxoz
Hi Guys, So, new issue (I'm gonna get the hang of this if it kills me :-) ). I have a working/healthy Ceph (Octopus) Cluster (with qemu-img, libvert, etc, installed), and an erasure-coded pool called "my_pool". I now need to create a "my_data" image within the "my_pool" pool. As this is for a
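The wrinkle with erasure-coded pools is that RBD image metadata can't live in them, so the image is created in a replicated pool with the EC pool as its data pool, and qemu then just addresses that image. A sketch, with "my_pool" as the EC pool from above and a hypothetical replicated pool and size:
~~~
# Image lives in the replicated pool, bulk data goes to the EC pool
rbd create --size 1T --data-pool my_pool rbd/my_data

# qemu/libvirt then reference it as rbd:<pool>/<image>
qemu-img info rbd:rbd/my_data
~~~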

[ceph-users] Re: Ceph Cluster Taking An Awful Long Time To Rebalance

2021-03-16 Thread duluxoz
Ah, right, that makes sense - I'll have a go at that Thank you On 16/03/2021 19:12, Janne Johansson wrote: pgs: 88.889% pgs not active 6/21 objects misplaced (28.571%) 256 creating+incomplete For new clusters, "creating+incomplete" sounds like you

[ceph-users] Re: Ceph Cluster Taking An Awful Long Time To Rebalance

2021-03-15 Thread duluxoz
OK, so I set autoscaling to off for all five pools, and the "ceph -s" output has not changed:
~~~
  cluster:
    id: [REDACTED]
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive, 256 pgs incomplete
            Degraded data redundancy: 12 pgs undersized
  services:
~~~

[ceph-users] Re: Ceph Cluster Taking An Awful Long Time To Rebalance

2021-03-15 Thread duluxoz
would suggest disabling the PG Auto Scaler on small test clusters. Thanks On 16 Mar 2021, 10:50 +0800, duluxoz , wrote: Hi Guys, Is the below "ceph -s" normal? This is a brand new cluster with (at the moment) a single Monitor and 7 OSDs (each 6 GiB) that h

[ceph-users] Ceph Cluster Taking An Awful Long Time To Rebalance

2021-03-15 Thread duluxoz
Hi Guys, Is the below "ceph -s" normal? This is a brand new cluster with (at the moment) a single Monitor and 7 OSDs (each 6 GiB) that has no data in it (yet), and yet it's taking almost a day to "heal itself" from adding in the 2nd OSD. ~~~   cluster:     id: [REDACTED]     health:
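For anyone who finds this later: the fix (per my follow-up above) was a CRUSH rule the pool couldn't satisfy, and these are the commands that would have shown it sooner (a hedged sketch):
~~~
ceph osd pool ls detail              # which crush_rule each pool uses
ceph osd crush rule dump             # what that rule actually requires
ceph pg dump_stuck inactive | head   # the PGs that can't be placed
~~~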

[ceph-users] Newbie Help With ceph-mgr

2021-02-25 Thread duluxoz
Hi All, My ceph-mgr keeps stopping (for some unknown reason) after about an hour or so (but has run for up to 2-3 hours before stopping). Up till now I've simply restarted it with 'ceph-mgr -i ceph01'. Is this normal behaviour, or if it isn't, what should I be looking for in the logs? I
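Places to look when it next dies - a hedged sketch, with the unit and log names assuming a package-based deployment and mgr id ceph01, as in the restart command above:
~~~
journalctl -u ceph-mgr@ceph01 -e          # last messages before it stopped
ceph crash ls                             # any recorded daemon crashes
tail -n 100 /var/log/ceph/ceph-mgr.ceph01.log
~~~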

[ceph-users] Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy! - A Follow Up

2021-02-25 Thread duluxoz
Hi Everyone, Thanks to all for both the online and PM help - once it was pointed out that the existing (Octopus) Documentation was... less than current I ended up using the ceph-volume command. A couple of follow-up questions: When using ceph-volume lvm create: 1. Can you specify an osd
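On question 1: I believe ceph-volume does let you pin the OSD id at creation time, though treat the flag as something to verify against `ceph-volume lvm create --help`; a sketch with a hypothetical device:
~~~
ceph-volume lvm create --data /dev/sdb --osd-id 7
ceph-volume lvm list        # confirm what was created
~~~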

[ceph-users] Re: Newbie Requesting Help - Please, This Is Driving Me Mad/Crazy!

2021-02-24 Thread duluxoz
Yes, the OSD Key is in the correct folder (or, at least, I think it is). The line in the steps I did is: sudo -u ceph ceph auth get-or-create osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' -o /var/lib/ceph/osd/ceph-0/keyring This places the osd-0 key in the file