[ceph-users] What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-08 Thread Michael Worsham
I have set up a 'reef' Ceph cluster using Cephadm and Ansible in a VMware ESXi 7 
/ Ubuntu 22.04 lab environment per the how-to guide provided here:  
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/.

The installation steps were fairly easy and I was able to get the environment 
up and running in about 15 minutes under VMware ESXi 7. I have buckets and 
pools already set up. However, the ceph.io site is confusing about how to set up 
the Rados Gateway (radosgw) with Multi-site -- 
https://docs.ceph.com/en/latest/radosgw/multisite/. Is a copy of HAProxy also 
needed to handle the front-end load balancing, or is it implied that Ceph 
sets it up?

Command-line scripting I was planning on using for setting up the RGW:
```
radosgw-admin realm create --rgw-realm=sandbox --default
radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox --master --default
radosgw-admin period update --rgw-realm=sandbox --commit
ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox --placement="2 ceph-mon1 ceph-mon2" --port=8000
```

What other steps are needed to get the RGW up and running so that it can be 
presented to something like Veeam for performance and I/O testing?

-- Michael

This message and its attachments are from Data Dimensions and are intended only 
for the use of the individual or entity to which it is addressed, and may 
contain information that is privileged, confidential, and exempt from 
disclosure under applicable law. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify the 
sender immediately and permanently delete the original email and destroy any 
copies or printouts of this email as well as any attachments.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Michael Worsham
Can anyone help me on this? It can't be that hard to do.

-- Michael


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Michael Worsham
So, just so I am clear – in addition to the steps below, will I also need to 
install NGINX or HAProxy on the server to act as the front end?

-- M

From: Rok Jaklič 
Sent: Monday, February 12, 2024 12:30 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: What is the proper way to setup Rados Gateway 
(RGW) under Ceph?


Hi,

The recommended methods of deploying rgw are IMHO overly complicated. You can 
also get the service up manually with something simple like:

[root@mon1 bin]# cat /etc/ceph/ceph.conf

[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required = none
auth service required = none
auth client required = none
ms_mon_client_mode = crc

[client.radosgw.mon1]
host = mon1
log_file = /var/log/ceph/client.radosgw.mon1.log
rgw_dns_name = mon1
rgw_frontends = "civetweb port=80 num_threads=500" # this is different in ceph 
versions 17, 18.
rgw_crypt_require_ssl = false



[root@mon1 bin]# cat start-rgw.sh
radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n 
client.radosgw.mon1 &

---

This configuration has nginx in front of rgw: all traffic goes from nginx 
443 -> rgw 80, and it assumes you "own the network" and are aware of the 
"drawbacks".

Rok


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-19 Thread Michael Worsham
I tried to follow the steps I originally listed, but it seems I am not 
doing something right.

Our Storage/backup admin cannot connect to port 8000 on the server instance.

Do I need to set up HAProxy as a front end on a different port to communicate 
with the RGW listening ports on the two Ceph monitors?
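
I did notice that cephadm can deploy its own haproxy+keepalived pair as an 
"ingress" service, so maybe a hand-rolled HAProxy is not needed at all. A 
sketch of what I think that spec would look like (the virtual IP is a 
placeholder, not an address from our network):
```
cat <<EOF > rgw-ingress.yaml
service_type: ingress
service_id: rgw.sandbox
placement:
  count: 2
spec:
  backend_service: rgw.sandbox      # the rgw service created earlier
  virtual_ip: 10.0.0.100/24         # placeholder VIP
  frontend_port: 80                 # single client-facing port
  monitor_port: 1967                # haproxy status port
EOF
ceph orch apply -i rgw-ingress.yaml
```
Would that be the right approach here?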

-- Michael


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Cephadm and Ceph.conf

2024-02-26 Thread Michael Worsham
I deployed a Ceph reef cluster using cephadm. When it comes to the ceph.conf 
file, which file should I be editing for making changes to the cluster - the 
one running under the docker container or the local one on the Ceph monitors?

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Michael Worsham
So how would I put configurations like this into it?

[global]
fsid = 46620486-b8a6-11ee-bf23-6510c4d9efa7
mon_host = [v2:10.20.27.10:3300/0,v1:10.20.27.10:6789/0] 
[v2:10.20.27.11:3300/0,v1:10.20.27.11:6789/0]
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 256
osd pool default pgp num = 256
mon_max_pg_per_osd = 800
osd max pg per osd hard ratio = 10
mon allow pool delete = true
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
ms_mon_client_mode = crc

[client.radosgw.mon1]
host = ceph-mon1
log_file = /var/log/ceph/client.radosgw.mon1.log
rgw_dns_name = ceph-mon1
rgw_frontends = "beast port=80 num_threads=500"
rgw_crypt_require_ssl = false


-Original Message-
From: Robert Sander 
Sent: Monday, February 26, 2024 8:29 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Cephadm and Ceph.conf



On 2/26/24 14:24, Michael Worsham wrote:
> I deployed a Ceph reef cluster using cephadm. When it comes to the ceph.conf 
> file, which file should I be editing for making changes to the cluster - the 
> one running under the docker container or the local one on the Ceph monitors?

Neither. You can adjust settings with "ceph config" or the Configuration 
tab of the Dashboard.
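
For example, a sketch of how the [global] and RGW options from your snippet 
might translate (the client section name is carried over from your config as 
an assumption):
```
# Cluster-wide options from the [global] section
ceph config set global osd_pool_default_size 3
ceph config set global mon_max_pg_per_osd 800
ceph config set global mon_allow_pool_delete true

# Daemon-specific options for the RGW instance
ceph config set client.radosgw.mon1 rgw_dns_name ceph-mon1
ceph config set client.radosgw.mon1 rgw_crypt_require_ssl false

# Show everything currently set in the central config
ceph config dump
```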

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph & iSCSI

2024-02-26 Thread Michael Worsham
I was reading on the Ceph site that iSCSI is no longer under active development 
since November 2022. Why is that?

https://docs.ceph.com/en/latest/rbd/iscsi-overview/

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] OSD with dm-crypt?

2024-02-26 Thread Michael Worsham
Is there a how-to document or cheat sheet on how to enable OSD encryption using 
dm-crypt?

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: OSD with dm-crypt?

2024-02-26 Thread Michael Worsham
I was setting up the Ceph cluster via this URL 
(https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/)
 and didn't know if there was a way to do it via the "ceph orch daemon add osd 
ceph-osd-01:/dev/sdb" command or not?

Is it possible to enable encryption on an OSD after the fact, or does that 
involve some other process?

-- Michael




From: Alex Gorbachev 
Sent: Monday, February 26, 2024 11:10:54 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] OSD with dm-crypt?


If you are using a service spec, just set

encrypted: true

If using ceph-volume, pass this flag:

--dmcrypt

You can verify similar to 
https://smithfarm-thebrain.blogspot.com/2020/03/how-to-verify-that-encrypted-osd-is.html
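
For a cephadm-managed cluster, a minimal sketch of the spec route (service_id 
and host pattern are examples):
```
cat <<EOF > osd-encrypted.yaml
service_type: osd
service_id: encrypted-osds          # example name
placement:
  host_pattern: 'ceph-osd-*'        # example host pattern
spec:
  data_devices:
    all: true
  encrypted: true
EOF
ceph orch apply -i osd-encrypted.yaml
```
As far as I know there is no in-place conversion: an existing unencrypted OSD 
has to be drained, zapped, and redeployed under a spec with encrypted: true.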
--
Alex Gorbachev
ISS/Storcium



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Michael Worsham
Is there an easy way to poll the ceph cluster buckets to see how much space is 
remaining? And is it possible to see how much ceph cluster space is remaining 
overall? I am trying to extract the data from our Ceph cluster and put it into 
a format that our SolarWinds can understand in whole-number integers, so we 
can monitor allocated bucket space and overall cluster space as a whole.

Via Canonical support, they said I can do something like "sudo ceph df -f 
json-pretty" to pull the information, but what is it I need to look at from the 
output (see below) to display over to SolarWinds?

{
    "stats": {
        "total_bytes": 960027263238144,
        "total_avail_bytes": 403965214187520,
        "total_used_bytes": 556062049050624,
        "total_used_raw_bytes": 556062049050624,
        "total_used_raw_ratio": 0.57921481132507324,
        "num_osds": 48,
        "num_per_pool_osds": 48,
        "num_per_pool_omap_osds": 48
    },
    "stats_by_class": {
        "ssd": {
            "total_bytes": 960027263238144,
            "total_avail_bytes": 403965214187520,
            "total_used_bytes": 556062049050624,
            "total_used_raw_bytes": 556062049050624,
            "total_used_raw_ratio": 0.57921481132507324
        }
    },

And a couple of data pools...
{
    "name": "default.rgw.jv-va-pool.data",
    "id": 65,
    "stats": {
        "stored": 4343441915904,
        "objects": 17466616,
        "kb_used": 12774490932,
        "bytes_used": 13081078714368,
        "percent_used": 0.053900588303804398,
        "max_avail": 76535973281792
    }
},
{
    "name": "default.rgw.jv-va-pool.index",
    "id": 66,
    "stats": {
        "stored": 42533675008,
        "objects": 401,
        "kb_used": 124610380,
        "bytes_used": 127601028363,
        "percent_used": 0.00055542576592415571,
        "max_avail": 76535973281792
    }
},
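
From staring at the output, I assume the cluster-wide numbers come from the 
"stats" block and the per-pool headroom from each pool's "max_avail". A sketch 
of how I might turn that into the whole-number figures SolarWinds wants, 
assuming jq is available on the monitor node:
```
# Overall percent used, rounded to a whole number
sudo ceph df -f json | jq '.stats.total_used_raw_ratio * 100 | round'

# Overall remaining space in GiB
sudo ceph df -f json | jq '.stats.total_avail_bytes / 1024 / 1024 / 1024 | round'

# Per pool: percent used and max_avail in GiB
sudo ceph df -f json | jq -r '.pools[] | "\(.name) \(.stats.percent_used * 100 | round)% used, \(.stats.max_avail / 1024 / 1024 / 1024 | round) GiB available"'
```
Is that the right reading of the fields?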
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-05 Thread Michael Worsham
This looks interesting, but instead of Prometheus, could the data be exported 
for SolarWinds?

The intent is to have SW watch the available storage space allocated and then 
to alert when a certain threshold is reached (75% remaining for a warning; 95% 
remaining for a critical).

-- Michael

From: Konstantin Shalygin 
Sent: Tuesday, March 5, 2024 11:17:10 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] Monitoring Ceph Bucket and overall ceph cluster 
remaining space


Hi,

For RGW usage statistics you can use radosgw_usage_exporter [1]


k
[1] https://github.com/blemmenes/radosgw_usage_exporter

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster remaining space

2024-03-06 Thread Michael Worsham
SW is SolarWinds (www.solarwinds.com), a network and application monitoring and 
alerting platform.

It's not very open source at all, but it's what we use for monitoring all of 
our physical and virtual servers, network switches, SAN and NAS devices, and 
anything else with a network card in it.

From: Konstantin Shalygin 
Sent: Wednesday, March 6, 2024 1:39:43 AM
To: Michael Worsham 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] Re: Monitoring Ceph Bucket and overall ceph cluster 
remaining space



Hi,

I'm not aware of what SW is, but if this software works with the Prometheus 
metrics format, why not. Anyway, the exporters are open source; you can modify 
the existing code for your environment.


k


> On 6 Mar 2024, at 07:58, Michael Worsham  wrote:
>
> This looks interesting, but instead of Prometheus, could the data be exported 
> for SolarWinds?
>
> The intent is to have SW watch the available storage space allocated and then 
> to alert when a certain threshold is reached (75% remaining for a warning; 
> 95% remaining for a critical).

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Need easy way to calculate Ceph cluster space for SolarWinds

2024-03-20 Thread Michael Worsham
Is there an easy way to poll a Ceph cluster to see how much space is available 
and how much space is available per bucket?

Looking for a way to use SolarWinds to monitor the entire Ceph cluster space 
utilization and then also be able to break down each RGW bucket to see how much 
space it was provisioned for and how much is available.

-- Michael


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Need easy way to calculate Ceph cluster space for SolarWinds

2024-03-20 Thread Michael Worsham
I had a request from upper management wanting to use SolarWinds to extract 
what I am looking at and track it in terms of total available space, remaining 
space in the overall cluster, and, I guess, the current RGW pools/buckets we 
have, their allocated sizes, and the space remaining in each. I am somewhat in 
the dark when it comes to breaking things down to make it 
readable/understandable for those who are non-technical.

I was told that when it comes to pools and buckets, you sort of have to see it 
this way:
- Bucket is like a folder
- Pool is like a hard drive.
- You can create many folders in a hard drive and you can add quota to each 
folder.
- But if you want to know the remaining space, you need to check the hard drive.

I did the "ceph df" command on the ceph monitor and we have something that 
looks like this:

>> sudo ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    873 TiB  346 TiB  527 TiB  527 TiB   60.40
TOTAL  873 TiB  346 TiB  527 TiB  527 TiB   60.40

--- POOLS ---
POOL                             ID   PGS   STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr                              1     1  449 KiB         2  1.3 MiB      0     61 TiB
default.rgw.buckets.data          2  2048  123 TiB    41.86M  371 TiB  66.76     61 TiB
default.rgw.control               3     2      0 B         8      0 B      0     61 TiB
default.rgw.data.root             4     2      0 B         0      0 B      0     61 TiB
default.rgw.gc                    5     2      0 B         0      0 B      0     61 TiB
default.rgw.log                   6     2   41 KiB       209  732 KiB      0     61 TiB
default.rgw.intent-log            7     2      0 B         0      0 B      0     61 TiB
default.rgw.meta                  8     2   20 KiB        96  972 KiB      0     61 TiB
default.rgw.otp                   9     2      0 B         0      0 B      0     61 TiB
default.rgw.usage                10     2      0 B         0      0 B      0     61 TiB
default.rgw.users.keys           11     2      0 B         0      0 B      0     61 TiB
default.rgw.users.email          12     2      0 B         0      0 B      0     61 TiB
default.rgw.users.swift          13     2      0 B         0      0 B      0     61 TiB
default.rgw.users.uid            14     2      0 B         0      0 B      0     61 TiB
default.rgw.buckets.extra        15    16      0 B         0      0 B      0     61 TiB
default.rgw.buckets.index        16    64  6.3 GiB       184   19 GiB   0.01     61 TiB
.rgw.root                        17     2  2.3 KiB         4   48 KiB      0     61 TiB
ceph-benchmarking                18   128  596 GiB   302.20k  1.7 TiB   0.94     61 TiB
ceph-fs_data                     19    64  438 MiB       110  1.3 GiB      0     61 TiB
ceph-fs_metadata                 20    16   37 MiB        32  111 MiB      0     61 TiB
test                             21    32   21 TiB     5.61M   64 TiB  25.83     61 TiB
DD-Test                          22    32   11 MiB        13   32 MiB      0     61 TiB
nativesqlbackup                  24    32  539 MiB       147  1.6 GiB      0     61 TiB
default.rgw.buckets.non-ec       25    32  1.7 MiB         0  5.0 MiB      0     61 TiB
ceph-fs_sql_backups              26    32      0 B         0      0 B      0     61 TiB
ceph-fs_sql_backups_metadata     27    32      0 B         0      0 B      0     61 TiB
dd-drs-backups                   28    32      0 B         0      0 B      0     61 TiB
default.rgw.jv-corp-pool.data    59    32   16 TiB    63.90M   49 TiB  21.12     61 TiB
default.rgw.jv-corp-pool.index   60    32  108 GiB     1.19k  323 GiB   0.17     61 TiB
default.rgw.jv-corp-pool.non-ec  61    32      0 B         0      0 B      0     61 TiB
default.rgw.jv-comm-pool.data    62    32  8.1 TiB    44.20M   24 TiB  11.65     61 TiB
default.rgw.jv-comm-pool.index   63    32   83 GiB       811  248 GiB   0.13     61 TiB
default.rgw.jv-comm-pool.non-ec  64    32      0 B         0      0 B      0     61 TiB
default.rgw.jv-va-pool.data      65    32  4.8 TiB    22.17M   14 TiB   7.28     61 TiB
default.rgw.jv-va-pool.index     66    32   38 GiB       401  113 GiB   0.06     61 TiB
default.rgw.jv-va-pool.non-ec    67    32      0 B         0      0 B      0     61 TiB
jv-edi-pool                      68    32      0 B         0      0 B      0     61 TiB

-- Michael

-Original Message-
From: Anthony D'Atri 
Sent: Wednesday, March 20, 2024 2:48 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Need easy way to calculate Ceph cluster space for 
SolarWinds



> On Mar 20, 2024, at 14:42, Michael Worsham  
> wrote:
>
> Is there an easy way to poll a Ceph cluster to see how much space is
> available and how much space is available per bucket?

[ceph-users] Re: Need easy way to calculate Ceph cluster space for SolarWinds

2024-03-20 Thread Michael Worsham
       18.19040   1.0   18 TiB   10 TiB   10 TiB   13 GiB   45 GiB  8.1 TiB  55.41  0.92  192  up
31  ssd  18.19040   1.0   18 TiB   12 TiB   12 TiB   22 GiB   48 GiB  6.2 TiB  65.92  1.09  186  up
35  ssd  18.19040   1.0   18 TiB   10 TiB   10 TiB   15 GiB   33 GiB  8.0 TiB  56.11  0.93  175  up
37  ssd  18.19040   1.0   18 TiB   13 TiB   13 TiB   13 GiB   53 GiB  5.0 TiB  72.78  1.21  179  up
43  ssd  18.19040   1.0   18 TiB  8.9 TiB  8.8 TiB   17 GiB   23 GiB  9.3 TiB  48.71  0.81  178  up
         TOTAL  873 TiB  527 TiB  525 TiB  704 GiB  2.0 TiB  346 TiB  60.40
MIN/MAX VAR: 0.76/1.22  STDDEV: 6.98

-Original Message-
From: Anthony D'Atri 
Sent: Wednesday, March 20, 2024 5:09 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Need easy way to calculate Ceph cluster space for 
SolarWinds

This is an external email. Please take care when clicking links or opening 
attachments. When in doubt, check with the Help Desk or Security.


Looks like you have one device class and the same replication on all pools, 
which makes that simpler.

Your MAX AVAIL figures are lower than I would expect if you're using size=3, so 
I'd check whether you have the balancer enabled and whether it's working properly.

Run

ceph osd df

and look at the VAR column:

[rook@rook-ceph-tools-5ff8d58445-p9npl /]$ ceph osd df | head
ID  CLASS  WEIGHT  REWEIGHT  SIZE  RAW USE  DATA  OMAP  META  AVAIL  %USE  VAR  PGS  STATUS

Ideally the numbers should all be close to 1.00 + / -


[ceph-users] Upgrading from Reef v18.2.1 to v18.2.2

2024-03-21 Thread Michael Worsham
I originally built my sandbox Ceph cluster (Reef v18.2.1) using Cephadm and 
Ansible. It's stable and works fine.

Now that Reef v18.2.2 has come out, is there a set of instructions on how to 
upgrade to the latest version using Cephadm?
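
From the docs, I assume the orchestrator drives the whole thing with something 
like the following; can anyone confirm?
```
# Sanity checks first
sudo ceph -s
sudo ceph versions

# Kick off the staggered upgrade to 18.2.2
sudo ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.2

# Watch progress and confirm completion
sudo ceph orch upgrade status
sudo ceph versions
```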

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Setting up Hashicorp Vault for Encryption with Ceph

2024-04-15 Thread Michael Worsham
Is there a how-to document available on how to set up Hashicorp's Vault for 
Ceph, preferably in an HA state?

Due to some encryption needs, we need to set up LUKS, OSD encryption AND Ceph 
bucket encryption as well. Yes, we know there will be a performance hit, but 
the encrypt-everything is a hard requirement for our business needs since we 
have government and healthcare-related contracts.

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Patching Ceph cluster

2024-06-12 Thread Michael Worsham
What is the proper way to patch a Ceph cluster and reboot the servers in said 
cluster if a reboot is necessary for said updates? And is it possible to 
automate it via Ansible?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Patching Ceph cluster

2024-06-12 Thread Michael Worsham
Interesting. How do you set this "maintenance mode"? If you have a series of 
documented steps that you could provide as an example, that would be 
beneficial for my efforts.

We are in the process of standing up both a dev-test environment consisting of 
3 Ceph servers (strictly for testing purposes) and a new production environment 
consisting of 20+ Ceph servers.

We are using Ubuntu 22.04.

-- Michael


From: Daniel Brown 
Sent: Wednesday, June 12, 2024 9:18 AM
To: Anthony D'Atri 
Cc: Michael Worsham ; ceph-users@ceph.io 

Subject: Re: [ceph-users] Patching Ceph cluster



There’s also a Maintenance mode that you can set for each server, as you’re 
doing updates, so that the cluster doesn’t try to move data from affected 
OSD’s, while the server being updated is offline or down. I’ve worked some on 
automating this with Ansible, but have found my process (and/or my cluster) 
still requires some manual intervention while it’s running to get things done 
cleanly.



> On Jun 12, 2024, at 8:49 AM, Anthony D'Atri  wrote:
>
> Do you mean patching the OS?
>
> If so, easy -- one node at a time, then after it comes back up, wait until 
> all PGs are active+clean and the mon quorum is complete before proceeding.
>
>
>
>> On Jun 12, 2024, at 07:56, Michael Worsham  
>> wrote:
>>
>> What is the proper way to patch a Ceph cluster and reboot the servers in 
>> said cluster if a reboot is necessary for said updates? And is it possible 
>> to automate it via Ansible? This message and its attachments are from Data 
>> Dimensions and are intended only for the use of the individual or entity to 
>> which it is addressed, and may contain information that is privileged, 
>> confidential, and exempt from disclosure under applicable law. If the reader 
>> of this message is not the intended recipient, or the employee or agent 
>> responsible for delivering the message to the intended recipient, you are 
>> hereby notified that any dissemination, distribution, or copying of this 
>> communication is strictly prohibited. If you have received this 
>> communication in error, please notify the sender immediately and permanently 
>> delete the original email and destroy any copies or printouts of this email 
>> as well as any attachments.
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

This message and its attachments are from Data Dimensions and are intended only 
for the use of the individual or entity to which it is addressed, and may 
contain information that is privileged, confidential, and exempt from 
disclosure under applicable law. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify the 
sender immediately and permanently delete the original email and destroy any 
copies or printouts of this email as well as any attachments.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Patching Ceph cluster

2024-06-13 Thread Michael Worsham
I'd love to see what your playbook(s) look like for doing this.

-- Michael

From: Sake Ceph 
Sent: Thursday, June 13, 2024 4:05 PM
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: Patching Ceph cluster



Yeah, we fully automated this with Ansible. In short we do the following:

1. Check if the cluster is healthy before continuing (via REST API); only HEALTH_OK is good
2. Disable scrub and deep-scrub
3. Update all applications on all the hosts in the cluster
4. For every host, one by one, do the following:
4a. Check if applications got updated
4b. Check via reboot-hint if a reboot is necessary
4c. If applications got updated or a reboot is necessary, do the following:
4c1. Put the host in maintenance
4c2. Reboot the host if necessary
4c3. Check and wait via 'ceph orch host ls' until the status of the host is maintenance and nothing else
4c4. Take the host out of maintenance
4d. Check if the cluster is healthy before continuing (via REST API); only warnings about scrub and deep-scrub are allowed, and no PGs should be degraded
5. Enable scrub and deep-scrub when all hosts are done
6. Check if the cluster is healthy (via REST API); only HEALTH_OK is good
7. Done
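
The health gate in steps 1 and 4d can be approximated from the CLI as well; a 
rough sketch (jq assumed to be installed):
```
# Block until the cluster reports HEALTH_OK before touching the next host
until [ "$(ceph health -f json | jq -r '.status')" = "HEALTH_OK" ]; do
    echo "cluster not healthy yet, waiting..."
    sleep 30
done
```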

For upgrading the OS we have something similar, but exiting maintenance mode is 
broken (with 17.2.7) :(
I need to check the tracker for similar issues and if I can't find anything, I 
will create a ticket.

Kind regards,
Sake

> Op 12-06-2024 19:02 CEST schreef Daniel Brown :
>
>
> I have two ansible roles, one for enter, one for exit. There’s likely better 
> ways to do this — and I’ll not be surprised if someone here lets me know. 
> They’re using orch commands via the cephadm shell. I’m using Ansible for 
> other configuration management in my environment, as well, including setting 
> up clients of the ceph cluster.
>
>
> Below excerpts from main.yml in the “tasks” for the enter/exit roles. The 
> host I’m running ansible from is one of my CEPH servers - I’ve limited which 
> process run there though so it’s in the cluster but not equal to the others.
>
>
> —
> Enter
> —
>
> - name: Ceph Maintenance Mode Enter
>   shell:
>     cmd: 'cephadm shell ceph orch host maintenance enter {{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }} --force --yes-i-really-mean-it'
>   become: True
>
>
> —
> Exit
> —
>
>
> - name: Ceph Maintenance Mode Exit
>   shell:
>     cmd: 'cephadm shell ceph orch host maintenance exit {{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
>   become: True
>   connection: local
>
>
> - name: Wait for Ceph to be available
>   ansible.builtin.wait_for:
>     delay: 60
>     host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
>     port: 9100
>   connection: local

[ceph-users] IT Consulting Firms with Ceph and Hashicorp Vault expertise?

2024-07-10 Thread Michael Worsham
I am in need of a list of IT consulting firms that can set up high-availability 
Vault and also configure the Ceph Object Gateway to use SSE-S3 with Vault.

-- Michael


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Paid support options?

2024-08-23 Thread Michael Worsham
I will tell you from personal experience that our company is using 42on 
(42on.com) / Fairbanks.nl for our new 5+ petabyte Ceph cluster. We have two 
environments - dev/test and production - and they worked with us through 
getting the environment set up with what our plan was, what we had already 
purchased, what was recommended with our current specs, etc. Both environments 
were deployed with Canonical's MAAS product (bare-metal provisioning) and the 
newer Cephadm tool. Even though they are located in the Netherlands, their 
engineers are spread all over the world, so they are able to work with us for 
consulting and emergency support services. 42on also offers managed services.

-- Michael


From: Alex 
Sent: Friday, August 23, 2024 12:45 PM
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: Paid support options?



I'll jump on this thread as well.

There's a slight possibility we may want to outsource the management
of our Ceph cluster.
It's too early to seriously discuss, but since this thread conveniently came 
up today as we were talking about this topic at work, I'll also ask who can 
do support. The catch is that we're running Redhat / IBM Ceph and are not 
thrilled with their level of support. We'd likely want to stay with the 
Redhat / IBM Ceph distro, but possibly use another company for the support. 
Ideally with staff in US East.

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph-ansible installation error

2024-09-02 Thread Michael Worsham
I used the steps under this article for setting up a Ceph cluster in my homelab 
environment. It uses Ansible in a couple of ways, but honestly you could 
probably take a number of the manual steps and make your own playbook out of it.

https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/

-- Michael



From: Michel Niyoyita 
Sent: Friday, August 30, 2024 10:53:38 AM
To: ceph-users 
Subject: [ceph-users] ceph-ansible installation error



Dear team,


I am configuring a Ceph cluster using ceph-ansible on Ubuntu 20.04. My 
previous production cluster was configured using the same configuration and 
is working perfectly. Now I am trying to build the new cluster using the same 
configurations, but I am facing the following errors:

ansible-playbook site.yml
Traceback (most recent call last):
  File "/usr/bin/ansible-playbook", line 66, in 
from ansible.utils.display import Display, initialize_locale
ImportError: cannot import name 'initialize_locale' from
'ansible.utils.display'
(/usr/local/lib/python3.8/dist-packages/ansible/utils/display.py)

Can someone help to solve the issue?

Best regards

Michel
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Setting up Ceph RGW with SSE-S3 - Any examples?

2024-09-06 Thread Michael Worsham
Has anyone been successful at standing up a Ceph RGW S3 bucket with Hashicorp 
Vault for S3 bucket encryption? The documentation for doing it with Ceph jumps 
all over the page between token and agent, so it's nearly impossible to know 
which variables and parameters are required for each.

I was able to successfully setup a Hashicorp Vault in a high-availability 
cluster via an Ansible playbook I wrote. It's the configuration of what needs 
to be configured for the transit engine for vault and then what changes I need 
to do to Ceph that has me stumped.
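
For reference, here is my best guess so far at the two halves, pieced together 
from the Ceph Vault docs (the Vault address, token file path, and prefix are 
placeholders from my lab, not verified values):
```
# Vault side: enable the transit secrets engine RGW will use for SSE-S3 keys
vault secrets enable -path=transit transit

# Ceph side: point RGW at Vault for SSE-S3 (token auth shown; agent auth is the alternative)
ceph config set client.rgw rgw_crypt_sse_s3_backend vault
ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit
ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://vault.example.local:8200
ceph config set client.rgw rgw_crypt_sse_s3_vault_auth token
ceph config set client.rgw rgw_crypt_sse_s3_vault_token_file /etc/ceph/vault.token
ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit
```
Can anyone confirm whether that is the right combination, or what I am missing?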

-- Michael

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io