[ceph-users] Re: Unable to add OSD after removing completely

2024-02-12 Thread Anthony D'Atri
You probably have the H330 HBA, rebadged LSI.  You can set the “mode” or 
“personality” using storcli / perccli.  You might need to remove the VDs from 
them too.  
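Something along these lines, from memory -- controller and VD numbers below are placeholders, and the exact personality/JBOD syntax varies by model and firmware (storcli and perccli take the same commands):

  perccli /c0 show                  # list the controller, VDs, and current personality
  perccli /c0/v1 del force          # delete a leftover RAID0 VD (repeat per VD)
  perccli /c0 set personality=HBA   # or: perccli /c0 set jbod=on, depending on what the controller supports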

> On Feb 12, 2024, at 7:53 PM, sa...@dcl-online.com wrote:
> 
> Hello,
> 
> I have a Ceph cluster created by the Cephadm orchestrator. It consists of 3 Dell 
> PowerEdge R730XD servers. This cluster's hard drives used as OSDs were 
> configured as RAID 0. The configuration summary is as follows:
> ceph-node1 (mgr, mon)
>  Public network: 172.16.7.11/22
>  Cluster network: 10.10.10.11/24, 10.10.10.14/24
> ceph-node2 (mgr, mon)
>  Public network: 172.16.7.12/22
>  Cluster network: 10.10.10.12/24, 10.10.10.15/24
> ceph-node3 (mon)
>  Public network: 172.16.7.13/22
>  Cluster network: 10.10.10.13/24, 10.10.10.16/24
> 
> Recently I removed all OSDs from node3 with the following set of commands
>  sudo ceph osd out osd.3
>  sudo systemctl stop ceph@osd.3.service
>  sudo ceph osd rm osd.3
>  sudo ceph osd crush rm osd.3
>  sudo ceph auth del osd.3
> 
> After this, I configured all OSD hard drives as non-RAID from the server 
> settings and tried to add the hard drives as OSDs again. First I used the 
> following command to add them automatically:
>  ceph orch apply osd --all-available-devices --unmanaged=false
> But this was generating the following error in my Ceph GUI console
>  CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): 
> osd.all-available-devices
> I am also unable to add the hard drives manually with the following command
>  sudo ceph orch daemon add osd ceph-node3:/dev/sdb
> 
> Can anyone please help me with this issue?
> 
> I really appreciate any help you can provide.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

2024-02-12 Thread localhost Liam
Thanks, the storage is under a lot less stress now.
0. I restarted 30 OSDs on one machine; the queue was not reduced, but a large 
amount of storage space was released.
1. Why did restarting the OSDs release so much space?
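
For reference, this is roughly how I have been watching the trim backlog (I
believe these commands are right for pacific, but please correct me if not):

  ceph pg dump pgs_brief | grep snaptrim          # PGs currently trimming or queued for trim
  ceph config get osd osd_snap_trim_sleep_hdd     # throttle between trim ops on HDD OSDs
  ceph osd dump | grep removed_snaps_queue        # pools still carrying a removed_snaps_queue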


Here are Ceph details..

ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

  cluster:
id: 9acc3734-b27b-4bc3-84b8-c7762f2294c6
health: HEALTH_OK

  services:
    mon: 3 daemons, quorum onf-akl-stor001,onf-akl-stor002,onf-akl-stor003 (age 11d)
mgr: onf-akl-stor001(active, since 3M), standbys: onf-akl-stor002
osd: 101 osds: 98 up (since 41s), 98 in (since 11d)
rgw: 2 daemons active (2 hosts, 1 zones)

  data:
pools:   7 pools, 2209 pgs
objects: 25.47M objects, 58 TiB
usage:   115 TiB used, 184 TiB / 299 TiB avail
pgs: 2183 active+clean
24   active+clean+snaptrim
2    active+clean+scrubbing+deep

  io:
client:   38 MiB/s rd, 226 MiB/s wr, 1.32k op/s rd, 2.27k op/s wr
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: PG stuck at recovery

2024-02-12 Thread Leon Gao
Thanks a lot! Yes it turns out to be the same issue that you pointed to.
Switching to wpq solved the issue. We are running 18.2.0.
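For anyone hitting the same thing later, the switch itself was roughly the
following (osd_op_queue is only read at OSD startup, so the OSDs need a restart
afterwards; adjust the restart step to your deployment):

  ceph config set osd osd_op_queue wpq
  ceph orch daemon restart osd.<id>    # per OSD, or do a rolling restart per host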

Leon

On Wed, Feb 7, 2024 at 12:48 PM Kai Stian Olstad wrote:

> You don't say anything about the Ceph version you are running.
> I had a similar issue with 17.2.7, and it seems to be an issue with mclock;
> when I switched to wpq everything worked again.
>
> You can read more about it here
>
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/IPHBE3DLW5ABCZHSNYOBUBSI3TLWVD22/#OE3QXLAJIY6NU7PNMGHP47UK2CBZJPUG
>
> -
> Kai Stian Olstad
>
>
> On Tue, Feb 06, 2024 at 06:35:26AM -, LeonGao  wrote:
> >Hi community
> >
> >We have a new Ceph cluster deployment with 100 nodes. When we are
> >draining an OSD host from the cluster, we see a small number of PGs that
> >cannot make any progress. From the logs and metrics, it seems
> >like the recovery progress is stuck (0 recovery ops for several days).
> >Would like to get some ideas on this. Re-peering and OSD restarts do
> >mitigate the issue, but we want to get to the root cause of it, as
> >draining and recovery happen frequently.
> >
> >I have put some debugging information below. Any help is appreciated,
> >thanks!
> >
> >ceph -s
> >pgs: 4210926/7380034104 objects misplaced (0.057%)
> > 41198 active+clean
> > 71active+remapped+backfilling
> > 12active+recovering
> >
> >One of the stuck PGs:
> >6.38f1   active+remapped+backfilling   [313,643,727]   313   [313,643,717]   313
> >
> >PG query result:
> >
> >ceph pg 6.38f1 query
> >{
> >"snap_trimq": "[]",
> >"snap_trimq_len": 0,
> >"state": "active+remapped+backfilling",
> >"epoch": 246856,
> >"up": [
> >313,
> >643,
> >727
> >],
> >"acting": [
> >313,
> >643,
> >717
> >],
> >"backfill_targets": [
> >"727"
> >],
> >"acting_recovery_backfill": [
> >"313",
> >"643",
> >"717",
> >"727"
> >],
> >"info": {
> >"pgid": "6.38f1",
> >"last_update": "212333'38916",
> >"last_complete": "212333'38916",
> >"log_tail": "80608'37589",
> >"last_user_version": 38833,
> >"last_backfill": "MAX",
> >"purged_snaps": [],
> >"history": {
> >"epoch_created": 3726,
> >"epoch_pool_created": 3279,
> >"last_epoch_started": 243987,
> >"last_interval_started": 243986,
> >"last_epoch_clean": 220174,
> >"last_interval_clean": 220173,
> >"last_epoch_split": 3726,
> >"last_epoch_marked_full": 0,
> >"same_up_since": 238347,
> >"same_interval_since": 243986,
> >"same_primary_since": 3728,
> >"last_scrub": "212333'38916",
> >"last_scrub_stamp": "2024-01-29T13:43:10.654709+",
> >"last_deep_scrub": "212333'38916",
> >"last_deep_scrub_stamp": "2024-01-28T07:43:45.920198+",
> >"last_clean_scrub_stamp": "2024-01-29T13:43:10.654709+",
> >"prior_readable_until_ub": 0
> >},
> >"stats": {
> >"version": "212333'38916",
> >"reported_seq": 413425,
> >"reported_epoch": 246856,
> >"state": "active+remapped+backfilling",
> >"last_fresh": "2024-02-05T21:14:40.838785+",
> >"last_change": "2024-02-03T22:33:43.052272+",
> >"last_active": "2024-02-05T21:14:40.838785+",
> >"last_peered": "2024-02-05T21:14:40.838785+",
> >"last_clean": "2024-02-03T04:26:35.168232+",
> >"last_became_active": "2024-02-03T22:31:16.037823+",
> >"last_became_peered": "2024-02-03T22:31:16.037823+",
> >"last_unstale": "2024-02-05T21:14:40.838785+",
> >"last_undegraded": "2024-02-05T21:14:40.838785+",
> >"last_fullsized": "2024-02-05T21:14:40.838785+",
> >"mapping_epoch": 243986,
> >"log_start": "80608'37589",
> >"ondisk_log_start": "80608'37589",
> >"created": 3726,
> >"last_epoch_clean": 220174,
> >"parent": "0.0",
> >"parent_split_bits": 14,
> >"last_scrub": "212333'38916",
> >"last_scrub_stamp": "2024-01-29T13:43:10.654709+",
> >"last_deep_scrub": "212333'38916",
> >"last_deep_scrub_stamp": "2024-01-28T07:43:45.920198+",
> >"last_clean_scrub_stamp": "2024-01-29T13:43:10.654709+",
> >"objects_scrubbed": 17743,
> >"log_size": 1327,
> >"log_dups_size": 3000,
> >"ondisk_log_size": 1327,
> >"stats_invalid": false,
> >"dirty_stats_invalid": false,
> >"omap_stats_invalid": false,
> >  

[ceph-users] Unable to add OSD after removing completely

2024-02-12 Thread salam
Hello,

I have a Ceph cluster created by the Cephadm orchestrator. It consists of 3 Dell 
PowerEdge R730XD servers. This cluster's hard drives used as OSDs were 
configured as RAID 0. The configuration summary is as follows:
ceph-node1 (mgr, mon)
  Public network: 172.16.7.11/22
  Cluster network: 10.10.10.11/24, 10.10.10.14/24
ceph-node2 (mgr, mon)
  Public network: 172.16.7.12/22
  Cluster network: 10.10.10.12/24, 10.10.10.15/24
ceph-node3 (mon)
  Public network: 172.16.7.13/22
  Cluster network: 10.10.10.13/24, 10.10.10.16/24

Recently I removed all OSDs from node3 with the following set of commands
  sudo ceph osd out osd.3
  sudo systemctl stop ceph@osd.3.service
  sudo ceph osd rm osd.3
  sudo ceph osd crush rm osd.3
  sudo ceph auth del osd.3

After this, I configured all OSD hard drives as non-RAID from the server 
settings and tried to add the hard drives as OSDs again. First I used the 
following command to add them automatically:
  ceph orch apply osd --all-available-devices --unmanaged=false
But this was generating the following error in my Ceph GUI console
  CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): 
osd.all-available-devices
I am also unable to add the hard drives manually with the following command
  sudo ceph orch daemon add osd ceph-node3:/dev/sdb
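
For reference, the commands below are what I understand can be used to check
whether the drives are seen as available and to read the cephadm error (host
and device names are from my setup):
  sudo ceph orch device ls ceph-node3 --wide
  sudo ceph log last cephadm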

Can anyone please help me with this issue?

I really appreciate any help you can provide.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does it impact write performance when SSD applies into block.wal (not block.db)

2024-02-12 Thread jaemin joo
Thank you for your idea. 
I realize that the number of SSDs is as important as the capacity of the SSD 
used for block.wal.
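
For context, the kind of layout I mean would be a drive group spec roughly like
the following (the rotational filters and the service_id are just examples;
adjust to your hardware):

  service_type: osd
  service_id: hdd-data-ssd-wal
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 1
    wal_devices:
      rotational: 0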

> Naturally the best solution is to not use HDDs at all ;)
You are right! :)
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Increase number of PGs

2024-02-12 Thread Murilo Morais
Janne, thanks for the tip.
Does the "target_max_misplaced_ratio" parameter influence the process? I
would like to make the increase with as little overhead as possible.
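If I understand the option correctly, it would be something like this -- please
correct me if the default or scope is wrong:
  ceph config get mgr target_max_misplaced_ratio         # defaults to 0.05, i.e. 5%
  ceph config set mgr target_max_misplaced_ratio 0.01    # keep misplaced objects around 1% at a time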

On Mon, Feb 12, 2024 at 11:39, Janne Johansson wrote:

> On Mon, Feb 12, 2024 at 14:12, Murilo Morais wrote:
> >
> > Good morning and happy holidays everyone!
> >
> > Guys, what would be the best strategy to increase the number of PGs in a
> > POOL that is already in production?
>
> "ceph osd pool set  pg_num  the current value>" and let the pool get pgp_nums increased slowly by
> itself.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Rok Jaklič
You don't have to. You can serve rgw on the front end directly.

You:
1. set the certificate with something like: rgw_frontends = " ...
ssl_certificate=/etc/pki/ceph/cert.pem" (a full example line is sketched below).
We are using nginx on the front end to act as a proxy and for some other stuff.
2. delete the line with rgw_crypt_require_ssl

... you should be ready to go. :)
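
For example, with the beast frontend on newer releases the whole line would
look something like this (the cert/key paths are of course placeholders for
your own files):

rgw_frontends = "beast port=80 ssl_port=443 ssl_certificate=/etc/pki/ceph/cert.pem ssl_private_key=/etc/pki/ceph/key.pem"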

Rok

On Mon, Feb 12, 2024 at 6:43 PM Michael Worsham 
wrote:

> So, just so I am clear – in addition to the steps below, will I also need
> to install NGINX or HAProxy on the server to act as the front end?
>
>
>
> -- M
>
>
>
> *From:* Rok Jaklič 
> *Sent:* Monday, February 12, 2024 12:30 PM
> *To:* Michael Worsham 
> *Cc:* ceph-users@ceph.io
> *Subject:* Re: [ceph-users] Re: What is the proper way to setup Rados
> Gateway (RGW) under Ceph?
>
>
>
> This is an external email. Please take care when clicking links or opening
> attachments. When in doubt, check with the Help Desk or Security.
>
>
>
> Hi,
>
>
>
> Recommended methods of deploying rgw are imho overly complicated. You can
> also get the service up manually with something simple like:
>
>
>
> [root@mon1 bin]# cat /etc/ceph/ceph.conf
>
> [global]
> fsid = 12345678-XXXx ...
> mon initial members = mon1,mon3
> mon host = ip-mon1,ip-mon2
> auth cluster required = none
> auth service required = none
> auth client required = none
> ms_mon_client_mode = crc
>
> [client.radosgw.mon1]
> host = mon1
> log_file = /var/log/ceph/client.radosgw.mon1.log
> rgw_dns_name = mon1
> rgw_frontends = "civetweb port=80 num_threads=500" # this is different in
> ceph versions 17, 18.
> rgw_crypt_require_ssl = false
>
> 
>
> [root@mon1 bin]# cat start-rgw.sh
> radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n
> client.radosgw.mon1 &
>
>
>
> ---
>
>
>
> This configuration has nginx in front of rgw; all traffic goes from
> nginx 443 -> rgw 80, and it assumes you "own the network" and are aware
> of the "drawbacks".
>
>
>
> Rok
>
>
>
> On Mon, Feb 12, 2024 at 2:15 PM Michael Worsham <
> mwors...@datadimensions.com> wrote:
>
> Can anyone help me on this? It can't be that hard to do.
>
> -- Michael
>
>
> -Original Message-
> From: Michael Worsham 
> Sent: Thursday, February 8, 2024 3:03 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] What is the proper way to setup Rados Gateway (RGW)
> under Ceph?
>
> I have setup a 'reef' Ceph Cluster using Cephadm and Ansible in a VMware
> ESXi 7 / Ubuntu 22.04 lab environment per the how-to guide provided here:
> https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
> .
>
> The installation steps were fairly easy and I was able to get the
> environment up and running in about 15 minutes under VMware ESXi 7. I have
> buckets and pools already setup. However, the ceph.io site is confusing
> on how to setup the Rados Gateway (radosgw) with Multi-site --
> https://docs.ceph.com/en/latest/radosgw/multisite/. Is a copy of HAProxy
> also needed for handling the front-end load balancing or is it implied that
> Ceph sets it up?
>
> Command-line scripting I was planning on using for setting up the RGW:
> ```
> radosgw-admin realm create --rgw-realm=sandbox --default
> radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
> radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox --master --default
> radosgw-admin period update --rgw-realm=sandbox --commit
> ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox --placement="2 ceph-mon1 ceph-mon2" --port=8000
> ```
>
> What other steps are needed to get the RGW up and running so that it can
> be presented to something like Veeam for doing performance and I/O testing
> concepts?
>
> -- Michael
>
> This message and its attachments are from Data Dimensions and are intended
> only for the use of the individual or entity to which it is addressed, and
> may contain information that is privileged, confidential, and exempt from
> disclosure under applicable law. If the reader of this message is not the
> intended recipient, or the employee or agent responsible for delivering the
> message to the intended recipient, you are hereby notified that any
> dissemination, distribution, or copying of this communication is strictly
> prohibited. If you have received this communication in error, please notify
> the sender immediately and permanently delete the original email and
> destroy any copies or printouts of this email as well as any attachments.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
> This message and its attachments are from Data Dimensions and are intended
> only for the use of the individual or entity to which it is addressed, and
> may contain information that is privileged, confidential, and exempt from
> disclosure under applicable law. If the reader of this message is not the
> intended recipient, or the employee or agent responsible for delivering the
> message to the 

[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Michael Worsham
So, just so I am clear – in addition to the steps below, will I also need to 
install NGINX or HAProxy on the server to act as the front end?

-- M

From: Rok Jaklič 
Sent: Monday, February 12, 2024 12:30 PM
To: Michael Worsham 
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: What is the proper way to setup Rados Gateway 
(RGW) under Ceph?

This is an external email. Please take care when clicking links or opening 
attachments. When in doubt, check with the Help Desk or Security.

Hi,

Recommended methods of deploying rgw are imho overly complicated. You can also 
get the service up manually with something simple like:

[root@mon1 bin]# cat /etc/ceph/ceph.conf

[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required = none
auth service required = none
auth client required = none
ms_mon_client_mode = crc

[client.radosgw.mon1]
host = mon1
log_file = /var/log/ceph/client.radosgw.mon1.log
rgw_dns_name = mon1
rgw_frontends = "civetweb port=80 num_threads=500" # this is different in ceph 
versions 17, 18.
rgw_crypt_require_ssl = false



[root@mon1 bin]# cat start-rgw.sh
radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n 
client.radosgw.mon1 &

---

This configuration has nginx in front of rgw; all traffic goes from nginx 
443 -> rgw 80, and it assumes you "own the network" and are aware of the 
"drawbacks".

Rok

On Mon, Feb 12, 2024 at 2:15 PM Michael Worsham 
mailto:mwors...@datadimensions.com>> wrote:
Can anyone help me on this? It can't be that hard to do.

-- Michael


-Original Message-
From: Michael Worsham 
mailto:mwors...@datadimensions.com>>
Sent: Thursday, February 8, 2024 3:03 PM
To: ceph-users@ceph.io
Subject: [ceph-users] What is the proper way to setup Rados Gateway (RGW) under 
Ceph?

I have setup a 'reef' Ceph Cluster using Cephadm and Ansible in a VMware ESXi 7 
/ Ubuntu 22.04 lab environment per the how-to guide provided here:  
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/.

The installation steps were fairly easy and I was able to get the environment 
up and running in about 15 minutes under VMware ESXi 7. I have buckets and 
pools already setup. However, the ceph.io site is confusing on 
how to setup the Rados Gateway (radosgw) with Multi-site -- 
https://docs.ceph.com/en/latest/radosgw/multisite/. Is a copy of HAProxy also 
needed for handling the front-end load balancing or is it implied that Ceph 
sets it up?

Command-line scripting I was planning on using for setting up the RGW:
```
radosgw-admin realm create --rgw-realm=sandbox --default
radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox --master --default
radosgw-admin period update --rgw-realm=sandbox --commit
ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox --placement="2 ceph-mon1 ceph-mon2" --port=8000
```

What other steps are needed to get the RGW up and running so that it can be 
presented to something like Veeam for doing performance and I/O testing 
concepts?

-- Michael

This message and its attachments are from Data Dimensions and are intended only 
for the use of the individual or entity to which it is addressed, and may 
contain information that is privileged, confidential, and exempt from 
disclosure under applicable law. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify the 
sender immediately and permanently delete the original email and destroy any 
copies or printouts of this email as well as any attachments.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to 
ceph-users-le...@ceph.io
This message and its attachments are from Data Dimensions and are intended only 
for the use of the individual or entity to which it is addressed, and may 
contain information that is privileged, confidential, and exempt from 
disclosure under applicable law. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify the 
sender immediately and permanently delete the original email and destroy any 
copies or printouts of this email as well as any attachments.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe 

[ceph-users] Re: Installing ceph s3.

2024-02-12 Thread Albert Shih
On 12/02/2024 at 18:38:08+0100, Kai Stian Olstad wrote:
> On 12.02.2024 18:15, Albert Shih wrote:
> > I couldn't find any documentation about how to install a S3/Swift API (as I
> > understand it, that's RadosGW) on quincy.
> 
> It depends on how you have install Ceph.
> If your are using Cephadm the docs is here

Yes. 

> https://docs.ceph.com/en/reef/cephadm/services/rgw/

Thanks. I didn't find it, maybe because I searched for radosgw and not rgw...

> > I can find some documentation for octopus
> > (https://docs.ceph.com/en/octopus/install/ceph-deploy/install-ceph-gateway/)
> 
> ceph-deploy is deprecated
> https://docs.ceph.com/en/reef/install/

Yes, that's what I thought.

Thanks.
-- 
Albert SHIH 嶺 
France
Heure locale/Local time:
lun. 12 févr. 2024 18:40:59 CET
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Installing ceph s3.

2024-02-12 Thread Kai Stian Olstad

On 12.02.2024 18:15, Albert Shih wrote:
> I couldn't find any documentation about how to install a S3/Swift API (as I
> understand it, that's RadosGW) on quincy.

It depends on how you have installed Ceph.
If you are using Cephadm the docs are here
https://docs.ceph.com/en/reef/cephadm/services/rgw/

> I can find some documentation for octopus
> (https://docs.ceph.com/en/octopus/install/ceph-deploy/install-ceph-gateway/)

ceph-deploy is deprecated
https://docs.ceph.com/en/reef/install/
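
With cephadm, a plain single-zone RGW basically boils down to something like
this (service name, hosts, and port below are just examples):

  ceph orch apply rgw myrgw --placement="2 host1 host2" --port=8080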

--
Kai Stian Olstad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Rok Jaklič
Hi,

Recommended methods of deploying rgw are imho overly complicated. You can
also get the service up manually with something simple like:

[root@mon1 bin]# cat /etc/ceph/ceph.conf

[global]
fsid = 12345678-XXXx ...
mon initial members = mon1,mon3
mon host = ip-mon1,ip-mon2
auth cluster required = none
auth service required = none
auth client required = none
ms_mon_client_mode = crc

[client.radosgw.mon1]
host = mon1
log_file = /var/log/ceph/client.radosgw.mon1.log
rgw_dns_name = mon1
rgw_frontends = "civetweb port=80 num_threads=500" # this is different in
ceph versions 17, 18.
rgw_crypt_require_ssl = false



[root@mon1 bin]# cat start-rgw.sh
radosgw -c /etc/ceph/ceph.conf --setuser ceph --setgroup ceph -n
client.radosgw.mon1 &

---

This configuration has nginx in front of rgw; all traffic goes from
nginx 443 -> rgw 80, and it assumes you "own the network" and are aware
of the "drawbacks".

Rok

On Mon, Feb 12, 2024 at 2:15 PM Michael Worsham 
wrote:

> Can anyone help me on this? It can't be that hard to do.
>
> -- Michael
>
>
> -Original Message-
> From: Michael Worsham 
> Sent: Thursday, February 8, 2024 3:03 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] What is the proper way to setup Rados Gateway (RGW)
> under Ceph?
>
> I have setup a 'reef' Ceph Cluster using Cephadm and Ansible in a VMware
> ESXi 7 / Ubuntu 22.04 lab environment per the how-to guide provided here:
> https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
> .
>
> The installation steps were fairly easy and I was able to get the
> environment up and running in about 15 minutes under VMware ESXi 7. I have
> buckets and pools already setup. However, the ceph.io site is confusing
> on how to setup the Rados Gateway (radosgw) with Multi-site --
> https://docs.ceph.com/en/latest/radosgw/multisite/. Is a copy of HAProxy
> also needed for handling the front-end load balancing or is it implied that
> Ceph sets it up?
>
> Command-line scripting I was planning on using for setting up the RGW:
> ```
> radosgw-admin realm create --rgw-realm=sandbox --default
> radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
> radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox --master --default
> radosgw-admin period update --rgw-realm=sandbox --commit
> ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox --placement="2 ceph-mon1 ceph-mon2" --port=8000
> ```
>
> What other steps are needed to get the RGW up and running so that it can
> be presented to something like Veeam for doing performance and I/O testing
> concepts?
>
> -- Michael
>
> This message and its attachments are from Data Dimensions and are intended
> only for the use of the individual or entity to which it is addressed, and
> may contain information that is privileged, confidential, and exempt from
> disclosure under applicable law. If the reader of this message is not the
> intended recipient, or the employee or agent responsible for delivering the
> message to the intended recipient, you are hereby notified that any
> dissemination, distribution, or copying of this communication is strictly
> prohibited. If you have received this communication in error, please notify
> the sender immediately and permanently delete the original email and
> destroy any copies or printouts of this email as well as any attachments.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Installing ceph s3.

2024-02-12 Thread Albert Shih
Hi everyone. 

I couldn't find any documentation about how to install a S3/Swift API (as I
understand it, that's RadosGW) on quincy.

I can find some documentation for octopus
(https://docs.ceph.com/en/octopus/install/ceph-deploy/install-ceph-gateway/)

Very strangely, when I go to

  https://docs.ceph.com/en/quincy/radosgw/

I can see a lot of very detailed documentation about each component, but
cannot find more global documentation.

Is there any newer documentation somewhere? I think it's not a good idea to use
the one for octopus...

Regards

-- 
Albert SHIH 嶺 
France
Heure locale/Local time:
lun. 12 févr. 2024 18:08:59 CET
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Increase number of PGs

2024-02-12 Thread Janne Johansson
On Mon, Feb 12, 2024 at 14:12, Murilo Morais wrote:
>
> Good morning and happy holidays everyone!
>
> Guys, what would be the best strategy to increase the number of PGs in a
> POOL that is already in production?

"ceph osd pool set  pg_num " and let the pool get pgp_nums increased slowly by
itself.
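
Concretely, something like this (pool name and target pg_num are placeholders;
pick a power of two that fits your OSD count):

  ceph osd pool get <pool> pg_num        # check the current value first
  ceph osd pool set <pool> pg_num 2048   # set the new, larger target
  ceph osd pool get <pool> pgp_num       # watch pgp_num follow along on its own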

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: What is the proper way to setup Rados Gateway (RGW) under Ceph?

2024-02-12 Thread Michael Worsham
Can anyone help me on this? It can't be that hard to do.

-- Michael


-Original Message-
From: Michael Worsham 
Sent: Thursday, February 8, 2024 3:03 PM
To: ceph-users@ceph.io
Subject: [ceph-users] What is the proper way to setup Rados Gateway (RGW) under 
Ceph?

I have setup a 'reef' Ceph Cluster using Cephadm and Ansible in a VMware ESXi 7 
/ Ubuntu 22.04 lab environment per the how-to guide provided here:  
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/.

The installation steps were fairly easy and I was able to get the environment 
up and running in about 15 minutes under VMware ESXi 7. I have buckets and 
pools already setup. However, the ceph.io site is confusing on how to setup the 
Rados Gateway (radosgw) with Multi-site -- 
https://docs.ceph.com/en/latest/radosgw/multisite/. Is a copy of HAProxy also 
needed for handling the front-end load balancing or is it implied that Ceph 
sets it up?
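
For context, the built-in option I have read about is the cephadm "ingress"
service, which as far as I understand is declared with a spec roughly like the
following (names, IP and ports here are made up):

service_type: ingress
service_id: rgw.sandbox
placement:
  count: 2
spec:
  backend_service: rgw.sandbox
  virtual_ip: 192.168.100.10/24
  frontend_port: 443
  monitor_port: 1967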

Command-line scripting I was planning on using for setting up the RGW:
```
radosgw-admin realm create --rgw-realm=sandbox --default
radosgw-admin zonegroup create --rgw-zonegroup=sandbox --master --default
radosgw-admin zone create --rgw-zonegroup=sandbox --rgw-zone=sandbox --master --default
radosgw-admin period update --rgw-realm=sandbox --commit
ceph orch apply rgw sandbox --realm=sandbox --zone=sandbox --placement="2 ceph-mon1 ceph-mon2" --port=8000
```

What other steps are needed to get the RGW up and running so that it can be 
presented to something like Veeam for doing performance and I/O testing 
concepts?

-- Michael

This message and its attachments are from Data Dimensions and are intended only 
for the use of the individual or entity to which it is addressed, and may 
contain information that is privileged, confidential, and exempt from 
disclosure under applicable law. If the reader of this message is not the 
intended recipient, or the employee or agent responsible for delivering the 
message to the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this communication is strictly 
prohibited. If you have received this communication in error, please notify the 
sender immediately and permanently delete the original email and destroy any 
copies or printouts of this email as well as any attachments.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Increase number of PGs

2024-02-12 Thread Murilo Morais
Good morning and happy holidays everyone!

Guys, what would be the best strategy to increase the number of PGs in a
POOL that is already in production?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io