[ceph-users] Re: Traffic between public and cluster network

2022-09-29 Thread Boris Behrens
Hi Murilo, as far as I understand Ceph: You connect via NFS to a radosgw. When sending data to the RGW instance (uploading files via NFS), the RGW instance talks to the primary OSDs for the required placement groups through the public network. The primary OSDs talk to their replicas via the cluster
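
For context, a minimal sketch of how the two networks are usually declared in ceph.conf (the subnets are placeholders, not values from this thread):

  [global]
  # client, RGW and MON traffic
  public_network = 192.168.1.0/24
  # OSD-to-OSD replication, recovery and backfill traffic
  cluster_network = 10.10.1.0/24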

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-29 Thread Stefan Kooman
On 9/26/22 18:04, Gauvain Pocentek wrote: We are running a Ceph Octopus (15.2.16) cluster with similar configuration. We have *a lot* of slow ops when starting OSDs. Also during peering. When the OSDs start they consume 100% CPU for up to ~ 10 seconds, and after that consum
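
For reference, a few commands commonly used to inspect slow ops on a running OSD (osd.0 is a placeholder; these admin-socket commands exist in Octopus, though output details vary by release):

  ceph health detail                        # which OSDs are reporting slow ops
  ceph daemon osd.0 dump_ops_in_flight      # ops currently blocked on this OSD
  ceph daemon osd.0 dump_historic_slow_ops  # recently completed slow ops with their event timelines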

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-29 Thread Dominique Ramaekers
Thanks Ken for the info. Is it possible these pull requests aren’t yet included in Quincy stable? I can't find a notice in my syslog about the mount syntax I use being deprecated. > -Original Message- > From: Ken Dreyer > Sent: Wednesday, September 28, 2022 16:50 > To: Sagittarius-

[ceph-users] Re: 15.2.17: RGW deploy through cephadm exits immediately with exit code 5/NOTINSTALLED

2022-09-29 Thread Michel Jouvin
Hi, Unfortunately that was the wrong track. The problem remains the same, with the same error messages, on another host with only one network address in the Ceph cluster's public network. BTW, "ceph shell --name rgw_daemon" works, and from the shell I can use radosgw-admin and the ceph command, suggesti
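
When a cephadm-managed daemon exits right after start, the host's journal usually holds the underlying error; a hedged sketch of where to look (the daemon name and FSID are placeholders):

  cephadm ls                                   # list the daemons cephadm knows about on this host
  cephadm logs --name rgw.myrgw.host1.abcdef   # wrapper around journalctl for that daemon
  journalctl -u ceph-<fsid>@rgw.myrgw.host1.abcdef.service -n 100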

[ceph-users] Re: Traffic between public and cluster network

2022-09-29 Thread Murilo Morais
You understood my question correctly, thanks for the explanation. Boris, I was able to force the traffic to go out only through the cluster network by making the first machine hold only primary OSDs and the other machines only secondary (replica) OSDs. It worked as intended for writing, but reading on
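
For anyone who wants to reproduce this kind of experiment without rebuilding the cluster, primary selection can also be biased with primary affinity; a sketch (OSD IDs are placeholders):

  # make osd.3 and osd.4 ineligible to be chosen as primaries,
  # so the primary role lands on the remaining OSDs
  ceph osd primary-affinity osd.3 0
  ceph osd primary-affinity osd.4 0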

[ceph-users] OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

2022-09-29 Thread Ackermann, Christoph
Hello list members, after upgrading from Octopus to Quincy yesterday, we now have a problem starting OSDs on the newest Rocky 8.6 kernel 4.18.0-372.26.1.el8_6.x86_64. This is a non-cephadm cluster. All nodes are running Rocky with kernel 4.18.0-372.19.1.el8_6.x86_64 except this one host (ceph1n012) I restar

[ceph-users] Adding IPs to an existing iscsi gateway

2022-09-29 Thread Stolte, Felix
Hey guys, we are using ceph-iscsi and want to update our configuration to serve iSCSI to an additional network. I set up everything via the gwcli command. Originally I created the gateway with „create gw-a 192.168.100.4". Now I want to add an additional IP to the existing gateway, but I don’
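
For context, a heavily hedged sketch of the gwcli step involved; whether an extra portal IP can be added to an existing gateway or the gateway entry has to be recreated with a comma-separated IP list depends on the ceph-iscsi version, so treat the second create command as an assumption rather than a confirmed procedure (the IQN and addresses are placeholders):

  # inside gwcli, under the target's gateways node
  cd /iscsi-targets/iqn.2003-01.com.example:target/gateways
  create gw-a 192.168.100.4                             # the original single-IP creation from the thread
  create gw-a 192.168.100.4,10.10.0.4 skipchecks=true   # assumed form for listing two portal IPs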

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-29 Thread Ken Dreyer
On Thu, Sep 29, 2022 at 7:52 AM Dominique Ramaekers wrote: > Is it possible these pull requests aren’t yet included in Quincy stable? > > I can't find a notice in my syslog about the mount syntax I use being > deprecated. Those PRs are in Quincy. However, there are no syslog warnings about deprecating t
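
For anyone following the thread, a hedged example of the two fstab forms being discussed (monitor addresses, filesystem name, user and secret path are placeholders; the second line uses the newer name@fsid.fsname=/path device syntax, with the fsid omitted so mount.ceph resolves it from ceph.conf):

  # older device syntax, still accepted
  192.168.1.1:6789,192.168.1.2:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,fs=myfs,_netdev  0  0
  # newer device syntax; note that multiple monitors in mon_addr are separated by '/'
  admin@.myfs=/  /mnt/cephfs  ceph  mon_addr=192.168.1.1:6789/192.168.1.2:6789,secretfile=/etc/ceph/admin.secret,_netdev  0  0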

[ceph-users] Ceph quincy cephadm orch daemon stop osd.X not working

2022-09-29 Thread Budai Laszlo
Dear All, I'm testing Ceph Quincy and I have problems using the cephadm orchestrator backend. When I try to use it to start/stop OSD daemons, nothing happens. I have a "brand new" cluster deployed with cephadm. So far everything else that I tried worked just like in Pacific, but the ceph or
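
For context, the kind of calls being tested here (osd.3 is a placeholder); with a working orchestrator these normally take effect within a scheduling cycle:

  ceph orch daemon stop osd.3
  ceph orch daemon restart osd.3
  ceph orch ps | grep osd.3      # check the state cephadm reports afterwards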

[ceph-users] Re: OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

2022-09-29 Thread Janne Johansson
> Many thanks for any hint helping to get missing 7 OSDs up ASAP. Not sure if it "helps", but I would try "ceph-volume lvm activate --all" if those were on LVM; I guess ceph-volume simple and raw might have similar commands to search for and start everything that looks like a Ceph OSD. Perhaps the
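
The suggested command plus the simple/raw equivalents mentioned, as a sketch (raw mode has no --all scan in older releases, so that part is listed only for inspection):

  ceph-volume lvm activate --all      # discover and start every LVM-based OSD on this host
  ceph-volume simple scan             # for non-LVM ("simple") OSDs: capture their metadata ...
  ceph-volume simple activate --all   # ... and start them
  ceph-volume raw list                # raw-mode OSDs can at least be listed for manual activation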

[ceph-users] Re: *****SPAM***** Re: OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

2022-09-29 Thread Marc
> > > Many thanks for any hint helping to get missing 7 OSDs up ASAP. > > Not sure if it "helps", but I would try "ceph-volume lvm activate > --all" if those were on lvm, I guess ceph-volume simple and raw might > have similar command to search for and start everything that looks > like a ceph OS

[ceph-users] Re: OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

2022-09-29 Thread Ackermann, Christoph
Janne, LVM looks fine so far. Please see below... BUT: it seems that after the upgrade from Octopus to Quincy yesterday, the standalone package "ceph-volume.noarch" wasn't updated/installed. So after re-installation of ceph-volume and activation I got all the tmpfs mounts under /var/lib/ceph again and wo
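
On Rocky/EL8 the fix described here would look roughly like this (a sketch assuming the Quincy repositories are already configured on the host):

  dnf install ceph-volume             # the sub-package that was skipped during the upgrade
  ceph-volume lvm activate --all      # recreates the tmpfs mounts under /var/lib/ceph and starts the OSDs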

[ceph-users] Re: *****SPAM***** Re: OSDs (v17.2.3) won't start after Rocky Upgrade to Kernel 4.18.0-372.26.1.el8_6.x86_64

2022-09-29 Thread Ackermann, Christoph
Please see my original post/answer... the missing ceph-volume.noarch package caused the problem! Thanks, Christoph On Thu, 29 Sep 2022 at 16:36, Marc wrote: > > > > > Many thanks for any hint helping to get missing 7 OSDs up ASAP. > > > > Not sure if it "helps", but I would try "ceph-volume

[ceph-users] Recommended SSDs for Ceph

2022-09-29 Thread Drew Weaver
Hello, we had been using Intel SSD D3 S4610/20 SSDs, but Solidigm is... having problems. Bottom line: they haven't shipped an order in a year. Does anyone have any recommendations on SATA SSDs that have a fairly good mix of performance/endurance/cost? I know that they should all just work

[ceph-users] Re: Recommended SSDs for Ceph

2022-09-29 Thread Matt Vandermeulen
I think you're likely to get a lot of mixed opinions and experiences with this question. I might suggest trying to grab a few samples from different vendors, and making sure they meet your needs (throw some workloads at them, qualify them), then make sure your vendors have a reasonable lead ti
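
A common way to qualify a candidate SSD for Ceph is a queue-depth-1 sync write test with fio, which approximates the WAL/journal pattern; a sketch (the device path is a placeholder and the test destroys data on it):

  fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based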

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-29 Thread Neha Ojha
On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote: > Update: > > Remaining => > upgrade/octopus-x - Neha pls review/approve > Both the failures in http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_16:33:35-upgrade:octopus-x-quincy-release-distro-default-smithi/ seem related to RGW. Casey,

[ceph-users] Re: Recommended SSDs for Ceph

2022-09-29 Thread Janne Johansson
Den tors 29 sep. 2022 kl 17:57 skrev Matt Vandermeulen : > > I think you're likely to get a lot of mixed opinions and experiences > with this question. I might suggest trying to grab a few samples from > different vendors, and making sure they meet your needs (throw some > workloads at them, quali

[ceph-users] Re: strange osd error during add disk

2022-09-29 Thread Satish Patel
Bump! Any suggestions? On Wed, Sep 28, 2022 at 4:26 PM Satish Patel wrote: > Folks, > > I have 15 nodes for Ceph and each node has a 160TB disk attached. I am > using the cephadm Quincy release; 14 nodes have been added, except one > node which is giving a very strange error while adding it.

[ceph-users] Re: strange osd error during add disk

2022-09-29 Thread Alvaro Soto
Where is your ceph.conf file? ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf --- Alvaro Soto. Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. ---
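
If the file is simply missing on that host, it can usually be regenerated or distributed by the cluster itself; a sketch (run where an admin keyring is available, the hostname is a placeholder):

  ceph config generate-minimal-conf > /etc/ceph/ceph.conf   # write a minimal config for this host
  # or let cephadm maintain /etc/ceph/ceph.conf there by labelling the host:
  ceph orch host label add <hostname> _admin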

[ceph-users] Same location for wal.db and block.db

2022-09-29 Thread Massimo Sgaravatto
I used to create BlueStore OSDs using commands such as this one: ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50 with the goal of having block.db and wal.db co-located on the same LV (ceph-db-50-54/db-50 in my example, which is on an SSD device). Is
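
For reference, when only --block.db is specified BlueStore keeps the WAL together with the DB, so the command from the thread already co-locates both on the SSD-backed LV:

  # DB and WAL both live on ceph-db-50-54/db-50; data stays on ceph-block-50/block-50
  ceph-volume lvm create --bluestore --data ceph-block-50/block-50 --block.db ceph-db-50-54/db-50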

[ceph-users] Re: Slow OSD startup and slow ops

2022-09-29 Thread Gauvain Pocentek
Hi Stefan, Thanks for your feedback! On Thu, Sep 29, 2022 at 10:28 AM Stefan Kooman wrote: > On 9/26/22 18:04, Gauvain Pocentek wrote: > > > > > > > We are running a Ceph Octopus (15.2.16) cluster with similar > > configuration. We have *a lot* of slow ops when starting OSDs. Also > >

[ceph-users] Re: Ceph quincy cephadm orch daemon stop osd.X not working

2022-09-29 Thread Eugen Block
What is your cluster status (ceph -s)? I assume that either your cluster is not healthy or your crush rules don't cover an osd failure. Sometimes it helps to fail the active mgr (ceph mgr fail). Can you also share your 'ceph osd tree'? Do you use the default replicated_rule or any additiona
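
The checks suggested in this reply, as commands (osd.3 is a placeholder):

  ceph -s                       # overall health; a stuck or unhealthy cluster can block orchestrator actions
  ceph osd tree                 # CRUSH layout and up/in state of the OSDs
  ceph mgr fail                 # fail over the active mgr; the cephadm orchestrator runs inside the mgr
  ceph orch daemon stop osd.3   # then retry the operation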