[ceph-users] Re: iSCSI GW trusted IPs

2023-11-15 Thread Brent Kennedy
I just set up iSCSI on a Reef cluster and I couldn't add targets properly until I put in the username and password entered for the gateways via the "Discovery Authentication" button at the top of the targets page in the iSCSI area. I don't remember if the Quincy console had that though. In my
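
For anyone who prefers the CLI over the dashboard, the same setting should be reachable from gwcli on a gateway node. The snippet below is a sketch from memory, not taken from this thread; the discovery_auth command and its parameter names may differ between ceph-iscsi releases, and the credentials are placeholders.

    # on an iSCSI gateway node, inside gwcli
    /> cd /iscsi-targets
    /iscsi-targets> discovery_auth username=myiscsiuser password=myiscsipass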

[ceph-users] Service Discovery issue in Reef 18.2.0 release ( upgrading )

2023-11-14 Thread Brent Kennedy
Greetings group! We recently reloaded a cluster from scratch using cephadm and reef. The cluster came up, no issues. We then decided to upgrade two existing cephadm clusters that were on quincy. Those two clusters came up just fine but there is an issue with the Grafana graphs on both

[ceph-users] Cluster Migration VS Newly Spun up from scratch cephadm Cluster

2022-11-13 Thread Brent Kennedy
We recently elected to rebuild a cluster from bottom up ( move all data off and move back ). I used cephadm to create the new cluster and then compared it with another cluster of similar size that had been migrated/adopted into cephadm. I see lots of differences. I gather this is due to the

[ceph-users] Re: Cephadm - Adding host to migrated cluster

2022-10-18 Thread Brent Kennedy
Do the journal logs for the OSDs say anything about why they couldn't start up? ("cephadm ls --no-detail" run on the host will give the sys
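
A quick sketch of doing that on a cephadm host: the OSD systemd units are named after the cluster fsid, so checking the journal looks roughly like the commands below. The fsid and OSD id are placeholders.

    cephadm ls --no-detail                             # list the daemons cephadm knows about on this host
    journalctl -u ceph-<fsid>@osd.3.service -n 200     # raw startup errors for a given OSD
    cephadm logs --name osd.3                          # convenience wrapper around the same journal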

[ceph-users] Re: Cephadm - Adding host to migrated cluster

2022-10-17 Thread Brent Kennedy
stay persistent in the cephadm dashboard until I delete them manually from the node. It's like the container doesn't spin up on the node for each of the disks. -Brent

[ceph-users] Re: Cephadm - Adding host to migrated cluster

2022-10-17 Thread Brent Kennedy
Does the cephadm.log on that node reveal anything useful? What about the (active) mgr log?

[ceph-users] Cephadm - Adding host to migrated cluster

2022-10-17 Thread Brent Kennedy
Greetings everyone, We recently moved a ceph-ansible cluster running Pacific on CentOS 8 to CentOS 8 Stream, converted it to cephadm, and then upgraded to Quincy. Everything with the transition worked but recently we decided to add another node to the cluster with 10 more
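
For reference, the documented way to bring a new host into a cephadm-managed cluster looks roughly like the commands below; the hostname and IP are placeholders, not values from this thread.

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@newnode   # push the cluster's cephadm SSH key to the new host
    ceph orch host add newnode 10.0.0.50                # register the host with the orchestrator
    ceph orch device ls --refresh                       # confirm the new disks are seen before applying an OSD spec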

[ceph-users] Re: Ceph disks fill up to 100%

2022-08-20 Thread Brent Kennedy
If you can get them back online, you could then reweight each one manually until you see a good balance, then turn on auto-balancing. An entire host being down could be a problem though. Also, you didn't mention the highest size (replication) you have active. -Brent
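
A sketch of that sequence; the OSD id and weight below are placeholders and the exact values depend on the cluster.

    ceph osd df tree                   # spot the over-full OSDs first
    ceph osd reweight osd.12 0.90      # nudge an over-full OSD down in small steps
    ceph balancer mode upmap           # once things are healthy, let the balancer take over
    ceph balancer on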

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-13 Thread Brent Kennedy
An update on my testing. I have a 6-node test Ceph cluster deployed as 1 admin and 5 OSDs. Each node is running CentOS 7 + podman with a cephadm deployment of Octopus. Other than scale

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-13 Thread Brent Kennedy
Isn't the problem stated in the error message: 2022-06-25 21:51:59,703 7f4748727b80 DEBUG /usr/bin/ceph-mon: too many arguments: [--d

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-11 Thread Brent Kennedy
"any CRI", like k8s works with docker/podman/crio. Best regards, Nico

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-11 Thread Brent Kennedy
When the containers start on podman 4, they show an error regarding groups. Searching on the web for t

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-11 Thread Brent Kennedy
Hey Brent, thanks a lot for following up on this. Would it be possible to send the error messages

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-07 Thread Brent Kennedy
All I can say is that it's been impossible this past month to upgrade past Octopus using cephadm on CentOS 7. I thought if I spun up new servers and started containers on those using the Octopus cephadm script, I would be ok. But both Rocky and CentOS 8 Stream won't run the older Octopus

[ceph-users] Re: Upgrade paths beyond octopus on Centos7

2022-08-06 Thread Brent Kennedy
Did you ever find an answer? I have the same issue, stuck in podman compatibility purgatory with CentOS 7 and Octopus containers. The Quincy cephadm won't even run on CentOS 7; it says it's not supported. The closest thing I can find to a solution is to save the bare metal ceph configuration on the node,

[ceph-users] Re: Conversion to Cephadm

2022-07-01 Thread Brent Kennedy
with a petabyte of data will be a long upgrade… -Brent

[ceph-users] Conversion to Cephadm

2022-06-25 Thread Brent Kennedy
I successfully converted to cephadm after upgrading the cluster to Octopus. I am on CentOS 7 and am attempting to convert some of the nodes over to Rocky, but when I try to add a Rocky node in and start the mgr or mon service, it tries to start an Octopus container and the service comes back with
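
For anyone attempting the same conversion, the documented adoption commands look roughly like this; node1 is a placeholder hostname, and the cephadm binary on the host has to match the release the containers will run.

    cephadm ls                                      # list the legacy daemons detected on the host
    cephadm adopt --style legacy --name mon.node1   # move a daemon under cephadm/container management
    cephadm adopt --style legacy --name mgr.node1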

[ceph-users] Re: Ceph osd Reweight command in octopus

2021-04-12 Thread Brent Kennedy
Have you tried a more aggressive reweight value? I've seen some stubborn crush maps that don't start moving data until 0.9 or lower in some cases. Reed

[ceph-users] Ceph osd Reweight command in octopus

2021-03-11 Thread Brent Kennedy
We have a Ceph Octopus cluster running 15.2.6; it's indicating a near-full OSD which I can see is not weighted equally with the rest of the OSDs. I tried the usual "ceph osd reweight osd.0 0.95" to force it down a little bit, but unlike the Nautilus clusters, I see no data movement when
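
One guess worth checking in that situation: an active balancer module can mask or fight manual override weights, and reweight-by-utilization has a dry-run form. The threshold below is an example only.

    ceph balancer status
    ceph osd test-reweight-by-utilization 120   # dry run: propose new weights for OSDs above 120% of average utilization
    ceph osd reweight-by-utilization 120        # apply it if the proposed changes look sane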

[ceph-users] Re: v15.2.6 Octopus released

2020-11-21 Thread Brent Kennedy
FYI, ceph-ansible has a problem due to ceph-volume, so don't do it via the rolling-upgrade.yml script. -Brent

[ceph-users] Re: question about rgw delete speed

2020-11-12 Thread Brent Kennedy
Ceph is definitely a good choice for storing millions of files. It sounds like you plan to use this like S3, so my first question would be: are the deletes done for a specific reason? (e.g. the files are used for a process and discarded) If it's an age thing, you can set the files to
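
For the age-based case, RGW honours S3 lifecycle expiration rules. A minimal sketch, assuming the aws CLI pointed at the RGW endpoint; the bucket name, endpoint and 30-day window are placeholders.

    cat > lifecycle.json <<'EOF'
    {
      "Rules": [
        {
          "ID": "expire-after-30-days",
          "Filter": { "Prefix": "" },
          "Status": "Enabled",
          "Expiration": { "Days": 30 }
        }
      ]
    }
    EOF
    aws --endpoint-url http://rgw.example.com:8080 s3api put-bucket-lifecycle-configuration \
        --bucket mybucket --lifecycle-configuration file://lifecycle.json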

[ceph-users] Autoscale - enable or not on main pool?

2020-11-12 Thread Brent Kennedy
I recently set up a new Octopus cluster and was testing the autoscale feature. Used ceph-ansible, so it's enabled by default. Anyhow, I have three other clusters that are on Nautilus, so I wanted to see if it made sense to enable it there on the main pool. Here is a printout of the autoscale
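
A sketch of the relevant commands on Nautilus; the pool name is a placeholder. Starting in "warn" mode is a cautious way to see what the autoscaler would do before letting it act.

    ceph osd pool autoscale-status                      # current vs suggested PG counts per pool
    ceph osd pool set mypool pg_autoscale_mode warn     # or "on" once the suggestions look reasonable
    ceph config set global osd_pool_default_pg_autoscale_mode warn   # default for new pools, if the option exists on your release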

[ceph-users] Re: Rados Crashing

2020-11-12 Thread Brent Kennedy
Since Nautilus the default rgw frontend is beast; have you thought about switching? Regards, Eugen [1] https://tracker.ceph.com/issues/22951
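
Switching an RGW instance from civetweb to beast is a one-line frontend change; the instance name and port below are examples, not values from this cluster.

    # in ceph.conf on the RGW host, then restart the radosgw service
    [client.rgw.gateway1]
    rgw_frontends = beast port=7480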

[ceph-users] Rados Crashing

2020-10-21 Thread Brent Kennedy
We are performing file maintenance (deletes essentially) and when the process gets to a certain point, all four rados gateways crash with the following: Log output:
-5> 2020-10-20 06:09:53.996 7f15f1543700 2 req 7 0.000s s3:delete_obj verifying op params
-4> 2020-10-20 06:09:53.996

[ceph-users] Re: NVMe's

2020-09-23 Thread Brent Kennedy
On 2020-09-23 07:39, Brent Kennedy wrote: > We currently run a SSD cluster and HDD clusters and are looking at possibly creating a c

[ceph-users] NVMe's

2020-09-22 Thread Brent Kennedy
We currently run an SSD cluster and HDD clusters and are looking at possibly creating a cluster for NVMe storage. For spinners and SSDs, it seemed the max recommended per OSD host server was 16 OSDs (I know it depends on the CPUs and RAM, like 1 CPU core and 2GB memory). Questions: 1. If
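
On the NVMe question, a commonly suggested approach is to carve more than one OSD per NVMe device, since a single OSD often can't saturate the drive. A sketch with ceph-volume; the device paths and the count of 2 are examples and worth benchmarking on the actual hardware.

    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1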

[ceph-users] Re: ceph-volume lvm cannot zap???

2020-09-22 Thread Brent Kennedy
I run this command, e.g. "ceph-volume lvm zap --destroy /dev/sdm", when zapping, but I haven't run into any locks since going to Nautilus; it seems to know when a disk is dead. Removing an OSD requires you to stop the process, so I can't imagine anything was still running against it. Perhaps a bad
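
A sketch of the stop-then-zap sequence by OSD id rather than device path; osd.17 is a placeholder.

    systemctl stop ceph-osd@17                    # make sure nothing still holds the device open
    ceph-volume lvm zap --destroy --osd-id 17     # tears down the LVs and wipes the backing device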

[ceph-users] OSD upgrades

2020-06-01 Thread Brent Kennedy
We are rebuilding servers and before Luminous our process was:
1. Reweight the OSD to 0
2. Wait for rebalance to complete
3. Out the osd
4. Crush remove osd
5. Auth del osd
6. Ceph osd rm #
Seems the Luminous documentation says that you should: 1.
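
Since Luminous the last three of those steps collapse into a single purge; a sketch with a placeholder OSD id.

    ceph osd out 17
    # wait for the rebalance to finish, then stop the daemon on its host
    systemctl stop ceph-osd@17
    ceph osd purge 17 --yes-i-really-mean-it      # does crush remove + auth del + osd rm in one go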

[ceph-users] Re: Questions on Ceph cluster without OS disks

2020-04-05 Thread Brent Kennedy
in the correct ceph configuration files during boot without needing to do tons of scripting ( the clusters are 15-20 machines ). -Brent

[ceph-users] Re: Cann't create ceph cluster

2020-04-05 Thread Brent Kennedy
If you are deploying the Octopus release, it can't be installed on CentOS 7 (you must use CentOS 8 currently). Can't quite tell from that short log snippet. -Brent

[ceph-users] Re: v14.2.8 Nautilus released

2020-04-04 Thread Brent Kennedy
Did you get an answer for this? My original thought when I read it was that the OSD would need to be recreated (as you noted). -Brent

[ceph-users] Re: Questions on Ceph cluster without OS disks

2020-04-04 Thread Brent Kennedy
Forgive me for asking, but it seems most OSes require a swap file, and when I look into doing something similar (meaning not having anything), they all say the OS could go unstable without it. It seems that anyone doing this needs to be 100% certain memory will not be used at 100% ever or the OS

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-04-04 Thread Brent Kennedy
I think I may have cheated... I set up the Ceph iSCSI gateway in HA mode, then a FreeNAS server. Connected the FreeNAS server to the iSCSI targets and poof, I have universal NFS shares. I stood up a few FreeNAS servers to share various loads. We also use the iSCSI gateways for direct ESXi