[ceph-users] Re: OSDs growing beyond full ratio

2022-08-28 Thread Jarett
Isn’t rebalancing onto the empty OSDs default behavior? From: Wyll Ingersoll Sent: Sunday, August 28, 2022 10:31 AM To: ceph-users@ceph.io Subject: [ceph-users] OSDs growing beyond full ratio We have a pacific cluster that is overly filled and is having major trouble recovering. We are desperate for
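Rebalancing onto new OSDs is indeed the default once they are in the CRUSH map, but backfill stops as soon as source or target OSDs hit the full thresholds, which is usually why a cluster in this state stops making progress. A rough sketch of what I'd check first, assuming a Pacific cluster with default ratios (the values below are illustrative, not a recommendation):

    ceph osd df tree                      # which OSDs are actually near/over full
    ceph osd dump | grep -i ratio         # current nearfull/backfillfull/full ratios
    ceph osd set-backfillfull-ratio 0.92  # temporarily raise so backfill onto the empty OSDs can proceed
    ceph osd set-full-ratio 0.96          # emergency headroom only; lower it back once recovery finishes
    ceph balancer status                  # confirm the balancer is moving PGs toward the empty OSDs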

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Jarett
Did you mean SSD? 12 x 5TB solid-state disks? Or is that “Spinning Disk Drive?” Do you have any SSDs/NVMe you can use? From: gagan tiwari Sent: Wednesday, September 14, 2022 1:54 AM To: ceph-users@ceph.io Subject: [ceph-users] ceph deployment best practice Hi Guys, I am new to Ceph and
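If there are SSDs/NVMe in those boxes, the usual pattern is to keep the spinners for data and put the BlueStore DB/WAL on flash. A minimal ceph-volume sketch, with device names made up for illustration:

    # device names are hypothetical; adjust for the actual hardware
    ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
    # or one OSD at a time:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1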

[ceph-users] access to a pool hangs, only on one node

2022-07-18 Thread Jarett DeAngelis
hi, I have a 3-node Proxmox Ceph cluster that's been acting up whenever I try to get it to do anything with one of the pools (fastwrx). `rbd pool stats fastwrx` just hangs on one node, but on the other two it responds instantaneously. `ceph -s` looks like this: root@ibnmajid:~# c
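A sketch of the sort of checks that usually narrow this down, run from the node where the command hangs (pool name as in the post; the debug flag is optional and just a guess at where to look):

    ceph health detail                    # slow ops, down OSDs, inactive PGs?
    ceph osd pool stats fastwrx           # mon/mgr-side stats; should answer even if client I/O is stuck
    rbd pool stats fastwrx --debug-ms 1   # rerun with messenger debugging to see which OSD the client is waiting on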

[ceph-users] Re: OSD backups and recovery

2020-05-29 Thread Jarett DeAngelis
For some reason I’d thought replication between clusters was an “official” method of backing up. > On May 29, 2020, at 4:31 PM, > wrote: > > Ludek; > > As a cluster system, Ceph isn't really intended to be backed up. It's > designed to take quite a beating, and preserve your data. > > Fro
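To be clear, Ceph has no single "backup" command; the built-in cross-cluster pieces are RBD mirroring and RGW multisite, and rbd snapshot/export is the simple point-in-time option. A hedged sketch, with pool and image names made up:

    rbd snap create mypool/myimage@nightly                 # point-in-time snapshot
    rbd export mypool/myimage@nightly /backup/myimage.img  # copy it out of the cluster
    rbd mirror pool enable mypool image                    # or continuous mirroring to a second cluster (needs rbd-mirror daemons)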

[ceph-users] Re: Help

2020-08-17 Thread Jarett DeAngelis
Configuring it to do what, with respect to these applications? What are you trying to do? Do you have existing installations of any of these? We need a little more detail about your requirements. > On Apr 17, 2020, at 1:14 PM, Randy Morgan wrote: > > We are seeking information on configuring Ceph to w

[ceph-users] Newbie to Ceph jacked up his monitor

2020-03-22 Thread Jarett DeAngelis
hi folks. I seem to have gotten myself into a situation where my monitor directory filled itself up entirely while I wasn't looking, and now my monitor is unresponsive. (this is in a 3-node cluster). it grew a whole lot right after I added a number of OSDs and it started rebalancing. how can I

[ceph-users] Re: Newbie to Ceph jacked up his monitor

2020-03-22 Thread Jarett DeAngelis
So, I thought I’d post with what I learned re: what to do with this problem. This system is a 3-node Proxmox cluster, and each node had:
1 x 1TB NVMe
2 x 512GB HDD
I had maybe 100GB of data in this system total. Then I added:
2 x 256GB SSD
1 x 1TB HDD
to each system, and let it start rebalanci
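For anyone hitting the same thing later: the usual first aid for a bloated mon store is compaction, though the store will not shrink much until all PGs are active+clean again, because the mons keep old cluster maps for the duration of the recovery. A sketch, assuming a mon id of "a" and default paths (adjust for your layout):

    ceph tell mon.a compact     # online compaction, if the mon still responds
    # offline, with the mon stopped:
    ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-a/store.db compact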

[ceph-users] Re: No reply or very slow reply from Prometheus plugin - ceph-mgr 13.2.8 mimic

2020-03-27 Thread Jarett DeAngelis
I’m actually very curious how well this is performing for you as I’ve definitely not seen a deployment this large. How do you use it? > On Mar 27, 2020, at 11:47 AM, shubjero wrote: > > I've reported stability problems with ceph-mgr w/ prometheus plugin > enabled on all versions we ran in produ

[ceph-users] Terrible IOPS performance

2020-03-30 Thread Jarett DeAngelis
run more than one OSD on the NVMe — how is that done, and can I do it “on the fly” with the system already up and running like this? And, will more OSDs give me better IOPS? Thanks, Jarett
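For the archives: the way this is usually done is with ceph-volume's batch mode. It means destroying and recreating the OSD on that device, so not quite "on the fly", but it can be done one device at a time with the cluster up. Sketch, with the OSD id and device name made up:

    ceph osd out 3                                           # drain the existing single OSD first
    systemctl stop ceph-osd@3                                # once it is empty and safe to stop
    ceph osd purge 3 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/nvme0n1 --destroy               # wipe the device
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1   # recreate it as two OSDs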

[ceph-users] Multiple CephFS creation

2020-03-30 Thread Jarett DeAngelis
Hi guys, This is documented as an experimental feature, but it doesn’t explain how to ensure that affinity for a given MDS sticks to the second filesystem you create. Has anyone had success implementing a second CephFS? In my case it will be based on a completely different pool from my first on
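For reference, the mechanics themselves are short; in Nautilus the multi-filesystem flag still has to be set explicitly. A sketch, with pool and filesystem names made up:

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create fs2_metadata 32
    ceph osd pool create fs2_data 128
    ceph fs new fs2 fs2_metadata fs2_data
    ceph fs status                          # see which MDS picked up the new filesystem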

[ceph-users] Re: Multiple CephFS creation

2020-03-31 Thread Jarett DeAngelis
Thanks for this. Still on Nautilus here because this is a Proxmox cluster but good for folks tracking master to know. J On Tue, Mar 31, 2020, 3:14 AM Patrick Donnelly wrote: > On Mon, Mar 30, 2020 at 11:57 PM Eugen Block wrote: > > For the standby daemon you have to be aware of this: > > > > >

[ceph-users] Re: Multiple CephFS creation

2020-03-31 Thread Jarett DeAngelis
So, for the record, this doesn’t appear to work in Nautilus. Does this mean that I should just count on my standby MDS to “step in” when a new FS is created? > On Mar 31, 2020, at 3:19 AM, Eugen Block wrote: >> This has changed in Octopus. The above config variables are removed. >> Instea

[ceph-users] Re: Multiple CephFS creation

2020-03-31 Thread Jarett DeAngelis
> Yes, standby (as opposed to standby-replay) MDS' form a shared pool > from which the mons will promote an MDS to the required role. > > On Tue, Mar 31, 2020 at 12:52 PM Jarett DeAngelis wrote: >> >> So, for the record, this doesn’t appears to work in Nautilus. &
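For anyone finding this thread later: from Octopus on, per-filesystem MDS affinity is expressed with the mds_join_fs option rather than the old standby_* settings. A sketch, with the daemon and filesystem names made up:

    ceph config set mds.b mds_join_fs fs2   # mons will prefer mds.b when filling ranks for fs2
    ceph fs status fs2                      # verify which daemon took the rank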

[ceph-users] Possible to "move" an OSD?

2020-04-11 Thread Jarett DeAngelis
This is an edge case and probably not something that would be done in production, so I suspect the answer is “lol, no,” but here goes: I have three nodes running Nautilus courtesy of Proxmox. One of them is a self-built Ryzen 5 3600 system, and the other two are salvaged i5 Skylake desktops tha
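For what it's worth, moving a BlueStore OSD between hosts generally does work, since ceph-volume OSDs carry their metadata in LVM tags; the new host just needs the cluster config and keyrings. A sketch, with the OSD id made up:

    systemctl stop ceph-osd@5        # on the old host, stop the OSD cleanly
    # ...physically move the disk to the new host...
    ceph-volume lvm activate --all   # on the new host; discovers and starts the OSD from its LVM tags
    ceph osd tree                    # the OSD should reappear under the new host (osd_crush_update_on_start defaults to true)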

[ceph-users] Re: OSDs get full with bluestore logs

2020-04-18 Thread Jarett DeAngelis
I’ve also had this problem. In my case it was because I’d added a number of OSDs recently and the cluster was spending a lot of time rebalancing. Is there a reason it might be doing that in yours? > On Apr 18, 2020, at 12:59 PM, Khodayar Doustar wrote: > > Hi, > > I have a 3 node cluster of m
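If it is the OSDs' own RocksDB/WAL growing during the rebalance rather than user data, a couple of checks that may help (the OSD id is hypothetical, and the perf dump has to be run on the host carrying that OSD):

    ceph osd df                                    # raw vs. data usage per OSD while recovery runs
    ceph daemon osd.0 perf dump | grep -i bluefs   # how much space BlueFS (RocksDB/WAL) is holding
    ceph tell osd.0 compact                        # ask that OSD to compact its RocksDB if it has ballooned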

[ceph-users] Re: Dear Abby: Why Is Architecting CEPH So Hard?

2020-04-22 Thread Jarett DeAngelis
Well, for starters, "more network" = "faster cluster." On Wed, Apr 22, 2020 at 11:18 PM lin.yunfan wrote: > I have seen a lot of people saying not to go with big nodes. > What is the exact reason for that? > I can understand that if the cluster is not big enough then the total > nodes count coul