Hit send too early...
So I did find in the code that it's looking for the deletion timestamp, but
deleting this field in the CRD does not stop the deletion request either.
The deletionTimestamp reappears after committing the change.
https://github.com/rook/rook/blob/23108cc94afdebc8f4ab144130a270b
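For reference, this is roughly how the relevant fields can be inspected (namespace, kind and pool name below are assumptions about the setup, not taken from my report); as far as I understand the Kubernetes API, a deletionTimestamp can never be unset by a client, and the object is only actually removed once its finalizers list is emptied:

  kubectl -n rook-ceph get cephblockpool examplepool \
      -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
  kubectl -n rook-ceph get cephblockpool examplepool \
      -o jsonpath='{.metadata.finalizers}{"\n"}'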
Hey folks,
I have managed to fat-finger a config apply command and accidentally
deleted the CRD for one of my pools. The operator went ahead and tried to
purge it, but fortunately, since it's used by CephFS, it was unable to.
Redeploying the exact same CRD does not make the operator stop trying to
delete it.
Thanks for the info! Interesting numbers. Probably not 60K client
IOPS/OSD then, but the tp_osd_tp threads were likely working pretty
hard under the combined client/recovery workload.
Mark
On 9/24/20 2:49 PM, Martin Verges wrote:
Hello,
It was some time ago, but as far as I remember and found in the chat log, it
was during backfill/recovery combined with high client workload, on an Intel
Xeon Silver 4110 (2.10 GHz, 8C/16T) CPU.
I found a screenshot in my chat history showing 775% and 722% CPU usage in
htop for 2 OSDs (the server has 2
Yeah, but you should divide the sysstat figure for each disk by 5, which is
Ceph's write amplification (WA). 60k / 5 = 12k external IOPS, pretty realistic.
> I did not see 10 cores, but 7 cores per OSD over a long period on PM1725a
> disks with around 60k IO/s according to sysstat for each disk.
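Roughly the kind of per-device view being discussed, for anyone who wants to
check their own numbers (the device name is just an example):

  iostat -x nvme0n1 1    # the w/s column is device-level write IOPS
  # client-visible IOPS ~= w/s divided by the write amplification (~5 here),
  # e.g. 60k / 5 = 12k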
Mind if I ask what size those IOs were, what kind of IOs
(reads/writes/sequential/random?), and what kind of cores?
Mark
On 9/24/20 1:43 PM, Martin Verges wrote:
I did not see 10 cores, but 7 cores per OSD over a long period on PM1725a
disks with around 60k IO/s according to sysstat for each disk.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 München
Hi,
While reading the documentation on CephFS I came across the "network
restriction" feature (supported since Nautilus) [1].
I have never seen a reference to it or a blog post about it and didn't
even know it was possible. This might be true for more people, hence
this mail to the list :-).
So, fo
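For those who also missed it, the caps look roughly like this going by the
docs (fs name, client id and CIDR below are placeholders):

  ceph fs authorize cephfs client.foo / rw network 10.0.0.0/8

or, for an existing client:

  ceph auth caps client.foo \
      mon 'allow r network 10.0.0.0/8' \
      mds 'allow rw network 10.0.0.0/8' \
      osd 'allow rw tag cephfs data=cephfs network 10.0.0.0/8'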
On 2020-09-24 14:34, Eugen Block wrote:
> Hi *,
>
> I'm curious if this idea [1] of quotas on namespace level for rbd will
> be implemented. I couldn't find any existing commands in my lab Octopus
> cluster so I guess it's still just an idea, right?
>
> If there's any information I missed, please point me to it.
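For what it's worth, the namespaces themselves are already there; it's only a
quota on top of them that seems to be missing. A rough sketch (pool and
namespace names are just examples):

  rbd namespace create --pool rbd --namespace project-a
  rbd namespace ls --pool rbd
  rbd create --pool rbd --namespace project-a --size 10G image1
  # but no per-namespace quota command exists that I'm aware of (as of Octopus)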
On 9/24/20 11:46 AM, vita...@yourcmc.ru wrote:
> OK, I'll retry my tests several times more.
> But I've never seen an OSD utilize 10 cores, so... I won't believe it until
> I see it myself on my machine. :-))
It's better to see evidence with your own eyes, of course!
> I tried a fresh OSD on a block ramdisk ("brd"), for example.
Hi,
I recently restarted a storage node for our Ceph cluster and had an
issue bringing one of the OSDs back online. This storage node has
multiple HDDs, each serving as a dedicated OSD for a data pool, and a single
NVMe drive with an LVM partition assigned as an OSD in a metadata pool.
After rebooting the host
OK, I'll retry my tests several times more.
But I've never seen an OSD utilize 10 cores, so... I won't believe it until I
see it myself on my machine. :-))
I tried a fresh OSD on a block ramdisk ("brd"), for example. It was eating 658%
CPU and pushing only 4138 write IOPS...
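In case anyone wants to reproduce that kind of test, a rough sketch (ramdisk
size, pool name and OSD id are assumptions, and the OSD deployment step is
elided):

  modprobe brd rd_nr=1 rd_size=16777216    # 16 GiB ramdisk -> /dev/ram0
  # deploy a throwaway OSD on /dev/ram0, then drive it with small writes:
  rados bench -p testpool 60 write -b 4096 -t 128
  # watch per-thread CPU of that OSD (e.g. the tp_osd_tp workers):
  pidstat -u -t -p "$(pgrep -f 'ceph-osd.*--id 0')" 5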
On Thu, Sep 24, 2020 at 9:53 AM Stefan Kooman wrote:
>
> On 2020-09-24 14:34, Eugen Block wrote:
> > Hi *,
> >
> > I'm curious if this idea [1] of quotas on namespace level for rbd will
> > be implemented. I couldn't find any existing commands in my lab Octopus
> > cluster so I guess it's still just an idea, right?
Hi *,
I'm curious if this idea [1] of quotas on namespace level for rbd will
be implemented. I couldn't find any existing commands in my lab
Octopus cluster so I guess it's still just an idea, right?
If there's any information I missed please point me to it.
Thanks!
Eugen
[1]
https://tr
Hi Roman,
Yes, you're right - OSDs list all objects during peering and take the latest
full version of each object. A full version is one that has at least
min_size parts for XOR/EC, or any version for replicated setups, which is OK
because writes are atomic. If there is a newer "committed"
Yeah, this should work as well...
On 9/24/2020 9:32 AM, Michael Fladischer wrote:
Hi Igor,
On 23.09.2020 18:38, Igor Fedotov wrote:
bin/ceph-bluestore-tool --path dev/osd0 --devs-source
dev/osd0/block.wal --dev-target dev/osd0/block.db --command
bluefs-bdev-migrate
Would this also work
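Whichever variant ends up being used, the result can be sanity-checked
afterwards, e.g. (same dev-build paths as in the example above):

  bin/ceph-bluestore-tool show-label --dev dev/osd0/block.db
  bin/ceph-bluestore-tool bluefs-bdev-sizes --path dev/osd0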
I’m interested in the following as well; any chance you could point us to a
specific commit, Jason?
> On 14 Sep 2020, at 13:55, Jason Dillaman wrote:
>
> Can you try the latest development release of Octopus [1]? A librbd
> crash fix has been sitting in that branch for about a month now to be
>