[ceph-users] Re: how to "undelete" a pool

2020-09-24 Thread Peter Sarossy
Hit send too early... So I did find in the code that it's looking for the deletion timestamp, but deleting this field in the CRD does not stop the deletion request either. The deletionTimestamp reappears after committing the change. https://github.com/rook/rook/blob/23108cc94afdebc8f4ab144130a270b

[ceph-users] how to "undelete" a pool

2020-09-24 Thread Peter Sarossy
hey folks, I have managed to fat finger a config apply command and accidentally deleted the CRD for one of my pools. The operator went ahead and tried to purge it, but fortunately since it's used by CephFS it was unable to. Redeploying the exact same CRD does not make the operator stop trying to

[ceph-users] Re: NVMe's

2020-09-24 Thread Mark Nelson
Thanks for the info!  Interesting numbers.  Probably not 60K client IOPs/OSD then, but the tp_osd_tp threads were probably working pretty hard under the combined client/recovery workload. Mark On 9/24/20 2:49 PM, Martin Verges wrote: Hello, It was some time ago but as far as I remember and

[ceph-users] Re: NVMe's

2020-09-24 Thread Martin Verges
Hello, It was some time ago but as far as I remember and found in the chat log, it was during backfill/recovery and high client workload and on Intel Xeon Silver 4110, 2.10GHz, 8C/16T Cpu. I found a screenshot in my chat history stating 775% and 722% cpu usage in htop for 2 OSDs (the server has 2

[ceph-users] Re: NVMe's

2020-09-24 Thread vitalif
Yeah but you should divide sysstat of each disk by 5. Which is Ceph's WA. 60k/5 = 12k external iops, pretty realistic. > I did not see 10 cores, but 7 cores per osd over a long period on pm1725a > disks with around 60k > IO/s according to sysstat of each disk. ___

[ceph-users] Re: NVMe's

2020-09-24 Thread Mark Nelson
Mind if I ask what size of IOs those where, what kind of IOs (reads/writes/sequential/random?) and what kind of cores? Mark On 9/24/20 1:43 PM, Martin Verges wrote: I did not see 10 cores, but 7 cores per osd over a long period on pm1725a disks with around 60k IO/s according to sysstat of ea

[ceph-users] Re: NVMe's

2020-09-24 Thread Martin Verges
I did not see 10 cores, but 7 cores per osd over a long period on pm1725a disks with around 60k IO/s according to sysstat of each disk. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat: https://t.me/MartinVerges croit GmbH, Freseniusstr. 31h, 81247 M

[ceph-users] Feature highlight: CephFS network restriction

2020-09-24 Thread Stefan Kooman
Hi, While reading documentation on CephFS I came accross the "network restriction" feature (supported since Nautilus) [1]. I have never seen a reference to / blog post on this feature and didn't even know it was possible. This might be true for more people, hence this mail to the list :-). So, fo

[ceph-users] Re: RBD quota per namespace

2020-09-24 Thread Stefan Kooman
On 2020-09-24 14:34, Eugen Block wrote: > Hi *, > > I'm curious if this idea [1] of quotas on namespace level for rbd will > be implemented. I couldn't find any existing commands in my lab Octopus > cluster so I guess it's still just an idea, right? > > If there's any information I missed please

[ceph-users] Re: NVMe's

2020-09-24 Thread Mark Nelson
On 9/24/20 11:46 AM, vita...@yourcmc.ru wrote: OK, I'll retry my tests several times more. But I've never seen OSD utilize 10 cores, so... I won't believe it until I see it myself on my machine. :-)) It's better to see evidence with your own eyes of course! I tried a fresh OSD on a bloc

[ceph-users] Unable to restart OSD assigned to LVM partition on Ceph 15.1.2?

2020-09-24 Thread Matt Larson
Hi, I recently restarted a storage node for our Ceph cluster and had an issue bringing one of the OSDs back online. This storage node has multiple HDs each as a devoted OSD for a data pool, and a single nVME drive with an LVM partition assigned as an OSD in a metadata pool. After rebooting the ho

[ceph-users] Re: NVMe's

2020-09-24 Thread vitalif
OK, I'll retry my tests several times more. But I've never seen OSD utilize 10 cores, so... I won't believe it until I see it myself on my machine. :-)) I tried a fresh OSD on a block ramdisk ("brd"), for example. It was eating 658% CPU and pushing only 4138 write iops... __

[ceph-users] Re: RBD quota per namespace

2020-09-24 Thread Jason Dillaman
On Thu, Sep 24, 2020 at 9:53 AM Stefan Kooman wrote: > > On 2020-09-24 14:34, Eugen Block wrote: > > Hi *, > > > > I'm curious if this idea [1] of quotas on namespace level for rbd will > > be implemented. I couldn't find any existing commands in my lab Octopus > > cluster so I guess it's still ju

[ceph-users] RBD quota per namespace

2020-09-24 Thread Eugen Block
Hi *, I'm curious if this idea [1] of quotas on namespace level for rbd will be implemented. I couldn't find any existing commands in my lab Octopus cluster so I guess it's still just an idea, right? If there's any information I missed please point me to it. Thanks! Eugen [1] https://tr

[ceph-users] Re: Vitastor, a fast Ceph-like block storage for VMs

2020-09-24 Thread vitalif
Hi Roman, Yes, you're right - OSDs list all objects during peering and take the latest full version of each object. Full version is a version that has at least min_size parts for XOR/EC, or any version for replicated setups which is OK because writes are atomic. If there is a newer "committed"

[ceph-users] Re: Remove separate WAL device from OSD

2020-09-24 Thread Igor Fedotov
Yeah, this should work as well... On 9/24/2020 9:32 AM, Michael Fladischer wrote: Hi Igor, Am 23.09.2020 um 18:38 schrieb Igor Fedotov: bin/ceph-bluestore-tool --path dev/osd0 --devs-source dev/osd0/block.wal --dev-target dev/osd0/block.db --command bluefs-bdev-migrate Would this also work

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-09-24 Thread Denis Krienbühl
I’m interested in the following as well, any chance you could point us to a specific commit Jason? > On 14 Sep 2020, at 13:55, Jason Dillaman wrote: > > Can you try the latest development release of Octopus [1]? A librbd > crash fix has been sitting in that branch for about a month now to be >