[ceph-users] Re: Strange performance drop and low oss performance

2020-02-06 Thread Janne Johansson
> For the object gateway, the performance was measured with `swift-bench -t 64`, which uses 64 threads concurrently. Will the radosgw and HTTP overhead really be so significant (94.5 MB/s down to 26 MB/s for cluster1) when multiple threads are used? Thanks in advance!
Can't say what it "must" be, but if I log i
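For comparison, raw throughput at the RADOS layer can be measured directly, bypassing radosgw and HTTP entirely. A minimal sketch, assuming a dedicated benchmark pool (the pool name and parameters below are placeholders, not taken from the thread):

    # write benchmark: 60 seconds, 64 concurrent ops, 4 MiB objects
    rados bench -p testbench 60 write -t 64 -b 4194304 --no-cleanup
    # read the same objects back with the same concurrency
    rados bench -p testbench 60 seq -t 64
    # remove the benchmark objects afterwards
    rados -p testbench cleanup

If the RADOS-level numbers stay near the 94.5 MB/s figure while swift-bench sits around 26 MB/s, the gap is likely in the radosgw/HTTP path rather than in the underlying cluster.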

[ceph-users] Re: osd_memory_target ignored

2020-02-06 Thread Frank Schilder
Dear Stefan, thanks for your help. I opened these: https://tracker.ceph.com/issues/44010 and https://tracker.ceph.com/issues/44011 Best regards, Frank Schilder, AIT Risø Campus, Bygning 109, rum S14 From: Stefan Kooman Sent: 05 February 202
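For reference, the option under discussion can be set centrally and then cross-checked against what a running OSD actually reports; a minimal sketch (the OSD id is a placeholder):

    # set a 4 GiB per-OSD memory target cluster-wide
    ceph config set osd osd_memory_target 4294967296
    # value stored in the cluster configuration database
    ceph config dump | grep osd_memory_target
    # value the running daemon is actually using (run on the OSD host)
    ceph daemon osd.0 config get osd_memory_target

If a local ceph.conf entry or an injectargs override is in play, the daemon's own view can differ from the monitor config database, which is worth checking when the target appears to be ignored.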

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-06 Thread Samy Ascha
> On 4 Feb 2020, at 16:14, Samy Ascha wrote:
>> On 2 Feb 2020, at 12:45, Patrick Donnelly wrote:
>> On Wed, Jan 29, 2020 at 1:25 AM Samy Ascha wrote:
>>> Hi! I've been running CephFS for a while now and ever since setting it up, I've seen unexpectedly large writ

[ceph-users] Re: Write i/o in CephFS metadata pool

2020-02-06 Thread Stefan Kooman
> Hi! I've confirmed that the write IO to the metadata pool is coming from the active MDSes. I'm experiencing very poor write performance on clients and I would like to see if there's anything I can do to optimise the performance. Right now, I'm specifically focussing on speeding u
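To see where the metadata-pool write traffic is coming from, the per-pool client I/O rates can be watched alongside the MDS activity; a minimal sketch (the pool and filesystem names are placeholders):

    # client I/O rates broken down per pool, including the CephFS metadata pool
    ceph osd pool stats cephfs_metadata
    # per-MDS state, request rate and inode counts for the filesystem
    ceph fs status cephfs

Comparing the metadata-pool write rate against the MDS request counters helps separate journal/metadata flushes from client data traffic.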

[ceph-users] Stuck with an unavailable iscsi gateway

2020-02-06 Thread jcharles
Hello, I can't find a way to resolve my problem. I lost an iSCSI gateway in a pool of 4 gateways; there are 3 left. I can't delete the lost gateway from the host, and I can't change the owner of the resources owned by the lost gateway. As a result, I have resources which are inaccessible from clients and

[ceph-users] Re: Understanding Bluestore performance characteristics

2020-02-06 Thread vitalif
Hi Stefan, Do you mean more info than: Yes, there's more... I don't remember exactly; I think some information ends up in the OSD perf counters and some is dumped into the OSD log, and maybe there's even a 'ceph daemon' command to trigger it... There are 4 options that enab
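The Bluestore counters mentioned here can be inspected on the OSD host over the admin socket; a minimal sketch (the OSD id is a placeholder):

    # dump the perf counters for one OSD, restricted to the bluestore section
    ceph daemon osd.0 perf dump bluestore
    # reset counters if you want to measure a specific interval
    ceph daemon osd.0 perf reset all

The bluestore section is a useful starting point for the latency and allocation questions raised in this thread.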

[ceph-users] Need info about ceph bluestore autorepair

2020-02-06 Thread Mario Giammarco
Hello, if I have a pool with replica 3, what happens when one replica is corrupted? I suppose Ceph detects the bad replica using checksums and replaces it with a good one. And if I have a pool with replica 2, what happens? Thanks, Mario

[ceph-users] Re: Stuck with an unavailable iscsi gateway

2020-02-06 Thread Jason Dillaman
Originally, the idea of a gateway just permanently disappearing out-of-the-blue was never a concern. However, since this seems to be a recurring issue, the latest version of ceph-iscsi includes support for force-deleting a permanently dead iSCSI gateway [1]. I don't think that fix is in an official

[ceph-users] Re: Need info about ceph bluestore autorepair

2020-02-06 Thread Janne Johansson
On Thu, 6 Feb 2020 at 15:06, Mario Giammarco wrote:
> Hello, if I have a pool with replica 3 what happens when one replica is corrupted?
The PG on which this happens will turn from active+clean to active+inconsistent.
> I suppose ceph detects bad replica using checksums and replace it wit
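When a PG goes active+inconsistent, the usual workflow is to locate the inconsistency and then ask Ceph to repair it; a minimal sketch (the pool name and PG id are placeholders):

    # list PGs flagged inconsistent in a pool
    rados list-inconsistent-pg rbd
    # show which object and which shard failed its checksum
    rados list-inconsistent-obj 2.18 --format=json-pretty
    # repair the PG from the authoritative copies
    ceph pg repair 2.18

With replica 3, the damaged copy is rewritten from a good replica; with replica 2, deciding which copy is authoritative is harder, which is the crux of the question in this thread.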

[ceph-users] RBD cephx read-only key

2020-02-06 Thread Andras Pataki
I'm trying to set up a cephx key to mount RBD images read-only. I have the following two keys:

[client.rbd]
    key = xxx
    caps mgr = "profile rbd"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=rbd_vm"

[client.rbd-ro]
    key = xxx
    caps mgr = "profile rbd-read-only"

[ceph-users] Re: RBD cephx read-only key

2020-02-06 Thread Jason Dillaman
On Thu, Feb 6, 2020 at 11:20 AM Andras Pataki wrote:
> I'm trying to set up a cephx key to mount RBD images read-only. I have the following two keys:
> [client.rbd]
>     key = xxx
>     caps mgr = "profile rbd"
>     caps mon = "profile rbd"
>     caps osd = "profile rbd pool=rbd_vm"

[ceph-users] Re: RBD cephx read-only key

2020-02-06 Thread Andras Pataki
Ah, that makes sense.  Thanks for the quick reply! Andras On 2/6/20 11:24 AM, Jason Dillaman wrote: On Thu, Feb 6, 2020 at 11:20 AM Andras Pataki wrote: I'm trying to set up a cephx key to mount RBD images read-only. I have the following two keys: [client.rbd] key = xxx caps mg
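For reference, one common way to create a read-only cephx identity uses the built-in read-only OSD profile, and maps the image explicitly read-only so the client never needs write access. This is only an illustrative sketch of the profile syntax, not necessarily the resolution Jason describes in the truncated reply above (the entity and pool names follow the thread; the image name is a placeholder):

    # create a key limited to read-only access on one pool
    ceph auth get-or-create client.rbd-ro \
        mon 'profile rbd' \
        osd 'profile rbd-read-only pool=rbd_vm'
    # map an image read-only with that identity
    rbd map rbd_vm/myimage --id rbd-ro --read-only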

[ceph-users] Different memory usage on OSD nodes after update to Nautilus

2020-02-06 Thread Massimo Sgaravatto
Dear all, in mid-January I updated my Ceph cluster from Luminous to Nautilus. Attached you can see the memory metrics collected on one OSD node (I see the very same behavior on all OSD hosts), graphed via Ganglia. This is a CentOS 7 node with 64 GB of RAM, hosting 10 OSDs. So before the updat

[ceph-users] Re: Different memory usage on OSD nodes after update to Nautilus

2020-02-06 Thread Massimo Sgaravatto
Thanks for your feedback. The Ganglia graphs are available here: https://cernbox.cern.ch/index.php/s/0xBDVwNkRqcoGdF Replying to the other questions:
- Free Memory in Ganglia is derived from "MemFree" in /proc/meminfo
- Memory Buffers in Ganglia is derived from "Buffers" in /proc/meminfo
- On th
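When OSD memory use looks different after an upgrade, the OSD's own accounting is a useful cross-check against what the OS reports; a minimal sketch (the OSD id is a placeholder):

    # per-category memory accounting inside the OSD (bluestore caches, pglog, etc.)
    ceph daemon osd.0 dump_mempools
    # the memory target the OSD is currently honouring
    ceph daemon osd.0 config get osd_memory_target

Nautilus OSDs size their caches against osd_memory_target (4 GiB by default), so 10 OSDs on a 64 GB node can legitimately show a higher RSS than the same node did on Luminous if the older bluestore_cache_size defaults were in effect there.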

[ceph-users] Re: Ubuntu 18.04.4 Ceph 12.2.12

2020-02-06 Thread Dan Hill
For Ubuntu 18.04 LTS, the latest ceph package is 12.2.12-0ubuntu0.18.04.4 and can be found in the bionic-updates pocket [0]. There is an active SRU (stable release update) to move to the new 12.2.13 point release; you can follow its progress on Launchpad [1]. I should note that the Ubuntu 18.0
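To check which ceph build a bionic host would actually install, and from which pocket it comes, something like the following can be used (purely illustrative; the candidate version depends on when you run it):

    # show the installed and candidate ceph versions and the archive pocket they come from
    apt-cache policy ceph
    # list the ceph packages currently installed and their versions
    dpkg -l 'ceph*'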

[ceph-users] Re: mds lost very frequently

2020-02-06 Thread Stefan Kooman
Hi, after setting:
ceph config set mds mds_recall_max_caps 1 (5000 before change)
ceph config set mds mds_recall_max_decay_rate 1.0 (2.5 before change)
and then:
ceph tell 'mds.*' injectargs '--mds_recall_max_caps 1'
ceph tell 'mds.*' injectargs '--mds_recall_max_decay_rate 1.0'
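For completeness, the effective values can be verified afterwards, both in the cluster configuration database and on the MDS daemon itself; a minimal sketch (the MDS name is a placeholder, and the daemon command must be run on the MDS host):

    # values stored in the cluster configuration database
    ceph config dump | grep mds_recall
    # value the running MDS is actually using
    ceph daemon mds.a config get mds_recall_max_caps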

[ceph-users] Re: slow using ISCSI - Help-me

2020-02-06 Thread Mike Christie
On 02/05/2020 07:03 AM, Gesiel Galvão Bernardes wrote:
> On Sun, 2 Feb 2020 at 00:37, Gesiel Galvão Bernardes wrote:
> Hi, only now was it possible to continue this. Below is the information required. Thanks advan
Hey, sorry for the

[ceph-users] Benefits of high RAM on a metadata server?

2020-02-06 Thread Matt Larson
Hi, we are planning out a Ceph storage cluster and are choosing between 64 GB, 128 GB, or even 256 GB of RAM on the metadata servers. We are considering having 2 metadata servers overall. Does going to high levels of RAM possibly yield any performance benefits? Is there a size beyond which there are just dimin

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-06 Thread Bogdan Adrian Velica
Hi, I am running 3 MDS servers (1 active and 2 standbys, and I recommend that), each with 128 GB of RAM (the clients are running ML analysis), and I have about 20 million inodes loaded in RAM. It's working fine except for some warnings like "client X is failing to respond to cache pressure". Besides that t
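The amount of RAM the MDS actually uses for metadata is governed by its cache limit rather than by the physical RAM alone; a minimal sketch of how that is typically tuned (the 32 GiB value is an arbitrary example, not a recommendation from the thread):

    # allow the MDS cache to grow to roughly 32 GiB
    ceph config set mds mds_cache_memory_limit 34359738368
    # watch per-MDS state, request rate and inode counts
    ceph fs status

Cache-pressure warnings generally mean the MDS is asking clients to release capabilities; a larger cache limit can reduce them, at the cost of more RAM per MDS.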

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-06 Thread Wido den Hollander
On 2/6/20 11:01 PM, Matt Larson wrote:
> Hi, we are planning out a Ceph storage cluster and were choosing between 64GB, 128GB, or even 256GB on metadata servers. We are considering having 2 metadata servers overall. Does going to high levels of RAM possibly yield any performance benef

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-06 Thread Matt Larson
Hi Bogdan, do the "client failing to respond" messages indicate that you actually exceed the 128 GB of RAM on your MDS hosts? The MDS servers are not planned to have SSD drives. The storage servers would have HDDs and 1 NVMe SSD drive that could hold the metadata volumes. On Thu, Feb 6, 2020 at 4:1
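If the intent is to keep the CephFS metadata pool on the NVMe OSDs while the data pool stays on HDDs, that is normally done with a device-class CRUSH rule; a minimal sketch (rule, pool, and device-class names are placeholders and assume the NVMe OSDs carry that class):

    # rule that only places data on OSDs of class "nvme", failure domain host
    ceph osd crush rule create-replicated metadata-nvme default host nvme
    # pin the CephFS metadata pool to that rule
    ceph osd pool set cephfs_metadata crush_rule metadata-nvme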