[ceph-users] Re: v14.2.8 Nautilus released

2020-04-04 Thread Brent Kennedy
Did you get an answer to this? My original thought when I read it was that the OSD would need to be recreated (as you noted). -Brent -Original Message- From: Marc Roos Sent: Tuesday, March 3, 2020 10:58 AM To: abhishek ; ceph-users Subject: [ceph-users] Re: v14.2.8 Nautilus released

[ceph-users] Re: Questions on Ceph cluster without OS disks

2020-04-04 Thread Anthony D'Atri
Linuxes don’t require swap at least; maybe BSDs still do, but I haven’t run one since the mid-90s. Back in the day swap had to be at least the size of physmem, but we’re talking SunOS 4.1.4 days. Swap IMHO has been moot for years. It dates to a time when RAM was much more expensive and
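
For reference, going swap-less on a Linux OSD node usually just means disabling the swap device and keeping it out of fstab; a minimal sketch (the sed pattern and the swappiness value are only illustrative, not taken from the thread):

    # Disable swap on the running system
    swapoff -a
    # Keep it disabled across reboots by commenting out the swap entry in /etc/fstab
    sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
    # If a small swap device is kept as a safety net, bias the kernel away from using it
    sysctl -w vm.swappiness=1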

[ceph-users] Re: Questions on Ceph cluster without OS disks

2020-04-04 Thread Brent Kennedy
Forgive me for asking, but it seems most OSes require a swap file, and when I look into doing something similar (meaning not having anything), they all say the OS could go unstable without it. It seems that anyone doing this needs to be 100% certain memory will never be used at 100%, or the OS

[ceph-users] Re: Is there a better way to make a samba/nfs gateway?

2020-04-04 Thread Brent Kennedy
I think I may have cheated... I set up the Ceph iSCSI gateway in HA mode, then a FreeNAS server. Connected the FreeNAS server to the iSCSI targets and, poof, I have universal NFS share(s). I stood up a few FreeNAS servers to share various loads. We also use the iSCSI gateways for direct ESXi
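
For anyone wanting to reproduce the iSCSI-gateway side of this, the gwcli workflow from the Ceph docs is roughly the sketch below; the target IQN, gateway names/IPs, pool/image and the FreeNAS initiator IQN are all placeholders, not the poster's actual values:

    # gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    /> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
    /gateways> create ceph-gw-1 10.172.19.21
    /gateways> create ceph-gw-2 10.172.19.22
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=90G
    /> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
    /hosts> create iqn.2005-10.org.freenas.ctl:initiator
    # (CHAP credentials would normally be set here with 'auth username=... password=...')
    /hosts> disk add rbd/disk_1

The FreeNAS box then logs into that target as an ordinary iSCSI initiator, puts ZFS on it, and re-exports datasets over NFS.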

[ceph-users] Re: Recommendation for decent write latency performance from HDDs

2020-04-04 Thread jesper
> On Sat, Apr 4, 2020 at 4:13 PM wrote: >> Offloading the block.db on NVMe / SSD: >> https://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/ >> >> Pro: Easy to deal with - seems heavily supported. >> Con: As far as I can tell - this will only benefit the metadata of the >> osd-

[ceph-users] Re: Recommendation for decent write latency performance from HDDs

2020-04-04 Thread Paul Emmerich
On Sat, Apr 4, 2020 at 4:13 PM wrote: > Offloading the block.db on NVMe / SSD: > https://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/ > > Pro: Easy to deal with - seems heavily supported. > Con: As far as I can tell - this will only benefit the metadata of the > osd- not
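
A minimal ceph-volume sketch of the block.db offload being discussed (device paths are placeholders; sizing of the NVMe partition is deliberately left out):

    # BlueStore OSD with data on the HDD and RocksDB metadata (block.db) on an NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    # The WAL can optionally be split out as well:
    # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2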

[ceph-users] Recommendation for decent write latency performance from HDDs

2020-04-04 Thread jesper
Hi. We have a need for "bulk" storage - but with decent write latencies. Normally we would do this with a DAS with a RAID 5 and a 2GB battery-backed write cache in front - as cheap as possible but still getting the scalability features of Ceph. In our "first" ceph cluster we did the same - just
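
A quick way to put numbers on "decent write latency" for an existing pool is rados bench with small writes; the pool name is a placeholder:

    # 4 KiB writes, 16 concurrent ops, 30 seconds; average and max latency are reported at the end
    rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup
    # Remove the benchmark objects afterwards
    rados -p testpool cleanup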

[ceph-users] Re: Resize Bluestore i.e. shrink?

2020-04-04 Thread Robert Sander
Hi, On 03.04.20 at 21:51, Udo Waechter wrote: > I’m currently building a little ceph-cluster, with embedded devices. My > OSD nodes are constrained in RAM (1GB, but 5 SATA ports, please don’t > kill me ;) ). Anyway, each of those nodes has 2x 256GB SSD and 2x 1TB > HDDs. This will not work.
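
For context: BlueStore's default per-OSD memory target is 4 GiB (osd_memory_target), which is why 1GB nodes are a problem. It can be lowered, at a real performance cost; a sketch (the 1.5 GiB value is only an illustration, and going much below ~2 GiB is generally discouraged):

    # Lower the per-OSD memory target cluster-wide (default 4294967296 = 4 GiB)
    ceph config set osd osd_memory_target 1610612736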

[ceph-users] Re: Fw: Incompatibilities (implicit_tenants & barbican) with Openstack after migrating from Ceph Luminous to Nautilus.

2020-04-04 Thread Scheurer François
Dear Casey, We cherry-picked your backports of the patches for multi-tenant and barbican (and also one for keystone caching) onto rgw 14.2.8: Merge pull request #26095 from bbc/s3secretcache rgw: Added caching for S3 credentials retrieved from keystone (cherry picked from commit
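
For readers hitting the same keystone/barbican issue, these are roughly the rgw options involved on the Nautilus side; the section name, URLs and credentials below are placeholders, not the poster's configuration:

    [client.rgw.gateway1]
    rgw_s3_auth_use_keystone = true
    rgw_keystone_url = https://keystone.example.com:5000
    rgw_keystone_api_version = 3
    rgw_keystone_implicit_tenants = true
    rgw_barbican_url = https://barbican.example.com:9311
    rgw_keystone_barbican_user = rgw-crypt
    rgw_keystone_barbican_password = secret
    rgw_keystone_barbican_project = service
    rgw_keystone_barbican_domain = default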