Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Denny Fuchs
hi,


> On 19.06.2018 at 20:40, Steffen Winther Sørensen wrote:
> 
> 
> 
>> On 19 Jun 2018 at 16:50, Webert de Souza Lima wrote:
>> 
>> Keep in mind that the MDS server is CPU-bound, so during heavy workloads it 
>> will eat up CPU; the OSD daemons can therefore affect or be affected by the 
>> MDS daemon.
>> But it does work well. We've been running a few clusters with MON, MDS and 
>> OSDs sharing the same hosts for a couple of years now.
> We’re also running MDS on the OSD hosts, but again we only run VM backups on 
> CephFS.

We need it to replace our Synology NFS cluster. We also want to share the files 
between the two datacenters (dark fiber with 10 Gbit), but keep both Ceph clusters 
separate. So I think we'll put three MDS daemons in VMs on our Proxmox VM cluster (5 
nodes) with a bit of RAM and check whether it is reliable.
It's much more complicated to provide shared storage for "legacy" systems than 
we thought :-) Also, RadosGW with NFS on Debian Stretch is not the first choice, 
as NFS does not have the features we need (or my Google results were too old).
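
For reference, NFS on top of RadosGW usually means nfs-ganesha with the RGW FSAL; 
a rough sketch of such an export (user id and keys are placeholders, not from our 
setup) looks something like this:

  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/rgw";
      Access_Type = RW;
      FSAL {
          Name = RGW;
          User_Id = "nfs-rgw-user";          # placeholder RGW user
          Access_Key_Id = "ACCESS_KEY";      # placeholder
          Secret_Access_Key = "SECRET_KEY";  # placeholder
      }
  }

  RGW {
      ceph_conf = "/etc/ceph/ceph.conf";
  }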

cu denny


Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Steffen Winther Sørensen


> On 19 Jun 2018 at 16:50, Webert de Souza Lima wrote:
> 
> Keep in mind that the MDS server is CPU-bound, so during heavy workloads it 
> will eat up CPU; the OSD daemons can therefore affect or be affected by the 
> MDS daemon.
> But it does work well. We've been running a few clusters with MON, MDS and 
> OSDs sharing the same hosts for a couple of years now.
We’re also running MDS on the OSD hosts, but again we only run VM backups on 
CephFS.

Another alternative could be a file server VM, exporting its RBD-based devices 
e.g. via NFS/Samba, whatever is preferred/required by the clients. We have an NFS 
VM in one Proxmox cluster set up like this.
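
A minimal sketch of that pattern (the pool/image names and the client subnet are 
only examples, and it assumes nfs-kernel-server is installed inside the VM):

  # inside the file server VM: create and map an RBD image, then export it
  rbd create rbd/fileshare --size 500G
  rbd map rbd/fileshare                 # shows up as e.g. /dev/rbd0
  mkfs.xfs /dev/rbd0
  mkdir -p /export/fileshare
  mount /dev/rbd0 /export/fileshare

  # export over NFS and reload the export table
  echo '/export/fileshare 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
  exportfs -ra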

/Steffen


Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Webert de Souza Lima
Keep in mind that the MDS server is CPU-bound, so during heavy workloads it
will eat up CPU; the OSD daemons can therefore affect or be affected by the
MDS daemon.
But it does work well. We've been running a few clusters with MON, MDS and
OSDs sharing the same hosts for a couple of years now.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ


On Tue, Jun 19, 2018 at 11:03 AM Paul Emmerich wrote:

> Just co-locate them with your OSDs. You can control how much RAM the
> MDSs use with the "mds cache memory limit" option (default 1 GB).
> Note that the cache should be large enough to keep the active working
> set in the MDS cache, but 1 million files is not really a lot.
> As a rule of thumb: ~1 GB of MDS cache per ~100k files.
>
> 64 GB of RAM for 12 OSDs and an MDS is enough in most cases.
>
> Paul
>
> 2018-06-19 15:34 GMT+02:00 Denny Fuchs :
>
>> Hi,
>>
>> On 19.06.2018 15:14, Stefan Kooman wrote:
>>
>>> Storage doesn't matter for MDS, as they won't use it to store Ceph data
>>> (but instead use the (meta)data pool to store metadata).
>>> I would not colocate the MDS daemons with the OSDs, but instead create a
>>> couple of VMs (active / standby) and give them as much RAM as you
>>> possibly can.
>>>
>>
>> Thanks a lot. I think we would start with around 8 GB and see what
>> happens.
>>
>> cu denny
>>
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90


Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Paul Emmerich
Just co-locate them with your OSDs. You can control how much RAM the
MDSs use with the "mds cache memory limit" option (default 1 GB).
Note that the cache should be large enough to keep the active working
set in the MDS cache, but 1 million files is not really a lot.
As a rule of thumb: ~1 GB of MDS cache per ~100k files.

64 GB of RAM for 12 OSDs and an MDS is enough in most cases.
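
For illustration, setting that limit looks roughly like this (the 4 GB value 
below is just an example, not a recommendation):

  # in ceph.conf on the MDS hosts (value in bytes, here ~4 GB):
  [mds]
  mds cache memory limit = 4294967296

  # or injected at runtime on the running MDS daemons:
  ceph tell mds.* injectargs '--mds_cache_memory_limit 4294967296'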

Paul

2018-06-19 15:34 GMT+02:00 Denny Fuchs :

> Hi,
>
> On 19.06.2018 15:14, Stefan Kooman wrote:
>
>> Storage doesn't matter for MDS, as they won't use it to store Ceph data
>> (but instead use the (meta)data pool to store metadata).
>> I would not colocate the MDS daemons with the OSDs, but instead create a
>> couple of VMs (active / standby) and give them as much RAM as you
>> possibly can.
>>
>
> Thanks a lot. I think we would start with around 8 GB and see what
> happens.
>
> cu denny
>
>



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Denny Fuchs

Hi,

On 19.06.2018 15:14, Stefan Kooman wrote:


> Storage doesn't matter for MDS, as they won't use it to store Ceph data
> (but instead use the (meta)data pool to store metadata).
> I would not colocate the MDS daemons with the OSDs, but instead create a
> couple of VMs (active / standby) and give them as much RAM as you
> possibly can.


Thanks a lot. I think we would start with around 8 GB and see what 
happens.


cu denny


Re: [ceph-users] Minimal MDS for CephFS on OSD hosts

2018-06-19 Thread Stefan Kooman
Quoting Denny Fuchs (linuxm...@4lin.net):
> 
> We also have a 2nd cluster which holds the VMs, also with 128 GB RAM and 2 x
> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, but with only system disks (ZFS
> RAID1).

Storage doesn't matter for MDS, as they won't use it to store Ceph data
(but instead use the (meta)data pool to store metadata).
I would not colocate the MDS daemons with the OSDs, but instead create a
couple of VMs (active / standby) and give them as much RAM as you
possibly can.
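
For what it's worth, with two such VMs the second MDS simply becomes a standby; 
to get standby-replay on the releases current at the time (Luminous/Mimic), a 
ceph.conf stanza roughly like this would do (the daemon name is only an example):

  [mds.mds-vm2]
  mds standby replay = true
  mds standby for rank = 0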

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/    Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl