P.S. The Vagrantfile I'm using:
https://gist.github.com/markmaas/5b39e3356a240a9b0fc2063d6d16ffab
That might help someone as well ;-)
On Sun, Mar 15, 2020 at 4:01 PM Mark M wrote:
>
> Hi List,
>
> Trying to learn Ceph, so I'm using a Vagrant setup of six nodes with
> Docker and LVM on them.
>
On 3/14/20 3:08 AM, Seth Galitzer wrote:
Thanks to all who have offered advice on this. I have been looking at
using vfs_ceph in Samba, but I'm unsure how to get it on CentOS 7. As I
understand it, it's optional at compile time. When searching for a
package for it, I see one for glusterfs (samba-vfs
On 3/13/20 8:49 PM, Marc Roos wrote:
Can you also create snapshots via the vfs_ceph solution?
Yes! Since Samba 4.11 this is supported via the vfs_ceph_snapshots module.
k
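For anyone looking for a concrete starting point, here is a minimal, hedged smb.conf sketch combining vfs_ceph with the snapshot module; the share name, path and cephx user are assumptions rather than something from this thread, and it requires an smbd built with CephFS support (the compile-time option discussed above):
```
[cephshare]
    # vfs_ceph talks to CephFS directly; ceph_snapshots exposes CephFS
    # snapshots as "Previous Versions" (shadow copies) to Windows clients.
    vfs objects = ceph ceph_snapshots
    path = /
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    # required because the path is not a local kernel filesystem
    kernel share modes = no
```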
Hi,
We have a production cluster that just suffered an issue with multiple
of our NVMe OSDs. More than 12 of them, across 4 nodes, died with
'ENOSPC from bluestore, misconfigured cluster' errors saying they no
longer had space. These are all simple single-device bluestore OSDs.
ceph versi
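As a hedged first step for an issue like this (these are standard Ceph CLI commands, not specific advice for this cluster), it helps to confirm how full the devices actually are and whether any nearfull/full thresholds were hit:
```
ceph health detail      # shows nearfull/full OSD warnings, if any
ceph osd df tree        # per-OSD utilisation, including omap/metadata usage
ceph df detail          # per-pool usage versus raw capacity
```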
Hi everyone:
There are two types of QoS in Ceph (one based on the token bucket algorithm,
the other based on mClock).
Which one can I use in a Nautilus production environment? Thank you
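For reference, a hedged sketch of how the two show up in practice on Nautilus; the pool, image and values below are placeholders. The token-bucket QoS is applied per RBD image or pool via librbd settings, while mClock is selected through the OSD op queue and was still considered experimental in Nautilus:
```
# Token-bucket QoS on an RBD image (librbd, Nautilus and later):
rbd config image set rbd/vm-disk-1 rbd_qos_iops_limit 500

# mClock-based queueing on the OSDs (needs an OSD restart; experimental in Nautilus):
ceph config set osd osd_op_queue mclock_client
```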
I have switched an OSD node to this 5.5 kernel-ml and still had a high
load from this kworker. I moved the backup rsync script to a VM, and the
high load persists there as well.
This already happens with a single rsync session. Before, on Luminous, I did
not have such issues at all with 4 concurrent
Hi List,
Trying to learn Ceph, so I'm using a Vagrant setup of six nodes with
Docker and LVM on them.
But as soon as I reach this step
https://docs.ceph.com/docs/master/cephadm/#bootstrap-a-new-cluster
I get stuck with this error:
```
INFO:cephadm:Mgr epoch 5 not available, waiting (6/10)...
IN
```
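In case it helps others hitting the same wait loop: that message appears while cephadm waits for the mgr daemon to come back up after enabling modules. A hedged way to see what the mgr is actually doing (daemon names vary per host, so take them from `cephadm ls` first):
```
sudo cephadm ls                       # list the daemons cephadm deployed, note the mgr name
sudo podman ps -a                     # or 'docker ps -a': is the mgr container running at all?
sudo cephadm logs --name mgr.<name>   # journal of that mgr daemon, shows the real error
```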
What if I add another 16GB of RAM, for a planned capacity of not more than
150TB?
On Sun 15 Mar, 2020, 7:26 PM Martin Verges, wrote:
> This is too little memory. We have already seen MDS daemons with RAM
> requirements of well over 50 GB.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 933569
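For context on the RAM question: MDS memory use is driven mostly by its cache, and that cache is tunable, so a smaller host can work for modest setups if the cache is capped accordingly. The 8 GiB figure below is only an illustrative placeholder, not a recommendation; the MDS process typically uses noticeably more RSS than the cache limit itself:
```
ceph config set mds mds_cache_memory_limit 8589934592   # cap the cache at 8 GiB (example value)
ceph daemon mds.<name> cache status                      # check current cache usage on the MDS host
```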
Hi,
Vitaliy - Sure, I can use those absolute values (30GB for DB, 2GB for WAL)
you suggested.
Currently - Proxmox is defaulting to a 178.85 GB partition for the DB/WAL.
(It seems to put the DB and WAL on the same partition).
Using your calculations, with 6 x OSDs per host - that means a total of
This is too little memory. We have already seen MDS daemons with RAM
requirements of well over 50 GB.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE31063849
Thank you, All for your suggestions and ideas.
What is your view on running MON, MGR, MDS and a CephFS client or the Samba ceph
vfs on a single machine (10-core Xeon CPU with 16GB RAM and an SSD disk)?
On Sun, Mar 15, 2020 at 3:58 PM Dr. Marco Savoca
wrote:
> Hi Jesper,
>
>
>
> can you state your suggest
WAL is 1G (you can allocate 2 to be sure), DB should always be 30G. And this
doesn't depend on the size of the data partition :-)
On March 14, 2020, 22:50:37 GMT+03:00, Victor Hooi wrote:
>Hi,
>
>I'm building a 4-node Proxmox cluster, with Ceph for the VM disk
>storage.
>
>On each node, I have:
>
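To make the earlier per-host arithmetic explicit: 6 OSDs x (30 GB DB + 2 GB WAL) is roughly 192 GB of fast device per host, i.e. about 32 GB per OSD instead of the ~179 GB per OSD that Proxmox defaults to. Proxmox has its own tooling on top, but underneath it is plain ceph-volume; a hedged sketch with placeholder device names, assuming the DB and WAL partitions were carved out ahead of time:
```
# /dev/sdb is the data device; the NVMe partitions are ~30 GB (DB) and ~2 GB (WAL)
ceph-volume lvm create --bluestore --data /dev/sdb \
    --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
```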
Yes, this is a regression issue with the new version:
https://tracker.ceph.com/issues/44614
On Thu, Mar 12, 2020 at 8:44 PM 曹 海旺 wrote:
> I think it is a bug. I reinstalled the cluster. The response to creating a
> topic is still 405 MethodNotAllowed; anyone know why? Thank you very much!
>
> March 2020
Hi Jesper,
Can you state your suggestion more precisely? I have a similar setup and I'm
also interested.
If I understand you right, you suggest creating an RBD image for the data and
attaching it to a VM with a Samba server installed.
But what would be the "best" way to connect? Kernel module mapping o
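For reference, the two usual ways to attach such an image to the Samba VM, as a hedged sketch (pool, image and user names are placeholders): map it with the kernel RBD client inside the VM, or hand the image to the VM as a virtio disk via librbd in QEMU/libvirt and avoid the in-guest mapping entirely.
```
# Option 1: kernel module mapping inside the VM
rbd map rbd_pool/samba_data --id samba    # exposes the image as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /srv/samba
```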
Hello, I have updated my cluster from Luminous to Nautilus, and the
cluster is working, but I am seeing weird behavior in my monitors and
managers.
The monitors are using a huge amount of memory and becoming very slow. The
CPU usage is also much higher than it used to be.
The manager keeps c
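A hedged first step for memory growth like this (the daemon name is a placeholder): ask the mon for allocator heap statistics and a heap release, to distinguish real usage from the allocator holding on to freed memory.
```
ceph tell mon.ceph01 heap stats
ceph tell mon.ceph01 heap release
```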