[ceph-users] Re: looking for telegram group in English or Chinese

2020-05-26 Thread Zhenshi Zhou
Awesome, thanks! Martin Verges wrote on Wed, May 27, 2020 at 2:04 PM: > Hello, > > as I find it a good idea and couldn't find another, I just created > https://t.me/ceph_users. > Please feel free to join and let's see if we can get this channel started ;) > > -- > Martin Verges > Managing director > > Mobile: +49 174

[ceph-users] Re: looking for telegram group in English or Chinese

2020-05-26 Thread Martin Verges
Hello, as I find it a good idea and couldn't find another, I just created https://t.me/ceph_users. Please feel free to join and let's see if we can get this channel started ;) -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat: https://t.me/MartinVerges croi

[ceph-users] Re: looking for telegram group in English or Chinese

2020-05-26 Thread Konstantin Shalygin
On 5/26/20 1:13 PM, Zhenshi Zhou wrote: Is there any telegram group for communicating with ceph users? AFAIK there is only a Russian (CIS) group [1], but feel free to join and write in English! [1] https://t.me/ceph_ru k

[ceph-users] Re: Multisite RADOS Gateway replication factor in zonegroup

2020-05-26 Thread Konstantin Shalygin
On 5/25/20 9:50 PM, alexander.vysoc...@megafon.ru wrote: I didn't find any information about the replication factor in the zonegroup. Assume I have three Ceph clusters with RADOS Gateway in one zonegroup, each with replica size 3. How many replicas of an object will I get in total? Is it
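A worked sum, assuming each zone in the zonegroup keeps a fully synced copy of every object: three zones, each writing to a replicated data pool of size 3, yields 3 zones × 3 replicas = 9 copies of each object in total.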

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-26 Thread Paul Emmerich
Don't optimize stuff without benchmarking *before and after*, and don't apply random tuning tips from the Internet without benchmarking them. My experience with jumbo frames: a 3% performance gain, on an NVMe-only setup with a 100 Gbit/s network. Paul -- Paul Emmerich Looking for help with your Ceph cluste
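A minimal before/after check along these lines (peer host and pool name are placeholders) first verifies that 9000-byte frames actually traverse the path, then runs the identical workload under each MTU setting:

    # 8972 = 9000 bytes minus 28 bytes of IP and ICMP headers; -M do forbids fragmentation
    ping -M do -s 8972 <peer-host>

    # run the same benchmark before and after the MTU change; --no-cleanup keeps objects for the read test
    rados bench -p <testpool> 30 write --no-cleanup
    rados bench -p <testpool> 30 rand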

[ceph-users] Re: Prometheus Python Errors

2020-05-26 Thread Ernesto Puerta
This has been recently fixed in master (I just submitted backporting PRs for octopus and nautilus). BTW, the fix is pretty trivial

[ceph-users] Re: [External Email] Re: Ceph Nautilus not working after setting MTU 9000

2020-05-26 Thread Marc Roos
Look what I have found!!! :) https://ceph.com/geen-categorie/ceph-loves-jumbo-frames/ -Original Message- From: Anthony D'Atri [mailto:anthony.da...@gmail.com] Sent: Monday, May 25, 2020 22:12 To: Marc Roos Cc: kdhall; martin.verges; sstkadu; amudhan83; ceph-users; doustar Subject: Re:

[ceph-users] Re: move bluestore wal/db

2020-05-26 Thread Eneko Lacunza
Hi, Yes, it can be done (shutting down the OSD is required, but no rebuild); we did it to resize the wal partition to a bigger one. A simple Google search will help; I can paste the procedure we followed, but it's in Spanish :( Cheers On 5/26/20 at 17:20, Frank R wrote: Is there a safe way
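For the archives, a sketch of the usual procedure, assuming a release recent enough to ship ceph-bluestore-tool's bluefs-bdev-migrate; the OSD id and target device are placeholders:

    systemctl stop ceph-osd@<id>
    ceph-bluestore-tool bluefs-bdev-migrate \
        --path /var/lib/ceph/osd/ceph-<id> \
        --devs-source /var/lib/ceph/osd/ceph-<id>/block.db \
        --dev-target /dev/<new-db-device>
    systemctl start ceph-osd@<id>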

[ceph-users] move bluestore wal/db

2020-05-26 Thread Frank R
Is there a safe way to move the bluestore wal and db to a new device that doesn't involve rebuilding the entire OSD?

[ceph-users] Re: OSDs taking too much memory, for buffer_anon

2020-05-26 Thread Mark Nelson
Hi Harald, Yeah, I suspect your issue is definitely related to what Adam has been investigating. FWIW, we are talking about re-introducing a periodic trim in Adam's PR here: https://github.com/ceph/ceph/pull/35171 That should help on the memory growth side, but if we still have objects u
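To confirm that buffer_anon is the pool that is growing on an affected OSD, the mempool accounting can be dumped through the admin socket (OSD id is a placeholder):

    ceph daemon osd.<id> dump_mempools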

[ceph-users] Ceph client on rhel6?

2020-05-26 Thread Simon Sutter
Hello again, I have a new question: We want to upgrade a server with an OS based on RHEL 6. The Ceph cluster is currently on Octopus. How can I install the client packages to mount CephFS and take a backup of the server? Is it even possible? Are the client packages from hammer compatible with the oct
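If the el6 kernel client can talk to the cluster at all, a plain kernel mount would look roughly like this (monitor host, user name and secret file are placeholders; an Octopus cluster may require feature bits the old kernel lacks):

    mount -t ceph <mon-host>:6789:/ /mnt/cephfs \
        -o name=backup,secretfile=/etc/ceph/backup.secret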

[ceph-users] Re: mds container dies during deployment

2020-05-26 Thread Simon Sutter
Hello, I didn't read the right doc: https://docs.ceph.com/docs/master/cephadm/install/#deploy-mdss There it says how to do it right. The command I was using was just for adding an MDS daemon when you already have one. Hope it helps others. Cheers, Simon From: Simon
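The command from that page takes roughly this shape (the filesystem name and daemon count here are examples):

    ceph orch apply mds cephfs --placement=3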

[ceph-users] dealing with spillovers

2020-05-26 Thread thoralf schulze
hi there, trying to get my head around rocksdb spillovers and how to deal with them … in particular, i have one osd which does not have any pools associated (as per ceph pg ls-by-osd $osd), yet it does show up in ceph health detail as: osd.$osd spilled over 2.9 MiB metadata from 'db' devic
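Two commands that may help when chasing spillover (OSD id is a placeholder): the bluefs perf counters show how much metadata sits on each device, and an online compaction can sometimes pull spilled data back onto the db device:

    ceph daemon osd.<id> perf dump bluefs
    ceph tell osd.<id> compact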

[ceph-users] Re: Nautilus: (Minority of) OSDs with huge buffer_anon usage - triggering OOMkiller in worst cases.

2020-05-26 Thread aoanla
Hi Mark, thanks for your efforts on this already. I had to wait for my account on tracker.ceph to be approved before I could submit the bug - which is here: https://tracker.ceph.com/issues/45706 Sam

[ceph-users] Performance issues in newly deployed Ceph cluster

2020-05-26 Thread Loschwitz,Martin Gerhard
Folks, I am running into a very strange issue with a brand new Ceph cluster during initial testing. The cluster consists of 12 nodes, 4 of them with SSDs only, the other eight with a mixture of SSDs and HDDs. The latter nodes are configured so that three or four HDDs use one SSD for their blockdb.