[ceph-users] Re: Erasure-coded Block Device Image Creation With qemu-img - Help

2021-03-17 Thread Loïc Dachary
Hi Matthew, On 17/03/2021 06:29, duluxoz wrote: > > After doing some research I *think* I need to specify a replicated (as > opposed to erasure-coded) pool for my_pool's metadata (eg > 'my_pool_metadata'), and thus use the command: > > ``` > > rbd create -s 1T --data-pool my_pool
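A minimal sketch of the setup being asked about, assuming the pool names from the original message (my_pool as the erasure-coded data pool, my_pool_metadata as a replicated pool); the PG counts and the default EC profile are placeholders, not a recommendation:

```
# Create a replicated pool for the image metadata and an EC pool for the data,
# then create the image with the EC pool as its data pool.
ceph osd pool create my_pool_metadata 32 32 replicated
ceph osd pool create my_pool 32 32 erasure           # default EC profile
ceph osd pool set my_pool allow_ec_overwrites true   # required for RBD on EC pools
rbd pool init my_pool_metadata
rbd create -s 1T --data-pool my_pool my_pool_metadata/my_data
```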

[ceph-users] Re: Diskless boot for Ceph nodes

2021-03-17 Thread Martin Verges
Hello, that's right, you can test our croit.io software for free or watch how it works in a recording of a webinar https://youtu.be/uMNxOIP1kHI?t=752 From our point of view, booting systems using PXE provides at least the same benefits as containers on a system but with much stronger

[ceph-users] Quick quota question

2021-03-17 Thread Andrew Walker-Brown
Hi all When setting a quota on a pool (or directory in Cephfs), is it the amount of client data written or the client data x number of replicas that counts toward the quota? Cheers A Sent from my iPhone

[ceph-users] Re: Networking Idea/Question

2021-03-17 Thread Janne Johansson
On Wed 17 Mar 2021 at 02:04, Tony Liu wrote: > What's the purpose of "cluster" network, simply increasing total > bandwidth or for some isolations? Not having client traffic (that only occurs on the public network) fight over bandwidth with OSD<->OSD traffic (replication and recovery). Now,
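A rough ceph.conf sketch of the split Janne describes, with made-up subnets: client and MON/MGR traffic stays on the public network, while OSD<->OSD replication, recovery and heartbeat traffic uses the cluster network:

```
# ceph.conf (illustrative subnets only)
[global]
    public_network  = 192.168.10.0/24   # clients, MON, MGR, MDS
    cluster_network = 192.168.20.0/24   # OSD replication and recovery
```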

[ceph-users] Re: Erasure-coded Block Device Image Creation With qemu-img - Help

2021-03-17 Thread Mykola Golub
On Wed, Mar 17, 2021 at 04:29:10PM +1100, duluxoz wrote: > ``` > > rbd create -s 1T --data-pool my_pool my_pool_metadata/my_data > > ``` > > First Question: Is this correct? Yes > > Second Question: What is the qemu-img equivalent command - is it: > > ``` > > qemu-img create -f rbd
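One hedged way to get an equivalent result through qemu-img (which goes via librbd) is to let the client-side configuration pick the data pool; the snippet below is a sketch under that assumption, not the answer confirmed in this thread:

```
# /etc/ceph/ceph.conf on the client running qemu-img (sketch)
# [client]
#     rbd_default_data_pool = my_pool

# then create the image through the rbd: protocol, raw format:
qemu-img create -f raw rbd:my_pool_metadata/my_data 1T
```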

[ceph-users] Re: Quick quota question

2021-03-17 Thread Stefan Kooman
On 3/17/21 11:28 AM, Andrew Walker-Brown wrote: Hi Magnus, Thanks for the reply. Just to be certain (I’m having a slow day today), it’s the amount of data stored by the clients. As an example. a pool using 3 replicas and a quota 3TB : clients would be able to create up to 3TB of data and

[ceph-users] Re: Diskless boot for Ceph nodes

2021-03-17 Thread Clyso GmbH - Ceph Foundation Member
Hello, we implemented this as part of a customer project for Immutable Infrastructure. Regards, Joachim ___ Clyso GmbH - Ceph Foundation Member supp...@clyso.com https://www.clyso.com On 16.03.2021 at 18:37, Stephen Smith6 wrote: Hey folks - thought I'd

[ceph-users] RGW dashboard

2021-03-17 Thread thomas . charles
Hi, this is regarding my fresh installation of Ceph on a Proxmox cluster. I need to use the radosGW and I'm quite stuck on the dashboard configuration. ceph -v ceph version 15.2.9 (7b3df4a1b15c5a048c237733c797a2667f08196e) octopus (stable) ceph -w cluster: id:
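A hedged sketch of wiring RGW credentials into the Octopus dashboard; the user name and temp file paths are placeholders and worth checking against the dashboard documentation for 15.2:

```
# Create a system user for the dashboard and feed its keys to the mgr.
radosgw-admin user create --uid=dashboard --display-name="Dashboard" --system
radosgw-admin user info --uid=dashboard      # note access_key and secret_key
echo -n "<access_key>" > /tmp/rgw-access-key
echo -n "<secret_key>" > /tmp/rgw-secret-key
ceph dashboard set-rgw-api-access-key -i /tmp/rgw-access-key
ceph dashboard set-rgw-api-secret-key -i /tmp/rgw-secret-key
```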

[ceph-users] Re: Quick quota question

2021-03-17 Thread Burkhard Linke
Hi, On 3/17/21 11:28 AM, Andrew Walker-Brown wrote: Hi Magnus, Thanks for the reply. Just to be certain (I’m having a slow day today), it’s the amount of data stored by the clients. As an example. a pool using 3 replicas and a quota 3TB : clients would be able to create up to 3TB of data

[ceph-users] Re: Quick quota question

2021-03-17 Thread Andrew Walker-Brown
Ahh ok, good to know! Sent from Mail for Windows 10 From: Stefan Kooman Sent: 17 March 2021 10:37 To: Andrew Walker-Brown; Magnus HAGDORN;

[ceph-users] Re: Quick quota question

2021-03-17 Thread Andrew Walker-Brown
Hi Magnus, Thanks for the reply. Just to be certain (I’m having a slow day today), it’s the amount of data stored by the clients. As an example: a pool using 3 replicas and a quota of 3TB: clients would be able to create up to 3TB of data and Ceph would use 9TB of raw storage? Cheers, A.

[ceph-users] Re: Quick quota question

2021-03-17 Thread Magnus HAGDORN
On Wed, 2021-03-17 at 08:26 +, Andrew Walker-Brown wrote: > When setting a quota on a pool (or directory in Cephfs), is it the > amount of client data written or the client data x number of replicas > that counts toward the quota? It's the amount of data stored, so independent of replication
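To make that concrete, a small sketch (pool name and mount path are placeholders); quotas count the logical client bytes, not the raw replicated bytes:

```
# Pool quota of 3 TiB (value in bytes: 3 * 2^40)
ceph osd pool set-quota my_pool max_bytes 3298534883328
# CephFS directory quota, set from a client mount via an extended attribute
setfattr -n ceph.quota.max_bytes -v 3298534883328 /mnt/cephfs/some_dir
```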

[ceph-users] Re: Quick quota question

2021-03-17 Thread Andrew Walker-Brown
Thank you! Sent from Mail for Windows 10 From: Burkhard Linke Sent: 17 March 2021 10:32 To: ceph-users@ceph.io Subject: [ceph-users] Re: Quick quota question Hi,

[ceph-users] Re: Diskless boot for Ceph nodes

2021-03-17 Thread Stefan Kooman
On 3/17/21 12:34 AM, Nico Schottelius wrote: On 2021-03-16 22:06, Stefan Kooman wrote: On 3/16/21 6:37 PM, Stephen Smith6 wrote: Hey folks - thought I'd check and see if anyone has ever tried to use ephemeral (tmpfs / ramfs based) boot disks for Ceph nodes? croit.io does that quite

[ceph-users] Re: Networking Idea/Question

2021-03-17 Thread Stefan Kooman
On 3/17/21 7:44 AM, Janne Johansson wrote: On Wed 17 Mar 2021 at 02:04, Tony Liu wrote: What's the purpose of "cluster" network, simply increasing total bandwidth or for some isolations? Not having client traffic (that only occurs on the public network) fight over bandwidth with OSD<->OSD

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Teoman Onay
Hi! AFAIK the focus is on cephadm to replace ceph-ansible. Today it is still missing some important features, but it is just a matter of time. I don't think that the devs will do the work twice, once for cephadm and once for ceph-ansible, but if someone feels the need to keep it working and

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Matthew H
There should not be any performance difference between an un-containerized version and a containerized one. The shift to containers makes sense, as this is the general direction that the industry as a whole is taking. I would suggest giving cephadm a try; it's relatively straightforward and
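For anyone who wants to try it, a minimal cephadm sketch (IPs and hostnames are placeholders):

```
# Bootstrap the first node, then add further hosts and OSDs via the orchestrator.
cephadm bootstrap --mon-ip 192.168.10.11
ceph orch host add node2 192.168.10.12
ceph orch apply osd --all-available-devices
```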

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Teoman Onay
A containerized environment just makes troubleshooting more difficult: getting access to and retrieving details on Ceph processes isn't as straightforward as with a non-containerized infrastructure. I am still not convinced that containerizing everything brings any benefits except the collocation of

[ceph-users] ceph-ansible in Pacific and beyond?

2021-03-17 Thread Matthew Vernon
Hi, I caught up with Sage's talk on what to expect in Pacific ( https://www.youtube.com/watch?v=PVtn53MbxTc ) and there was no mention of ceph-ansible at all. Is it going to continue to be supported? We use it (and uncontainerised packages) for all our clusters, so I'd be a bit alarmed if

[ceph-users] Telemetry ident use?

2021-03-17 Thread Matthew Vernon
Hi, What use is made of the ident data in the telemetry module? It's disabled by default, and the docs don't seem to say what it's used for... Thanks, Matthew -- The Wellcome Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Oliver Freyermuth
On 17.03.21 at 20:09, Stefan Kooman wrote: On 3/17/21 7:51 PM, Martin Verges wrote: I am still not convinced that containerizing everything brings any benefits except the collocation of services. Is there even a benefit? Decoupling from underlying host OS. On a test cluster I'm running

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Fox, Kevin M
There are a lot of benefits to containerization that are hard to get without it. Finer-grained ability to allocate resources to services (this process gets 2 GB of RAM and 1 CPU). Security is better because only minimal software is available within the container, so on service compromise it's harder to
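Purely as an illustration of the resource point (generic podman flags, not a Ceph-specific recipe; the image name is a placeholder):

```
# Cap a container at 2 GB of RAM and 1 CPU
podman run -d --memory=2g --cpus=1 --name demo some/image:latest sleep infinity
```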

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Matthew H
"A containerized environment just makes troubleshooting more difficult, getting access and retrieving details on Ceph processes isn't as straightforward as with a non containerized infrastructure. I am still not convinced that containerizing everything brings any benefits except the collocation

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Marc
> > Finer grained ability to allocate resources to services. (This process > gets 2g of ram and 1 cpu) > > do you really believe this is a benefit? How can it be a benefit to have > crashing or slow OSDs? Sounds cool but doesn't work in most environments > I > ever had my hands on. > We often

[ceph-users] Same data for two buildings

2021-03-17 Thread Denis Morejon Lopez
I have a Ceph cluster with 5 nodes: 3 in one building, and 2 in the other one. I put this information in the CRUSH map, so that Ceph is able to put one copy of objects on the nodes of one building and the other copy on the nodes of the other building. I mean, I set up replicas=2 in order to put

[ceph-users] Re: Email alerts from Ceph

2021-03-17 Thread Marc
> > How have folks implemented getting email or snmp alerts out of Ceph? > Getting things like osd/pool nearly full or osd/daemon failures etc. > The Ceph mgr has a prometheus and an influx exporter module; via prometheus/influx you can then arrange the alerting.
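Two hedged sketches of how that can look; the SMTP values are placeholders and the option names are worth double-checking with `ceph config ls` on your release:

```
# Option 1: the built-in mgr "alerts" module for simple health e-mails
ceph mgr module enable alerts
ceph config set mgr mgr/alerts/smtp_host smtp.example.com
ceph config set mgr mgr/alerts/smtp_destination ops@example.com
ceph config set mgr mgr/alerts/smtp_sender ceph@example.com

# Option 2: the prometheus module (metrics on :9283 by default), with alert
# rules and e-mail delivery handled by Prometheus/Alertmanager
ceph mgr module enable prometheus
```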

[ceph-users] Email alerts from Ceph

2021-03-17 Thread Andrew Walker-Brown
Hi all, How have folks implemented getting email or snmp alerts out of Ceph? Getting things like osd/pool nearly full or osd/daemon failures etc. Kind regards Andrew Sent from my iPhone

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Martin Verges
> I am still not convinced that containerizing everything brings any benefits except the collocation of services. Is there even a benefit? We at croit collocate all our services, from Ceph itself (MON, MGR, MDS, OSD, ...) as well as iSCSI, SMB, NFS, ..., on the same host. No problem with that, not a

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Martin Verges
Hello, > Finer grained ability to allocate resources to services. (This process gets 2g of ram and 1 cpu) do you really believe this is a benefit? How can it be a benefit to have crashing or slow OSDs? Sounds cool, but it doesn't work in most environments I have ever had my hands on. We often encounter

[ceph-users] Re: Same data for two buildings

2021-03-17 Thread Andrew Walker-Brown
Denis, I’m doing something similar to you with 5 nodes, 4 with OSDs and a 5th just as a mon. I have pools set with 4 replicas, a minimum of 2, and the CRUSH map configured so 2 replicas go to each DC and then down to host level. The 5th mon is in a third location, but could be a VM with higher latency somewhere
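A hedged sketch of a CRUSH rule matching that "4 replicas, 2 per DC" layout, assuming two datacenter buckets already exist in the CRUSH tree (decompiled-crushmap syntax; bucket names and the rule id are placeholders):

```
rule replicated_two_dc {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 2 type datacenter      # pick both DCs
    step chooseleaf firstn 2 type host        # 2 hosts (and OSDs) in each DC
    step emit
}
```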

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Stefan Kooman
On 3/17/21 7:51 PM, Martin Verges wrote: I am still not convinced that containerizing everything brings any benefits except the collocation of services. Is there even a benefit? Decoupling from underlying host OS. On a test cluster I'm running Ubuntu Focal on the host (and a bunch of

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Milan Kupcevic
On 3/17/21 1:38 PM, Teoman Onay wrote: > A containerized environment just makes troubleshooting more difficult, > getting access and retrieving details on Ceph processes isn't as > straightforward as with a non containerized infrastructure. I am still not > convinced that containerizing everything

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Milan Kupcevic
On 3/17/21 1:26 PM, Matthew H wrote: > There should not be any performance difference between an un-containerized > version and a containerized one. > That is right. Let us choose which one fits our setup better. Milan -- Milan Kupcevic Senior Cyberinfrastructure Engineer at Project NESE

[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Alexander E. Patrakov
I agree with this sentiment. Please do not make a containerized and orchestrated deployment mandatory until all of the documentation is rewritten to take this deployment scenario into account. Also, in the past year, I have personally tested three Ceph training courses from various vendors. They