Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)

2017-11-03 Thread Bassam Tabbara
(sorry for the late response, just catching up on ceph-users)

> Probably the main difference is that ceph-helm aims to run Ceph as part of 
> the container infrastructure.  The containers are privileged so they can 
> interact with hardware where needed (e.g., lvm for dm-crypt) and the 
> cluster runs on the host network.  We use kubernetes for some orchestration: 
> kube is a bit of a headache for mons and osds but will be very helpful for 
> scheduling everything else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha, 
> samba, rbd-mirror, etc.
> 
> Rook, as I understand it at least (the rook folks on the list can speak up 
> here), aims to run Ceph more as a tenant of kubernetes.  The cluster runs 
> in the container network space, and the aim is to be able to deploy ceph 
> more like an unprivileged application on e.g., a public cloud providing 
> kubernetes as the cloud api.

Yes, Rook’s goal is to run wherever Kubernetes runs without making changes at 
the host level. Eventually we plan to remove the need to run some of the 
containers as privileged, and to work automatically across different kernel 
versions and heterogeneous environments. It's fair to think of Rook as an 
application on Kubernetes. As a result you could run it on AWS, Google, 
bare metal, or wherever.

> The other difference is around rook-operator, which is the thing that lets 
> you declare what you want (ceph clusters, pools, etc) via kubectl and goes 
> off and creates the cluster(s) and tells it/them what to do.  It makes the 
> storage look like it is tightly integrated with and part of kubernetes but 
> means that kubectl becomes the interface for ceph cluster management.  

Rook extends Kubernetes to understand storage concepts like Pool, Object Store, 
and File System. Our goal is for storage to be integrated deeply into Kubernetes. 
That said, you can easily launch the Rook toolbox and use the ceph tools at any 
point. I don’t think the goal is for Rook to replace the ceph tools, but rather 
to offer a Kubernetes-native alternative to them.
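As a rough illustration of what that kubectl-driven model looks like, a pool might be declared as a resource like the one below. This is a hypothetical sketch: the apiVersion, kind, and field names are assumptions modeled on Rook's early custom resource design, not a confirmed schema.

```yaml
# Hypothetical Rook pool declared as a Kubernetes custom resource;
# all names and fields here are illustrative assumptions.
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicated-pool
  namespace: rook
spec:
  replication:
    size: 3   # keep three replicas of each object
```

Applying such a manifest with `kubectl create -f pool.yaml` would have the operator create the corresponding Ceph pool, while the usual ceph tools in the toolbox would still see and manage it.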

> Some of that seems useful to me (still developing opinions here!) and 
> perhaps isn't so different than the declarations in your chart's 
> values.yaml but I'm unsure about the wisdom of going too far down the road 
> of administering ceph via yaml.
> 
> Anyway, I'm still pretty new to kubernetes-land and very interested in 
> hearing what people are interested in or looking for here!

It would be great to find ways to bring these two projects closer together. 

Bassam

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Bassam Tabbara
Thanks Sage.

> At CDM yesterday we talked about removing the ability to name your ceph 
> clusters. 


Just to be clear, it would still be possible to run multiple ceph clusters on 
the same nodes, right?




Re: [ceph-users] Announcing: Embedded Ceph and Rook

2016-12-02 Thread Bassam Tabbara
Hi Dan,

> Is there anyplace you explain in more detail about why this design is
> attractive?  I'm having a hard time imagining why applications would
> want to try to embed the cluster.

Take a look at https://github.com/rook/rook for a brief explanation of how we 
use embedded Ceph.

Thanks!
Bassam



[ceph-users] Announcing: Embedded Ceph and Rook

2016-11-30 Thread Bassam Tabbara
Hello Cephers,

I wanted to let you know about a new library that is now available in master. 
It's called “libcephd” and it enables the embedding of Ceph daemons like MON 
and OSD (and soon MDS and RGW) into other applications. Using libcephd it's 
possible to create new applications that closely integrate Ceph storage without 
bringing in the full distribution of Ceph and its dependencies. For example, 
you can build a storage application that runs the Ceph daemons natively on 
minimal distributions like CoreOS, or alongside a hypervisor for 
hyperconverged scenarios. The goal is to enable a broader ecosystem of 
solutions built around Ceph and reduce some of the friction for adopting Ceph 
today. See http://pad.ceph.com/p/embedded-ceph for the blueprint.

We (Quantum) are using embedded Ceph in a new open-source project called Rook 
(https://github.com/rook/rook and https://rook.io). Rook integrates embedded 
Ceph in a deployment that is targeting cloud-native applications.

Please feel free to respond with feedback. Also if you’re in the Seattle area 
next week stop by for a meetup on embedded Ceph and its use in Rook 
https://www.meetup.com/Pacific-Northwest-Ceph-Meetup/events/235632106/

Thanks!
Bassam
