In my experiments with Ceph so far, setting up a new cluster goes fairly 
well... so long as I only use a single network.
But when I try to use separate networks, things stop functioning in various 
ways. So I thought I'd ask for pointers to any multi-network setup guides.

My goal:

* have a 3+ node Ceph cluster, where each node has only local SSD storage
* have an RBD image mapped on each node, carrying a regular (non-CephFS) 
filesystem, shared out via NFS
* have each node serve NFS on one interface, but communicate with the cluster 
on a separate interface (a rough sketch of what I mean follows this list)
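
Roughly, what I'm aiming for on each node is something like the following; 
the pool and image names, mount point, and export subnet are placeholders, 
not real config:

    # create and map an RBD image backed by the cluster
    rbd create nfspool/node1-img --size 500G
    rbd map nfspool/node1-img        # returns a device, e.g. /dev/rbd0

    # put a regular (non-CephFS) filesystem on it and mount it
    mkfs.xfs /dev/rbd0
    mkdir -p /export/node1
    mount /dev/rbd0 /export/node1

    # share it out via NFS on the "front" network
    echo '/export/node1 10.1.1.0/24(rw,no_root_squash)' >> /etc/exports
    exportfs -ra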

In the old ways, it was theoretically straightforward, in that you could 
specify a "public" network vs. a "cluster" network in ceph.conf.
But in the new cephadm-driven world, I haven't found the magic that works.
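
For reference, the old-style ceph.conf stanza I have in mind is in the 
comment below, and my (possibly wrong) understanding is that under cephadm 
the equivalent is to push the same options into the config database. The 
CIDRs here are invented:

    # classic ceph.conf:
    #   [global]
    #   public_network  = 10.1.1.0/24
    #   cluster_network = 10.2.2.0/24

    # cephadm-era equivalent, as far as I can tell:
    ceph config set global public_network 10.1.1.0/24
    ceph config set global cluster_network 10.2.2.0/24
    # (presumably the daemons then need a restart to rebind)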

For example, in my current iteration, I have successfully added all three 
hosts, and have 3 "mon"s...
but "ceph orch device ls --refresh"
only shows the devices from the node I'm running it on.
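
In case it matters, this is roughly how I added the hosts; hostnames and 
addresses are placeholders:

    # confirm which address cephadm recorded for each host
    ceph orch host ls

    # hosts were added with an explicit address, rather than relying on
    # name resolution:
    ceph orch host add node2 10.1.1.2
    ceph orch host add node3 10.1.1.3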



I'm still trying to cycle through different bootstrap options, and I'm 
experimenting with overriding naming in /etc/hosts, for things like which IP 
addresses get mapped to the real hostname, vs. which get given 
"hostname-datainterface" style names.

For example:
On the one hand, I'm wondering if I need to map ALL of a host's IP addresses 
to the same hostname.
But on the other hand, my sysadmin instincts whisper to me that that sounds 
like a terrible idea.
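
Concretely, the two /etc/hosts schemes I'm weighing look something like 
this (names and addresses are invented for illustration):

    # scheme A: every interface answers to the real hostname
    10.1.1.1   node1
    10.2.2.1   node1

    # scheme B: real hostname on the "front" IP only, with a suffixed
    # name for the data/cluster interface
    10.1.1.1   node1
    10.2.2.1   node1-data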

So, tips from people who have done multi-homing under Octopus would be 
appreciated.

Note that my initial proof-of-concept cluster is just 3 physical nodes, so 
everything needs to live on them.




--
Philip Brown | Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310 | Fax 714.918.1325 
pbr...@medata.com | www.medata.com