[ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-27 Thread Richard Hesse
Has anyone run their Ceph OSD cluster network on IPv6 using SLAAC? I know that ceph supports IPv6, but I'm not sure how it would deal with SLAAC address rotation, permanent vs. temporary outgoing addresses, etc. It would be very nice for me, as I wouldn't have to run any kind of DHCP server or use static
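For context, an IPv6-only Ceph deployment is normally configured by binding the daemons to v6 and listing the v6 prefixes as the public/cluster networks. A minimal sketch (the 2001:db8:: prefixes are documentation placeholders, not taken from the thread):

    # /etc/ceph/ceph.conf -- sketch only; prefixes are placeholders
    [global]
    ms bind ipv6 = true                    # bind mons/OSDs/MDS to IPv6 sockets
    public network  = 2001:db8:0:1::/64    # client-facing prefix
    cluster network = 2001:db8:0:2::/64    # OSD replication prefix

With SLAAC privacy extensions in play, the usual precaution is to make the daemons bind to the stable address, e.g. by disabling temporary addresses on the storage interfaces (net.ipv6.conf.<iface>.use_tempaddr = 0) or pinning a fixed interface token.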

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-27 Thread Richard Hesse
Nix the second question; as I understand it, ceph doesn't work in mixed IPv6 and legacy IPv4 environments. Still, would like to hear from people running it in SLAAC environments. On Mon, Mar 27, 2017 at 12:49 PM, Richard Hesse wrote: > Has anyone run their Ceph OSD cluster network

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-03-30 Thread Richard Hesse
ol to handle that? We're using classless static routes via DHCP on v4 to solve this problem, and I'm curious what the v6 SLAAC equivalent was. Thanks, -richard On Tue, Mar 28, 2017 at 8:30 AM, Wido den Hollander wrote: > > > Op 27 maart 2017 om 21:49 schreef Rich
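The nearest SLAAC analogue to DHCPv4 classless static routes (option 121) is the RFC 4191 Route Information Option carried in router advertisements. A sketch of what that looks like in radvd (interface name and prefixes are placeholders):

    # /etc/radvd.conf -- sketch; prefixes are placeholders
    interface eth0
    {
        AdvSendAdvert on;
        prefix 2001:db8:0:1::/64 {      # on-link prefix used for SLAAC
            AdvAutonomous on;
        };
        route 2001:db8:ff00::/56 {      # extra route, roughly the option-121 equivalent
            AdvRoutePreference high;
            AdvRouteLifetime 1800;
        };
    };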

[ceph-users] Slow CephFS writes after Jewel upgrade from Infernalis

2017-03-31 Thread Richard Hesse
Hi, we recently upgraded one of our Ceph clusters from Infernalis to Jewel. The upgrade process went smoothly: Upgraded OSD's, restarted them in batches, waited for health OK, updated Mon and MDS, restarted, waited for health OK, etc. We then set the require_jewel_osds flag and upgraded our CephFS clie
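For anyone reproducing that sequence, the flag-setting step is roughly the following (a sketch; sortbitwise isn't mentioned in the message but is the usual companion flag when finishing a Jewel upgrade):

    # per host, in batches, waiting for HEALTH_OK in between
    systemctl restart ceph-osd.target   # or the init system's equivalent
    ceph -s
    # once every OSD is running Jewel
    ceph osd set sortbitwise
    ceph osd set require_jewel_osds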

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-17 Thread Richard Hesse
A couple of questions: 1) What is your rack topology? Are all ceph nodes in the same rack communicating with the same top of rack switch? 2) Why did you choose to run the ceph nodes on loopback interfaces as opposed to the /24 for the "public" interface? 3) Are you planning on using RGW at all?

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-17 Thread Richard Hesse
I'm just spitballing here, but what if you set osd crush update on start = false ? Ansible would activate the OSD's but not place them in any particular rack, working around the ceph.conf problem you mentioned. Then you could place them in your CRUSH map by hand. I know you wanted to avoid editing
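Concretely, that approach would look something like this (bucket names, OSD IDs and weights are placeholders):

    # ceph.conf on the OSD hosts: stop OSDs from relocating themselves on start
    [osd]
    osd crush update on start = false

    # then place new OSDs in the CRUSH map by hand
    ceph osd crush add-bucket rack4 rack
    ceph osd crush move rack4 root=default
    ceph osd crush create-or-move osd.42 1.0 root=default rack=rack4 host=node12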

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Richard Hesse
adding racks and racks of OSD's every week, you should have found the crush location hook a long time ago. On Tue, Apr 18, 2017 at 12:53 PM, Matthew Vernon wrote: > On 17/04/17 21:16, Richard Hesse wrote: > > I'm just spitballing here, but what if you set osd crush update on st
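For reference, the hook is just an executable named in ceph.conf that prints the OSD's CRUSH location on stdout; a sketch assuming a hypothetical hostname convention like node-r4-12 that encodes the rack:

    # ceph.conf (the option is "crush location hook", without the osd prefix, on newer releases)
    [osd]
    osd crush location hook = /usr/local/bin/crush-location

    # /usr/local/bin/crush-location -- hypothetical naming convention
    #!/bin/sh
    # map e.g. node-r4-12 -> rack4; fall back to a catch-all rack otherwise
    rack=$(hostname -s | sed -n 's/.*-r\([0-9]*\)-.*/rack\1/p')
    echo "host=$(hostname -s) rack=${rack:-unknown} root=default"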

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-18 Thread Richard Hesse
richard On Tue, Apr 18, 2017 at 9:30 AM, Jan Marquardt wrote: > Am 17.04.17 um 22:12 schrieb Richard Hesse: > > A couple of questions: > > > > 1) What is your rack topology? Are all ceph nodes in the same rack > > communicating with the same top of rack switch? >

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-20 Thread Richard Hesse
On Thu, Apr 20, 2017 at 2:13 AM, Maxime Guyot wrote: > >2) Why did you choose to run the ceph nodes on loopback interfaces as > opposed to the /24 for the "public" interface? > > I can’t speak for this example, but in a clos fabric you generally want to > assign the routed IPs on loopback rather
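To illustrate the loopback approach being discussed: the stable address lives on lo as a host route and the fabric learns it over every uplink, so it stays reachable regardless of which physical link is up. A sketch with a placeholder address:

    # add a stable /32 on the loopback (address is a placeholder)
    ip addr add 10.0.254.12/32 dev lo
    # the BGP/OSPF daemon on the host then announces that /32 over every uplink,
    # so the address remains reachable as long as any one fabric link is up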

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-22 Thread Richard Hesse
The trickiest part is getting the routing on the hosts right, you > essentially set static routes over each link and the kernel takes care of > the ECMP. > > I understand this is a bit different from your setup, but Ceph has no > trouble at all with the IPs on multiple interfaces. >
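The host-side routing described there can be expressed as a single multipath route per destination; a sketch with placeholder addresses and interface names:

    # one ECMP static route towards a peer's loopback (all values are placeholders)
    ip route add 10.0.254.21/32 \
        nexthop via 192.0.2.1    dev enp3s0f0 \
        nexthop via 198.51.100.1 dev enp3s0f1

The kernel then hashes flows across the listed nexthops; on recent kernels, net.ipv4.fib_multipath_hash_policy = 1 switches that to an L4 hash.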

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-23 Thread Richard Hesse
entrants! On Apr 23, 2017 7:56 PM, "Christian Balzer" wrote: Hello, Aaron pretty much stated most of what I was going to write, but to generalize things and make some points more obvious, I shall pipe up as well. On Sat, 22 Apr 2017 21:45:58 -0700 Richard Hesse wrote: > Out of curi