Chris Marino <chris@...> writes:
> Hello everyone, just wanted to let you know that today we opened up the repos for the new open source networking project we've been working on. It's called Romana and the project site is romana.io.
> Thought you would be interested because it enables multi-tenant networking without a virtual network overlay. It's targeted for use with applications that only need L3 networks, so we've been able to eliminate and simplify many things to make the network faster and easier to build and operate.
> If you run these kinds of Cloud Native apps on OpenStack (or even directly on bare metal with Docker or Kubernetes), we'd love to hear what you think. We're still working on the container CNM/CNI integration. Any and all feedback is welcome.
> The code is on GitHub at github.com/romana, and you can see how it all works with a demo we've set up that lets you install and run OpenStack on EC2.
> You can read about how Romana works on the project site. In summary, it extends the physical network hierarchy of a layer 3 routed access design from the spine and leaf switches on to hosts, VMs and containers.
> This enables a very simple and intuitive tenancy model: for every tenant (and each of their network segments) there is an actual physical network CIDR on each host, with all tenants sharing the host-specific address prefix. The advantage of this is that route aggregation makes route distribution unnecessary and collapses the number of iptables rules required for segment isolation. In addition, traffic policies, such as security rules, can easily be applied to those tenant- or segment-specific CIDRs across all hosts.
> Any/all comments welcome.
> Thanks
> CM

Hi Chris!
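Before I get to my questions, let me check that I've read the addressing model correctly: each host owns one aggregate prefix, and tenant and segment bits are carved inside it, which is what makes route distribution unnecessary. Here is a minimal sketch of that reading; the /8 root block, the 8/4/4 bit split, and the helper names are my own guesses for illustration, not anything from your docs:

import ipaddress

ROOT = ipaddress.ip_network("10.0.0.0/8")   # assumed datacenter-wide block
HOST_BITS, TENANT_BITS, SEG_BITS = 8, 4, 4  # assumed widths, for illustration only

def host_cidr(host_id: int) -> ipaddress.IPv4Network:
    """The single aggregate route advertised for one host (a /16 with these widths)."""
    base = int(ROOT.network_address) | (host_id << (32 - ROOT.prefixlen - HOST_BITS))
    return ipaddress.ip_network((base, ROOT.prefixlen + HOST_BITS))

def segment_cidr(host_id: int, tenant_id: int, seg_id: int) -> ipaddress.IPv4Network:
    """CIDR for one tenant segment on one host (a /24 here); isolation rules
    can match on this prefix instead of per-VM addresses."""
    prefixlen = ROOT.prefixlen + HOST_BITS + TENANT_BITS + SEG_BITS
    base = int(ROOT.network_address)
    base |= host_id   << (32 - ROOT.prefixlen - HOST_BITS)
    base |= tenant_id << (32 - ROOT.prefixlen - HOST_BITS - TENANT_BITS)
    base |= seg_id    << (32 - ROOT.prefixlen - HOST_BITS - TENANT_BITS - SEG_BITS)
    return ipaddress.ip_network((base, prefixlen))

# e.g. tenant 3, segment 1 on host 42:
print(host_cidr(42))           # 10.42.0.0/16  -> one route per host
print(segment_cidr(42, 3, 1))  # 10.42.49.0/24 -> one iptables match per segment

If that sketch is roughly right, a single aggregate route per host is all the fabric needs, and segment isolation collapses to a handful of prefix matches, which is what I find attractive about the design.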
It's a very interesting project; I like the clean L3 design that doesn't mix Open vSwitch and namespaces. That said, I myself would lean more towards a full L2 implementation. For example, I liked the Dragonflow blog posts I have read, and I like the way Provider Networks in Neutron allow me to connect physical boxes to VMs in OpenStack.

My concerns are:

1) I'm a service provider. As long as I'm giving out my IP addresses to tenants, all is fine. But what if someone wants to bring their own? There should be a way out of the fixed addressing scheme in exceptional cases.

2) What about some legacy distributed apps which scan the network to find all nodes? Think MPI.

3) What if I already have the 10.0.0.0/8 network in use in the datacenter? (I think it will be, in most of them.) See the quick check in the P.S. below.

4) What about floating IPs? Or will there be dynamic routing for internet access of individual instances?

5) Do you have a roadmap / feature matrix of what you would like to have and how it compares to other OpenStack solutions?

6) If you ever do another benchmark, focus more on throughput than latency. The difference between 1.5 ms and 0.24 ms is not interesting; the one between, say, 5 and 10 Gb/s would be. And please document the hardware setup ;-).

Tomas
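P.S. To make (3) concrete, this is roughly the check I would run before handing Romana a block; the candidate /8 and the "existing" prefixes below are made-up examples, not our real addressing:

import ipaddress

candidate = ipaddress.ip_network("10.0.0.0/8")  # block we'd want to give Romana
existing = [ipaddress.ip_network(p) for p in ("10.20.0.0/16", "172.16.0.0/12")]

conflicts = [p for p in existing if p.overlaps(candidate)]
print(conflicts)  # [IPv4Network('10.20.0.0/16')] -> candidate block would collide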
