Hi Alon,

Alon Bar-Lev wrote on Sun, 11. 11. 2012 at 13:28 -0500:
> 
> ----- Original Message -----
> > From: "Antoni Segura Puimedon" <asegu...@redhat.com>
> > To: "Alon Bar-Lev" <alo...@redhat.com>
> > Cc: vdsm-de...@fedorahosted.org, "Dan Kenigsberg" <dan...@redhat.com>
> > Sent: Sunday, November 11, 2012 5:47:54 PM
> > Subject: Re: [vdsm] Future of Vdsm network configuration
> > 
> > 
> > 
> > ----- Original Message -----
> > > From: "Alon Bar-Lev" <alo...@redhat.com>
> > > To: "Dan Kenigsberg" <dan...@redhat.com>
> > > Cc: vdsm-de...@fedorahosted.org
> > > Sent: Sunday, November 11, 2012 3:46:43 PM
> > > Subject: Re: [vdsm] Future of Vdsm network configuration
> > > 
> > > 
> > > 
> > > ----- Original Message -----
> > > > From: "Dan Kenigsberg" <dan...@redhat.com>
> > > > To: vdsm-de...@fedorahosted.org
> > > > Sent: Sunday, November 11, 2012 4:07:30 PM
> > > > Subject: [vdsm] Future of Vdsm network configuration
> > > > 
> > > > Hi,
> > > > 
> > > > Nowadays, when vdsm receives the setupNetwork verb, it mangles
> > > > /etc/sysconfig/network-scripts/ifcfg-* files and restarts the
> > > > network
> > > > service, so they are read by the responsible SysV service.
> > > > 
> > > > This is very much Fedora-oriented, and not up with the new themes
> > > > in Linux network configuration. Since we want oVirt and Vdsm to be
> > > > distribution agnostic, and to support new features, we have to
> > > > change.
> > > > 
> > > > setupNetwork is responsible for two different things:
> > > > (1) configure the host networking interfaces, and
> > > > (2) create virtual networks for guests and connect them to the
> > > > world over (1).
> > > > 
> > > > Functionality (2) is provided by building Linux software bridges
> > > > and vlan devices. I'd like to explore moving it to Open vSwitch,
> > > > which would enable a host of functionalities that we currently lack
> > > > (e.g. tunneling). One thing that worries me is the need to
> > > > reimplement our config snapshot/recovery on ovs's database.
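For reference, the snapshot/recovery being discussed amounts to something like the following minimal Python sketch (function names and layout are invented here, not actual vdsm code):

```python
import glob
import os
import shutil
import tempfile

def snapshot(pattern):
    """Copy every config file matching `pattern` into a fresh backup
    directory and return that directory's path (illustrative sketch)."""
    backup = tempfile.mkdtemp(prefix="net-snapshot-")
    for path in glob.glob(pattern):
        shutil.copy2(path, backup)
    return backup

def restore(backup, destdir):
    """Copy the snapshotted files back into `destdir`, undoing any
    changes made since the snapshot was taken."""
    for name in os.listdir(backup):
        shutil.copy2(os.path.join(backup, name), destdir)
```

Reimplementing the same idea on top of ovsdb would mean dumping and replaying database state instead of copying files, which is where the extra work lies.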
> > > > 
> > > > As far as I know, ovs is unable to maintain host-level parameters
> > > > of interfaces (e.g. eth0's IPv4 address), so we need another tool
> > > > for functionality (1): either speak to NetworkManager directly, or
> > > > use NetCF, via its libvirt virInterface* wrapper.
> > > > 
> > > > I have minor worries about NetCF's breadth of testing and usage; I
> > > > know it is intended to be cross-platform, but unlike ovs, I am not
> > > > aware of a wide Debian usage thereof. On the other hand, its API
> > > > has been ready for vdsm's usage for quite a while.
> > > > 
> > > > NetworkManager has become ubiquitous, and we'd better integrate
> > > > with it more deeply than our current setting of NM_CONTROLLED=no.
> > > > But as DPB tells us,
> > > > https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.html
> > > > we'd better offload integration with NM to libvirt.
> > > > 
> > > > We would like to take network configuration in VDSM to the next
> > > > level and make it distribution agnostic, in addition to setting up
> > > > the infrastructure for more advanced features going forward. The
> > > > path we think of taking is to integrate with OVS and, for feature
> > > > completeness, use NetCF via its libvirt virInterface* wrapper. Any
> > > > comments or feedback on this proposal are welcome.
> > > > 
> > > > Thanks to the oVirt net team members whose input has helped in
> > > > writing this email.
> > > 
> > > Hi,
> > > 
> > > As far as I see this, network manager is a monster of a dependency
> > > to have just to create bridges or configure network interfaces... It
> > > is true that on a host where network manager lives it would not be
> > > polite to define network resources other than via its interface, but
> > > I don't like that we would force network manager.

NM is the default way of network configuration from F17 on and it's
available on all platforms. It isn't exactly small, but it wouldn't pull
in any dependencies AFAICT because all its dependencies are already in
the Fedora initramfs...

> > > 
> > > libvirt has long been used not as a virtualization library but as a
> > > system management agent; I am not sure this is the best system agent
> > > I would have chosen.
> > > 
> > > I think that all the terms and building blocks got lost in time...
> > > and the resulting integration became more and more complex.
> > > 
> > > Stabilizing such a multi-layered component environment is much
> > > harder than stabilizing a monolithic one.
> > > 
> > > I would really want to see vdsm as a monolithic component with full
> > > control over its resources; I believe this is the only way vdsm can
> > > be stable enough to be production grade.
> > > 
> > > A hypervisor should be a total slave of the manager (or cluster),
> > > so I have no problem in bypassing/disabling any distribution-specific
> > > tool in favour of atoms (brctl, iproute), in non-persistent mode.
> > 
> > So you propose that we would keep the network configuration database
> > ourselves (something like sqlite maybe), disable network.service and
> > networkmanager.service, and bring the interfaces we need up and down
> > via brctl/iproute, sysfs and other netlink-speaking interfaces, right?
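Concretely, the "atoms" approach could be as thin as assembling the iproute/brctl command lines to run; a hedged Python sketch (the helper name and exact command layout are invented for illustration):

```python
def bridge_setup_cmds(bridge, nic, vlan=None):
    """Build the ip/brctl command lines that would create `bridge` over
    `nic` (optionally on a VLAN). Returns the commands only -- it does
    not execute anything. A sketch, not existing vdsm code."""
    dev = nic
    cmds = []
    if vlan is not None:
        # create a vlan device on top of the NIC first
        dev = "%s.%d" % (nic, vlan)
        cmds.append(["ip", "link", "add", "link", nic, "name", dev,
                     "type", "vlan", "id", str(vlan)])
    cmds += [
        ["brctl", "addbr", bridge],         # create the bridge
        ["brctl", "addif", bridge, dev],    # enslave the (vlan) device
        ["ip", "link", "set", dev, "up"],
        ["ip", "link", "set", bridge, "up"],
    ]
    return cmds
```

Persistence would then be vdsm's own database replaying these at boot, rather than ifcfg-* files.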
> > 
> > I won't deny that for hypervisor nodes it sounds really well. For
> > installations on machines that maybe serve other purposes as well, it
> > could be slightly problematic. Not the part of managing the network,
> > but the part of disabling network manager and network.service.
> > 
> > Since what you said was to bypass NM and network.service, maybe it
> > would be better instead to leave whichever is enabled by default, let
> > the user define which interfaces we should manage, and make those
> > unavailable to NM and network.service. There are four cases here:
> > 
> > NM enabled and network.service disabled:
> >     Simply create ifcfg-* for the interfaces that we want to manage
> >     that include NM_CONTROLLED=no and the MAC address of the
> >     interface.
> > NM disabled and network.service enabled:
> >     Just make sure that the interfaces we are to manage do not have
> >     an ifcfg-* file.
> > NM disabled and network.service disabled:
> >     No special requirements to make it work.
> > NM enabled and network.service enabled:
> >     Make sure that there are no ifcfg-* files for the interfaces we
> >     manage and create a NM keyfile stating the interface as not
> >     managed.
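Those four cases reduce to a tiny decision table; encoded in Python it might look like this (the action names are made up purely for illustration):

```python
def unmanage_actions(nm_enabled, net_service_enabled):
    """Map the (NM, network.service) state to the steps that keep an
    interface out of their hands. Action names are illustrative only."""
    if nm_enabled and net_service_enabled:
        # no ifcfg-* file, plus an NM keyfile marking it unmanaged
        return ["remove-ifcfg", "nm-keyfile-unmanaged"]
    if nm_enabled:
        # ifcfg-* with NM_CONTROLLED=no and the interface's MAC address
        return ["write-ifcfg-nm-controlled-no"]
    if net_service_enabled:
        # just ensure no ifcfg-* file exists for the interface
        return ["remove-ifcfg"]
    return []  # both disabled: nothing special needed
```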
> > 
> > Alon, just correct me if I am wrong in my interpretation of what you
> > said, I wanted to expand on it to make sure I understood it well.
> > 
> > Best, Toni
> > 
> > > 
> > > I know this entails some more work, but I don't think it is that
> > > complex to implement and maintain.
> > > 
> > > Just my 2 cents...
> 
> Hello Toni,
> 
> To demonstrate what I think, let's take this to the extreme...
> 
> Hypervisor should be stable and rock solid, so I would use the minimum 
> required dependencies with tight integration.
> For this purpose I would use kernel + busybox + host-manager.
> host-manager that uses ioctls/netlink to perform the network management and 
> storage management.
> And for virtualization we would only use qemu/kvm.
> We may add some OPTIONAL infrastructure component like openvswitch for extra 
> functionality.
> 
> I, personally, don't see the value in running the hypervisor on generic 
> hosts, meaning running VMs on a host that performs other tasks as well, 
> such as a database server or an application server.
> 
> But let's say there is some value in that, so we have to ask:
> 1. What is the stability factor we expect from these hosts?
> 2. How well do we need to integrate with the distribution specific features?
> 
> If the answer to (1) is the same as for a hypervisor, then we take the 
> same software and compromise on the integration.
> 
> Otherwise we perform the minimum we can for such integration, such as 
> removing the network interfaces from the network manager control.
> 
> The reasoning behind my opinion is that components such as dbus, 
> systemd and network manager were designed to solve the problems of the 
> END USER, not to be used as MISSION CRITICAL infrastructure components. 
> This was part of the effort to make the Linux desktop more friendly, 
> but it then leaked into the MISSION CRITICAL core.

This is surely not true for systemd, and as far as I know about
NetworkManager, its recent developments are moving it toward
mission-critical grade software.

> 
> The stability of the hypervisor should be the same as or higher than 
> that of the guests it runs, so it cannot use non-mission-critical 
> components to achieve that.
> 
> The solution can be to write the whole network functionality as 
> plugins, for example: a bridge plugin, a vlan plugin, a bond plugin, 
> etc...

Putting this together with other facts (the inability of the current
kernel + scripts to handle full IPv6 functionality), you effectively
propose to write Yet Another Network Daemon, This Time Done Right.

If you can spend one hour of your time to listen to some networking-related 
talks, please have a look at these two:
https://www.youtube.com/watch?v=lzCLkjjrg1Q (by Pavel Šimerda, one of 
NetworkManager developers)
https://www.youtube.com/watch?v=XUgmFyBe_9w (by SUSE guys developing Wicked)

> Then have implementations of these plugins using network manager, 
> openvswitch, ioctl/netlink, and use the appropriate plugin based on 
> the desired functionality and desired stability.
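Sketching that plugin idea in Python (the interface and all names here are invented for illustration, not existing vdsm code):

```python
class ConfiguratorPlugin(object):
    """Base class for pluggable network-configuration backends."""
    name = None

    def bridge_cmds(self, bridge, nic):
        """Return the command lines that would create `bridge` over `nic`."""
        raise NotImplementedError

PLUGINS = {}

def register(cls):
    """Class decorator adding a backend to the plugin registry."""
    PLUGINS[cls.name] = cls()
    return cls

@register
class IprouteConfigurator(ConfiguratorPlugin):
    name = "iproute"

    def bridge_cmds(self, bridge, nic):
        return [["ip", "link", "add", bridge, "type", "bridge"],
                ["ip", "link", "set", nic, "master", bridge]]

@register
class OvsConfigurator(ConfiguratorPlugin):
    name = "ovs"

    def bridge_cmds(self, bridge, nic):
        return [["ovs-vsctl", "add-br", bridge],
                ["ovs-vsctl", "add-port", bridge, nic]]

def configurator(name):
    """Pick a backend by the desired functionality/stability."""
    return PLUGINS[name]
```

The manager would then select "ovs" when tunneling is wanted and "iproute" for the plain rock-solid path, without the rest of vdsm caring which backend is underneath.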
> 
> I would really like to see a rock-solid monolithic host manager / 
> cluster manager.

Systemd is on the best path to become such a monolithic beast that will
do everything, given its efforts to absorb functionality unrelated to
init (syslog, anacron) into its design.

David

> 
> I hope I clarified a little...
> 
> Regards,
> Alon
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel

-- 

David Jaša, RHCE

SPICE QE based in Brno
GPG Key:     22C33E24 
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24



