Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
What would the port binding operation do in this case? Just mark the port as bound and nothing else?

On Wed, Dec 10, 2014 at 12:48 AM, henry hly henry4...@gmail.com wrote:

Hi Kevin, does it make sense to introduce a 'GeneralvSwitch' MD (mechanism driver), working with VIF_TYPE_TAP? It would just do a very simple port binding, like the OVS and bridge drivers do, and then anyone could implement their own backend and agent without patching Neutron drivers. Best Regards, Henry

On Fri, Dec 5, 2014 at 4:23 PM, Kevin Benton blak...@gmail.com wrote:

I see the difference now. The main concern I see with the NOOP type is that creating the virtual interface could require different logic for certain hypervisors. In that case Neutron would now have to know things about Nova, and to me it seems like that's slightly too far in the other direction.

On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

Kevin Benton blak...@gmail.com writes: What you are proposing sounds very reasonable. If I understand correctly, the idea is to make Nova just create the TAP device, get it attached to the VM, and leave it 'unplugged'. This would work well and might eliminate the need for some drivers. I see no reason to block adding a VIF type that does this.

I was actually floating a slightly more radical option than that: the idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does absolutely _nothing_, not even create the TAP device. (My pending Nova spec at https://review.openstack.org/#/c/130732/ proposes VIF_TYPE_TAP, for which Nova _does_ create the TAP device, but then does nothing else - i.e. exactly what you've described just above. But in this email thread I was musing about going even further, towards providing a platform for future networking experimentation where Nova isn't involved at all in the networking setup logic.)

However, there is a good reason that the VIF type for some OVS-based deployments requires this type of setup. The vSwitches are connected to a central controller using OpenFlow (or OVSDB), which configures forwarding rules etc. Therefore they don't have any agents running on the compute nodes from the Neutron side to perform the step of getting the interface plugged into the vSwitch in the first place. For this reason, we will still need both types of VIFs.

Thanks. I'm not advocating that existing VIF types should be removed, though - rather wondering whether similar function could in principle be implemented without Nova VIF plugging, or what that would take. For example, suppose someone came along and wanted to implement a new OVS-like networking infrastructure? In principle, could they do that without having to enhance the Nova VIF driver code? I think at the moment they couldn't, but that they would be able to if VIF_TYPE_NOOP (or possibly VIF_TYPE_TAP) was already in place. In principle I think it would then be possible for the new implementation to specify VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind of configuration and vSwitch plugging that you've described above. Does that sound correct, or am I missing something else?

1. When the port is created in the Neutron DB, and handled (bound etc.) by the plugin and/or mechanism driver, the TAP device name is already present at that time.

This is backwards. The TAP device name is derived from the port ID, so the port has already been created in Neutron at that point; it is just unbound. The steps are roughly as follows: Nova calls Neutron for a port; Nova creates/plugs the VIF based on the port; Nova updates the port on Neutron; Neutron binds the port and notifies the agent/plugin/whatever to finish the plumbing; Neutron notifies Nova that the port is active; Nova unfreezes the VM. None of that should be affected by what you are proposing. The only difference is that your Neutron agent would also perform the 'plugging' operation.

Agreed - but thanks for clarifying the exact sequence of events.
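That sequence of events can be sketched as follows. This is a toy illustration of the ordering only: real Nova and Neutron talk over REST and RPC, and every function name here is hypothetical.

```python
# Toy sketch of the port lifecycle described above. All names are
# hypothetical; real Nova/Neutron communicate via REST and RPC.

events = []

def neutron_create_port():
    # Step 1: Nova calls Neutron for a port; it exists in the DB, unbound.
    events.append("neutron: port created (unbound)")
    return {"id": "3b6d6cf5-9cb4-4b07-a11b-29b6cbd2c6a5", "status": "DOWN"}

def nova_plug_vif(port):
    # Step 2: Nova creates/plugs the VIF based on the port.
    events.append("nova: VIF plugged for port " + port["id"][:8])

def neutron_bind_port(port):
    # Steps 3-5: Neutron binds the port, tells its agent/plugin to finish
    # the plumbing, and reports the port as active.
    events.append("neutron: port bound; agent notified to finish plumbing")
    port["status"] = "ACTIVE"

def nova_unfreeze_vm(port):
    # Step 6: Nova unfreezes the VM once the port is active.
    events.append("nova: VM unfrozen; port status " + port["status"])

port = neutron_create_port()
nova_plug_vif(port)
neutron_bind_port(port)
nova_unfreeze_vm(port)
```

Under Neil's proposal only the body of the binding step changes: the Neutron agent would also perform the 'plugging' that Nova does today.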
I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP) might fit as part of the Nova-network/Neutron Migration priority that's just been announced for Kilo. I'm aware that part of that priority is concerned with live migration, but perhaps it could also include the goal of future networking work not having to touch Nova code?

Regards, Neil

--
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
On Thu, Dec 11, 2014 at 12:36 AM, Kevin Benton blak...@gmail.com wrote: What would the port binding operation do in this case? Just mark the port as bound and nothing else?

It would also set the VIF type to tap, but not care what the real backend switch is.
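In outline, the mechanism driver Henry suggests would do nothing in bind_port except claim the segment and report the tap VIF type. The sketch below is standalone pseudo-ML2: the driver class and the PortContext stand-in are illustrative, not Neutron's actual base classes or API.

```python
VIF_TYPE_TAP = "tap"

class GeneralVSwitchMechanismDriver:
    """Illustrative 'general vSwitch' mechanism driver: it binds any port
    as VIF_TYPE_TAP and leaves all plumbing to whatever backend agent is
    running on the compute node."""

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            # No backend-specific checks: claim the first segment and tell
            # Nova to create a bare, unbridged TAP device.
            context.set_binding(segment["id"], VIF_TYPE_TAP,
                                {"port_filter": False})
            return

class FakePortContext:
    """Minimal stand-in for ML2's PortContext, for demonstration only."""
    def __init__(self, segments):
        self.segments_to_bind = segments
        self.binding = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.binding = (segment_id, vif_type, vif_details)

ctx = FakePortContext([{"id": "seg-1"}])
GeneralVSwitchMechanismDriver().bind_port(ctx)
# ctx.binding is now ("seg-1", "tap", {"port_filter": False})
```

The point of the sketch is that nothing here depends on which real backend switch sits behind the TAP device, which is exactly Henry's argument.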
Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
Ian Wells ijw.ubu...@cack.org.uk writes:

On 4 December 2014 at 08:00, Neil Jerram neil.jer...@metaswitch.com wrote: Kevin Benton blak...@gmail.com writes: I was actually floating a slightly more radical option than that: the idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does absolutely _nothing_, not even create the TAP device.

Nova always does something, and that something amounts to 'attaches the VM to where it believes the endpoint to be'. Effectively you should view the VIF type as the form that's decided on during negotiation between Neutron and Nova - Neutron says 'I will do this much and you have to take it from there'. (In fact, I would prefer that it was *more* of a negotiation, in the sense that the hypervisor driver had a say to Neutron of what VIF types it supported and preferred, and Neutron could choose from a selection, but I don't think it adds much value at the moment and I didn't want to propose a change just for the sake of it.) I think you're just proposing that the hypervisor driver should do less of the grunt work of connection. Also, libvirt is not the only hypervisor driver, and I've found it interesting to nose through the others for background reading, even if you're not using them much.

For example, suppose someone came along and wanted to implement a new OVS-like networking infrastructure? In principle could they do that without having to enhance the Nova VIF driver code? I think at the moment they couldn't, but that they would be able to if VIF_TYPE_NOOP (or possibly VIF_TYPE_TAP) was already in place. In principle I think it would then be possible for the new implementation to specify VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind of configuration and vSwitch plugging that you've described above.
At the moment, the rule is that *if* you create a new type of infrastructure then *at that point* you create your new VIF plugging type to support it - vhostuser being a fine example, having been rejected on the grounds that it was, at the end of Juno, speculative. I'm not sure I particularly like this approach but that's how things are at the moment - largely down to not wanting to add code that isn't used and therefore tested. None of this is criticism of your proposal, which sounds reasonable; I was just trying to provide a bit of context.

Many thanks for your explanations; I think I'm understanding this more fully now. For example, I now see that, when using libvirt, Nova has to generate config that describes all aspects of the VM to launch, including how the VNIC is implemented and how it's bound to networking on the host. Also, different hypervisors, or layers like libvirt, may go to different lengths as regards how far they connect the VNIC to some form of networking on the host, and I can see that Nova would want to normalize that, i.e. to ensure that a predictable level of connectivity has always been achieved, regardless of hypervisor, by the time that Nova hands over to someone else such as Neutron.

Therefore I see now that Nova _must_ be involved to some extent in VIF plugging, and hence that VIF_TYPE_NOOP doesn't fly. For a minimal, generic implementation of an unbridged TAP interface, then, we're back to VIF_TYPE_TAP as I've proposed in https://review.openstack.org/#/c/130732/. I've just revised and reuploaded this, based on the insight provided by this ML thread, and hope people will take a look.

Many thanks, Neil
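The host-side work that a VIF_TYPE_TAP plug would ask of Nova is correspondingly small: create a TAP device and bring it up, without attaching it to any bridge. A sketch of roughly what that amounts to, in terms of iproute2 commands (the commands are only built here, not executed, since creating devices needs root; the function names are mine, not Nova's):

```python
def tap_setup_commands(dev):
    """Return the iproute2 commands that would create an unbridged TAP
    device and bring it up. Illustrative only - not Nova's actual
    create_tap_dev() implementation."""
    return [
        ["ip", "tuntap", "add", "dev", dev, "mode", "tap"],
        ["ip", "link", "set", dev, "up"],
        # Note what is *absent*: no 'brctl addif' / 'ovs-vsctl add-port',
        # i.e. no plugging into any L2 bridge.
    ]

cmds = tap_setup_commands("tap3b6d6cf5-9c")
```

To actually run these, something like `subprocess.check_call(cmd)` per command (with root privileges) would be needed; everything after device creation is then left to the Neutron agent.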
Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
Kevin Benton blak...@gmail.com writes: I see the difference now. The main concern I see with the NOOP type is that creating the virtual interface could require different logic for certain hypervisors. In that case Neutron would now have to know things about Nova, and to me it seems like that's slightly too far in the other direction.

Many thanks, Kevin. I see this now too, as I've just written more fully in my response to Ian. Based on your and others' insight, I've revised and reuploaded my VIF_TYPE_TAP spec, and hope it's a lot clearer now.

Regards, Neil
[openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
Hi there. I've been looking into a tricky point, and hope I can succeed in expressing it clearly here...

I believe it is the case, even when using a committedly Neutron-based networking implementation, that Nova is still involved a little bit in the networking setup logic. Specifically I mean the plug() and unplug() operations, whose implementations are provided by *VIFDriver classes for the various possible hypervisors. For example, for the libvirt hypervisor, LibvirtGenericVIFDriver typically implements plug() by calling create_tap_dev() to create the TAP device, and then plugging it into some form of L2 bridge.

Does this logic actually have to be in Nova? For a Neutron-based networking implementation, it seems to me that it should also be possible to do this in a Neutron agent (obviously running on the compute node concerned), and that - if so - that would be preferable, because it would enable more Neutron-based experimentation without having to modify any Nova code.

Specifically, therefore, I wonder if we could/should add a do-nothing value to the set of Nova VIF types (VIF_TYPE_NOOP?), and implement plug()/unplug() for that value to do nothing at all, leaving all setup to the Neutron agent? And then hopefully it should never be necessary to introduce further Nova VIF type support ever again... Am I missing something that really makes that not fly?

Two possible objections occur to me, as follows, but I think they're both surmountable.

1. When the port is created in the Neutron DB, and handled (bound etc.) by the plugin and/or mechanism driver, the TAP device name is already present at that time. I think this is still OK because Neutron knows anyway what the TAP device name _will_ be, even if the actual TAP device hasn't been created yet.

2. With some agent implementations, there isn't a direct instruction, from the plugin to the agent, to say "now look after this VM / port". Instead the agent polls the OS for new TAP devices appearing.
Clearly, then, if there isn't something other than the agent that creates the TAP device, any logic in the agent will never be triggered. This is certainly a problem. For new networking experimentation, however, we can write agent code that is directly instructed by the plugin, and hence (a) doesn't need to poll, and (b) doesn't require the TAP device to have been previously created by Nova - which I'd argue is preferable.

Thoughts?

(FYI my context is that I've been working on a networking implementation where the TAP device to/from a VM should _not_ be plugged into a bridge - and for that I've had to make a Nova change even though my team's aim was to do the whole thing in Neutron. I've proposed a spec for the Nova change that plugs a TAP interface without bridging it (https://review.openstack.org/#/c/130732/), but that set me wondering about this wider question of whether such Nova changes should still be necessary...)

Many thanks, Neil
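In VIFDriver terms, the do-nothing type proposed above would amount to something like the following. This is purely illustrative: the class name is hypothetical and the method signatures are simplified from Nova's actual VIF driver interface.

```python
class NoopVIFDriver:
    """Illustrative plug()/unplug() for a hypothetical VIF_TYPE_NOOP.

    Nova touches nothing on the host - it does not even create the TAP
    device. The Neutron agent on the compute node is responsible for
    creating the device, wiring it up, and tearing it down again."""

    def plug(self, instance, vif):
        # Deliberately empty: all setup is left to the Neutron agent.
        pass

    def unplug(self, instance, vif):
        # Deliberately empty: all teardown is left to the Neutron agent.
        pass

driver = NoopVIFDriver()
driver.plug(None, None)    # no-op
driver.unplug(None, None)  # no-op
```

The thread goes on to conclude that this extreme form doesn't quite fly (Nova must at least create the device the VM attaches to), which is what leads back to VIF_TYPE_TAP.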
Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?
What you are proposing sounds very reasonable. If I understand correctly, the idea is to make Nova just create the TAP device, get it attached to the VM, and leave it 'unplugged'. This would work well and might eliminate the need for some drivers. I see no reason to block adding a VIF type that does this.

However, there is a good reason that the VIF type for some OVS-based deployments requires this type of setup. The vSwitches are connected to a central controller using OpenFlow (or OVSDB), which configures forwarding rules etc. Therefore they don't have any agents running on the compute nodes from the Neutron side to perform the step of getting the interface plugged into the vSwitch in the first place. For this reason, we will still need both types of VIFs.

1. When the port is created in the Neutron DB, and handled (bound etc.) by the plugin and/or mechanism driver, the TAP device name is already present at that time.

This is backwards. The TAP device name is derived from the port ID, so the port has already been created in Neutron at that point; it is just unbound. The steps are roughly as follows: Nova calls Neutron for a port; Nova creates/plugs the VIF based on the port; Nova updates the port on Neutron; Neutron binds the port and notifies the agent/plugin/whatever to finish the plumbing; Neutron notifies Nova that the port is active; Nova unfreezes the VM. None of that should be affected by what you are proposing. The only difference is that your Neutron agent would also perform the 'plugging' operation.

For your second point, scanning the integration bridge for new ports is currently used now, but that's an implementation detail of the reference OVS driver. It doesn't block your work directly, since OVS wouldn't use your NOOP VIF type anyway.

Cheers, Kevin Benton

On Wed, Dec 3, 2014 at 8:08 AM, Neil Jerram neil.jer...@metaswitch.com wrote: Hi there. I've been looking into a tricky point, and hope I can succeed in expressing it clearly here...
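The derivation Kevin refers to is why Neutron can predict the device name before the device exists: the TAP name is a fixed prefix plus a truncated port UUID. A sketch of that convention (the function name is mine; the 11-character truncation keeps the name within Linux's 15-character interface name limit):

```python
def get_tap_device_name(port_id, prefix="tap"):
    """Derive a TAP device name from a Neutron port UUID.

    Linux interface names are capped at 15 characters, so only the first
    11 characters of the UUID are kept after the 3-character prefix."""
    return prefix + port_id[:11]

name = get_tap_device_name("3b6d6cf5-9cb4-4b07-a11b-29b6cbd2c6a5")
# name == "tap3b6d6cf5-9c"
```

Because the name is a pure function of the port ID, both Nova and the Neutron agent can compute it independently, with no need for the device to exist first.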