Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-05-05 Thread Oleg Bondarev
On Wed, Apr 30, 2014 at 7:40 PM, Salvatore Orlando sorla...@nicira.com wrote:

 On 30 April 2014 17:28, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote:

 I've tried updating the interface while running an ssh session from guest to
 host and it was dropped :(


 Please allow me to tell you I told you so! ;)


 The drop is not great, but acceptable if the instance can still be reached
 after the ARP tables refresh and the connection is
 re-established.

 If the drop can't be avoided, there is comfort in knowing that there is
 no need for an instance reboot, suspend/resume or any manual actions.


 I agree with Jesse's point. I think it will be reasonable to say that the
 migration will trigger a connection reset for all existing TCP connections.
 However, what exactly are the changes we're making on the data plane? Are
 you testing a VIF migration from a Linux bridge instance to an Open
 vSwitch one?


Actually I was testing a VIF move from the nova-network bridge br100 to Neutron's
bridge (qbrXXX), which is more or less the final step of the instance migration as
I see it.
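
For reference, here is a minimal sketch of the kind of live update I'm doing, using
the libvirt python bindings (the domain, tap and bridge names below are just
placeholders; the MAC has to be the one the instance already has, since libvirt
matches the interface by MAC):

#!/usr/bin/env python
# Sketch: point an existing VIF at a different bridge without rebooting the guest.
# instance-00000001, tap0a1b2c3d and qbr0a1b2c3d are placeholder names.
import libvirt

NEW_IFACE_XML = """
<interface type='bridge'>
  <mac address='fa:16:3e:ec:eb:a4'/>
  <source bridge='qbr0a1b2c3d'/>
  <target dev='tap0a1b2c3d'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')
# Only <source bridge> changes; libvirt identifies which interface to update
# by the <mac> element.
dom.updateDeviceFlags(NEW_IFACE_XML,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE |
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()

The virsh equivalent would be "virsh update-device instance-00000001 new-iface.xml
--live --config" with the same XML in new-iface.xml.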
Another problem that I faced is clearing the network filters which nova-network
configures on the VIF, but this seems to be fixed in libvirt now:
https://www.redhat.com/archives/libvirt-users/2014-May/msg2.html

Thanks,
Oleg


 Salvatore



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Oleg Bondarev
So by running ping while updating the instance interface we can see ~10-20 sec of
connectivity downtime. Here is a tcpdump capture during the update (pinging the
ext net gateway):

05:58:41.020791 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 10, length 64
05:58:41.020866 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 10, length 64
05:58:41.885381 STP 802.1s, Rapid STP, CIST Flags [Learn, Forward, Agreement]
05:58:42.022785 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 11, length 64
05:58:42.022832 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 11, length 64
[vm interface updated..]
05:58:43.023310 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 12, length 64
05:58:44.024042 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 13, length 64
05:58:45.025760 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 14, length 64
05:58:46.026260 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 15, length 64
05:58:47.027813 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 16, length 64
05:58:48.028229 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 17, length 64
05:58:49.029881 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 18, length 64
05:58:50.029952 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 19, length 64
05:58:51.031380 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 20, length 64
05:58:52.032012 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 21, length 64
05:58:53.033456 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 22, length 64
05:58:54.034061 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 23, length 64
05:58:55.035170 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 24, length 64
05:58:56.035988 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 25, length 64
05:58:57.037285 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 26, length 64
05:58:57.045691 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:58.038245 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 27, length 64
05:58:58.045496 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:58:59.040143 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 28, length 64
05:58:59.045609 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:00.040789 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 29, length 64
05:59:01.042333 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
05:59:01.042618 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa (oui Unknown), length 28
05:59:01.043471 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 30, length 64
05:59:01.063176 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 30, length 64
05:59:02.042699 IP 10.0.0.4 > 172.24.4.1: ICMP echo request, id 29954, seq 31, length 64
05:59:02.042840 IP 172.24.4.1 > 10.0.0.4: ICMP echo reply, id 29954, seq 31, length 64

However, this connectivity downtime can be significantly reduced by restarting the
network service on the instance right after the interface update.
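
Looking at the capture, traffic only resumes once the guest's ARP request for 10.0.0.1
is finally answered, so a full network service restart may be more than is needed;
just refreshing ARP from inside the guest might be enough. A rough sketch (the
interface name and the use of iputils arping are assumptions, and it needs root in
the guest):

#!/usr/bin/env python
# Rough guest-side helper: drop stale ARP entries and re-announce ourselves so
# traffic recovers right after the host-side VIF update. eth0 is a placeholder.
import subprocess

IFACE = "eth0"

def primary_ipv4(iface):
    # First IPv4 address from "ip -4 -o addr show <iface>", e.g. "10.0.0.4/24".
    out = subprocess.check_output(["ip", "-4", "-o", "addr", "show", iface])
    return out.split()[3].decode().split("/")[0]

def refresh(iface=IFACE):
    # Forget the (possibly stale) gateway MAC so it is re-resolved immediately.
    subprocess.check_call(["ip", "neigh", "flush", "dev", iface])
    # Send a few unsolicited ARPs so switches and peers relearn our MAC too.
    subprocess.check_call(["arping", "-U", "-c", "3", "-I", iface,
                           primary_ipv4(iface)])

if __name__ == "__main__":
    refresh()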


On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery mest...@noironetworks.com
 
  wrote:
 
  On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com
  wrote:
   Yeah, I also saw in docs that update-device is supported since 0.8.0
   version,
   not sure why it didn't work in my setup.
   I installed latest libvirt 1.2.3 and now update-device works just fine
   and I
   am able
   to move instance tap device from one bridge to another with no
 downtime
   and
   no reboot!
   I'll try to investigate why it didn't work on 0.9.8 and which is the
   minimal
   libvirt version for this.
  
  Wow, cool! This is really good news. Thanks for driving this! By
  chance did you notice if there was a drop in connectivity at all, or
  if the guest detected the move at all?
 
 
  Didn't check it yet. What in your opinion would be the best way of
 testing
  this?
 
 The simplest way would be to have a ping running when you run
 update-device and see if any packets are dropped. We can do more
 thorough testing after that, but that would give us a good
 approximation of connectivity while swapping the underlying device.

  Kyle
 
   Thanks,
   Oleg
  
  
   On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
   mest...@noironetworks.com
   wrote:
  
   

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Eugene Nikanorov
I think it's better to test with some tcp connection (ssh session?) rather
than with ping.
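
If it helps, something like the rough sketch below is what I'd use for that: hold a
TCP connection open across the update and report when (if ever) it breaks. The
endpoint is just an example; anything listening on the other side (e.g. "nc -lk 5555"
on the host) will do.

#!/usr/bin/env python
# Sketch: keep a TCP connection open across the VIF update and report when it
# breaks. HOST/PORT are example values, not taken from the setup in this thread.
import socket
import time

HOST, PORT = "172.24.4.1", 5555

sock = socket.create_connection((HOST, PORT), timeout=5)
start = time.time()
try:
    while True:
        # A send on a connection that has been reset raises an error, so a
        # one-byte heartbeat per second is enough to notice a broken session.
        sock.sendall(b".")
        time.sleep(1)
        print("still up after %.0f s" % (time.time() - start))
except OSError as exc:
    print("connection broke after %.1f s: %r" % (time.time() - start, exc))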

Eugene.


On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com wrote:

 So by running ping while updating the instance interface we can see ~10-20 sec of
 connectivity downtime. Here is a tcpdump capture during the update (pinging the
 ext net gateway):

 [...]

 However this connectivity downtime can be significantly reduced by restarting
 the network service on the instance right after the interface update.


 On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery 
 mest...@noironetworks.comwrote:

 On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery 
 mest...@noironetworks.com
  wrote:
 
  On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com
 
  wrote:
   Yeah, I also saw in docs that update-device is supported since 0.8.0
   version,
   not sure why it didn't work in my setup.
   I installed latest libvirt 1.2.3 and now update-device works just
 fine
   and I
   am able
   to move instance tap device from one bridge to another with no
 downtime
   and
   no reboot!
   I'll try to investigate why it didn't work on 0.9.8 and which is the
   minimal
   libvirt version for this.
  
  Wow, cool! This is really good news. Thanks for driving this! By
  chance did you notice if there was a drop in connectivity at all, or
  if the guest detected the move at all?
 
 
  Didn't check it yet. What in your opinion would be the best way of
 testing
  this?
 
 The simplest way would be to have a ping running when you run
 update-device and see if any packets are dropped. We can do more
 

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Kyle Mestery
Agreed, ping was a good first tool to verify downtime, but trying with
something using TCP at this point would be useful as well.

On Wed, Apr 30, 2014 at 8:39 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 I think it's better to test with some tcp connection (ssh session?) rather
 than with ping.

 Eugene.


 On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com
 wrote:

 So by running ping while updating the instance interface we can see ~10-20 sec
 of connectivity downtime. Here is a tcpdump capture during the update (pinging
 the ext net gateway):

 [...]

 However this connectivity downtime can be significantly reduced by restarting
 the network service on the instance right after the interface update.


 On Mon, Apr 28, 2014 at 6:29 PM, Kyle Mestery mest...@noironetworks.com
 wrote:

 On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery
  mest...@noironetworks.com
  wrote:
 
  On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev
  obonda...@mirantis.com
  wrote:
   Yeah, I also saw in docs that update-device is supported since 0.8.0
   version,
   not sure why it didn't work in my setup.
   I installed latest libvirt 1.2.3 and now update-device works just
   fine
   and I
   am able
   to move instance tap device from one bridge to another with no
   downtime
   and
   no reboot!
   I'll try to investigate why it didn't work on 0.9.8 and which is the
   minimal
   libvirt version for this.
  
  Wow, cool! This is really good news. Thanks for driving this! By
  chance did you notice if there was a drop in connectivity at all, or
  if the guest detected the move at all?
 
 
  Didn't check it yet. What in your opinion would be the best way of
  testing
  this?
 
 The simplest way would be to have a ping running when you run
 update-device and see if any packets are dropped. We can do more
 thorough testing after that, but that would give us a good
 approximation of connectivity while swapping the underlying device.

  Kyle
 
   Thanks,
   Oleg
  
  
   On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
   mest...@noironetworks.com
   wrote:
  
   According to this page [1], update-device is 

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Oleg Bondarev
I've tried updating the interface while running an ssh session from guest to host
and it was dropped :(

07:27:58.676570 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 44:88, ack 61, win 2563, options [nop,nop,TS val 4539607 ecr 24227108], length 44
07:27:58.677161 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 61:121, ack 88, win 277, options [nop,nop,TS val 24227149 ecr 4539607], length 60
07:27:58.677720 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 121, win 2563, options [nop,nop,TS val 4539608 ecr 24227149], length 0
07:27:59.087582 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 88:132, ack 121, win 2563, options [nop,nop,TS val 4539710 ecr 24227149], length 44
07:27:59.088140 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [P.], seq 121:181, ack 132, win 277, options [nop,nop,TS val 24227251 ecr 4539710], length 60
07:27:59.088487 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [.], ack 181, win 2563, options [nop,nop,TS val 4539710 ecr 24227251], length 0
[vm interface updated..]
07:28:17.157594 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544228 ecr 24227251], length 44
07:28:17.321060 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 176:220, ack 181, win 2563, options [nop,nop,TS val 4544268 ecr 24227251], length 44
07:28:17.361835 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544279 ecr 24227251], length 44
07:28:17.769935 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544381 ecr 24227251], length 44
07:28:18.585887 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544585 ecr 24227251], length 44
07:28:20.221797 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4544994 ecr 24227251], length 44
07:28:23.493540 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4545812 ecr 24227251], length 44
07:28:30.037927 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4547448 ecr 24227251], length 44
07:28:35.045733 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:36.045388 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:37.045900 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.063118 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:ec:eb:a4, length 280
07:28:43.084384 IP 10.0.0.3.67 > 10.0.0.4.68: BOOTP/DHCP, Reply, length 323
07:28:43.085038 ARP, Request who-has 10.0.0.3 tell 10.0.0.4, length 28
07:28:43.099463 ARP, Reply 10.0.0.3 is-at fa:16:3e:79:9b:9c, length 28
07:28:43.099841 IP 10.0.0.4 > 10.0.0.3: ICMP 10.0.0.4 udp port 68 unreachable, length 359
07:28:43.125379 ARP, Request who-has 10.0.0.1 tell 10.0.0.4, length 28
07:28:43.125626 ARP, Reply 10.0.0.1 is-at fa:16:3e:61:28:fa, length 28
07:28:43.125907 IP 10.0.0.4.52556 > 172.18.76.80.22: Flags [P.], seq 132:176, ack 181, win 2563, options [nop,nop,TS val 4550720 ecr 24227251], length 44
07:28:43.132650 IP 172.18.76.80.22 > 10.0.0.4.52556: Flags [R], seq 369316248, win 0, length 0
07:28:48.148853 ARP, Request who-has 10.0.0.4 tell 10.0.0.1, length 28
07:28:48.149377 ARP, Reply 10.0.0.4 is-at fa:16:3e:ec:eb:a4, length 28


On Wed, Apr 30, 2014 at 5:50 PM, Kyle Mestery mest...@noironetworks.com wrote:

 Agreed, ping was a good first tool to verify downtime, but trying with
 something using TCP at this point would be useful as well.

 On Wed, Apr 30, 2014 at 8:39 AM, Eugene Nikanorov
 enikano...@mirantis.com wrote:
  I think it's better to test with some tcp connection (ssh session?)
  rather than with ping.
 
  Eugene.
 
 
  On Wed, Apr 30, 2014 at 5:28 PM, Oleg Bondarev obonda...@mirantis.com
  wrote:
 
  So by running ping while updating the instance interface we can see ~10-20 sec
  of connectivity downtime. Here is a tcpdump capture during the update (pinging
  the ext net gateway):
 
  [...]

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Jesse Pretorius
On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote:

 I've tried updating the interface while running an ssh session from guest to host
 and it was dropped :(


The drop is not great, but acceptable if the instance can still be reached after
the ARP tables refresh and the connection is re-established.

If the drop can't be avoided, there is comfort in knowing that there is no
need for an instance reboot, suspend/resume or any manual actions.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-30 Thread Salvatore Orlando
On 30 April 2014 17:28, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 30 April 2014 16:30, Oleg Bondarev obonda...@mirantis.com wrote:

 I've tried updating the interface while running an ssh session from guest to
 host and it was dropped :(


Please allow me to tell you I told you so! ;)


 The drop is not great, but acceptable if the instance can still be reached
 after the ARP tables refresh and the connection is
 re-established.

 If the drop can't be avoided, there is comfort in knowing that there is no
 need for an instance reboot, suspend/resume or any manual actions.


I agree with Jesse's point. I think it will be reasonable to say that the
migration will trigger a connection reset for all existing TCP connections.
However, what exactly are the changes we're making on the data plane? Are
you testing a VIF migration from a Linux bridge instance to an Open
vSwitch one?

Salvatore



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Oleg Bondarev
Yeah, I also saw in the docs that update-device is supported since version 0.8.0,
not sure why it didn't work in my setup.
I installed the latest libvirt 1.2.3 and now update-device works just fine and
I am able to move the instance tap device from one bridge to another with no
downtime and no reboot!
I'll try to investigate why it didn't work on 0.9.8 and what the minimal
libvirt version for this is.
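
Once the minimal version is known, a migration tool could check it up front; a small
sketch with the libvirt python bindings (the 1.2.3 threshold below is only what
happened to work for me, not a confirmed minimum):

#!/usr/bin/env python
# Sketch: report libvirt/hypervisor versions so the live VIF update path can be
# gated on them. The version threshold is a placeholder, not a verified minimum.
import libvirt

def as_tuple(v):
    # libvirt encodes versions as major*1000000 + minor*1000 + release.
    return (v // 1000000, (v // 1000) % 1000, v % 1000)

conn = libvirt.open('qemu:///system')
lib_ver = as_tuple(conn.getLibVersion())   # libvirt library version on the host
hv_ver = as_tuple(conn.getVersion())       # hypervisor (qemu/kvm) version
print("libvirt %d.%d.%d, hypervisor %d.%d.%d" % (lib_ver + hv_ver))
print("live update path enabled:", lib_ver >= (1, 2, 3))
conn.close()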

Thanks,
Oleg


On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery mest...@noironetworks.com wrote:

 According to this page [1], update-device is supported from libvirt
 0.8.0 onwards. So in theory, this should be working with the 0.9.8
 version you have. If you continue to hit issues here Oleg, I'd suggest
 sending an email to the libvirt mailing list with the specifics of the
 problem. I've found in the past there are lots of very helpful people
 on that mailing list.

 Thanks,
 Kyle

 [1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device

 On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  So here is the etherpad for the migration discussion:
  https://etherpad.openstack.org/p/novanet-neutron-migration
  I've also filed a design session on this:
  http://summit.openstack.org/cfp/details/374
 
  Currently I'm still struggling with instance vNic update, trying to move
 it
  from one bridge to another.
  Tried the following on ubuntu 12.04 with libvirt 0.9.8:
 
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
  virsh update-device shows success but nothing actually changes in the
  instance interface config.
  Going to try this with later libvirt version.
 
  Thanks,
  Oleg
 
 
 
  On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido rsblend...@suse.com
 
  wrote:
 
 
  Very interesting topic!
  +1 Salvatore
 
  It would be nice to have an etherpad to share the information and
 organize
  a plan. This way it would be easier for interested people  to join.
 
  Rossella
 
 
  On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
 
  It's great to see that there is activity on the launchpad blueprint as
  well.
  From what I heard Oleg should have already translated the various
  discussion into a list of functional requirements (or something like
 that).
 
  If that is correct, it might be a good idea to share them with relevant
  stakeholders (operators and developers), define an actionable plan for
 Juno,
  and then distribute tasks.
  It would be a shame if it turns out several contributors are working on
  this topic independently.
 
  Salvatore
 
 
  On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com
 wrote:
 
  On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:
 
  From previous requirements discussions,
 
 
  There's a track record of discussions on the whiteboard here:
  https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 Yeah, I also saw in docs that update-device is supported since 0.8.0
 version,
 not sure why it didn't work in my setup.
 I installed latest libvirt 1.2.3 and now update-device works just fine and I
 am able
 to move instance tap device from one bridge to another with no downtime and
 no reboot!
 I'll try to investigate why it didn't work on 0.9.8 and which is the minimal
 libvirt version for this.

Wow, cool! This is really good news. Thanks for driving this! By
chance did you notice if there was a drop in connectivity at all, or
if the guest detected the move at all?

Kyle

 Thanks,
 Oleg


 On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 According to this page [1], update-device is supported from libvirt
 0.8.0 onwards. So in theory, this should be working with your 0.9.8
 version you have. If you continue to hit issues here Oleg, I'd suggest
 sending an email to the libvirt mailing list with the specifics of the
 problem. I've found in the past there are lots of very helpful on that
 mailing list.

 Thanks,
 Kyle

 [1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device

 On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  So here is the etherpad for the migration discussion:
  https://etherpad.openstack.org/p/novanet-neutron-migration
  I've also filed a design session on this:
  http://summit.openstack.org/cfp/details/374
 
  Currently I'm still struggling with instance vNic update, trying to move
  it
  from one bridge to another.
  Tried the following on ubuntu 12.04 with libvirt 0.9.8:
 
  https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
  virsh update-device shows success but nothing actually changes in the
  instance interface config.
  Going to try this with later libvirt version.
 
  Thanks,
  Oleg
 
 
 
  On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
  rsblend...@suse.com
  wrote:
 
 
  Very interesting topic!
  +1 Salvatore
 
  It would be nice to have an etherpad to share the information and
  organize
  a plan. This way it would be easier for interested people  to join.
 
  Rossella
 
 
  On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
 
  It's great to see that there is activity on the launchpad blueprint as
  well.
  From what I heard Oleg should have already translated the various
  discussion into a list of functional requirements (or something like
  that).
 
  If that is correct, it might be a good idea to share them with relevant
  stakeholders (operators and developers), define an actionable plan for
  Juno,
  and then distribute tasks.
  It would be a shame if it turns out several contributors are working on
  this topic independently.
 
  Salvatore
 
 
  On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com
  wrote:
 
  On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:
 
  From previous requirements discussions,
 
 
  There's a track record of discussions on the whiteboard here:
  https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Oleg Bondarev
On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery mest...@noironetworks.com wrote:

 On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  Yeah, I also saw in docs that update-device is supported since 0.8.0
  version,
  not sure why it didn't work in my setup.
  I installed latest libvirt 1.2.3 and now update-device works just fine
 and I
  am able
  to move instance tap device from one bridge to another with no downtime
 and
  no reboot!
  I'll try to investigate why it didn't work on 0.9.8 and which is the
 minimal
  libvirt version for this.
 
 Wow, cool! This is really good news. Thanks for driving this! By
 chance did you notice if there was a drop in connectivity at all, or
 if the guest detected the move at all?


Didn't check it yet. What in your opinion would be the best way of testing
this?

Kyle

  Thanks,
  Oleg
 
 
  On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery mest...@noironetworks.com
 
  wrote:
 
  According to this page [1], update-device is supported from libvirt
  0.8.0 onwards. So in theory, this should be working with your 0.9.8
  version you have. If you continue to hit issues here Oleg, I'd suggest
  sending an email to the libvirt mailing list with the specifics of the
  problem. I've found in the past there are lots of very helpful on that
  mailing list.
 
  Thanks,
  Kyle
 
  [1]
 http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
 
  On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com
  wrote:
   So here is the etherpad for the migration discussion:
   https://etherpad.openstack.org/p/novanet-neutron-migration
   I've also filed a design session on this:
   http://summit.openstack.org/cfp/details/374
  
   Currently I'm still struggling with instance vNic update, trying to
 move
   it
   from one bridge to another.
   Tried the following on ubuntu 12.04 with libvirt 0.9.8:
  
  
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
   virsh update-device shows success but nothing actually changes in the
   instance interface config.
   Going to try this with later libvirt version.
  
   Thanks,
   Oleg
  
  
  
   On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
   rsblend...@suse.com
   wrote:
  
  
   Very interesting topic!
   +1 Salvatore
  
   It would be nice to have an etherpad to share the information and
   organize
   a plan. This way it would be easier for interested people  to join.
  
   Rossella
  
  
   On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
  
   It's great to see that there is activity on the launchpad blueprint
 as
   well.
   From what I heard Oleg should have already translated the various
   discussion into a list of functional requirements (or something like
   that).
  
   If that is correct, it might be a good idea to share them with
 relevant
   stakeholders (operators and developers), define an actionable plan
 for
   Juno,
   and then distribute tasks.
   It would be a shame if it turns out several contributors are working
 on
   this topic independently.
  
   Salvatore
  
  
   On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com
   wrote:
  
   On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com
 wrote:
  
   From previous requirements discussions,
  
  
   There's a track record of discussions on the whiteboard here:
  
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery mest...@noironetworks.com
 wrote:

 On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  Yeah, I also saw in docs that update-device is supported since 0.8.0
  version,
  not sure why it didn't work in my setup.
  I installed latest libvirt 1.2.3 and now update-device works just fine
  and I
  am able
  to move instance tap device from one bridge to another with no downtime
  and
  no reboot!
  I'll try to investigate why it didn't work on 0.9.8 and which is the
  minimal
  libvirt version for this.
 
 Wow, cool! This is really good news. Thanks for driving this! By
 chance did you notice if there was a drop in connectivity at all, or
 if the guest detected the move at all?


 Didn't check it yet. What in your opinion would be the best way of testing
 this?

The simplest way would be to have a ping running when you run
update-device and see if any packets are dropped. We can do more
thorough testing after that, but that would give us a good
approximation of connectivity while swapping the underlying device.
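
Concretely, a rough harness along these lines is what I had in mind (guest IP, domain
name and the interface XML file are placeholders; it just wraps the system ping and
the libvirt python bindings):

#!/usr/bin/env python
# Rough harness: ping the guest while its VIF definition is updated live, then
# print the packet-loss summary. All names below are placeholders.
import subprocess
import libvirt

GUEST_IP = "10.0.0.4"
DOMAIN = "instance-00000001"
NEW_IFACE_XML = open("new-iface.xml").read()   # interface definition to apply

# 60 ICMP probes, one per second, running in the background.
ping = subprocess.Popen(["ping", "-c", "60", GUEST_IP], stdout=subprocess.PIPE)

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName(DOMAIN)
dom.updateDeviceFlags(NEW_IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()

out, _ = ping.communicate()   # wait for ping to finish and grab its summary
for line in out.decode().splitlines():
    if "packet loss" in line:
        print(line)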

 Kyle

  Thanks,
  Oleg
 
 
  On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
  mest...@noironetworks.com
  wrote:
 
  According to this page [1], update-device is supported from libvirt
  0.8.0 onwards. So in theory, this should be working with your 0.9.8
  version you have. If you continue to hit issues here Oleg, I'd suggest
  sending an email to the libvirt mailing list with the specifics of the
  problem. I've found in the past there are lots of very helpful on that
  mailing list.
 
  Thanks,
  Kyle
 
  [1]
  http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
 
  On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com
  wrote:
   So here is the etherpad for the migration discussion:
   https://etherpad.openstack.org/p/novanet-neutron-migration
   I've also filed a design session on this:
   http://summit.openstack.org/cfp/details/374
  
   Currently I'm still struggling with instance vNic update, trying to
   move
   it
   from one bridge to another.
   Tried the following on ubuntu 12.04 with libvirt 0.9.8:
  
  
   https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
   virsh update-device shows success but nothing actually changes in the
   instance interface config.
   Going to try this with later libvirt version.
  
   Thanks,
   Oleg
  
  
  
   On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
   rsblend...@suse.com
   wrote:
  
  
   Very interesting topic!
   +1 Salvatore
  
   It would be nice to have an etherpad to share the information and
   organize
   a plan. This way it would be easier for interested people  to join.
  
   Rossella
  
  
   On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
  
   It's great to see that there is activity on the launchpad blueprint
   as
   well.
   From what I heard Oleg should have already translated the various
   discussion into a list of functional requirements (or something like
   that).
  
   If that is correct, it might be a good idea to share them with
   relevant
   stakeholders (operators and developers), define an actionable plan
   for
   Juno,
   and then distribute tasks.
   It would be a shame if it turns out several contributors are working
   on
   this topic independently.
  
   Salvatore
  
  
   On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com
   wrote:
  
   On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com
   wrote:
  
   From previous requirements discussions,
  
  
   There's a track record of discussions on the whiteboard here:
  
   https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-25 Thread Kyle Mestery
According to this page [1], update-device is supported from libvirt
0.8.0 onwards. So in theory, this should be working with the 0.9.8
version you have. If you continue to hit issues here Oleg, I'd suggest
sending an email to the libvirt mailing list with the specifics of the
problem. I've found in the past there are lots of very helpful people
on that mailing list.

Thanks,
Kyle

[1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device

On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 So here is the etherpad for the migration discussion:
 https://etherpad.openstack.org/p/novanet-neutron-migration
 I've also filed a design session on this:
 http://summit.openstack.org/cfp/details/374

 Currently I'm still struggling with instance vNic update, trying to move it
 from one bridge to another.
 Tried the following on ubuntu 12.04 with libvirt 0.9.8:
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
 virsh update-device shows success but nothing actually changes in the
 instance interface config.
 Going to try this with later libvirt version.

 Thanks,
 Oleg



 On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido rsblend...@suse.com
 wrote:


 Very interesting topic!
 +1 Salvatore

 It would be nice to have an etherpad to share the information and organize
 a plan. This way it would be easier for interested people  to join.

 Rossella


 On 04/23/2014 12:57 AM, Salvatore Orlando wrote:

 It's great to see that there is activity on the launchpad blueprint as
 well.
 From what I heard Oleg should have already translated the various
 discussion into a list of functional requirements (or something like that).

 If that is correct, it might be a good idea to share them with relevant
 stakeholders (operators and developers), define an actionable plan for Juno,
 and then distribute tasks.
 It would be a shame if it turns out several contributors are working on
 this topic independently.

 Salvatore


 On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


 There's a track record of discussions on the whiteboard here:
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-24 Thread Oleg Bondarev
So here is the etherpad for the migration discussion:
https://etherpad.openstack.org/p/novanet-neutron-migration
I've also filed a design session on this:
http://summit.openstack.org/cfp/details/374

Currently I'm still struggling with the instance vNIC update, trying to move it
from one bridge to another.
I tried the following on Ubuntu 12.04 with libvirt 0.9.8:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
virsh update-device reports success but nothing actually changes in the
instance interface config.
Going to try this with a later libvirt version.
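
One way to see whether the live definition really changed (as opposed to only the
persistent one) is to dump both and compare the <source> of each interface; a sketch,
with the domain name as a placeholder:

#!/usr/bin/env python
# Sketch: print which bridge/network each interface of the domain points at, in
# both the live and the persistent definition, to check update-device took effect.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')   # placeholder domain name

# Flags=0 returns the live definition of a running domain; VIR_DOMAIN_XML_INACTIVE
# returns the persistent (next boot) definition.
live = ET.fromstring(dom.XMLDesc(0))
persistent = ET.fromstring(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE))

for label, tree in (("live", live), ("persistent", persistent)):
    for iface in tree.findall("./devices/interface"):
        mac = iface.find("mac").get("address")
        src = iface.find("source")
        print(label, mac, "->", src.get("bridge") or src.get("network"))
conn.close()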

Thanks,
Oleg



On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido rsblend...@suse.com wrote:


 Very interesting topic!
 +1 Salvatore

 It would be nice to have an etherpad to share the information and organize
 a plan. This way it would be easier for interested people  to join.

 Rossella


 On 04/23/2014 12:57 AM, Salvatore Orlando wrote:

 It's great to see that there is activity on the launchpad blueprint as
 well.
 From what I heard Oleg should have already translated the various
 discussion into a list of functional requirements (or something like that).

  If that is correct, it might be a good idea to share them with relevant
 stakeholders (operators and developers), define an actionable plan for
 Juno, and then distribute tasks.
 It would be a shame if it turns out several contributors are working on
 this topic independently.

  Salvatore


 On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:

  On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


  There's a track record of discussions on the whiteboard here:
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-23 Thread Rossella Sblendido


Very interesting topic!
+1 Salvatore

It would be nice to have an etherpad to share the information and 
organize a plan. This way it would be easier for interested people to join.


Rossella

On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
It's great to see that there is activity on the launchpad blueprint as 
well.
From what I heard Oleg should have already translated the various 
discussion into a list of functional requirements (or something like 
that).


If that is correct, it might be a good idea to share them with 
relevant stakeholders (operators and developers), define an actionable 
plan for Juno, and then distribute tasks.
It would be a shame if it turns out several contributors are working 
on this topic independently.


Salvatore


On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:


On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

From previous requirements discussions,


There's a track record of discussions on the whiteboard here:
https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Salvatore Orlando
From previous requirements discussions, I recall that:
- A control plane outage is unavoidable (I think everybody agrees here)
- Data plane outages should be avoided at all costs; small L3 outages
deriving from the transition to the L3 agent from the network node might be
allowed.

However, an L2 data plane outage on the instance NIC, albeit small, would
probably still cause existing TCP connections to be terminated.
I'm not sure if this can be accepted; however, if there is no way to avoid
it, we should probably consider tolerating it.

It would be good to know what kind of modifications the NIC needs; perhaps
no data plane downtime is needed.
Regarding the libvirt version, I think it's ok to have no-downtime migrations
only for deployments running at least a certain version of libvirt.

Salvatore


On 21 April 2014 13:18, Akihiro Motoki mot...@da.jp.nec.com wrote:


 (2014/04/21 18:10), Oleg Bondarev wrote:


 On Fri, Apr 18, 2014 at 9:10 PM, Kyle Mestery 
 mest...@noironetworks.comwrote:

 On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  Hi all,
 
  While investigating possible options for Nova-network to Neutron
 migration
  I faced a couple of issues with libvirt.
  One of the key requirements for the migration is that instances should
 stay
  running and don't need restarting. In order to meet this requirement we
 need
  to either attach new nic to the instance or update existing one to plug
 it
  to the Neutron network.
 
  Thanks for looking into this Oleg! I just wanted to mention that if
 we're trying to plug a new NIC into the VM, this will likely require
 modifications in the guest. The new NIC will likely have a new PCI ID,
 MAC, etc., and thus the guest would have to switch to this. Therefore,
 I think it may be better to try and move the existing NIC from a nova
 network onto a neutron network.


  Yeah, I agree that modifying the existing NIC is the preferred way.


 Thanks for investigating ways of migrating from nova-network to neutron.
 I think we need to define the levels of the migration.
 We can't satisfy all requirements at the same time, so we need to
 determine/clarify
 some reasonable limitations on the migration.

 - datapath downtime
   - no downtime
   - a small period of downtime
   - rebooting an instnace
 - API and management plane downtime
 - Combination of the above

 I think modifying the existing NIC requires plugging and unplugging a device
 in some way (plug/unplug a network interface to the VM? move a tap device
 from nova-network to the neutron bridge?). It leads to a small downtime. On
 the other hand, adding a new interface requires the guest to deal with the
 network migration (though it can potentially provide no-downtime migration
 at the infra level).
 IMO a small downtime can be accepted in cloud use cases and it is a good
 starting point.

 Thanks,
 Akihiro




  So what I've discovered is that attaching a new network device is only
  applied
  on the instance after reboot although VIR_DOMAIN_AFFECT_LIVE flag is
 passed
  to
  the libvirt call attachDeviceFlags():
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
  Is that expected? Are there any other options to apply new nic without
  reboot?
 
  I also tried to update existing nic of an instance by using libvirt
  updateDeviceFlags() call,
  but it fails with the following:
  'this function is not supported by the connection driver: cannot modify
  network device configuration'
  libvirt API spec (http://libvirt.org/hvsupport.html) shows that 0.8.0
 as
  minimal
  qemu version for the virDomainUpdateDeviceFlags call, kvm --version on
 my
  setup shows
  'QEMU emulator version 1.0 (qemu-kvm-1.0)'
  Could someone please point what am I missing here?
 
  What does libvirtd -V show for the libvirt version? On my Fedora 20
 setup, I see the following:

 [kmestery@fedora-mac neutron]$ libvirtd -V
 libvirtd (libvirt) 1.1.3.4
 [kmestery@fedora-mac neutron]$


  On my Ubuntu 12.04 it shows:
   $ libvirtd --version
  libvirtd (libvirt) 0.9.8


 Thanks,
 Kyle

  Any help on the above is much appreciated!
 
  Thanks,
  Oleg
 
 
   ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Jesse Pretorius
On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


There's a track record of discussions on the whiteboard here:
https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-22 Thread Salvatore Orlando
It's great to see that there is activity on the launchpad blueprint as well.
From what I heard Oleg should have already translated the various
discussion into a list of functional requirements (or something like that).

If that is correct, it might be a good idea to share them with relevant
stakeholders (operators and developers), define an actionable plan for
Juno, and then distribute tasks.
It would be a shame if it turns out several contributors are working on
this topic independently.

Salvatore


On 22 April 2014 16:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:

 On 22 April 2014 14:58, Salvatore Orlando sorla...@nicira.com wrote:

 From previous requirements discussions,


 There's a track record of discussions on the whiteboard here:
 https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-21 Thread Oleg Bondarev
On Fri, Apr 18, 2014 at 9:10 PM, Kyle Mestery mest...@noironetworks.com wrote:

 On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com
 wrote:
  Hi all,
 
  While investigating possible options for Nova-network to Neutron
 migration
  I faced a couple of issues with libvirt.
  One of the key requirements for the migration is that instances should
 stay
  running and don't need restarting. In order to meet this requirement we
 need
  to either attach new nic to the instance or update existing one to plug
 it
  to the Neutron network.
 
 Thanks for looking into this Oleg! I just wanted to mention that if
 we're trying to plug a new NIC into the VM, this will likely require
 modifications in the guest. The new NIC will likely have a new PCI ID,
  MAC, etc., and thus the guest would have to switch to this. Therefore,
 I think it may be better to try and move the existing NIC from a nova
 network onto a neutron network.


Yeah, I agree that modifying the existing NIC is the preferred way.


  So what I've discovered is that attaching a new network device is only
  applied
  on the instance after reboot although VIR_DOMAIN_AFFECT_LIVE flag is
 passed
  to
  the libvirt call attachDeviceFlags():
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
  Is that expected? Are there any other options to apply new nic without
  reboot?
 
  I also tried to update existing nic of an instance by using libvirt
  updateDeviceFlags() call,
  but it fails with the following:
  'this function is not supported by the connection driver: cannot modify
  network device configuration'
  libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as the
  minimal qemu version for the virDomainUpdateDeviceFlags call; kvm --version
  on my setup shows 'QEMU emulator version 1.0 (qemu-kvm-1.0)'
  Could someone please point out what I am missing here?
 
 What does libvirtd -V show for the libvirt version? On my Fedora 20
 setup, I see the following:

 [kmestery@fedora-mac neutron]$ libvirtd -V
 libvirtd (libvirt) 1.1.3.4
 [kmestery@fedora-mac neutron]$


On my Ubuntu 12.04 it shows:
 $ libvirtd --version
 libvirtd (libvirt) 0.9.8
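
For what it's worth, a quick way to compare the versions reported by the
client library and by libvirtd is to query them through the python bindings
(just a diagnostic sketch; it only prints versions and does not by itself
tell whether live interface updates are supported on a given host):

# Diagnostic sketch: print the libvirt client library, libvirtd (daemon)
# and hypervisor versions as seen through the libvirt python bindings.
import libvirt

def fmt(v):
    # libvirt encodes versions as major * 1,000,000 + minor * 1,000 + release
    return '%d.%d.%d' % (v // 1000000, (v % 1000000) // 1000, v % 1000)

conn = libvirt.open('qemu:///system')
try:
    print('client library:', fmt(libvirt.getVersion()))
    print('libvirtd      :', fmt(conn.getLibVersion()))
    print('hypervisor    :', fmt(conn.getVersion()))
finally:
    conn.close()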


 Thanks,
 Kyle

  Any help on the above is much appreciated!
 
  Thanks,
  Oleg
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-21 Thread Akihiro Motoki

(2014/04/21 18:10), Oleg Bondarev wrote:

On Fri, Apr 18, 2014 at 9:10 PM, Kyle Mestery mest...@noironetworks.com wrote:
On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 Hi all,

 While investigating possible options for Nova-network to Neutron migration
 I faced a couple of issues with libvirt.
 One of the key requirements for the migration is that instances should stay
 running and don't need restarting. In order to meet this requirement we need
 to either attach new nic to the instance or update existing one to plug it
 to the Neutron network.

Thanks for looking into this Oleg! I just wanted to mention that if
we're trying to plug a new NIC into the VM, this will likely require
modifications in the guest. The new NIC will likely have a new PCI ID,
MAC, etc., and thus the guest would have to switch to this. Therefore,
I think it may be better to try and move the existing NIC from a nova
network onto a neutron network.

Yeah, I agree that modifying the existing NIC is the preferred way.

Thanks for investigating ways of migrating from nova-network to neutron.
I think we need to define the levels of the migration.
We can't satisfy all requirements at the same time, so we need to
determine/clarify some reasonable limitations on the migration.

- datapath downtime
  - no downtime
  - a small period of downtime
  - rebooting an instance
- API and management plane downtime
- Combination of the above

I think modifying the existing NIC requires plugging and unplugging a device
in some way (plug/unplug a network interface on the VM? move the tap device
from the nova-network bridge to the neutron bridge?). That leads to a small
downtime. On the other hand, adding a new interface requires the guest to
deal with the network migration itself (though it can potentially provide
no-downtime migration at the infra level).
IMO a small downtime is acceptable in cloud use cases and is a good starting
point.

Thanks,
Akihiro
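
A rough, illustrative sketch of the tap-device move mentioned above (all
names are made up, and it assumes both the nova-network bridge br100 and the
per-port neutron bridge qbrXXX are plain Linux bridges on the same host;
error handling and rollback are omitted):

# Hypothetical sketch: re-parent a running instance's tap device from the
# nova-network bridge onto the neutron Linux bridge. The guest keeps the
# same tap and MAC, so only ARP/forwarding tables need to re-converge.
import subprocess

TAP = 'vnet0'                   # example tap name, not from the thread
OLD_BRIDGE = 'br100'            # nova-network bridge
NEW_BRIDGE = 'qbr-xxxxxxxx-xx'  # neutron per-port bridge (hybrid plug)

subprocess.check_call(['brctl', 'delif', OLD_BRIDGE, TAP])
subprocess.check_call(['brctl', 'addif', NEW_BRIDGE, TAP])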



 So what I've discovered is that attaching a new network device is only
 applied
 on the instance after reboot although VIR_DOMAIN_AFFECT_LIVE flag is passed
 to
 the libvirt call attachDeviceFlags():
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
 Is that expected? Are there any other options to apply new nic without
 reboot?

 I also tried to update existing nic of an instance by using libvirt
 updateDeviceFlags() call,
 but it fails with the following:
 'this function is not supported by the connection driver: cannot modify
 network device configuration'
 libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as the
 minimal qemu version for the virDomainUpdateDeviceFlags call; kvm --version
 on my setup shows 'QEMU emulator version 1.0 (qemu-kvm-1.0)'
 Could someone please point out what I am missing here?

What does libvirtd -V show for the libvirt version? On my Fedora 20
setup, I see the following:

[kmestery@fedora-mac neutron]$ libvirtd -V
libvirtd (libvirt) 1.1.3.4
[kmestery@fedora-mac neutron]$

On my Ubuntu 12.04 it shows:
 $ libvirtd --version
 libvirtd (libvirt) 0.9.8


Thanks,
Kyle

 Any help on the above is much appreciated!

 Thanks,
 Oleg




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-18 Thread Oleg Bondarev
Hi all,

While investigating possible options for Nova-network to Neutron migration
I faced a couple of issues with libvirt.
One of the key requirements for the migration is that instances should stay
running and not need restarting. In order to meet this requirement we need
to either attach a new NIC to the instance or update the existing one to plug
it into the Neutron network.

So what I've discovered is that attaching a new network device is only
applied to the instance after a reboot, although the *VIR_DOMAIN_AFFECT_LIVE*
flag is passed to the libvirt call *attachDeviceFlags()*:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
Is that expected? Are there any other options to apply a new NIC without a
reboot?
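
To rule nova itself out, the same live attach can be tried directly against
libvirt; a minimal sketch (the domain name, bridge and MAC below are
placeholders, not taken from this setup):

# Sketch: hot-plug a NIC into a running domain with attachDeviceFlags,
# requesting both the live change and the persistent config change.
import libvirt

IFACE_XML = """
<interface type='bridge'>
  <mac address='fa:16:3e:00:00:01'/>
  <source bridge='qbr-xxxxxxxx-xx'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')
flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
dom.attachDeviceFlags(IFACE_XML, flags)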

I also tried to update the existing NIC of an instance using the libvirt
*updateDeviceFlags()* call, but it fails with the following:
*'this function is not supported by the connection driver: cannot modify
network device configuration'*
libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as the
minimal qemu version for the virDomainUpdateDeviceFlags call; kvm --version
on my setup shows '*QEMU emulator version 1.0 (qemu-kvm-1.0)*'
Could someone please point out what I am missing here?
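
For reference, a stripped-down version of that update call, driven directly
through the libvirt python bindings (domain name, MAC and bridge are
placeholders; the XML must describe the NIC that already exists in the
domain, typically matched by MAC address, with only the source bridge
changed):

# Sketch: retarget an existing NIC of a running domain onto another bridge
# with updateDeviceFlags. On older libvirt daemons this raises
# "cannot modify network device configuration".
import libvirt

NEW_IFACE_XML = """
<interface type='bridge'>
  <mac address='fa:16:3e:00:00:01'/>
  <source bridge='qbr-xxxxxxxx-xx'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')
try:
    dom.updateDeviceFlags(NEW_IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
except libvirt.libvirtError as exc:
    print('live update failed:', exc)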

Any help on the above is much appreciated!

Thanks,
Oleg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-18 Thread Kyle Mestery
On Fri, Apr 18, 2014 at 8:52 AM, Oleg Bondarev obonda...@mirantis.com wrote:
 Hi all,

 While investigating possible options for Nova-network to Neutron migration
 I faced a couple of issues with libvirt.
 One of the key requirements for the migration is that instances should stay
 running and don't need restarting. In order to meet this requirement we need
 to either attach new nic to the instance or update existing one to plug it
 to the Neutron network.

Thanks for looking into this Oleg! I just wanted to mention that if
we're trying to plug a new NIC into the VM, this will likely require
modifications in the guest. The new NIC will likely have a new PCI ID,
MAC, etc., and thus the guest would have to switch to this. Therefore,
I think it may be better to try and move the existing NIC from a nova
network onto a neutron network.

 So what I've discovered is that attaching a new network device is only
 applied
 on the instance after reboot although VIR_DOMAIN_AFFECT_LIVE flag is passed
 to
 the libvirt call attachDeviceFlags():
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1412
 Is that expected? Are there any other options to apply new nic without
 reboot?

 I also tried to update existing nic of an instance by using libvirt
 updateDeviceFlags() call,
 but it fails with the following:
 'this function is not supported by the connection driver: cannot modify
 network device configuration'
 libvirt API spec (http://libvirt.org/hvsupport.html) shows 0.8.0 as the
 minimal qemu version for the virDomainUpdateDeviceFlags call; kvm --version
 on my setup shows 'QEMU emulator version 1.0 (qemu-kvm-1.0)'
 Could someone please point out what I am missing here?

What does libvirtd -V show for the libvirt version? On my Fedora 20
setup, I see the following:

[kmestery@fedora-mac neutron]$ libvirtd -V
libvirtd (libvirt) 1.1.3.4
[kmestery@fedora-mac neutron]$

Thanks,
Kyle

 Any help on the above is much appreciated!

 Thanks,
 Oleg




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev