Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
I'm testing this in a lab, no load yet.

Sent from my iPhone

> On Aug 23, 2018, at 2:30 AM, Matthew Thode  wrote:
> 
>> On 18-08-22 23:04:57, Satish Patel wrote:
>> Matthew,
>> 
>> I haven't applied any patch yet, but I am noticing that in the cluster some
>> hosts migrate VMs super fast and some hosts migrate very slowly. Is this
>> known behavior?
>> 
>> On Wed, Aug 22, 2018 at 11:28 AM, Matthew Thode
>>  wrote:
>>> On 18-08-22 10:58:48, Satish Patel wrote:
 Matthew,
 
 I have two options, it looks like; correct me if I am wrong.
 
 1. Upgrade the minor release from 17.0.7-6-g9187bb1 to
 17.0.8-23-g0aff517 and upgrade the full OSA.
 
 2. Just do the override you suggested ("nova_git_install_branch:") in my
 /etc/openstack_deploy/user_variables.yml file, and run the playbooks.
 
 
 I think option [2] is safer since it touches just the specific component;
 also, am I correct about the override going in the
 /etc/openstack_deploy/user_variables.yml file?
 
 You mentioned "nova_git_install_branch:
 dee99b1ed03de4b6ded94f3cf6d2ea7214bca93b" but I believe it should be
 "a9c9285a5a68ab89a6543d143c364d90a01cd51c"; am I correct?
 
 
 
 On Wed, Aug 22, 2018 at 10:46 AM, Matthew Thode
  wrote:
> On 18-08-22 10:33:11, Satish Patel wrote:
>> Thanks Matthew,
>> 
>> Can I put that sha in my OSA at
>> playbooks/defaults/repo_packages/openstack_services.yml by hand and
>> run the playbooks [repo/nova]?
>> 
>> On Wed, Aug 22, 2018 at 10:24 AM, Matthew Thode
>>  wrote:
>>> On 18-08-22 08:35:09, Satish Patel wrote:
 Currently in stable/queens I am seeing this sha
 https://github.com/openstack/openstack-ansible/blob/stable/queens/ansible-role-requirements.yml#L112
 
 On Wed, Aug 22, 2018 at 2:02 AM, Matthew Thode
  wrote:
> On 18-08-22 01:57:17, Satish Patel wrote:
>> What do I need to upgrade; any specific component?
>> 
>> I have deployed openstack-ansible
>> 
>> Sent from my iPhone
>> 
 On Aug 22, 2018, at 1:06 AM, Matthew Thode 
  wrote:
 
 On 18-08-22 01:02:53, Satish Patel wrote:
 Matthew,
 
 Thanks for the reply. It looks like I don't have this patch:
 https://review.openstack.org/#/c/591761/
 
 So I have to patch the following 3 files manually?
 
 nova/tests/unit/virt/libvirt/test_driver.py
 nova/tests/unit/virt/test_virt_drivers.py
 nova/virt/libvirt/driver.py
 
 
 On Wed, Aug 22, 2018 at 12:42 AM, Matthew Thode
  wrote:
> On 18-08-22 00:27:08, Satish Patel wrote:
>> Folks,
>> 
>> I am running OpenStack Queens and the hypervisor is KVM. My live migration
>> is working fine, but somehow it is stuck at 8 Mb network speed and takes a
>> long time to migrate a 1G instance. I have a 10Gbps network, and I tried to
>> copy a 10G file between two compute nodes; it copied in 2 minutes, so I am
>> not seeing any network issue either.
>> 
>> It seems live_migration has some bandwidth limit. I have tried the
>> following option in nova.conf but it didn't work:
>> 
>> live_migration_bandwidth = 500
>> 
>> My nova.conf looks like the following:
>> 
>> live_migration_uri =
>> "qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
>> live_migration_tunnelled = True
>> live_migration_bandwidth = 500
>> hw_disk_discard = unmap
>> disk_cachemodes = network=writeback
>> 
> 
> Do you have this patch (and a couple of patches leading up to it)?
> https://bugs.launchpad.net/nova/+bug/1786346
> 
>>> 
>>> I don't know if that would cleanly apply (there are other patches that
>>> changed those functions within the last month and a half). It'd be best
>>> to upgrade and not do just one patch (which would be an untested
>>> process).
>>> 
> 
> The sha for nova has not been updated yet (the next update is 24-48 hours
> away, iirc). Once that's done you can use the head of stable/queens from
> OSA and run an inter-series upgrade (the minimal thing to do would be to
> run the repo-build and os-nova plays). I'm not sure when that sha bump
> will be tagged in a full release, if you would rather wait for that.
>>> 
>>> It's this sha that needs updating:
>>> https://github.com/openstack/openstack-ansible/blob/stable/queens/playbooks/defaults/repo_packages/op

[Openstack] Help with ipv6 self-service and ip6tables rule on mangle chain

2018-08-23 Thread Jorge Luiz Correa
Hi all

I'm deploying Queens on Ubuntu 18.04 with one controller, one network
controller and, for now, one compute node. I'm using ML2 with the linuxbridge
mechanism driver and a self-service type of network. This is a dual-stack
environment (v4 and v6).

IPv4 is working fine; NATs are OK and packets are flowing.

With IPv6 I'm having a problem. Packets from external networks to a project
network are being stopped by the qrouter namespace firewall. I have a project
with one network, one v4 subnet and one v6 subnet. Addressing is all OK;
virtual machines are getting their IPs and can ping the network gateway.

However, from external to the project network, using IPv6, the packets stop at
a DROP rule inside the qrouter namespace.

The ip6tables path is:

mangle prerouting -> neutron-l3-agent-PREROUTING -> neutron-l3-agent-scope
-> here we have a MARK rule:

pkts bytes target prot opt in             out source destination
   3   296 MARK   all  qr-7f2944e7-cc *   ::/0   ::/0  MARK xset 0x400/0x

The qr interface is the internal network interface of the project (the subnet
gateway). So, packets from this interface are marked.

But the return path is the problem: the packets don't come back. I have rules
on the next-hop firewall, and the packets arrive on the external bridge
(network node). But when they arrive on the external interface of the qrouter
namespace, they are filtered.

Inside qrouter namespace this is the rule:

ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t
mangle -L -n -v

...
Chain neutron-l3-agent-scope (1 references)
 pkts bytes target prot opt in  out            source destination
    0     0 DROP   all  *   qr-7f2944e7-cc ::/0   ::/0  mark match ! 0x400/0x
...

If I create the following rule everything works great:

ip netns exec qrouter-5689783d-52c0-4d2f-bef5-99b111f8ef5f ip6tables -t
mangle -I neutron-l3-agent-scope -i qg-b6757bfe-c1 -j MARK --set-xmark
0x400/0x

where qg is the external interface of the virtual router. So, if I mark packets
from the external interface in mangle, they are not filtered.

Is this normal? Do I have to add a rule manually to do that?

How to use the "external_ingress_mark" option on l3-agent.ini ? Can I use
it to mark packets using a configuration parameter instead of manually
inserted ip6tables rule?

Thanks a lot!

- JLC
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
Matt,

I am going to override the following in the user_variables.yml file; in that
case, do I need to run the ./bootstrap-ansible.sh script?

## Nova service
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c #
HEAD of "stable/queens" as of 06.08.2018
nova_git_project_group: nova_all
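
For reference, my plan after the override would be to run the repo-build and
os-nova plays you suggested, something like this (a sketch, not a verified
procedure; the path assumes a standard OSA checkout):

# cd /opt/openstack-ansible/playbooks
# openstack-ansible repo-build.yml
# openstack-ansible os-nova-install.yml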




Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
Matt,

I've added "nova_git_install_branch:
a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and
run repo-build.yml playbook but it didn't change anything

I am inside the repo container and still its showing old timestamp on
all nova file and i check all file they seems didn't change

at this path in repo container /var/www/repo/openstackgit/nova/nova

repo-build.yml should update that dir right?
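
For what it's worth, checking the clone's HEAD directly (a sketch; file
timestamps in a git checkout are not a reliable signal anyway):

# git -C /var/www/repo/openstackgit/nova log -1 --format='%H %cd'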

Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
Looks like it needs all 3 lines in the user_variables.yml file... after
putting in all 3 lines, it works!!

## Nova service
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c #
HEAD of "stable/queens" as of 06.08.2018
nova_git_project_group: nova_all

Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
I have upgraded my nova and all the nova components got upgraded, but my
live_migration is still running at 8 Mbps speed... what else is wrong here?

I am using CentOS 7.5
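
One check I can run while a migration is in flight (a sketch; the domain name
is a placeholder) is libvirt's own job statistics on the source node:

# virsh domjobinfo instance-0000000a

If the memory bandwidth it reports sits near 8 MiB/s, the cap is at the
libvirt/nova level rather than the network.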


[Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-23 Thread Chris Martin
I back up my volumes daily, using incremental backups to minimize
network traffic and storage consumption. I want to periodically remove
old backups, and during this pruning operation, avoid entering a state
where a volume has no recent backups. Ceph RBD appears to support this
workflow, but unfortunately, Cinder does not. I can only delete the
*latest* backup of a given volume, and this precludes any reasonable
way to prune backups. Here, I'll show you.

Let's make three backups of the same volume:
```
openstack volume backup create --name backup-1 --force volume-foo
openstack volume backup create --name backup-2 --force volume-foo
openstack volume backup create --name backup-3 --force volume-foo
```

Cinder reports the following via `volume backup show`:
- backup-1 is not an incremental backup, but backup-2 and backup-3 are
(`is_incremental`).
- All but the latest backup have dependent backups (`has_dependent_backups`).

We take a backup every day, and after a week we're on backup-7. We
want to start deleting older backups so that we don't keep
accumulating backups forever! What happens when we try?

```
# openstack volume backup delete backup-1
Failed to delete backup with name or ID 'backup-1': Invalid backup:
Incremental backups exist for this backup. (HTTP 400)
```

We can't delete backup-1 because Cinder considers it a "base" backup
which `has_dependent_backups`. What about backup-2? Same story. Adding
the `--force` flag just gives a slightly different error message. The
*only* backup that Cinder will delete is backup-7 -- the very latest
one. This means that if we want to remove the oldest backups of a
volume, *we must first remove all newer backups of the same volume*,
i.e. delete literally all of our backups.

Also, we cannot force creation of another *full* (non-incremental)
backup in order to free all of the earlier backups for removal.
(Omitting the `--incremental` flag has no effect; you still get an
incremental backup.)

Can we hope for better? Let's reach behind Cinder to the Ceph backend.
Volume backups are represented as a "base" RBD image with a snapshot
for each incremental backup:

```
# rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base
SNAPID NAME                                                            SIZE     TIMESTAMP
   577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 10240 MB Thu Aug 23 10:57:48 2018
   578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 10240 MB Thu Aug 23 11:05:43 2018
   579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 10240 MB Thu Aug 23 11:06:47 2018
   580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 10240 MB Thu Aug 23 11:22:23 2018
   581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 10240 MB Thu Aug 23 11:22:47 2018
   582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 10240 MB Thu Aug 23 11:23:04 2018
   583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 10240 MB Thu Aug 23 11:23:31 2018
   584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 10240 MB Thu Aug 23 12:32:43 2018
```

It seems that each snapshot stands alone and doesn't depend on others.
Ceph lets me delete the older snapshots.

```
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43
Removing snap: 100% complete...done.
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71
Removing snap: 100% complete...done.
```

Now that we nuked backup-1 and backup-4, can we still restore from
backup-7 and launch an instance with it?

```
openstack volume create --size 10 --bootable volume-foo-restored
openstack volume backup restore backup-7 volume-foo-restored
openstack server create --volume volume-foo-restored --flavor medium1
instance-restored-from-backup-7
```

Yes! We can SSH to the instance and it appears intact.

Perhaps each snapshot in Ceph stores a complete diff from the base RBD
image (rather than each successive snapshot depending on the last). If
this is true, then Cinder is unnecessarily protective of older
backups. Cinder represents these as "with dependents" and doesn't let
us touch them, even though Ceph will let us delete older RBD
snapshots, apparently without disrupting newer snapshots of the same
volume. If we could remove this limitation, Cinder backups would be
significantly more useful for us. We mostly host servers with
non-cloud-native workloads (IaaS for research scientists). For these,
full-disk backups at the infrastructure level are an important
supplement to file-level or application-level backups.

It would be great if someone else could confirm or disprove what I'm
seeing here. I'd also love to hear from anyone else using Cinder
backups this way.

Regards,

Chris Martin at CyVerse


Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
I have updated this bug here; something is wrong:
https://bugs.launchpad.net/nova/+bug/1786346

After the nova upgrade I compared those 3 files against
https://review.openstack.org/#/c/591761/ and I am not seeing any
changes here, so it looks like this is not a complete patch.

Are you sure they pushed these changes to the nova repo?
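
One way I can verify (a sketch, using the SHA from earlier in the thread) is
to ask git whether that commit is an ancestor of what the repo container has
checked out:

# git -C /var/www/repo/openstackgit/nova merge-base --is-ancestor a9c9285a5a68ab89a6543d143c364d90a01cd51c HEAD && echo present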

Re: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-23 Thread David Medberry
Hi Chris,

Unless I overlooked something, I don't see Cinder or Ceph versions posted.

Feel free to just post the codenames but give us some inkling.


Re: [Openstack] [cinder] Pruning Old Volume Backups with Ceph Backend

2018-08-23 Thread Chris Martin
Apologies -- I'm running the Pike release of Cinder and the Luminous release
of Ceph, deployed with OpenStack-Ansible and Ceph-Ansible respectively.


[Openstack] The problem of how to update resouce allocation ratio dynamically.

2018-08-23 Thread 余婷婷
Hi:
   Sorry for bothering everyone. I have updated my OpenStack to Queens and now
use the nova-placement-api to provide resources.
   When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to update the
MEMORY_MB allocation_ratio, it succeeds. But after a few minutes it reverts to
the old value automatically. I then found that nova-compute automatically
reports the value from the compute_node record, and the allocation_ratio of
the compute_node comes from nova.conf. So does that mean we can't update the
allocation_ratio until we update nova.conf? I would like to update the
allocation_ratio dynamically rather than by editing nova.conf, but I don't
know how to update the resource allocation ratio dynamically.
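
For reference, this is roughly the PUT I am doing (a sketch; the endpoint
variables, token handling and numbers are placeholders, and the body must
carry the provider's current generation or placement rejects the update):

curl -X PUT "$PLACEMENT/resource_providers/$RP_UUID/inventories/MEMORY_MB" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"resource_provider_generation": 5, "total": 65536, "allocation_ratio": 1.5}'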



Re: [Openstack] live_migration only using 8 Mb speed

2018-08-23 Thread Satish Patel
Forgive me; by mistake I grabbed the wrong commit, and that was the reason I
didn't see any changes after applying the patch.

It works after applying the correct version :) Thanks

[Openstack] live migration dedicated network issue

2018-08-23 Thread Satish Patel
I am trying to set up a dedicated network for live migration, and for that I
did the following in nova.conf:

My dedicated network is 172.29.0.0/24

live_migration_uri =
"qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
live_migration_tunnelled = False
live_migration_inbound_addr = "172.29.0.25"

When I try to migrate a VM I get the following error, even though I am able
to ping the remote machine and also ssh to it. Why am I getting this error?

2018-08-24 01:07:55.608 61304 ERROR nova.virt.libvirt.driver
[req-26561823-4ae0-43ca-b6fe-5dd9609e796b
eebe97b4bc714b8f814af8a44d08c2a4 2927a06cf30f4f7e938fdda2cc05aed2 -
default default] [instance: a61e7e6f-f819-4ddf-9314-8a142515f3d6] Live
Migration failure: unable to connect to server at '172.29.0.24:49152':
No route to host: libvirtError: unable to connect to server at
'172.29.0.24:49152': No route to host
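
A basic reachability check from the source node (a sketch; nc and the exact
port are illustrative -- qemu picks a migration port from libvirt's
migration_port_min/max range, which starts at 49152 by default):

# ping -c 1 172.29.0.24
# nc -zv 172.29.0.24 49152

ssh working while this port is blocked would point at iptables/firewalld on
the destination compute node rather than routing.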
