[Yahoo-eng-team] [Bug 1930866] [NEW] locked instance can be rendered broken by deleting port

2021-06-04 Thread George Shuklin
Public bug reported:

'server lock' is intended to protect an instance from simple mistakes
(like removing the wrong instance, or shutting it down by accident). It
prevents shutdown, destruction and port detachment.

But if a port is removed via `openstack port delete`, it is silently
detached from the locked instance, effectively breaking it.

Steps to reproduce:
```
openstack server create foo
openstack server lock foo
openstack port delete {id of the port of the instance}
```

Expected behavior: an error message refusing to delete a port that is
used by a locked instance.

Actual behavior: the port is removed, leaving the locked instance
without network connectivity.


I was able to reproduce it on nova 17.0.12, but newer versions may be affected 
too.
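
Until nova and neutron coordinate on this, a defensive wrapper on the
operator side can refuse such deletions. A minimal bash sketch, assuming
the `openstack` CLI and a nova microversion that exposes the server's
'locked' field (illustration only, not part of nova/neutron):

```
#!/bin/bash
# safe-port-delete.sh: hypothetical guard that refuses to delete a port
# whose owning server is locked.
set -eu
port_id="$1"

# Find the server (if any) that owns this port.
server_id=$(openstack port show "$port_id" -f value -c device_id)

if [ -n "$server_id" ]; then
    # 'locked' appears in 'openstack server show' on new-enough microversions.
    locked=$(openstack server show "$server_id" -f value -c locked || echo unknown)
    if [ "$locked" = "True" ]; then
        echo "Refusing: port $port_id belongs to locked server $server_id" >&2
        exit 1
    fi
fi

openstack port delete "$port_id"
```

Usage: ./safe-port-delete.sh {id of the port of the instance}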

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  'server lock' is intended to protect an instance from simple mistakes (like
  removing the wrong instance, or shutting it down by accident). It prevents
  shutdown, destruction and port detachment.
  
  But if a port is removed via `openstack port delete`, it is silently
  detached from the locked instance, effectively breaking it.
  
  Steps to reproduce:
  ```
  openstack server create foo
  openstack server lock foo
  openstack port delete {id of the port of the instance}
  ```
  
- I was able to reproduce it on nova 17.0.12, but newer versions may be
- affected too.
+ Expected behavior: an error message refusing to delete a port that is
+ used by a locked instance.
+ 
+ Actual behavior: the port is removed, leaving the locked instance
+ without network connectivity.
+ 
+ 
+ I was able to reproduce it on nova 17.0.12, but newer versions may be 
affected too.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1930866

Title:
  locked instance can be rendered broken by deleting port

Status in OpenStack Compute (nova):
  New

Bug description:
  'server lock' is intended to protect an instance from simple mistakes
  (like removing the wrong instance, or shutting it down by accident). It
  prevents shutdown, destruction and port detachment.

  But if a port is removed via `openstack port delete`, it is silently
  detached from the locked instance, effectively breaking it.

  Steps to reproduce:
  ```
  openstack server create foo
  openstack server lock foo
  openstack port delete {id of the port of the instance}
  ```

  Expected behavior: an error message refusing to delete a port that is
  used by a locked instance.

  Actual behavior: the port is removed, leaving the locked instance
  without network connectivity.

  
  I was able to reproduce it on nova 17.0.12, but newer versions may be 
affected too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1930866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663225] Re: ironic does not clean or shutdown nodes if nova-compute is down at the moment of 'nova delete'

2017-07-10 Thread George Shuklin
This problem still exists in Ironic, regardless of the expiration bot's
attempts to sweep it under the rug.

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663225

Title:
  ironic does not clean or shutdown nodes if nova-compute is down at the
  moment of 'nova delete'

Status in OpenStack Compute (nova):
  New

Bug description:
  Affected configuration: Ironic installation with Ironic driver for
  nova.

  If the nova-compute service is down at the moment 'nova delete' is
  executed for an instance, nova marks the instance as 'deleted' even
  though the node continues to run.

  Steps to reproduce:
  1. Prepare ironic/nova
  2. Start an instance (nova boot/openstack server create)
  3. Wait until the instance reaches the 'ACTIVE' state.
  4. Stop nova-compute
  5. Wait until it becomes 'down' in 'nova service-list'
  6. Execute 'nova delete' for the instance.
  7. Start the nova-compute service

  Expected result:
  - The instance sits in the 'deleting' state until nova-compute comes back.
  - The node switches to 'cleaning/available' as soon as nova-compute comes back.
  - The tenant instance (baremetal server) stops operating as soon as
  nova-compute is up.

  Actual result:
  - The instance is deleted almost instantly, regardless of nova-compute status.
  - The node keeps the 'active' state with the 'Instance UUID' field still set.
  - The tenant instance (baremetal server) continues to run after nova-compute
  comes up, until "running_deleted_instance_action" kicks in.

  I believe this is incorrect behavior, because it allows tenants to
  keep using services even though nova reports that no instances are
  allocated to the tenant.

  Affected version: newton.

  P.S. Normally nova (with the libvirt/kvm driver) keeps the instance in
  the 'deleting' state until nova-compute comes back, and removes it
  from the server (from libvirt). Only after that does nova mark the
  instance as deleted in the database. The Ironic driver should do the same.
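
  In the meantime, a hedged bash sketch for spotting such orphaned nodes
  (assuming python-ironicclient's OSC plugin; column names may vary by
  release):

```
#!/bin/bash
# Hypothetical check: list ironic nodes still 'active' whose nova instance
# has already been deleted.
set -u
openstack baremetal node list --provision-state active \
    -f value -c UUID -c "Instance UUID" |
while read -r node_uuid instance_uuid; do
    if ! openstack server show "$instance_uuid" >/dev/null 2>&1; then
        echo "orphaned node: $node_uuid (deleted instance $instance_uuid)"
    fi
done
```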

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663225/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685237] Re: port security does not block router advertisements for instances

2017-06-24 Thread George Shuklin
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685237

Title:
  port security does not block router advertisements for instances

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  Affected version: mitaka

  Issue: If port security is enabled, IPv6 router advertisements may
  still be sent by any instance.

  Network configuration: vlan, security groups disabled, port security
  enabled.

  subnet:
  {
    "description": "",
    "enable_dhcp": true,
    "network_id": "b71b7cc7-3534-481b-bb67-a473a8e083cc",
    "tenant_id": "4e632076f7004f908c8da67345a7592e",
    "created_at": "2017-04-21T12:39:13",
    "dns_nameservers": "",
    "updated_at": "2017-04-21T12:39:13",
    "ipv6_ra_mode": "",
    "allocation_pools": "{\"start\": \"2a00::3:101::2\", \"end\": 
\"2a00::3:101::::\"}",
    "gateway_ip": "2a00::3:101::1",
    "ipv6_address_mode": "slaac",
    "ip_version": 6,
    "host_routes": "",
    "cidr": "2a00::3:101::/64",
    "id": "789d4f41-7867-4b17-9f7b-220c1e689b0b",
    "subnetpool_id": "",
    "name": ""
  }

  When an instance is configured by a (malicious) user to send router
  advertisements (as if it were a router), those RAs may disrupt
  networking.

  tcpdump from physical interface of compute node:
  tcpdump -ni eth4 ip6
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
  14:16:47.707480 IP6 fe80::52eb:1aff:fe77:de4f > ff02::1: ICMP6, router 
advertisement, length 64
  14:16:48.709429 IP6 fe80::f816:3eff:fe69:6644 > ff02::1: ICMP6, router 
advertisement, length 56

  The first line is a valid RA from the router; the second line (:6644)
  is from an instance and should have been blocked by port security.

  On a victim machine (same segment) routing table looks like this:

  ip -6 route

  default via fe80::52eb:1aff:fe77:de4f dev ens3  proto ra  metric 1024  
expires 1795sec hoplimit 64 pref medium
  default via fe80::f816:3eff:fe69:6644 dev ens3  proto ra  metric 1024  
expires 1796sec hoplimit 64 pref medium

  The last line is the result of network hijacking by the malicious
  instance, and shouldn't happen.

  I'm not sure if this is a security issue or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1685237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699495] [NEW] security groups allows localhost (127.0.0.0/8) to pass

2017-06-21 Thread George Shuklin
Public bug reported:

Host-local IP addresses should never appear as the source IP of incoming
packets. No exceptions.

The current implementation of security groups, when a user allows a wide
range of source IP addresses, also lets 127.0.0.0/8 through.

Steps to reproduce:
1. Create a security group rule which allows traffic from 0.0.0.0/0
2. Send spoofed traffic with source 127.0.0.1 to the instance (hping3 -a 
127.0.0.1 target_ip)

Expected behavior: no spoofed traffic on the instance interface.
Actual behavior: traffic with source=127.0.0.1 on the instance interface.
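
For illustration, this is the kind of anti-spoofing rule the
security-group implementation could install on the hypervisor. A hedged,
generic sketch; neutron's real rules live in per-port chains with
different names:

```
#!/bin/bash
# Hypothetical rule: drop forwarded packets claiming a loopback source.
iptables -I FORWARD 1 -s 127.0.0.0/8 ! -i lo -j DROP

# Re-run the spoof from the reproduction steps to verify it is now dropped:
# hping3 -a 127.0.0.1 target_ip
```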

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699495

Title:
  security groups allows localhost (127.0.0.0/8) to pass

Status in neutron:
  New

Bug description:
  Host local IP addresses shouldn't be in source_ip for incoming
  packets. No exceptions.

  Current implementation of security groups, when user allow a wide
  range of IP addresses to pass, allow to pass 127.0.0.0/8.

  Steps to reproduce:
  1. Create rule in security groups which allows from 0.0.0.0/0
  2. send spoofed traffic with source 127.0.0.1 to instance (hping3 -a 
127.0.0.1 target_ip)

  Expected behavior: no malformed traffic on instance interface.
  Actual behavior: Traffic with source=127.0.0.1 on instance interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685237] [NEW] port security does not block router advertisements for instances

2017-04-21 Thread George Shuklin
Public bug reported:

Affected version: mitaka

Issue: If port security is enabled, IPv6 router advertisements may still
be sent by any instance.

Network configuration: vlan, security groups disabled, port security
enabled.

subnet:
{
  "description": "",
  "enable_dhcp": true,
  "network_id": "b71b7cc7-3534-481b-bb67-a473a8e083cc",
  "tenant_id": "4e632076f7004f908c8da67345a7592e",
  "created_at": "2017-04-21T12:39:13",
  "dns_nameservers": "",
  "updated_at": "2017-04-21T12:39:13",
  "ipv6_ra_mode": "",
  "allocation_pools": "{\"start\": \"2a00::3:101::2\", \"end\": 
\"2a00::3:101::::\"}",
  "gateway_ip": "2a00::3:101::1",
  "ipv6_address_mode": "slaac",
  "ip_version": 6,
  "host_routes": "",
  "cidr": "2a00::3:101::/64",
  "id": "789d4f41-7867-4b17-9f7b-220c1e689b0b",
  "subnetpool_id": "",
  "name": ""
}

When an instance is configured by a (malicious) user to send router
advertisements (as if it were a router), those RAs may disrupt
networking.

tcpdump from physical interface of compute node:
tcpdump -ni eth4 ip6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
14:16:47.707480 IP6 fe80::52eb:1aff:fe77:de4f > ff02::1: ICMP6, router 
advertisement, length 64
14:16:48.709429 IP6 fe80::f816:3eff:fe69:6644 > ff02::1: ICMP6, router 
advertisement, length 56

The first line is a valid RA from the router; the second line (:6644) is
from an instance and should have been blocked by port security.

On a victim machine (same segment) routing table looks like this:

ip -6 route

default via fe80::52eb:1aff:fe77:de4f dev ens3  proto ra  metric 1024  expires 
1795sec hoplimit 64 pref medium
default via fe80::f816:3eff:fe69:6644 dev ens3  proto ra  metric 1024  expires 
1796sec hoplimit 64 pref medium

The last line is the result of network hijacking by the malicious
instance, and shouldn't happen.

I'm not sure if this is a security issue or not.
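
For reference, the filtering port security is expected to apply is
roughly equivalent to the following rule on the instance's tap device.
A hedged, generic sketch; neutron's real per-port chains are named
differently, and the tap name below is a placeholder:

```
#!/bin/bash
# Hypothetical sketch: drop router advertisements originating FROM an
# instance port (ICMPv6 type 134 = router advertisement).
TAP=tapXXXXXXXX-XX   # placeholder for the instance's tap device
ip6tables -A FORWARD -m physdev --physdev-in "$TAP" \
    -p icmpv6 --icmpv6-type router-advertisement -j DROP
```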

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685237

Title:
  port security does not block router advertisements for instances

Status in neutron:
  New

Bug description:
  Affected version: mitaka

  Issue: If port security is enabled, IPv6 router advertisements may
  still be sent by any instance.

  Network configuration: vlan, security groups disabled, port security
  enabled.

  subnet:
  {
    "description": "",
    "enable_dhcp": true,
    "network_id": "b71b7cc7-3534-481b-bb67-a473a8e083cc",
    "tenant_id": "4e632076f7004f908c8da67345a7592e",
    "created_at": "2017-04-21T12:39:13",
    "dns_nameservers": "",
    "updated_at": "2017-04-21T12:39:13",
    "ipv6_ra_mode": "",
    "allocation_pools": "{\"start\": \"2a00::3:101::2\", \"end\": 
\"2a00::3:101::::\"}",
    "gateway_ip": "2a00::3:101::1",
    "ipv6_address_mode": "slaac",
    "ip_version": 6,
    "host_routes": "",
    "cidr": "2a00::3:101::/64",
    "id": "789d4f41-7867-4b17-9f7b-220c1e689b0b",
    "subnetpool_id": "",
    "name": ""
  }

  When an instance is configured by a (malicious) user to send router
  advertisements (as if it were a router), those RAs may disrupt
  networking.

  tcpdump from physical interface of compute node:
  tcpdump -ni eth4 ip6
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on eth4, link-type EN10MB (Ethernet), capture size 262144 bytes
  14:16:47.707480 IP6 fe80::52eb:1aff:fe77:de4f > ff02::1: ICMP6, router 
advertisement, length 64
  14:16:48.709429 IP6 fe80::f816:3eff:fe69:6644 > ff02::1: ICMP6, router 
advertisement, length 56

  The first line is a valid RA from the router; the second line (:6644)
  is from an instance and should have been blocked by port security.

  On a victim machine (same segment) routing table looks like this:

  ip -6 route

  default via fe80::52eb:1aff:fe77:de4f dev ens3  proto ra  metric 1024  
expires 1795sec hoplimit 64 pref medium
  default via fe80::f816:3eff:fe69:6644 dev ens3  proto ra  metric 1024  
expires 1796sec hoplimit 64 pref medium

  The last line is the result of network hijacking by the malicious
  instance, and shouldn't happen.

  I'm not sure if this is a security issue or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1685237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673818] [NEW] Misleading requirements for 'unpartitioned disks' for ConfigDrive in documentation

2017-03-17 Thread George Shuklin
Public bug reported:

Current documentation states that:
http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2

... a config drive: ...Must be a un-partitioned block device (/dev/vdb,
not /dev/vdb1)...

This is not correct.

1. Cloud-init actually works with a ConfigDrive on a partition (e.g. /dev/sda1).
2. Ironic uses a partition at the end of the disk to write metadata, and it's 
absurd for baremetal provisioning to dedicate a whole disk (an actual SATA, SSD, 
SAS/FC drive) just to tiny metadata.
3. According to @smoser at #cloud-init IRC, " i'm pretty sure the doc is just 
wrong, ... i'm pretty sure reading current code that if the filesystem has a 
label of 'config-2', then it will work".

I think this part of the documentation should be rewritten to avoid
confusion with the Ironic workflow for ConfigDrive.
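
A quick way to confirm the behavior smoser describes, i.e. that
detection keys off the filesystem label rather than the whole-disk
layout (hedged; cloud-init internals may differ between versions, and
the label may be upper-case on some images):

```
#!/bin/bash
# Find any block device (whole disk OR partition) carrying the config-2 label.
blkid -t LABEL=config-2 -o device

# Mount the first match read-only and look for the OpenStack metadata tree.
dev=$(blkid -t LABEL=config-2 -o device | head -n1)
[ -n "$dev" ] && mount -o ro "$dev" /mnt && ls /mnt/openstack/latest/
```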

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

- 
  Current documentation states that:
  
http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2
  
  ... a config drive: ...Must be a un-partitioned block device (/dev/vdb,
  not /dev/vdb1)...
  
  This is not correct.
  
- 1. Cloud-init actually, works with ConfigDrive as partition.
+ 1. Cloud-init actually works with a ConfigDrive on a partition (e.g. /dev/sda1).
  2. Ironic uses a partition at the end of the disk to write metadata, and it's 
absurd for baremetal provisioning to dedicate a whole disk (an actual SATA, SSD, 
SAS/FC drive) just to tiny metadata.
  3. According to @smoser at #cloud-init IRC, " i'm pretty sure the doc is just 
wrong, ... i'm pretty sure reading current code that if the filesystem has a 
label of 'config-2', then it will work".
  
  I think this part of the documentation should be rewritten to avoid
  confusion with the Ironic workflow for ConfigDrive.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1673818

Title:
  Misleading requirements for 'unpartitioned disks' for ConfigDrive in
  documentation

Status in cloud-init:
  New

Bug description:
  Current documentation states that:
  
http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2

  ... a config drive: ...Must be a un-partitioned block device
  (/dev/vdb, not /dev/vdb1)...

  This is not correct.

  1. Cloud-init actually works with a ConfigDrive on a partition (e.g. /dev/sda1).
  2. Ironic uses a partition at the end of the disk to write metadata, and it's 
absurd for baremetal provisioning to dedicate a whole disk (an actual SATA, SSD, 
SAS/FC drive) just to tiny metadata.
  3. According to @smoser at #cloud-init IRC, " i'm pretty sure the doc is just 
wrong, ... i'm pretty sure reading current code that if the filesystem has a 
label of 'config-2', then it will work".

  I think this part of the documentation should be rewritten to avoid
  confusion with the Ironic workflow for ConfigDrive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1673818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672433] [NEW] dhcp-agent should send a grace ARP after assigning IP address in dhcp namespace

2017-03-13 Thread George Shuklin
Public bug reported:

Normally DHCP agents do not need to provide routable services. There is
one exception: monitoring. Checking DHCP agent availability by sending
ping requests is very easy and fits well with existing monitoring
frameworks. Besides checking the availability of the DHCP agent itself,
that check verifies network connectivity between the DHCP agent and the
network equipment.

There is a specific scenario in which that check gives false reports.

Scenario:
1. Boot an instance with a given IP, and make sure the instance is up (replies 
to pings).
2. Delete the instance.
3. Add a DHCP agent to the network where the IP (from step 1) is allocated, in 
such a way that it takes over that IP.

Expected behavior: the DHCP agent answers pings.
Actual behavior: the DHCP agent does not reply to pings for up to 4 hours, then 
spontaneously starts replying.

Reason: the instance (from step 1) populated the ARP table on the
router. When the instance was removed and the DHCP agent started
listening on that IP, it didn't send a gratuitous ARP. The normal DHCP
workflow does not require the agent to send any traffic through the
router, so there is no reason for the router to update its ARP entry. As
long as the router keeps the old (invalid) entry pointing to the old
instance, the DHCP agent can't reply to the pings, because every
incoming request arrives with the wrong destination MAC address.

Proposal: the DHCP agent should either:

1. Send some kind of network packet to the network gateway (e.g. a ping 
request).
2. Set arp_notify on the network interface it uses (e.g. 
net.ipv4.conf.tap22dad33f-d7.arp_notify=1), and configure the network address 
_BEFORE_ bringing the interface up; if the address is configured after the 
interface is brought up, no gratuitous ARP is sent (see the sketch below).
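
A hedged bash sketch of option 2. The tap name is the example from this
report and the address is a placeholder; the agent would do the
equivalent through its own interface driver:

```
#!/bin/bash
# Hypothetical sketch: make the kernel emit a gratuitous ARP when the
# interface comes up (arp_notify=1 fires on link-up if an address is set).
TAP=tap22dad33f-d7     # example tap device from the report
ADDR=192.0.2.10/24     # placeholder address (RFC 5737 documentation range)

sysctl -w "net.ipv4.conf.${TAP}.arp_notify=1"
ip addr add "$ADDR" dev "$TAP"    # address BEFORE link up
ip link set "$TAP" up

# Alternative: send the gratuitous ARP explicitly (iputils arping):
# arping -U -c 3 -I "$TAP" 192.0.2.10
```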

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672433

Title:
  dhcp-agent should send a grace ARP after assigning IP address in dhcp
  namespace

Status in neutron:
  New

Bug description:
  Normally DHCP agents do not need to provide routable services. There
  is one exception: monitoring. Checking DHCP agent availability by
  sending ping requests is very easy and fits well with existing
  monitoring frameworks. Besides checking the availability of the DHCP
  agent itself, that check verifies network connectivity between the
  DHCP agent and the network equipment.

  There is a specific scenario in which that check gives false reports.

  Scenario:
  1. Boot an instance with a given IP, and make sure the instance is up 
(replies to pings).
  2. Delete the instance.
  3. Add a DHCP agent to the network where the IP (from step 1) is allocated, 
in such a way that it takes over that IP.

  Expected behavior: the DHCP agent answers pings.
  Actual behavior: the DHCP agent does not reply to pings for up to 4 hours, 
then spontaneously starts replying.

  Reason: the instance (from step 1) populated the ARP table on the
  router. When the instance was removed and the DHCP agent started
  listening on that IP, it didn't send a gratuitous ARP. The normal DHCP
  workflow does not require the agent to send any traffic through the
  router, so there is no reason for the router to update its ARP entry.
  As long as the router keeps the old (invalid) entry pointing to the
  old instance, the DHCP agent can't reply to the pings, because every
  incoming request arrives with the wrong destination MAC address.

  Proposal: the DHCP agent should either:

  1. Send some kind of network packet to the network gateway (e.g. a ping 
request).
  2. Set arp_notify on the network interface it uses (e.g. 
  net.ipv4.conf.tap22dad33f-d7.arp_notify=1), and configure the network address 
_BEFORE_ bringing the interface up; if the address is configured after the 
interface is brought up, no gratuitous ARP is sent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669727] [NEW] Mystery link in rhel.py

2017-03-03 Thread George Shuklin
Public bug reported:

Hello.

The file cloudinit/distros/rhel.py in cloud-init has a broken link ("# See:
http://tiny.cc/6r99fw"). Can you put its content somewhere inside the repo?
Thanks.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1669727

Title:
  Mystery link in rhel.py

Status in cloud-init:
  New

Bug description:
  Hello.

  The file cloudinit/distros/rhel.py in cloud-init has a broken link ("#
  See: http://tiny.cc/6r99fw"). Can you put its content somewhere inside
  the repo?

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1669727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665366] [NEW] [RFE] Add --key-name option to 'nova rebuild'

2017-02-16 Thread George Shuklin
Public bug reported:

Currently there is no way to change the key-name associated with an
instance. This has some justification, as the key may be downloaded only
at build time, and later changes would be ignored by the instance.

But this is not the case for the rebuild command. A tenant who rebuilds
an instance may well want to change the key used to access it.

The main reason to use 'rebuild' instead of 'delete/create' is usually
to preserve network settings: fixed IPs, MAC addresses, associated
floating IPs. Normally the user wants to keep the same ssh key as at
creation time, but occasionally they may want to replace it.

Right now there is no such option.

TL;DR: please add a --key-name option to the nova rebuild command (and API).
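
The proposed usage might look like this (hypothetical syntax, not
implemented at the time of writing):

```
# Rebuild in place, keeping network identity but swapping the access key:
nova rebuild --key-name my-new-keypair <server> <image>
```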

Thanks.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665366

Title:
  [RFE] Add --key-name option to 'nova rebuild'

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently there is no way to change the key-name associated with an
  instance. This has some justification, as the key may be downloaded
  only at build time, and later changes would be ignored by the
  instance.

  But this is not the case for the rebuild command. A tenant who
  rebuilds an instance may well want to change the key used to access
  it.

  The main reason to use 'rebuild' instead of 'delete/create' is usually
  to preserve network settings: fixed IPs, MAC addresses, associated
  floating IPs. Normally the user wants to keep the same ssh key as at
  creation time, but occasionally they may want to replace it.

  Right now there is no such option.

  TL;DR: please add a --key-name option to the nova rebuild command (and API).

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1665366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663225] Re: ironic does not clean or shutdown nodes if nova-compute is down at the moment of 'nova delete'

2017-02-09 Thread George Shuklin
** Project changed: ironic => nova

** Tags added: ironic

** Description changed:

  If the nova-compute service is down at the moment 'nova delete' is executed
  for an instance, the node with this instance will never be cleaned/turned off
  after nova-compute starts.
  
  Steps to reproduce:
  1. Prepare ironic/nova
  2. Start an instance (nova boot/openstack server create)
  3. Wait until the instance reaches the 'ACTIVE' state.
  4. Stop nova-compute
  5. Wait until it becomes 'down' in 'nova service-list'
  6. Execute 'nova delete' for the instance.
  7. Start the nova-compute service
  
  Expected result:
  Case 1:
  - Instance stays in the 'deleting' state until nova-compute comes back.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  or
+ Case 2:
  - Instance is deleted as usual.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  
  Actual result:
  - Instance is deleted as usual.
  - Node keeps the 'active' state with the 'Instance UUID' field still set.
  - Tenant instance (baremetal server) continues to work after nova-compute is 
up, and continues to do so forever (until the node is put into the 'deleted' 
state manually by a system administrator).
  
  I believe this is a very severe bug, because it allows tenants to
  continue to use services even though nova reports that no tenant
  instances are running.
  
  Affected version: newton.

** Description changed:

  If the nova-compute service is down at the moment 'nova delete' is executed
  for an instance, the node with this instance will never be cleaned/turned off
  after nova-compute starts.
  
  Steps to reproduce:
  1. Prepare ironic/nova
  2. Start an instance (nova boot/openstack server create)
  3. Wait until the instance reaches the 'ACTIVE' state.
  4. Stop nova-compute
  5. Wait until it becomes 'down' in 'nova service-list'
  6. Execute 'nova delete' for the instance.
  7. Start the nova-compute service
  
  Expected result:
  Case 1:
  - Instance stays in the 'deleting' state until nova-compute comes back.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  or
  Case 2:
  - Instance is deleted as usual.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  
  Actual result:
  - Instance is deleted as usual.
  - Node keeps the 'active' state with the 'Instance UUID' field still set.
  - Tenant instance (baremetal server) continues to work after nova-compute is 
up, and continues to do so forever (until the node is put into the 'deleted' 
state manually by a system administrator).
  
  I believe this is a very severe bug, because it allows tenants to
  continue to use services even though nova reports that no tenant
  instances are running.
  
  Affected version: newton.
+ 
+ P.S. Normally nova (with the libvirt/kvm driver) keeps the instance in
+ the 'deleting' state until nova-compute comes back, and removes it from
+ the server (from libvirt). Only after that does nova mark the instance
+ as deleted in the database. The Ironic driver should do the same.

** Description changed:

+ 
+ Affected configuration: Ironic installation with Ironic driver for nova.
+ 
  If the nova-compute service is down at the moment 'nova delete' is executed
- for instance, node with this instance will never been cleaned/turned off
- after nova-compute start.
+ for an instance, the baremetal node with this instance will never be
+ cleaned/turned off, even after nova-compute starts.
  
  Steps to reproduce:
  1. Prepare ironic/nova
  2. Start an instance (nova boot/openstack server create)
  3. Wait until the instance reaches the 'ACTIVE' state.
  4. Stop nova-compute
  5. Wait until it becomes 'down' in 'nova service-list'
  6. Execute 'nova delete' for the instance.
  7. Start the nova-compute service
  
  Expected result:
  Case 1:
  - Instance stays in the 'deleting' state until nova-compute comes back.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  or
  Case 2:
  - Instance is deleted as usual.
  - Node switches to 'cleaning/available' as soon as nova-compute comes back.
  - Tenant instance (baremetal server) stops operating as soon as nova-compute 
is up.
  
  Actual result:
  - Instance is deleted as usual.
  - Node keeps the 'active' state with the 'Instance UUID' field still set.
  - Tenant instance (baremetal server) continues to work after nova-compute is 
up, and continues to do so forever (until the node is put into the 'deleted' 
state manually by a system administrator).
  
  I believe this is a very severe bug, because it allows tenants to
  continue to use services even though nova reports that no tenant
  instances are running.
  
  Affected version: newton.

[Yahoo-eng-team] [Bug 1660317] Re: NotImplementedError for detach_interface in nova-compute during instance deletion

2017-02-07 Thread George Shuklin
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660317

Title:
  NotImplementedError for detach_interface in nova-compute during
  instance deletion

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in ironic package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  New

Bug description:
  When a baremetal instance is deleted, there is a harmless but annoying
  traceback in the nova-compute output.

  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Terminating instance 
[req-5f1eba69-239a-4dd4-8677-f28542b190bc 5a08515f35d749068a6327e387ca04e2 
7d450ecf00d64399aeb93bc122cb6dae - - -]
  nova.compute.resource_tracker[26553]: INFO Auditing locally available compute 
resources for node d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Final resource view: 
name=d02c7361-5e3a-4fdf-89b5-f29b3901f0fc phys_ram=0MB used_ram=8096MB 
phys_disk=0GB used_disk=480GB total_vcpus=0 used_vcpus=0 pci_stats=[] 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.resource_tracker[26553]: INFO Compute_service record updated for 
bare-compute1:d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
  nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Neutron deleted interface 
6b563aa7-64d3-4105-9ed5-c764fee7b536; detaching it from the instance and 
deleting it from the info cache [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 
11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: ERROR Exception during message handling 
[req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 
93c697ef6c2649eb9966900a8d6a73d8 - - -]
  oslo_messaging.rpc.server[26553]: TRACE Traceback (most recent call last):
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
  oslo_messaging.rpc.server[26553]: TRACE res = 
self.dispatcher.dispatch(message)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
  oslo_messaging.rpc.server[26553]: TRACE return 
self._do_dispatch(endpoint, method, ctxt, args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
  oslo_messaging.rpc.server[26553]: TRACE result = func(ctxt, **new_args)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE function_name, call_dict, binary)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  oslo_messaging.rpc.server[26553]: TRACE self.force_reraise()
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  oslo_messaging.rpc.server[26553]: TRACE six.reraise(self.type_, 
self.value, self.tb)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in 
wrapped
  oslo_messaging.rpc.server[26553]: TRACE return f(self, context, *args, 
**kw)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6691, in 
external_instance_event
  oslo_messaging.rpc.server[26553]: TRACE event.tag)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6660, in 
_process_instance_vif_deleted_event
  oslo_messaging.rpc.server[26553]: TRACE 
self.driver.detach_interface(instance, vif)
  oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 524, in 
detach_interface
  oslo_messaging.rpc.server[26553]: TRACE raise NotImplementedError()
  oslo_messaging.rpc.server[26553]: TRACE NotImplementedError
  oslo_messaging.rpc.server[26553]: TRACE

  
  Affected version:
  nova 14.0.3
  neutron 6.0.0
  ironic 6.2.1

  configuration for nova-compute:
  compute_driver = ironic.IronicDriver

  Ironic is configured to use neutron networks with generic switch as the
  mechanism driver for the ML2 plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1660317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListH

[Yahoo-eng-team] [Bug 1660317] [NEW] NotImplementedError for detach_interface in nova-compute during instance deletion

2017-01-30 Thread George Shuklin
Public bug reported:

When a baremetal instance is deleted, there is a harmless but annoying
traceback in the nova-compute output.

nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Terminating instance 
[req-5f1eba69-239a-4dd4-8677-f28542b190bc 5a08515f35d749068a6327e387ca04e2 
7d450ecf00d64399aeb93bc122cb6dae - - -]
nova.compute.resource_tracker[26553]: INFO Auditing locally available compute 
resources for node d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
nova.compute.resource_tracker[26553]: INFO Final resource view: 
name=d02c7361-5e3a-4fdf-89b5-f29b3901f0fc phys_ram=0MB used_ram=8096MB 
phys_disk=0GB used_disk=480GB total_vcpus=0 used_vcpus=0 pci_stats=[] 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
nova.compute.resource_tracker[26553]: INFO Compute_service record updated for 
bare-compute1:d02c7361-5e3a-4fdf-89b5-f29b3901f0fc 
[req-d34e2b7b-386f-4a3c-ae85-16860a4a9c28 - - - - -]
nova.compute.manager[26553]: INFO [instance: 
e265be67-9e87-44ea-95b6-641fc2dcaad8] Neutron deleted interface 
6b563aa7-64d3-4105-9ed5-c764fee7b536; detaching it from the instance and 
deleting it from the info cache [req-fdfeee26-a860-40a5-b2e3-2505973ffa75 
11b95cf353f74788938f580e13b652d8 93c697ef6c2649eb9966900a8d6a73d8 - - -]
oslo_messaging.rpc.server[26553]: ERROR Exception during message handling 
[req-fdfeee26-a860-40a5-b2e3-2505973ffa75 11b95cf353f74788938f580e13b652d8 
93c697ef6c2649eb9966900a8d6a73d8 - - -]
oslo_messaging.rpc.server[26553]: TRACE Traceback (most recent call last):
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 133, in 
_process_incoming
oslo_messaging.rpc.server[26553]: TRACE res = 
self.dispatcher.dispatch(message)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 150, 
in dispatch
oslo_messaging.rpc.server[26553]: TRACE return self._do_dispatch(endpoint, 
method, ctxt, args)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 121, 
in _do_dispatch
oslo_messaging.rpc.server[26553]: TRACE result = func(ctxt, **new_args)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 75, in 
wrapped
oslo_messaging.rpc.server[26553]: TRACE function_name, call_dict, binary)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
oslo_messaging.rpc.server[26553]: TRACE self.force_reraise()
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
oslo_messaging.rpc.server[26553]: TRACE six.reraise(self.type_, self.value, 
self.tb)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 66, in 
wrapped
oslo_messaging.rpc.server[26553]: TRACE return f(self, context, *args, **kw)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6691, in 
external_instance_event
oslo_messaging.rpc.server[26553]: TRACE event.tag)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6660, in 
_process_instance_vif_deleted_event
oslo_messaging.rpc.server[26553]: TRACE 
self.driver.detach_interface(instance, vif)
oslo_messaging.rpc.server[26553]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 524, in 
detach_interface
oslo_messaging.rpc.server[26553]: TRACE raise NotImplementedError()
oslo_messaging.rpc.server[26553]: TRACE NotImplementedError
oslo_messaging.rpc.server[26553]: TRACE


Affected version:
nova 14.0.3
neutron 6.0.0
ironic 6.2.1

configuration for nova-compute:
compute_driver = ironic.IronicDriver

Ironic is configured to use neutron networks with generic switch as the
mechanism driver for the ML2 plugin.

** Affects: ironic
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: ironic (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: ubuntu
   Importance: Undecided
   Status: New

** Also affects: ironic (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660317

Title:
  NotImplementedError for detach_interface in nova-compute during
  instance deletion

Status in Ironic:
  New
Status in ne

[Yahoo-eng-team] [Bug 1659290] [NEW] Failure to load mechanism drivers in ML2 should be critical

2017-01-25 Thread George Shuklin
Public bug reported:

Right now, when ML2 loads mechanism drivers and some of them are
unavailable (due to a typo, or a bug in the driver that prevents
stevedore from loading it), this is recorded only in the 'info' output
of neutron-server ('Configured mechanism driver names' and 'Loaded
mechanism driver names').

I believe the inability to initialize any of the configured mechanism
drivers is grave and fatal for neutron-server. A server without the
proper mechanism driver will silently set all relevant port bindings to
'binding_failed', causing harm and chaos in a production environment.

Proposal:

Terminate neutron-server with a CRITICAL failure if any of
cfg.CONF.ml2.mechanism_drivers is unavailable.

This is a big issue for operators because, currently, such
misconfigurations are REALLY hard to debug, especially in conjunction
with (crappy) vendor mechanism drivers.

Affected version: 8.3 (newton)
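
Until then, a hedged operator check to compare configured vs. loaded
drivers after a restart (config and log paths vary by distro):

```
#!/bin/bash
# Hypothetical sanity check: did neutron-server load every configured
# mechanism driver? Compare the config against the startup log lines
# quoted above.
grep mechanism_drivers /etc/neutron/plugins/ml2/ml2_conf.ini
grep -E "(Configured|Loaded) mechanism driver names" \
    /var/log/neutron/neutron-server.log | tail -n 2
```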

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1659290

Title:
  Failure to load mechanism drivers in ML2 should be critical

Status in neutron:
  New

Bug description:
  Right now, when ML2 loads mechanism drivers and some of them are
  unavailable (due to a typo, or a bug in the driver that prevents
  stevedore from loading it), this is recorded only in the 'info' output
  of neutron-server ('Configured mechanism driver names' and 'Loaded
  mechanism driver names').

  I believe the inability to initialize any of the configured mechanism
  drivers is grave and fatal for neutron-server. A server without the
  proper mechanism driver will silently set all relevant port bindings
  to 'binding_failed', causing harm and chaos in a production
  environment.

  Proposal:

  Terminate neutron-server with a CRITICAL failure if any of
  cfg.CONF.ml2.mechanism_drivers is unavailable.

  This is a big issue for operators because, currently, such
  misconfigurations are REALLY hard to debug, especially in conjunction
  with (crappy) vendor mechanism drivers.

  Affected version: 8.3 (newton)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1659290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658682] [NEW] port-security can't be disabled if security groups are not enabled

2017-01-23 Thread George Shuklin
Public bug reported:

If ml2 has the settings

[DEFAULT]
extension_drivers = port_security

[securitygroup]
enable_security_group = False

and one tries to disable port security on a given port, he/she will
fail:

neutron port-update fad58638-3568-4bcb-8742-d857d138056d --port-
security-enabled=False

Port has security group associated. Cannot disable port security or ip address 
until security group is removed
Neutron server returns request_ids: ['req-12cd8a70-88ad-4d2b-bc3c-fcf574b088c4']

At the same time there is no way to use
neutron port-update fad58638-3568-4bcb-8742-d857d138056d --no-security-groups
:
Unrecognized attribute(s) 'security_groups'
Neutron server returns request_ids: ['req-1d2227c6-40a0-41e9-92a3-410168462635'

This causes drastic inconvenience for administrators who run openstack
with security groups disabled: to disable port security, one has to
remove the security group from the port first, and is therefore forced
to enable security groups on the server just to remove a group from the port.

Version: 8.3 (mitaka).
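
A possible workaround is to clear the security groups and disable port
security in one API request, bypassing the CLI-side attribute check. A
hedged sketch (the endpoint is an example; whether the server accepts
the attribute still depends on which extensions are loaded):

```
#!/bin/bash
# Hypothetical single-request workaround via the Neutron API.
PORT=fad58638-3568-4bcb-8742-d857d138056d   # port from the report
TOKEN=$(openstack token issue -f value -c id)
curl -s -X PUT "http://controller:9696/v2.0/ports/${PORT}" \
    -H "X-Auth-Token: ${TOKEN}" -H "Content-Type: application/json" \
    -d '{"port": {"security_groups": [], "port_security_enabled": false}}'
```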

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- 
  If ml2 has the settings
  
  [DEFAULT]
  extension_drivers = port_security
  
  [securitygroup]
  enable_security_group = False
  
- and one is trying to disable port-security on a given port, it will
+ and one tries to disable port security on a given port, he/she will
  fail:
  
  neutron port-update fad58638-3568-4bcb-8742-d857d138056d --port-
  security-enabled=False
  
  Port has security group associated. Cannot disable port security or ip 
address until security group is removed
  Neutron server returns request_ids: 
['req-12cd8a70-88ad-4d2b-bc3c-fcf574b088c4']
  
- At the same time there is no way to use 
+ At the same time there is no way to use
  neutron port-update fad58638-3568-4bcb-8742-d857d138056d --no-security-groups
  :
  Unrecognized attribute(s) 'security_groups'
  Neutron server returns request_ids: 
['req-1d2227c6-40a0-41e9-92a3-410168462635'
  
  This causes drastic inconvenience for administrators who run openstack
  with security groups disabled: to disable port security, one has to
  remove the security group from the port first, and is therefore forced
  to enable security groups on the server just to remove a group from the port.
  
  Version: 8.3 (mitaka).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658682

Title:
  port-security can't be disabled if security groups are not enabled

Status in neutron:
  New

Bug description:
  If ml2 has the settings

  [DEFAULT]
  extension_drivers = port_security

  [securitygroup]
  enable_security_group = False

  and one tries to disable port security on a given port, he/she will
  fail:

  neutron port-update fad58638-3568-4bcb-8742-d857d138056d --port-
  security-enabled=False

  Port has security group associated. Cannot disable port security or ip 
address until security group is removed
  Neutron server returns request_ids: 
['req-12cd8a70-88ad-4d2b-bc3c-fcf574b088c4']

  At the same time there is no way to use
  neutron port-update fad58638-3568-4bcb-8742-d857d138056d --no-security-groups
  :
  Unrecognized attribute(s) 'security_groups'
  Neutron server returns request_ids: 
['req-1d2227c6-40a0-41e9-92a3-410168462635'

  This causes drastic inconvenience for administrators who run openstack
  with security groups disabled: to disable port security, one has to
  remove the security group from the port first, and is therefore forced
  to enable security groups on the server just to remove a group from the port.

  Version: 8.3 (mitaka).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658636] [NEW] neutron (mitaka) rejects port updates for allowed address pairs

2017-01-23 Thread George Shuklin
Public bug reported:

Neutron 8.3 (mitaka) rejects requests to update allowed_address_pairs.

Request:

neutron --debug port-update b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411
--allowed-address-pairs type=dict list=true ip_address=10.254.15.4

curl:
curl -g -i -X PUT 
https://network.servers.example.com:9696/v2.0/ports/b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: ecd9221f275333c7c271788e" 
-d '{"port": {"allowed_address_pairs": [{"ip_address": "10.254.15.4"}]}}'

Reply:

{"NeutronError": {"message": "Unrecognized attribute(s)
'allowed_address_pairs'", "type": "HTTPBadRequest", "detail": ""}}

Log entry:

2017-01-23 09:31:58.988 28914 INFO neutron.api.v2.resource [req-
56088d19-9359-4360-98db-ea9776e9dd33 46d2f76c9d5a409293cdb88ac8dcdeca
6d8ae5f32b294b2684c77417eb3b21cb - - -] update failed (client error):
Unrecognized attribute(s) 'allowed_address_pairs'
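
An "Unrecognized attribute(s)" error usually means the corresponding API
extension is not advertised by the server, so a first diagnostic step
might be (hedged; the extension alias is 'allowed-address-pairs'
upstream):

```
#!/bin/bash
# Check whether the allowed-address-pairs extension is loaded.
neutron ext-list | grep -i allowed-address-pairs
# or, with the unified client:
openstack extension list --network | grep -i allowed-address-pairs
```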

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658636

Title:
  neutron (mitaka) rejects port updates for allowed address pairs

Status in neutron:
  New

Bug description:
  Neutron 8.3 (mitaka) rejects requests to update allowed_address_pairs.

  Request:

  neutron --debug port-update b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411
  --allowed-address-pairs type=dict list=true ip_address=10.254.15.4

  curl:
  curl -g -i -X PUT 
https://network.servers.example.com:9696/v2.0/ports/b59bc3bb-7d34-4fbb-8e55-a9f1c5c88411.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: ecd9221f275333c7c271788e" 
-d '{"port": {"allowed_address_pairs": [{"ip_address": "10.254.15.4"}]}}'

  Reply:

  {"NeutronError": {"message": "Unrecognized attribute(s)
  'allowed_address_pairs'", "type": "HTTPBadRequest", "detail": ""}}

  Log entry:

  2017-01-23 09:31:58.988 28914 INFO neutron.api.v2.resource [req-
  56088d19-9359-4360-98db-ea9776e9dd33 46d2f76c9d5a409293cdb88ac8dcdeca
  6d8ae5f32b294b2684c77417eb3b21cb - - -] update failed (client error):
  Unrecognized attribute(s) 'allowed_address_pairs'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658024] [NEW] Incorrect tag in other-config for openvswitch agent after upgrade to mitaka

2017-01-20 Thread George Shuklin
Public bug reported:

We've performed an upgrade juno->kilo->liberty->mitaka (one step at a
time) without rebooting compute hosts.

After the mitaka upgrade we found that some tenant networks were not
functional. Deeper debugging showed that the tag value in the
'other-config' field of the OVS port record does not match the actual
tag on the port (the 'tag' field).

This causes openvswitch-agent to associate the wrong segmentation_id
with unrelated host-local tags.

Visible symptom: after restarting neutron-openvswitch-agent,
connectivity with a given port appears for some time, then disappears.
Tcpdump on the physical interface shows that traffic comes to the host
with the proper segmentation_id, but the instance's replies are sent
back with a wrong segmentation_id which belongs to some random network
of a different tenant.

There are two ways to fix this: 
1. Reboot the host.
2. Copy the port's actual 'tag' value into other-config and restart 
neutron-openvswitch-agent (see the sketch below).
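
For option 2, the manual repair for the mis-tagged port shown in the
example below would be roughly (a hedged sketch; verify which side is
stale before copying):

```
#!/bin/bash
# Copy the port's live 'tag' column (302) into other_config, replacing
# the stale value (201), then restart the agent.
ovs-vsctl set Port tap20802dee-34 other_config:tag=302
systemctl restart neutron-openvswitch-agent   # service name may differ by distro
```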

Example of the incorrectly filled port (ovs-vsctl list Port):

_uuid   : a5bfb91f-78de-4916-b16a-6ea737cf3b6d
bond_active_slave   : []
bond_downdelay  : 0
bond_fake_iface : false
bond_mode   : []
bond_updelay: 0
external_ids: {}
fake_bridge : false
interfaces  : [7fb9c7a6-963c-4814-b9a4-a23d1a918843]
lacp: []
mac : []
name: "tap20802dee-34"
other_config: {net_uuid="9a1923c8-a07d-487e-a96e-310103acd911", 
network_type=vlan, physical_network=local, segmentation_id="3035", tag="201"}
qos : []
statistics  : {}
status  : {}
tag : 302
trunks  : []
vlan_mode   : []


This problem repeated in a few installations of openstack, therefore it is not 
a random fluke.

This script [1] fixes bad tags, but I believe this is a rather serious
issue with openvswitch-agent persistence.


[1] https://gist.github.com/amarao/fba1e766cfa217b0342d0fe066aeedd7


Affected version: mitaka, but I believe it is related to the previous 
versions in the upgrade chain: juno, upgraded to kilo, upgraded to liberty, 
upgraded to mitaka.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1658024

Title:
  Incorrect tag in other-config for openvswitch agent after upgrade to
  mitaka

Status in neutron:
  New

Bug description:
  We've performed an upgrade juno->kilo->liberty->mitaka (one step at a
  time) without rebooting compute hosts.

  After the mitaka upgrade we found that some tenant networks were not
  functional. Deeper debugging showed that the tag value in the
  'other-config' field of the OVS port record does not match the actual
  tag on the port (the 'tag' field).

  This causes openvswitch-agent to associate the wrong segmentation_id
  with unrelated host-local tags.

  Visible symptom: after restarting neutron-openvswitch-agent,
  connectivity with a given port appears for some time, then disappears.
  Tcpdump on the physical interface shows that traffic comes to the host
  with the proper segmentation_id, but the instance's replies are sent
  back with a wrong segmentation_id which belongs to some random network
  of a different tenant.

  There are two ways to fix this: 
  1. Reboot the host.
  2. Copy the port's actual 'tag' value into other-config and restart 
neutron-openvswitch-agent.

  Example of the incorrectly filled port (ovs-vsctl list Port):

  _uuid   : a5bfb91f-78de-4916-b16a-6ea737cf3b6d
  bond_active_slave   : []
  bond_downdelay  : 0
  bond_fake_iface : false
  bond_mode   : []
  bond_updelay: 0
  external_ids: {}
  fake_bridge : false
  interfaces  : [7fb9c7a6-963c-4814-b9a4-a23d1a918843]
  lacp: []
  mac : []
  name: "tap20802dee-34"
  other_config: {net_uuid="9a1923c8-a07d-487e-a96e-310103acd911", 
network_type=vlan, physical_network=local, segmentation_id="3035", tag="201"}
  qos : []
  statistics  : {}
  status  : {}
  tag : 302
  trunks  : []
  vlan_mode   : []

  
  This problem repeated in a few installations of openstack, therefore it is 
not a random fluke.

  This script [1] fixes bad tags, but I believe this is a rather serious
  issue with openvswitch-agent persistence.

  
  [1] https://gist.github.com/amarao/fba1e766cfa217b0342d0fe066aeedd7

  
  Affected version: mitaka, but I believe it is related to the previous 
versions in the upgrade chain: juno, upgraded to kilo, upgraded to liberty, 
upgraded to mitaka.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1658024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625305] Re: neutron-openvswitch-agent is crashing due to KeyError in _restore_local_vlan_map()

2017-01-19 Thread George Shuklin
We've got the same issue after upgrading from liberty. It was really
painful, and we were forced to manually patch the agent on the hosts.

This is a real issue, please fix it.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625305

Title:
  neutron-openvswitch-agent is crashing due to KeyError in
  _restore_local_vlan_map()

Status in neutron:
  New

Bug description:
  The neutron openvswitch agent is unable to restart, because VMs with
  untagged/flat networks (tagged 3999) cause a KeyError in
  _restore_local_vlan_map.

  Loaded agent extensions: []
  2016-09-06 07:57:39.682 70085 CRITICAL neutron 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron Traceback (most recent call last):
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 28, in 
  2016-09-06 07:57:39.682 70085 ERROR neutron sys.exit(main())
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 235, in __init__
  2016-09-06 07:57:39.682 70085 ERROR neutron self._restore_local_vlan_map()
  2016-09-06 07:57:39.682 70085 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 356, in _restore_local_vlan_map
  2016-09-06 07:57:39.682 70085 ERROR neutron 
self.available_local_vlans.remove(local_vlan)
  2016-09-06 07:57:39.682 70085 ERROR neutron KeyError: 3999
  2016-09-06 07:57:39.682 70085 ERROR neutron
  2016-09-06 07:57:39.684 70085 INFO oslo_rootwrap.client 
[req-ef8eea4f-c1ed-47a0-8318-eb5473b7c667 - - - - -] Stopping rootwrap daemon 
process with pid=70197
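
  A defensive variant of the failing call would avoid the crash; a minimal
  demonstration (sketch only -- the merged upstream fix may differ):
  ```
  # The agent's pool of free local VLANs never contained 3999 (the value
  # used for flat/untagged ports), so set.remove() raises KeyError:
  available_local_vlans = set(range(1, 3999))
  try:
      available_local_vlans.remove(3999)    # what the traceback above shows
  except KeyError:
      pass
  # set.discard() is a no-op for missing members and keeps the agent alive:
  available_local_vlans.discard(3999)
  ```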

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656854] Re: Incorrect metadata in ConfigDrive when using baremetal ports under neutron

2017-01-16 Thread George Shuklin
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656854

Title:
  Incorrect metadata in ConfigDrive when using baremetal ports under
  neutron

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  If a baremetal instance is booted with a neutron network and config
  drive enabled, it receives incorrect network data in network_data.json,
  which causes a traceback in cloud-init: ValueError: Unknown network_data
  link type: unbound

  All software is at Newton:  ironic (1:6.2.1-0ubuntu1), nova
  (2:14.0.1-0ubuntu1), neutron (2:9.0.0-0ubuntu1).

  network_data.json content:

  {"services": [{"type": "dns", "address": "8.8.8.8"}], "networks":
  [{"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", "type":
  "ipv4", "netmask": "255.255.255.224", "link": "tap7d178b79-86",
  "routes": [{"netmask": "0.0.0.0", "network": "0.0.0.0", "gateway":
  "204.74.228.65"}], "ip_address": "204.74.228.75", "id": "network0"}],
  "links": [{"ethernet_mac_address": "18:66:da:5f:07:f4", "mtu": 1500,
  "type": "unbound", "id": "tap7d178b79-86", "vif_id":
  "7d178b79-86a9-4e56-824e-fe503e422960"}]}

  neutron port description:
  openstack  port show 7d178b79-86a9-4e56-824e-fe503e422960  -f json
  {
"status": "DOWN", 
"binding_profile": "local_link_information='[{u'switch_info': u'c426s1', 
u'port_id': u'1/1/21', u'switch_id': u'60:9c:9f:49:a8:b4'}]'", 
"project_id": "7d450ecf00d64399aeb93bc122cb6dae", 
"binding_vnic_type": "baremetal", 
"binding_vif_details": "", 
"name": "", 
"admin_state_up": "UP", 
"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"created_at": "2017-01-16T14:32:27Z", 
"updated_at": "2017-01-16T14:36:22Z", 
"id": "7d178b79-86a9-4e56-824e-fe503e422960", 
"device_owner": "baremetal:none", 
"binding_host_id": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc", 
"revision_number": 7, 
"mac_address": "18:66:da:5f:07:f4", 
"binding_vif_type": "other", 
"device_id": "9762e013-ffb9-4512-a56d-2a11694a1de8", 
"fixed_ips": "ip_address='204.74.228.75', 
subnet_id='f41ae071-d0d8-4192-96c3-1fd73886275b'", 
"extra_dhcp_opts": "", 
"description": ""
  }

  ironic is configured for multitenancy (to use neutron):
  default_network_interface=neutron. neutron is configured for ML2, and ML2
  is configured for networking_generic_switch. The networking_generic_switch
  driver works fine and toggles the port on the real switch into the access
  VLAN and out.

  Network is configured to work with vlans.

  Network description:
  openstack network show client-22-vlan  -f json
  {
"status": "ACTIVE", 
"router:external": "Internal", 
"availability_zone_hints": "", 
"availability_zones": "nova", 
"description": "", 
"provider:physical_network": "client", 
"admin_state_up": "UP", 
"updated_at": "2017-01-16T13:01:47Z", 
"created_at": "2017-01-16T12:59:10Z", 
"tags": [], 
"ipv6_address_scope": null, 
"provider:segmentation_id": 22, 
"mtu": 1500, 
"provider:network_type": "vlan", 
"revision_number": 5, 
"ipv4_address_scope": null, 
"subnets": "f41ae071-d0d8-4192-96c3-1fd73886275b", 
"shared": false, 
"project_id": "7d450ecf00d64399aeb93bc122cb6dae", 
"id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"name": "client-22-vlan"
  }

  subnet description:
  openstack  subnet show f41ae071-d0d8-4192-96c3-1fd73886275b  -f json
  {
"service_types": [], 
"description": "", 
"enable_dhcp": false, 
"network_id": "d22a675f-f89c-44ae-ae48-bb64e4b81a3d", 
"created_at": "2017-01-16T13:01:12Z", 
"dns_nameservers": "8.8.8.8", 
"updated_at": "2017-01-16T13:01:47Z", 
"ipv6_ra_mode": null, 
"allocation_pools": "204.74.228.66-204.74.228.94", 
"gateway_ip": "204.74.228.65", 
"revision_number": 3, 
"ipv6_address_mode": null, 
"ip_version": 4, 
"host_routes": "", 
"cidr": "204.74.228.64/27", 
"project_id": "7d450ecf00d64399aeb93bc122cb6dae", 
"id": "f41ae071-d0d8-4192-96c3-1fd73886275b", 
"subnetpool_id": null, 
"name": ""
  }

  Boot command:

  openstack server create good --config-drive true --flavor bare-1
  --image ubuntu-custom-7 --key-name keybane --nic net-id=d22a675f-f89c-
  44ae-ae48-bb64e4b81a3d

  According to vdrok from #openstack-ironic, the allowed interface types
  for cloud-init are:
  'bridge', 'ethernet', 'hw_veb', 'hyperv', 'ovs', 'phy', 'tap',
  'vhostuser', 'vif', 'bond', 'vlan'
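
  A quick validator sketch for a fetched config drive (hypothetical helper
  script, using the allowed-types list above):
  ```
  import json
  import sys

  # Link types cloud-init understands, per the list above.
  ALLOWED = {'bridge', 'ethernet', 'hw_veb', 'hyperv', 'ovs', 'phy', 'tap',
             'vhostuser', 'vif', 'bond', 'vlan'}

  # Usage: python check_links.py openstack/latest/network_data.json
  with open(sys.argv[1]) as f:
      data = json.load(f)
  for link in data.get('links', []):
      if link.get('type') not in ALLOWED:
          print('bad link type %r on link %s' % (link.get('type'),
                                                 link.get('id')))
  ```
  Run against the network_data.json above, it flags the 'unbound' link that
  makes cloud-init raise ValueError.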

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1656854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656010] [NEW] Incorrect notification to nova about ironic baremetal port (for nodes in 'cleaning' state)

2017-01-12 Thread George Shuklin
Public bug reported:

version: newton (2:9.0.0-0ubuntu1~cloud0)

When neutron tries to bind a port for an Ironic baremetal node, it sends a
wrong notification to nova about the port being ready. Neutron sends it
with 'device_id' == ironic-node-id, and nova rejects it as 'not found'
(there is no nova instance with such an id).
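
A hedged sketch of the kind of guard that would avoid the bogus call (the
helper below is hypothetical, not the actual neutron code):
```
# Skip the 'network-changed' external event for ports whose device is not
# a nova instance: for ironic multitenant ports device_id is the ironic
# node id and device_owner starts with 'baremetal:'.
def should_notify_nova(port):
    return not port.get('device_owner', '').startswith('baremetal:')

print(should_notify_nova({'device_owner': 'baremetal:none'}))  # False
print(should_notify_nova({'device_owner': 'compute:nova'}))    # True
```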

Log:
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 completed by entity DHCP. 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:147
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153
neutron.callbacks.manager[22265]: DEBUG Notify callbacks 
[('neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned--9223372036854150578',
 >)] for port, 
provisioning_complete [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] 
_notify_loop /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:142
neutron.plugins.ml2.plugin[22265]: DEBUG Port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 cannot update to ACTIVE because it is not 
bound. [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _port_provisioned 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py:224
oslo_messaging._drivers.amqpdriver[22265]: DEBUG sending reply msg_id: 
254703530cd3440584c980d72ed93011 reply queue: 
reply_8b6e70ad5191401a9512147c4e94ca71 time elapsed: 0.0452275519492s 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _send_reply 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
neutron.notifiers.nova[22263]: DEBUG Sending events: [{'name': 
'network-changed', 'server_uuid': u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] 
send_events /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:257
novaclient.v2.client[22263]: DEBUG REQ: curl -g -i --insecure -X POST 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}592539c9fcd820d7e369ea58454ee17fe7084d5e" -d '{"events": [{"name": 
"network-changed", "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}]}' 
_http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:337
novaclient.v2.client[22263]: DEBUG RESP: [404] Content-Type: application/json; 
charset=UTF-8 Content-Length: 78 X-Compute-Request-Id: 
req-a029af9e-e460-476f-9993-4551f3b210d6 Date: Thu, 12 Jan 2017 15:43:37 GMT 
Connection: keep-alive 
RESP BODY: {"itemNotFound": {"message": "No instances found for any event", 
"code": 404}}
 _http_log_response 
/usr/lib/python2.7/dist-packages/keystoneauth1/session.py:366
novaclient.v2.client[22263]: DEBUG POST call to compute for 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 used request id req-a029af9e-e460-476f-9993-4551f3b210d6 _log_request_id 
/usr/lib/python2.7/dist-packages/novaclient/client.py:85
neutron.notifiers.nova[22263]: DEBUG Nova returned NotFound for event: 
[{'name': 'network-changed', 'server_uuid': 
u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] send_events 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:263
oslo_messaging._drivers.amqpdriver[22265]: DEBUG received message msg_id: 
0bf04ac8fedd4234bd6cd6c04547beca reply to 
reply_8b6e70ad5191401a9512147c4e94ca71 __call__ 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-47c505d7-4eb5-4c71-9656-9e0927408822 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153


Port info:
+-+---+
| Field   | Value   
  |
+-+---+
| admin_state_up  | True
  |
| binding:host_id | d02c7361-5e3a-4fdf-89b5-f29b3901f0fc
  |
| binding:profile | {"local_link_information": [{"switch_info": "c426s1", 
"port_id": "1/1/21",|
| | "switch_id": "60:96:9f:69:b4:b4"}]} 
  |
| binding:vif_details | {}  
  |
| binding:vif_type| binding_failed  

[Yahoo-eng-team] [Bug 1655974] [NEW] ml2 provides no information if there is no suitable mech_driver found during port binding

2017-01-12 Thread George Shuklin
Public bug reported:

If no suitable mechanism driver is found, ML2 just marks the port as
binding_failed and writes an uninformative message to the log:

2017-01-12 13:56:46.691 3889 ERROR neutron.plugins.ml2.managers [req-
d9d956d7-c9e9-4c1b-aa1b-59fb974dd980 5a08515f35d749068a6327e387ca04e2
7d450ecf00d64399aeb93bc122cb6dae - - -] Failed to bind port
f4e190cb-6678-43f6-9140-f662e9429e75 on host d02c7361-5e3a-4fdf-
89b5-f29b3901f0fc for vnic_type baremetal using segments
[{'segmentation_id': 21L, 'physical_network': u'provision', 'id':
u'6ed946b1-d7f6-4c8e-8459-10b6d65ce536', 'network_type': u'vlan'}]

I think it should report the reason for this to admins more clearly,
saying that no mechanism driver was found to bind the port.

In my case the cause was: INFO neutron.plugins.ml2.managers [-] Loaded
mechanism driver names: [], which was hard to debug due to the lack of any
information from neutron-server (even in debug mode!).
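
A sketch of the kind of message that would have helped (hypothetical
helper, not the actual neutron code):
```
import logging

LOG = logging.getLogger('neutron.plugins.ml2.managers')

def report_bind_failure(port_id, loaded_drivers):
    # Distinguish "no mechanism drivers loaded at all" from "drivers
    # loaded, but none of them claimed the port".
    if not loaded_drivers:
        LOG.error("Failed to bind port %s: no mechanism drivers are "
                  "loaded; check 'mechanism_drivers' in ml2_conf.ini",
                  port_id)
    else:
        LOG.error("Failed to bind port %s: none of the loaded drivers %s "
                  "claimed it", port_id, loaded_drivers)
```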

version: 2:9.0.0-0ubuntu1~cloud0

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- ml2 provides no information if there is no suitable mech_driver found
+ ml2 provides no information if there is no suitable mech_driver found during 
port binding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655974

Title:
  ml2 provides no information if there is no suitable mech_driver found
  during port binding

Status in neutron:
  New

Bug description:
  If no suitable mechanism driver is found, ML2 just marks the port as
  binding_failed and writes an uninformative message to the log:

  2017-01-12 13:56:46.691 3889 ERROR neutron.plugins.ml2.managers [req-
  d9d956d7-c9e9-4c1b-aa1b-59fb974dd980 5a08515f35d749068a6327e387ca04e2
  7d450ecf00d64399aeb93bc122cb6dae - - -] Failed to bind port
  f4e190cb-6678-43f6-9140-f662e9429e75 on host d02c7361-5e3a-4fdf-
  89b5-f29b3901f0fc for vnic_type baremetal using segments
  [{'segmentation_id': 21L, 'physical_network': u'provision', 'id':
  u'6ed946b1-d7f6-4c8e-8459-10b6d65ce536', 'network_type': u'vlan'}]

  I think it should report the reason for this to admins more clearly,
  saying that no mechanism driver was found to bind the port.

  In my case the cause was: INFO neutron.plugins.ml2.managers [-] Loaded
  mechanism driver names: [], which was hard to debug due to the lack of
  any information from neutron-server (even in debug mode!).

  version: 2:9.0.0-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] Re: nova raises ConfigFileValueError for URLs with dashes

2017-01-05 Thread George Shuklin
I found one more source of this bug (it was pointed out by Marsikus at
habrahabr.ru):
https://github.com/openstack/oslo.config/compare/3.18.0...master

As you can see, oslo.config has a dependency on python-rfc3986==0.2.0
in version 3.18.0, and on 0.2.2 in stable/newton.

And https://releases.openstack.org/newton/index.html#oslo-config says
that 'Newton' is 3.17.0.

I think it is a maintenance mistake from the oslo.config upstream.

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova raises ConfigFileValueError for URLs with dashes

Status in OpenStack Global Requirements:
  New
Status in oslo.config:
  New
Status in nova package in Ubuntu:
  Fix Released
Status in python-rfc3986 package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  New
Status in python-rfc3986 source package in Xenial:
  New
Status in nova source package in Yakkety:
  New
Status in python-rfc3986 source package in Yakkety:
  New
Status in nova source package in Zesty:
  Fix Released
Status in python-rfc3986 source package in Zesty:
  Fix Released

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception ConfigFileValueError: Value for option url
  is not valid: invalid URI: if the url parameter of the [neutron] section
  or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put (in [neutron] section) url= http://nodash.example.com:9696  - it
  works

  Put url = http://with-dash.example.com:9696 - it fails with exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
  nova[18937]: TRACE sys.exit(main())
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
  nova[18937]: TRACE _launcher.wait()
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE return self._conf._get(name, self._group)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-requirements/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] Re: nova (newton) raises ConfigFileValueError for urls with dashess

2017-01-04 Thread George Shuklin
I found the source of the bug: python-rfc3986 is to blame (it is used by
oslo.config). Version 0.2.0-2 contains a bug which violates RFC 3986. It
was fixed in 0.2.2. The version of python-rfc3986 from zesty (0.3.1-2)
fixes this problem.

I believe this bug should be fixed by bumping the version of
python-rfc3986 in UCA to 0.2.2 or higher.
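
A quick way to watch the underlying validator change its mind between
versions (rfc3986.is_valid_uri is part of the library's public API; run it
against both package versions):
```
import rfc3986

for url in ('http://nodash.example.com:9696',
            'http://with-dash.example.com:9696'):
    print(url, rfc3986.is_valid_uri(url, require_scheme=True))
# On the broken 0.2.0 the dashed hostname is reported invalid; on 0.2.2
# and later both URLs validate, as RFC 3986 requires.
```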

** Also affects: python-rfc3986 (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- nova (newton) raises ConfigFileValueError for urls with dashess
+ nova (newton) raises ConfigFileValueError for urls with dashes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova (newton) raises ConfigFileValueError for urls with dashes

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New
Status in python-rfc3986 package in Ubuntu:
  New

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception ConfigFileValueError: Value for option url
  is not valid: invalid URI: if the url parameter of the [neutron] section
  or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put (in [neutron] section) url= http://nodash.example.com:9696  - it
  works

  Put url = http://with-dash.example.com:9696 - it fails with exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
  nova[18937]: TRACE sys.exit(main())
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
  nova[18937]: TRACE _launcher.wait()
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE return self._conf._get(name, self._group)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653967] [NEW] nova (newton) raises ConfigFileValueError for urls with dashess

2017-01-04 Thread George Shuklin
Public bug reported:

nova version: newton
dpkg version: 2:14.0.1-0ubuntu1~cloud0
distribution: nova @ xenial with ubuntu cloud archive, amd64.

Nova fails with the exception ConfigFileValueError: Value for option url
is not valid: invalid URI: if the url parameter of the [neutron] section
or the novncproxy_base_url parameter contains dashes in the URL.

Steps to reproduce:

Take a working openstack with nova+neutron.

Put (in [neutron] section) url= http://nodash.example.com:9696  - it
works

Put url = http://with-dash.example.com:9696 - it fails with exception:


nova[18937]: TRACE Traceback (most recent call last):
nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
nova[18937]: TRACE sys.exit(main())
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
nova[18937]: TRACE service.wait()
nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
nova[18937]: TRACE _launcher.wait()
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
nova[18937]: TRACE return self._conf._get(name, self._group)
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
nova[18937]: TRACE value = self._do_get(name, group, namespace)
nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
nova[18937]: TRACE % (opt.name, str(ve)))
nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

Expected behavior: do not crash.

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653967

Title:
  nova (newton) raises ConfigFileValueError for urls with dashess

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  nova version: newton
  dpkg version: 2:14.0.1-0ubuntu1~cloud0
  distribution: nova @ xenial with ubuntu cloud archive, amd64.

  Nova fails with the exception ConfigFileValueError: Value for option url
  is not valid: invalid URI: if the url parameter of the [neutron] section
  or the novncproxy_base_url parameter contains dashes in the URL.

  Steps to reproduce:

  Take a working openstack with nova+neutron.

  Put (in [neutron] section) url= http://nodash.example.com:9696  - it
  works

  Put url = http://with-dash.example.com:9696 - it fails with exception:

  
  nova[18937]: TRACE Traceback (most recent call last):
  nova[18937]: TRACE   File "/usr/bin/nova-api-os-compute", line 10, in 
  nova[18937]: TRACE sys.exit(main())
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/cmd/api_os_compute.py", line 51, in main
  nova[18937]: TRACE service.wait()
  nova[18937]: TRACE   File "/usr/lib/python2.7/dist-packages/nova/service.py", 
line 415, in wait
  nova[18937]: TRACE _launcher.wait()
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 568, in wait
  nova[18937]: TRACE self.conf.log_opt_values(LOG, logging.DEBUG)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2626, in 
log_opt_values
  nova[18937]: TRACE _sanitize(opt, getattr(group_attr, opt_name)))
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 3057, in __getattr__
  nova[18937]: TRACE return self._conf._get(name, self._group)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2668, in _get
  nova[18937]: TRACE value = self._do_get(name, group, namespace)
  nova[18937]: TRACE   File 
"/usr/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2711, in _do_get
  nova[18937]: TRACE % (opt.name, str(ve)))
  nova[18937]: TRACE ConfigFileValueError: Value for option url is not valid: 
invalid URI: 'http://with-dash.example.com:9696'.

  Expected behavior: do not crash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569779] [NEW] allow to investigate instance actions after instance deletion

2016-04-13 Thread George Shuklin
Public bug reported:

Right now, if an instance has been deleted, 'nova instance-action-list'
returns 404. Due to the very specific nature of the action list, it would
be very nice to have the ability to see action lists for deleted
instances, especially the deletion request.

Can this feature be added to nova? At least for administrators.

Thanks.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569779

Title:
  allow to investigate instance actions after instance deletion

Status in OpenStack Compute (nova):
  New

Bug description:
  Right now, if an instance has been deleted, 'nova instance-action-list'
  returns 404. Due to the very specific nature of the action list, it
  would be very nice to have the ability to see action lists for deleted
  instances, especially the deletion request.

  Can this feature be added to nova? At least for administrators.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554195] [NEW] Nova (juno) ignores logging_*_format_string in syslog output

2016-03-07 Thread George Shuklin
Public bug reported:

Nova in juno ignores the following settings in the configuration file
([DEFAULT] section):
logging_context_format_string
logging_default_format_string
logging_debug_format_suffix
logging_exception_prefix

when sending logs via syslog. Log entries on stderr / in log files are
fine (they use the logging_*_format strings).

Steps to reproduce:

1. set up custom logging strings and enable syslog:

[DEFAULT]
logging_default_format_string=MYSTYLE-DEFAULT-%(message)s
logging_context_format_string=MYSTYLE-CONTEXT-%(message)s
use_syslog=true

2. restart nova and perform some actions

3. Check the syslog content

Expected behaviour: MYSTYLE- prefix in all messages.
Actual behaviour: no changes in log message styles.

This bug is specific to the Juno version of nova.

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554195

Title:
  Nova (juno) ignores logging_*_format_string in syslog output

Status in OpenStack Compute (nova):
  New
Status in nova package in Ubuntu:
  New

Bug description:
  Nova in juno ignores the following settings in the configuration file
  ([DEFAULT] section):
  logging_context_format_string
  logging_default_format_string
  logging_debug_format_suffix
  logging_exception_prefix

  when sending logs via syslog. Log entries on stderr / in log files are
  fine (they use the logging_*_format strings).

  Steps to reproduce:

  1. set up custom logging strings and enable syslog:

  [DEFAULT]
  logging_default_format_string=MYSTYLE-DEFAULT-%(message)s
  logging_context_format_string=MYSTYLE-CONTEXT-%(message)s
  use_syslog=true

  2. restart nova and perform some actions

  3. Check the syslog content

  Expected behaviour: MYSTYLE- prefix in all messages.
  Actual behaviour: no changes in log message styles.

  This bug is specific to the Juno version of nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548724] Re: nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate fails on slow build server

2016-02-24 Thread George Shuklin
** Attachment added: "Full build log from CI"
   
https://bugs.launchpad.net/nova/+bug/1548724/+attachment/4579857/+files/consoleText

** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548724

Title:
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  fails on slow build server

Status in OpenStack Compute (nova):
  New

Bug description:
  I've tried to set up a CI build for the nova package (13.0b2), but it
  fails on tests:

  ==
  FAIL: 
nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/test_signature_utils.py", line 306, in 
test_get_certificate
  signature_utils.get_certificate(None, cert_uuid))
File "nova/signature_utils.py", line 319, in get_certificate
  verify_certificate(certificate)
File "nova/signature_utils.py", line 342, in verify_certificate
  % certificate.not_valid_after)
  nova.exception.SignatureVerificationError: Signature verification for the 
image failed: Certificate is not valid after: 2016-02-22 18:53:41.545721 UTC.

  I believe it happens because our CI server is not that fast and the
  nova build-and-test takes about 1.5 hr. I propose to extend the validity
  interval of the mocked certificate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548724] [NEW] nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate fails on slow build server

2016-02-23 Thread George Shuklin
Public bug reported:

I've tried to set up a CI build for the nova package (13.0b2), but it
fails on tests:

==
FAIL: 
nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
--
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
return func(*args, **keywargs)
  File "nova/tests/unit/test_signature_utils.py", line 306, in 
test_get_certificate
signature_utils.get_certificate(None, cert_uuid))
  File "nova/signature_utils.py", line 319, in get_certificate
verify_certificate(certificate)
  File "nova/signature_utils.py", line 342, in verify_certificate
% certificate.not_valid_after)
nova.exception.SignatureVerificationError: Signature verification for the image 
failed: Certificate is not valid after: 2016-02-22 18:53:41.545721 UTC.

I believe it happens because our CI server is not that fast and the nova
build-and-test takes about 1.5 hr. I propose to extend the validity
interval of the mocked certificate (see the sketch below).
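
A sketch of what an extended validity interval could look like where the
test builds its mocked certificate (hypothetical -- the real test helper
differs; this assumes the 'cryptography' package):
```
import datetime

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(65537, 2048, default_backend())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'test-cert')])
now = datetime.datetime.utcnow()
cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(1000)
        .not_valid_before(now - datetime.timedelta(days=1))
        # A year instead of minutes, so slow CI runs stay in the window.
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256(), default_backend()))
```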

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548724

Title:
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  fails on slow build server

Status in OpenStack Compute (nova):
  New

Bug description:
  I've tried to set up a CI build for the nova package (13.0b2), but it
  fails on tests:

  ==
  FAIL: 
nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  nova.tests.unit.test_signature_utils.TestSignatureUtils.test_get_certificate
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/test_signature_utils.py", line 306, in 
test_get_certificate
  signature_utils.get_certificate(None, cert_uuid))
File "nova/signature_utils.py", line 319, in get_certificate
  verify_certificate(certificate)
File "nova/signature_utils.py", line 342, in verify_certificate
  % certificate.not_valid_after)
  nova.exception.SignatureVerificationError: Signature verification for the 
image failed: Certificate is not valid after: 2016-02-22 18:53:41.545721 UTC.

  I believe it happens because our CI server is not that fast and the
  nova build-and-test takes about 1.5 hr. I propose to extend the validity
  interval of the mocked certificate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467544] [NEW] Network field update delayed if a few net-ids were specified during creation

2015-06-22 Thread George Shuklin
Public bug reported:

Steps to reproduce:

1. Create a few networks. In my case they were shared external networks
of 'vlan' type.

Example:
neutron net-create internet_192.168.16.64/27 --router:external True 
--provider:physical_network internet --provider:network_type  vlan 
--provider:segmentation_id 20 --shared
neutron subnet-create internet_192.168.16.64/27 --enable-dhcp 
--gateway=192.168.16.65 --dns-nameserver=8.8.8.8 --dns-nameserver=77.88.8.8 
192.168.16.64/27


2. Boot instance:

 nova boot  --flavor  m1.small --image cirros --key-name x220 --nic net-
id=25f7440e-5ffd-4407-a83e-0bce6e8c216d --nic net-
id=a3af8097-f348-4767-97c3-b9bf75263ef9 myinstance

3. Get instance info after it becomes 'ACTIVE':

nova show 0111cff2-205f-493c-9d37-bf8a550270a2
+--+---+
| Property | Value  
   |
+--+---+
| OS-DCF:diskConfig| MANUAL 
   |
| OS-EXT-AZ:availability_zone  | nova   
   |
| OS-EXT-STS:power_state   | 1  
   |
| OS-EXT-STS:task_state| -  
   |
| OS-EXT-STS:vm_state  | active 
   |
| OS-SRV-USG:launched_at   | 2015-06-22T13:47:10.00 
   |
| OS-SRV-USG:terminated_at | -  
   |
| accessIPv4   |
   |
| accessIPv6   |
   |
| config_drive |
   |
| created  | 2015-06-22T13:47:03Z   
   |
| flavor   | SSD.30 (30)
   |
| hostId   | 
ac01a9c7098d3d6f769fabd7071794ba11cca06d11a15867da898dbc  |
| id   | 0111cff2-205f-493c-9d37-bf8a550270a2   
   |
| image| Debian 8.0 Jessie (x86_64) 
(cc00f340-c927-4309-965e-63f02c94027d) |
| internet_192.168.16.192/27 network   | 192.168.16.205 
   |
| key_name | x220   
   |
| local_private network|
   |
| metadata | {} 
   |
| name | hands  
   |
| os-extended-volumes:volumes_attached | [] 
   |
| progress | 0  
   |
| security_groups  | default
   |
| status   | ACTIVE 
   |
| tenant_id| 1d7f6604ebb54c69820f9d157bcea5f9   
   |
| updated  | 2015-06-22T13:47:10Z   
   |
| user_id  | 51b457fc5dee4b6098093542bd659e8a   
   |
+--+---+

The local_private network field is empty.

Expected: it contains an IP address.

This can be fixed by nova refresh-network, but it requires admin
privileges.

Version: nova 2014.2.4 with neutron network.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Steps to reproduce:
  
- 1. Create a few networks. In my case they were external networks of 'vlan' 
type. 
+ 1. Create a few networks. In my case they were shared external networks of 
'vlan' type.
  2. Boot instance:
  
-  nova boot  --flavor  m1.small --image cirros --key-name x220 --nic net-
+  nova boot  --flavor  m1.small --image cirros --key-name x220 --nic net-
  id=25f7440e-5ffd-4407-a83e-0bce6e8c216d --nic net-
  id=a3af8097-f348-4767-97c3-b9bf75263ef9 myinstance
  
  3. Get instance info after it becomes 'ACTI

[Yahoo-eng-team] [Bug 1467518] [NEW] neutron --debug port-list --binding:vif_type=binding_failed returns wrong ports

2015-06-22 Thread George Shuklin
Public bug reported:

neutron --debug port-list --binding:vif_type=binding_failed displays all
ports with any vif_type, not only those with binding_failed.

vif_type=binding_failed is set when something bad happens on a compute
host during port configuration (no local vlans in ml2 conf, etc.).

We had the intention to monitor for such ports, but the request to neutron
returns some irrelevant ports:

REQ: curl -i -X GET
https://neutron.lab.internal:9696/v2.0/ports.json?binding%3Avif_type=binding_failed
-H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: 52c0c1ee1f764c408977f41c9f3743ca"

RESP BODY: {"ports": [{"status": "ACTIVE", "binding:host_id":
"compute2", "name": "", "admin_state_up": true, "network_id": "5c399fb7
-67ac-431d-9965-9586dbcec1c9", "tenant_id":
"3e6b1fc20da346838f93f124cb894d0f", "extra_dhcp_opts": [],
"binding:vif_details": {"port_filter": false, "ovs_hybrid_plug": false},
"binding:vif_type": "ovs", "device_owner": "network:dhcp",
"mac_address": "fa:16:3e:ad:6f:22", "binding:profile": {},
"binding:vnic_type": "normal", "fixed_ips": [{"subnet_id":
"c10a3520-17e2-4c04-94c6-a4419d79cca9", "ip_address": "192.168.0.3"}],
.

If the request is sent as neutron --debug port-list
--binding:host_id=compute1, filtering works as expected.

Neutron version - 2014.2.4
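
Until the server-side filter works, a client-side workaround sketch
(assumes admin credentials in the usual OS_* environment variables):
```
import os

from neutronclient.v2_0 import client

neutron = client.Client(username=os.environ['OS_USERNAME'],
                        password=os.environ['OS_PASSWORD'],
                        tenant_name=os.environ['OS_TENANT_NAME'],
                        auth_url=os.environ['OS_AUTH_URL'])
# Fetch everything and filter locally, since the server silently ignores
# the binding:vif_type filter.
for port in neutron.list_ports()['ports']:
    if port['binding:vif_type'] == 'binding_failed':
        print(port['id'], port['binding:host_id'])
```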

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- neutron --debug port-list --binding:vif_type=binding_failed displays wrong 
ports
+ neutron --debug port-list --binding:vif_type=binding_failed returns wrong 
ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467518

Title:
  neutron --debug port-list --binding:vif_type=binding_failed returns
  wrong ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron --debug port-list --binding:vif_type=binding_failed displays
  all ports with any vif_type, not only those with binding_failed.

  vif_type=binding_failed is set when something bad happens on a compute
  host during port configuration (no local vlans in ml2 conf, etc.).

  We had the intention to monitor for such ports, but the request to
  neutron returns some irrelevant ports:

  REQ: curl -i -X GET
  
https://neutron.lab.internal:9696/v2.0/ports.json?binding%3Avif_type=binding_failed
  -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
  "X-Auth-Token: 52c0c1ee1f764c408977f41c9f3743ca"

  RESP BODY: {"ports": [{"status": "ACTIVE", "binding:host_id":
  "compute2", "name": "", "admin_state_up": true, "network_id":
  "5c399fb7-67ac-431d-9965-9586dbcec1c9", "tenant_id":
  "3e6b1fc20da346838f93f124cb894d0f", "extra_dhcp_opts": [],
  "binding:vif_details": {"port_filter": false, "ovs_hybrid_plug":
  false}, "binding:vif_type": "ovs", "device_owner": "network:dhcp",
  "mac_address": "fa:16:3e:ad:6f:22", "binding:profile": {},
  "binding:vnic_type": "normal", "fixed_ips": [{"subnet_id":
  "c10a3520-17e2-4c04-94c6-a4419d79cca9", "ip_address": "192.168.0.3"}],
  .

  If the request is sent as neutron --debug port-list
  --binding:host_id=compute1, filtering works as expected.

  Neutron version - 2014.2.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461923] [NEW] Field 'gateway' not disabled when 'no gateway' selected in 'edit subnet'

2015-06-04 Thread George Shuklin
Public bug reported:

Horizon: 2014.2.3

Steps to reproduce:
1. Create net
2. Create subnet with gateway
3. Open network details (click on network name) under admin section
4. Click 'edit subnet'
5. Click "Disable Gateway"

Expected behavior:
1. Field 'gateway' disabled
2. IP address in 'gateway' cleared

Actual behavior:
1. Field 'gateway' still active
2. There is an old IP address in gateway field.

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Field gateway do not disabled when 'no gateway' selected in 'edit subnet'
+ Field 'gateway' not disabled when 'no gateway' selected in 'edit subnet'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461923

Title:
  Field 'gateway' not disabled when 'no gateway' selected in 'edit
  subnet'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon: 2014.2.3

  Steps to reproduce:
  1. Create net
  2. Create subnet with gateway
  3. Open network details (click on network name) under admin section
  4. Click 'edit subnet'
  5. Click "Disable Gateway"

  Expected behavior:
  1. Field 'gateway' disabled
  2. IP address in 'gateway' cleared

  Actual behavior:
  1. Field 'gateway' still active
  2. There is an old IP address in gateway field.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460577] [NEW] If instance was migrated while in shutdown state, nova disallows start before resize-confirm

2015-06-01 Thread George Shuklin
Public bug reported:

Steps to reproduce:
1. Create instance
2. Shutdown instance
3. Perform resize
4. Try to start instance.

Expected behaviour: the instance starts in the resize_confirm state
Actual behaviour: ERROR (Conflict): Instance 
d0e9bc6b-0544-410f-ba96-b0b78ce18828 in vm_state resized. Cannot start while 
the instance is in this state. (HTTP 409)

Rationale:

If a tenant resizes a running instance, they can log into the instance
after the reboot and see if it was successful. If a tenant resizes a
stopped instance, they have no chance to check whether the instance
resized successfully or not before confirming the migration.

Proposed solution: Allow to start instance in the state resize_confirm +
stopped.

(Btw: I'd like to allow stop/resize of instances in the resize_confirm
state, because a tenant may wish to reboot/stop/start an instance a few
times before deciding that the migration was successful or reverting it.)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460577

Title:
  If instance was migrated while in shutdown state, nova disallows
  start before resize-confirm

Status in OpenStack Compute (Nova):
  New

Bug description:
  Steps to reproduce:
  1. Create instance
  2. Shutdown instance
  3. Perform resize
  4. Try to start instance.

  Expected behaviour: the instance starts in the resize_confirm state
  Actual behaviour: ERROR (Conflict): Instance 
d0e9bc6b-0544-410f-ba96-b0b78ce18828 in vm_state resized. Cannot start while 
the instance is in this state. (HTTP 409)

  Rationale:

  If a tenant resizes a running instance, they can log into the instance
  after the reboot and see if it was successful. If a tenant resizes a
  stopped instance, they have no chance to check whether the instance
  resized successfully or not before confirming the migration.

  Proposed solution: Allow to start instance in the state resize_confirm
  + stopped.

  (Btw: I'd like to allow stop/resize of instances in the resize_confirm
  state, because a tenant may wish to reboot/stop/start an instance a few
  times before deciding that the migration was successful or reverting it.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459726] Re: api servers hang with 100% CPU if syslog restarted

2015-05-29 Thread George Shuklin
Maybe. I'm not sure. Anyway, this is not a nova/glance/neutron bug but a
python-eventlet one, and it mostly concerns distributions, not
developers.

** Also affects: python-eventlet (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459726

Title:
  api servers hang with 100% CPU if syslog restarted

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Logging configuration library for OpenStack:
  New
Status in python-eventlet package in Ubuntu:
  New

Bug description:
  Affected:

  glance-api
  glance-registry
  neutron-server
  nova-api

  If a service was configured to use rsyslog and rsyslog was restarted
  after the API server started, it hangs on the next log line with 100%
  CPU. If the server has a few workers, each worker will eat its own 100%
  CPU share.

  Steps to reproduce:
  1. Configure syslog:
  use_syslog=true
  syslog_log_facility=LOG_LOCAL4
  2. restart api service
  3. restart rsyslog

  Execute some command to force logging. F.e.: neutron net-create foo,
  nova boot, etc.

  Expected result: normal operation

  Actual result:
  with some chance (about 30-50%) the api server will hang with 100% CPU
  usage and will not reply to requests.

  Strace on hung service:

  gettimeofday({1432827199, 745141}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating 
user token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745226}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating 
user token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745325}, NULL) = 0

  Tested on:
  nova, glance, neutron:  1:2014.2.3, Ubuntu version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1459726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459726] [NEW] api servers hang with 100% CPU if syslog restarted

2015-05-28 Thread George Shuklin
Public bug reported:

Affected:

glance-api
glance-registry
neutron-server
nova-api

If a service was configured to use rsyslog and rsyslog was restarted
after the API server started, it hangs on the next log line with 100%
CPU. If the server has a few workers, each worker will eat its own 100%
CPU share.

Steps to reproduce:
1. Configure syslog:
use_syslog=true
syslog_log_facility=LOG_LOCAL4
2. restart api service
3. restart rsyslog

Execute some command to force logging. F.e.: neutron net-create foo,
nova boot, etc.

Expected result: normal operation

Actual result:
with some chance (about 30-50%) the api server will hang with 100% CPU
usage and will not reply to requests.

Strace on hung service:

gettimeofday({1432827199, 745141}, NULL) = 0
poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user 
token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
gettimeofday({1432827199, 745226}, NULL) = 0
poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating user 
token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
gettimeofday({1432827199, 745325}, NULL) = 0

Tested on:
nova, glance, neutron:  1:2014.2.3, Ubuntu version.
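
A minimal reproduction sketch (assumes an eventlet-monkeypatched Python 2
process logging to /dev/log; exact behaviour depends on the eventlet and
Python versions in use):
```
import eventlet
eventlet.monkey_patch()

import logging
import logging.handlers

log = logging.getLogger('demo')
log.addHandler(logging.handlers.SysLogHandler(address='/dev/log'))

log.error('before rsyslog restart')   # delivered normally
raw_input('restart rsyslog now ("service rsyslog restart"), then press '
          'Enter...')
# With the affected eventlet, the next call can spin at 100% CPU,
# endlessly retrying sendto() on a socket that now returns ENOTCONN
# (compare the strace output above).
log.error('after rsyslog restart')
```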

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Affected:
  
  glance-api
  glance-registry
  neutron-server
  nova-api
  
  If service was configured to use rsyslog and rsyslog was restarted after
  API server started, it hangs on next log line with 100% CPU. If server
  have few workers, each worker will eat own 100% CPU share.
  
  Steps to reproduce:
- 1. Configure syslog: 
+ 1. Configure syslog:
  use_syslog=true
  syslog_log_facility=LOG_LOCAL4
  2. restart api service
  3. restart rsyslog
  
  Execute some command to force logging. F.e.: neutron net-create foo,
  nova boot, etc.
  
  Expected result: normal operation
  
  Actual result:
  with some chance (about 30-50%) api server will hung with 100% CPU usage and 
will not reply to request.
  
  Strace on hung service:
  
- 
  gettimeofday({1432827199, 745141}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating 
user token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745226}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 2, 6) = 1 ([{fd=3, 
revents=POLLOUT}])
  sendto(3, "<151>keystonemiddleware.auth_token[12502]: DEBUG Authenticating 
user token __call__ 
/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:650\0", 154, 
0, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)
  gettimeofday({1432827199, 745325}, NULL) = 0
+ 
+ Tested on:
+ nova, glance, neutron:  1:2014.2.3, Ubuntu version.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459726

Title:
  api servers hang with 100% CPU if syslog restarted

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Affected:

  glance-api
  glance-registry
  neutron-server
  nova-api

  If a service was configured to use rsyslog and rsyslog was restarted
  after the API server started, it hangs on the next log line with 100%
  CPU. If the server has a few workers, each worker will eat its own 100%
  CPU share.

  Steps to reproduce:
  1. Configure syslog:
  use_syslog=true
  syslog_log_facility=LOG_LOCAL4
  2. restart api service
  3. restart rsyslog

  Execute some command to force logging. F.e.: neutron net-create foo,
  nova boot, etc.

  Expected result: normal operation

  Actual result:
  with some chance (about 30-50%) the api server will hang with 100% CPU
  usage and will not reply to requests.

  Strace on hung service:

  gettimeofday({1432827199, 745141}, NULL) = 0
  poll([{fd=3, events=POLLOUT|POLLERR|POLLHUP}, {fd=5, 
events=POLLIN|POL

[Yahoo-eng-team] [Bug 1457900] [NEW] dhcp_agents_per_network > 1 causes conflicts (NAKs) from dnsmasqs (breaks networks)

2015-05-22 Thread George Shuklin
Public bug reported:

If neutron was configured to have more than one DHCP agent per network
(option dhcp_agents_per_network=2), it causes each dnsmasq to reject the
leases of the other dnsmasqs, creating a mess and stopping instances from
booting normally.

Symptoms:

Cirros (at the log):
Sending discover...
Sending select for 188.42.216.146...
Received DHCP NAK
Usage: /sbin/cirros-dhcpc 
Sending discover...
Sending select for 188.42.216.146...
Received DHCP NAK
Usage: /sbin/cirros-dhcpc 
Sending discover...
Sending select for 188.42.216.146...
Received DHCP NAK

Steps to reproduce:
1. Set up neutron with VLANs and dhcp_agents_per_network=2 option in 
neutron.conf
2. Set up two or more different nodes with enabled neutron-dhcp-agent
3. Create VLAN neutron network with --enable-dhcp option
4. Create instance with that network

Expected behaviour:

The instance receives an IP address via DHCP without problems or delays.

Actual behaviour:

The instance gets stuck in network boot for a long time.
There are complaints about NAKs in the logs of the dhcp client.
There are multiple NAKs in tcpdump on the interfaces.

Additional analysis: it is very complex, so I attach an example of two
parallel tcpdumps from the two dhcp namespaces in HTML format.


Version: 2014.2.3
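
For diagnosing, it helps to confirm that two agents really host the same
network (sketch; 'mynet' is a placeholder, and admin credentials are
assumed in the usual OS_* environment variables):
```
import os

from neutronclient.v2_0 import client

neutron = client.Client(username=os.environ['OS_USERNAME'],
                        password=os.environ['OS_PASSWORD'],
                        tenant_name=os.environ['OS_TENANT_NAME'],
                        auth_url=os.environ['OS_AUTH_URL'])
net = neutron.list_networks(name='mynet')['networks'][0]
# Equivalent of "neutron dhcp-agent-list-hosting-net mynet": with
# dhcp_agents_per_network=2 this should print two agents, each of which
# runs its own dnsmasq for the subnet.
for agent in neutron.list_dhcp_agent_hosting_networks(net['id'])['agents']:
    print(agent['host'], agent['alive'], agent['admin_state_up'])
```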

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "tcpdump transcript of the bug"
   
https://bugs.launchpad.net/bugs/1457900/+attachment/4402420/+files/dhcp_neutron_bug.html

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457900

Title:
  dhcp_agents_per_network > 1 causes conflicts (NAKs) from dnsmasqs
  (breaks networks)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If neutron was configured to have more than one DHCP agent per network
  (option dhcp_agents_per_network=2), it causes each dnsmasq to reject the
  leases of the other dnsmasqs, creating a mess and stopping instances
  from booting normally.

  Symptoms:

  Cirros (at the log):
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK

  Steps to reproduce:
  1. Set up neutron with VLANs and dhcp_agents_per_network=2 option in 
neutron.conf
  2. Set up two or more different nodes with enabled neutron-dhcp-agent
  3. Create VLAN neutron network with --enable-dhcp option
  4. Create instance with that network

  Expected behaviour:

  The instance receives an IP address via DHCP without problems or delays.

  Actual behaviour:

  The instance gets stuck in network boot for a long time.
  There are complaints about NAKs in the logs of the dhcp client.
  There are multiple NAKs in tcpdump on the interfaces.

  Additional analysis: it is very complex, so I attach an example of two
  parallel tcpdumps from the two DHCP namespaces in HTML format.

  
  Version: 2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457598] [NEW] Horizon unable to change quotas if routers extension is disabled in neutron

2015-05-21 Thread George Shuklin
Public bug reported:

Horizon version:

openstack-dashboard 1:2014.2.2-0ubuntu1~cloud0
python-django-horizon   1:2014.2.2-0ubuntu1~cloud0

Steps to reproduce:

1. Disable router extension in neutron (empty service_plugins in neutron.conf)
2. Disable routers in horizon OPENSTACK_NEUTRON_NETWORK = { 
'enable_router': False, ...
3. Try to change quotas for tenant

Expected behaviour:

1. Quotas dialogue without the fields 'routers/floatingips'
2. Changes in quotas can be saved.

Actual behaviour:

1. Interface shows empty fields 'routers' and 'floatingips'
2. Attempt to save quotas without changes fails (see screenshot), complaining 
about 'this field is required'.
3. Any values in these fields are rejected by the server: 'Error: Modified
project information and members, but unable to modify project quotas.'
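
For reference, the setting from step 2 lives in local_settings.py and looks
roughly like this (only enable_router is taken from this report; the other
keys are assumptions based on the defaults of that era):

```
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,   # routers hidden, yet quota fields still render
    'enable_quotas': True,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
}
```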

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "quotas_routers_bug.png"
   
https://bugs.launchpad.net/bugs/1457598/+attachment/4401907/+files/quotas_routers_bug.png

** Description changed:

  Horizon version:
  
  openstack-dashboard 1:2014.2.2-0ubuntu1~cloud0
  python-django-horizon   1:2014.2.2-0ubuntu1~cloud0
  
  Steps to reproduce:
  
  1. Disable router extension in neutron (empty service_plugins in neutron.conf)
  2. Disable routers in horizon OPENSTACK_NEUTRON_NETWORK = { 
'enable_router': False, ...
  3. Try to change quotas for tenant
  
  Expected behaviour:
  
  1. Quotes dialogue without fields 'routers/floatingips'
  2. Changes in quotas can be saved.
  
  Actual behaviour:
  
- 1. Interface shows empty fields routers and quotas
+ 1. Interface shows empty fields 'routers' and 'floatingips'
  2. Attempt to save quotas without changes fails (see screenshot), complaining 
about 'this field is required'.
  3. Any values in this fields rejected by server: 'Error: Modified project 
information and members, but unable to modify project quotas.'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1457598

Title:
  Horizon unable to change quotas if routers extension is disabled in
  neutron

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon version:

  openstack-dashboard 1:2014.2.2-0ubuntu1~cloud0
  python-django-horizon   1:2014.2.2-0ubuntu1~cloud0

  Steps to reproduce:

  1. Disable router extension in neutron (empty service_plugins in neutron.conf)
  2. Disable routers in horizon OPENSTACK_NEUTRON_NETWORK = { 
'enable_router': False, ...
  3. Try to change quotas for tenant

  Expected behaviour:

  1. Quotas dialogue without the fields 'routers/floatingips'
  2. Changes in quotas can be saved.

  Actual behaviour:

  1. Interface shows empty fields 'routers' and 'floatingips'
  2. Attempt to save quotas without changes fails (see screenshot), complaining 
about 'this field is required'.
  3. Any values in these fields are rejected by the server: 'Error: Modified
  project information and members, but unable to modify project quotas.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1457598/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425543] [NEW] (self-documentation) doc/api_samples/all_extensions/extensions-get-resp.json contain broken links

2015-02-25 Thread George Shuklin
Public bug reported:

doc/api_samples/all_extensions/extensions-get-resp.json in repository
contains broken links:

namespace": 
"http://docs.openstack.org/compute/ext/extended_rescue_with_image/api/v2";
namespace": "http://docs.openstack.org/compute/ext/rescue/api/v1.1";

etc.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425543

Title:
  (self-documentation) doc/api_samples/all_extensions/extensions-get-
  resp.json contain broken links

Status in OpenStack Compute (Nova):
  New

Bug description:
  doc/api_samples/all_extensions/extensions-get-resp.json in repository
  contains broken links:

  namespace": 
"http://docs.openstack.org/compute/ext/extended_rescue_with_image/api/v2";
  namespace": "http://docs.openstack.org/compute/ext/rescue/api/v1.1";

  etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424597] [NEW] Obscure 'No valid hosts found' if no free fixed IPs left in the network

2015-02-23 Thread George Shuklin
Public bug reported:

If a network has no free fixed IPs, new instances fail with 'No valid
hosts found' without a proper explanation.

Example:

nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a-
c2a1-4d8b-9f43-cf24d0dc8233

(There is no free IP left in network f3f2802a-c2a1-4d8b-
9f43-cf24d0dc8233)

nova show fb4552e5-50cb-4701-a095-c006e4545c04
...
| status   | BUILD |

(few seconds later)

| fault    | {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\ |
|          | ', u'  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances |
|          | instances[0].uuid) |
|          |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry |
|          | raise exception.NoValidHost(reason=msg) |
| status   | ERROR |


Expected behaviour: complain about 'No free IP' before attempting to schedule
the instance.

See https://bugs.launchpad.net/nova/+bug/1424594 for similar behaviour.
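
A sketch of the kind of pre-flight check being asked for (a hypothetical
helper, not nova code): compute the allocation-pool capacity of the subnet
and fail early with a readable message when it is exhausted:

```
# Hypothetical pre-check; subnet dicts follow the neutron API shape.
import ipaddress

def pool_capacity(allocation_pools):
    """allocation_pools: e.g. [{'start': '10.0.0.2', 'end': '10.0.0.254'}]"""
    total = 0
    for pool in allocation_pools:
        start = int(ipaddress.ip_address(pool['start']))
        end = int(ipaddress.ip_address(pool['end']))
        total += end - start + 1
    return total

def assert_free_ips(subnet, used_ip_count):
    free = pool_capacity(subnet['allocation_pools']) - used_ip_count
    if free <= 0:
        raise RuntimeError("No free fixed IPs left in subnet %s" % subnet['id'])
    return free
```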

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424597

Title:
  Obscure 'No valid hosts found' if no free fixed IPs left in the
  network

Status in OpenStack Compute (Nova):
  New

Bug description:
  If a network has no free fixed IPs, new instances fail with 'No valid
  hosts found' without a proper explanation.

  Example:

  nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a-
  c2a1-4d8b-9f43-cf24d0dc8233

  (There is no free IP left in network f3f2802a-c2a1-4d8b-
  9f43-cf24d0dc8233)

  nova show fb4552e5-50cb-4701-a095-c006e4545c04
  ...
  | status   | BUILD |

  (few seconds later)

  | fault    | {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\ |
  |          | ', u'  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances |
  |          | instances[0].uuid) |
  |          |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry |
  |          | raise exception.NoValidHost(reason=msg) |
  | status   | ERROR |

  
  Expected behaviour: complain about 'No free IP' before attempting to
  schedule the instance.

  See https://bugs.launchpad.net/nova/+bug/1424594 for similar
  behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424597/+subscriptions


[Yahoo-eng-team] [Bug 1424594] [NEW] 500 error and 2 traces if no free fixed IP left in the neutron network

2015-02-23 Thread George Shuklin
Public bug reported:

If nova receives a 404 from neutron due to a lack of free fixed IPs, it traces
badly and returns a 500 error to the user.

Steps to reproduce:
0. Setup nova & neutron, create network & subnetwork
1. Consume all IP from that network
2. Try to attach interface to that network (nova interface-attach --net-id 
NET-UUID SERVER-UUID)

Actual behaviour:

ERROR (ClientException): The server has either erred or is incapable of
performing the requested operation. (HTTP 500) (Request-ID: req-
99ec-6a69-428d-9c16-c58d685553dd)

... and traces (see below)

Expected behaviour:

A proper complaint about the lack of IPs (NoMoreFixedIps) and a proper HTTP
error code.
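
A minimal sketch of the expected translation (hypothetical names, not the
actual nova API code): catch the specific exception at the API boundary and
return a 4xx with a readable message instead of a generic 500:

```
# Hypothetical translation layer; _do_attach and NoMoreFixedIps stand in
# for the real nova internals.
import webob.exc

class NoMoreFixedIps(Exception):
    """Stand-in for nova.exception.NoMoreFixedIps."""

def _do_attach(req, server_id, body):
    raise NoMoreFixedIps()  # simulate the failure path for the example

def attach_interface(req, server_id, body):
    try:
        return _do_attach(req, server_id, body)
    except NoMoreFixedIps:
        raise webob.exc.HTTPConflict(
            explanation="Zero fixed ips available in the requested network.")
```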

Traces (nova-api):

nova.api.openstack.wsgi[26783]: DEBUG Action: 'create', calling method: >, body: {"interfaceAttachment": {"net_id": 
"f3f2802a-c2a1-4d8b-9f43-cf24d0dc8233"}} 
[req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1 
78ea359977584bcc9feceef2553dbe57] _process_stack 
/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:934
nova.api.openstack.compute.contrib.attach_interfaces[26783]: AUDIT [instance: 
5f1e84cb-1766-45e1-899b-9de1e535309b] Attach interface 
[req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1 
78ea359977584bcc9feceef2553dbe57]
nova.api.openstack[26783]: ERROR Caught error: Zero fixed ips available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 134, in _dispatch_and_reply
incoming.message))

  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 177, in _dispatch
return self._do_dispatch(endpoint, method, ctxt, args)

  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 419, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
payload)

  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
return f(self, context, *args, **kw)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in 
decorated_function
pass

  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 289, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 331, in 
decorated_function
kwargs['instance'], e, sys.exc_info())

  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 319, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4787, 
in attach_interface
context, instance, port_id, network_id, requested_ip)

  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
569, in allocate_port_for_instance
requested_networks=requested_networks)

  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
443, in allocate_for_instance
self._delete_ports(neutron, instance, created_port_ids)

  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
423, in allocate_for_instance
security_group_ids, available_macs, dhcp_opts)

  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
226, in _create_port
raise exception.NoMoreFixedIps()

NoMoreFixedIps: Zero fixed ips available.
 [req-57f4e821-a968-48cd-8358-f73fa16b4ff7 4aac5cb61b1741b2a32067619555ecc1 
78ea359977584bcc9feceef2553dbe57]
nova.api.openstack[26783]: TRACE Traceback (most recent call last):
nova.api.openstack[26783]: TRACE   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 124, in 
__call__
nova.api.openstack[26783]: TRACE return req.get_response(self.application)
nova.api.openstack[26783]: TRACE   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
nova.api.openstack[26783]: TRACE application, catch_exc_info=False)
nova.api.openstack[26783]: TRACE   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
nova.api.openstack[26783]: TRACE app_iter = application(self.environ, 
start_response)
nova.api.openst

[Yahoo-eng-team] [Bug 1419002] [NEW] nova do not compain if 'my_ip' is wrong

2015-02-06 Thread George Shuklin
Public bug reported:

If the my_ip in the nova config does not exist on any interface of the compute
host, nova-compute silently accepts it and cold migration fails.

Expected behaviour: an error or warning if my_ip cannot be found on any
interface.

Nova version: 1:2014.2.1-0ubuntu1~cloud0
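
A sketch of the requested startup check (hypothetical, using psutil rather
than anything nova ships): warn when my_ip is not bound to any local
interface:

```
# Hypothetical startup validation; psutil is an assumption.
import psutil

def validate_my_ip(my_ip):
    local_addresses = {addr.address
                       for addrs in psutil.net_if_addrs().values()
                       for addr in addrs}
    if my_ip not in local_addresses:
        print("WARNING: my_ip %s is not configured on any local interface; "
              "cold migration will fail" % my_ip)

validate_my_ip("10.0.0.42")  # example value
```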

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419002

Title:
  nova do not compain if 'my_ip' is wrong

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the my_ip in the nova config does not exist on any interface of the
  compute host, nova-compute silently accepts it and cold migration fails.

  Expected behaviour: an error or warning if my_ip cannot be found on any
  interface.

  Nova version: 1:2014.2.1-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1418590] Re: No ERROR state if image deleted, _base is lost and instance is rescued

2015-02-05 Thread George Shuklin
** Description changed:

  State to reproduce:
  1. Boot instance from image
  2. Delete image
  3. Stop nova-compute
  4. Remove /var/lib/nova/instances/_base/*
  5. start nova-compute
  6. Try to rescue instance (nova rescue image)
  
  Nova-compute will fail with few  traces (see below) and instance get
  strange state:
  
  nova show 290ab4b7-7225-4b74-853a-d342974a2080
  
+--+--+
  | Property | Value
|
  
+--+--+
  | OS-DCF:diskConfig| AUTO 
|
  | OS-EXT-AZ:availability_zone  | nova 
|
  | OS-EXT-SRV-ATTR:host | pp3  
|
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | pp3  
|
  | OS-EXT-SRV-ATTR:instance_name| instance-00f6
|
  | OS-EXT-STS:power_state   | 1
|
  | OS-EXT-STS:task_state| -
|
  | OS-EXT-STS:vm_state  | active   
|
  | OS-SRV-USG:launched_at   | 2015-02-05T14:15:30.00   
|
  | OS-SRV-USG:terminated_at | -
|
  (skip)
  
  After that it is impossible to unrescue instance  (Cannot 'unrescue'
  while instance is in vm_state active) or hard-reset (nothing happens).
  
  Only nova reset-state helps.
  
- Expected behavior: set it to ERROR state.
+ Expected behavior: set it to ERROR state or, better, to reject rescue if
+ no image found.
  
  Traces:
  
  2015-02-05 15:59:41.973 7281 INFO nova.virt.libvirt.driver 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Instance failed to shutdown in 60 seconds.
  2015-02-05 15:59:42.363 7281 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.7/dist-packages/nova/virt/driver.py:1298
  2015-02-05 15:59:42.364 7281 INFO nova.compute.manager [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] VM Stopped (Lifecycle Event)
  2015-02-05 15:59:42.366 7281 INFO nova.virt.libvirt.driver [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Instance destroyed successfully.
  2015-02-05 15:59:42.368 7281 INFO nova.virt.libvirt.driver 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Creating image
  2015-02-05 15:59:42.369 7281 DEBUG nova.openstack.common.processutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] Running cmd (subprocess): sudo 
nova-rootwrap /etc/nova/rootwrap.conf chown 106 
/var/lib/nova/instances/290ab4b7-7225-4b74-853a-d342974a2080/console.log 
execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:161
  2015-02-05 15:59:42.389 7281 DEBUG nova.compute.manager [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: active, current task_state: 
rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:
  2015-02-05 15:59:42.405 7281 DEBUG nova.openstack.common.processutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] Result was 0 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:195
  2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Created new semaphore 
"70a880bdefde82d942a92de4c180c202e6090dd6" internal_lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
  2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Acquired semaphore 
"70a880bdefde82d942a92de4c180c202e6090dd6" lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:229
  2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Attempting to grab external lock 
"70a880bdefde82d942a92de4c180c202e6090dd6" external_lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:178
  2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Got file lock 
"/var/lib/nova/instances/locks/nova-70a880bdefde82d942a92de4c180c202e6090dd6" 
acquire /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:93
  2015-02-05 15:59:42.407 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3

[Yahoo-eng-team] [Bug 1418590] [NEW] No ERROR state if image deleted, _base is lost and instance is rescued

2015-02-05 Thread George Shuklin
Public bug reported:

State to reproduce:
1. Boot instance from image
2. Delete image
3. Stop nova-compute
4. Remove /var/lib/nova/instances/_base/*
5. start nova-compute
6. Try to rescue instance (nova rescue image)

Nova-compute fails with a few traces (see below) and the instance gets into a
strange state:

nova show 290ab4b7-7225-4b74-853a-d342974a2080
+--+--+
| Property | Value  
  |
+--+--+
| OS-DCF:diskConfig| AUTO   
  |
| OS-EXT-AZ:availability_zone  | nova   
  |
| OS-EXT-SRV-ATTR:host | pp3
  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | pp3
  |
| OS-EXT-SRV-ATTR:instance_name| instance-00f6  
  |
| OS-EXT-STS:power_state   | 1  
  |
| OS-EXT-STS:task_state| -  
  |
| OS-EXT-STS:vm_state  | active 
  |
| OS-SRV-USG:launched_at   | 2015-02-05T14:15:30.00 
  |
| OS-SRV-USG:terminated_at | -  
  |
(skip)

After that it is impossible to unrescue the instance (Cannot 'unrescue'
while instance is in vm_state active) or hard-reset it (nothing happens).

Only nova reset-state helps.

Expected behavior: set it to ERROR state.

Traces:

2015-02-05 15:59:41.973 7281 INFO nova.virt.libvirt.driver 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Instance failed to shutdown in 60 seconds.
2015-02-05 15:59:42.363 7281 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.7/dist-packages/nova/virt/driver.py:1298
2015-02-05 15:59:42.364 7281 INFO nova.compute.manager [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] VM Stopped (Lifecycle Event)
2015-02-05 15:59:42.366 7281 INFO nova.virt.libvirt.driver [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Instance destroyed successfully.
2015-02-05 15:59:42.368 7281 INFO nova.virt.libvirt.driver 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Creating image
2015-02-05 15:59:42.369 7281 DEBUG nova.openstack.common.processutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] Running cmd (subprocess): sudo 
nova-rootwrap /etc/nova/rootwrap.conf chown 106 
/var/lib/nova/instances/290ab4b7-7225-4b74-853a-d342974a2080/console.log 
execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:161
2015-02-05 15:59:42.389 7281 DEBUG nova.compute.manager [-] [instance: 
290ab4b7-7225-4b74-853a-d342974a2080] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: active, current task_state: 
rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:
2015-02-05 15:59:42.405 7281 DEBUG nova.openstack.common.processutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 None] Result was 0 execute 
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:195
2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Created new semaphore 
"70a880bdefde82d942a92de4c180c202e6090dd6" internal_lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Acquired semaphore 
"70a880bdefde82d942a92de4c180c202e6090dd6" lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:229
2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Attempting to grab external lock 
"70a880bdefde82d942a92de4c180c202e6090dd6" external_lock 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:178
2015-02-05 15:59:42.406 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Got file lock 
"/var/lib/nova/instances/locks/nova-70a880bdefde82d942a92de4c180c202e6090dd6" 
acquire /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:93
2015-02-05 15:59:42.407 7281 DEBUG nova.openstack.common.lockutils 
[req-af7026f3-4d85-4899-8452-2b69a3e66123 ] Got semaphore / lock 
"fetch_func_sync" inner 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:271
2015-02-05 15:59:42.407 7281 DEBUG glanceclien

[Yahoo-eng-team] [Bug 1412798] [NEW] Typo in section header in config silently disables all config parsing

2015-01-20 Thread George Shuklin
Public bug reported:

I know it sounds silly, but I just spent five hours trying to find out why
glance was not working with swift and was printing random errors. In the end I
found it had ignored all debug/log settings, and later I found the source of
the problem - a small typo in my config.

If the config contains '[[DEFAULT]' instead of '[DEFAULT]', glance ignores
all settings in the section (I think this applies not only to 'default', but
'default' is the most devastating, because it disables logging and
logging locations).

Proposed solution: write a warning to stdout if the configuration file
contains no '[DEFAULT]' section.
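
A sketch of the proposed warning (a hypothetical standalone check, not glance
code). Note that ConfigParser accepts the line '[[DEFAULT]' as a section
literally named '[DEFAULT', which is why everything after it is silently
ignored:

```
# Hypothetical checker: flag section headers that look like bracket typos.
try:
    import configparser                   # Python 3
except ImportError:
    import ConfigParser as configparser   # Python 2

def warn_on_suspicious_sections(path):
    parser = configparser.RawConfigParser()
    parser.read(path)
    for section in parser.sections():
        if '[' in section or ']' in section:
            print("WARNING: %s: suspicious section [%s] -- did you mean "
                  "[%s]?" % (path, section, section.strip('[]')))

warn_on_suspicious_sections('/etc/glance/glance-api.conf')
```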

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1412798

Title:
  Typo in section header in config silently disables all config parsing

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I know it sounds silly, but I just spent five hours trying to find out why
  glance was not working with swift and was printing random errors. In the end
  I found it had ignored all debug/log settings, and later I found the source
  of the problem - a small typo in my config.

  If the config contains '[[DEFAULT]' instead of '[DEFAULT]', glance ignores
  all settings in the section (I think this applies not only to 'default', but
  'default' is the most devastating, because it disables logging and
  logging locations).

  Proposed solution: write a warning to stdout if the configuration file
  contains no '[DEFAULT]' section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1412798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404962] [NEW] openvswitch mech. driver does not report error in check_segment_for_agent

2014-12-22 Thread George Shuklin
Public bug reported:

When the administrator misspells the mappings for external flat networks, nova
fails with an obscure trace during instance creation:

 Traceback (most recent call last):
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2231, 
in _build_resources
 yield resources
   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2101, 
in _build_and_run_instance
 block_device_info=block_device_info)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2619, in spawn
 write_to_disk=True)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4150, in _get_guest_xml
 context)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3936, in _get_guest_config
 flavor, CONF.libvirt.virt_type)
   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 352, 
in get_config
 _("Unexpected vif_type=%s") % vif_type)
 NovaException: Unexpected vif_type=binding_failed


The real problem lies in neutron/plugins/ml2/drivers/mech_openvswitch.py:

network_type = segment[api.NETWORK_TYPE]
if network_type == 'local':
return True
elif network_type in tunnel_types:
return True
elif network_type in ['flat', 'vlan']:
return segment[api.PHYSICAL_NETWORK] in mappings
else:
return False

If network_type is 'flat' and segment[api.PHYSICAL_NETWORK] is not in
mappings, it returns False; this causes all the other problems.

Proposal: add some kind of WARNING in this place to let the administrator
know that no matching mapping was found.
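
A sketch of the proposed change (the names mirror the snippet above; this is
an illustration, not the actual patch):

```
# Illustrative check_segment_for_agent with the proposed warning; the
# surrounding module, the api.* constants and the agent mappings are assumed.
import logging

LOG = logging.getLogger(__name__)

def check_segment_for_agent(segment, mappings, tunnel_types):
    network_type = segment['network_type']      # api.NETWORK_TYPE
    if network_type == 'local' or network_type in tunnel_types:
        return True
    if network_type in ('flat', 'vlan'):
        physnet = segment['physical_network']   # api.PHYSICAL_NETWORK
        if physnet not in mappings:
            LOG.warning("Physical network %s is not in the agent's bridge "
                        "mappings %s; port binding will fail",
                        physnet, list(mappings))
            return False
        return True
    return False
```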

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404962

Title:
  openvswitch mech. driver does not report error in
  check_segment_for_agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the administrator misspells the mappings for external flat networks,
  nova fails with an obscure trace during instance creation:

   Traceback (most recent call last):
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
2231, in _build_resources
   yield resources
 File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
2101, in _build_and_run_instance
   block_device_info=block_device_info)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2619, in spawn
   write_to_disk=True)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4150, in _get_guest_xml
   context)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
3936, in _get_guest_config
   flavor, CONF.libvirt.virt_type)
 File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 
352, in get_config
   _("Unexpected vif_type=%s") % vif_type)
   NovaException: Unexpected vif_type=binding_failed

  
  The real problem lies in neutron/plugins/ml2/drivers/mech_openvswitch.py:

  network_type = segment[api.NETWORK_TYPE]
  if network_type == 'local':
  return True
  elif network_type in tunnel_types:
  return True
  elif network_type in ['flat', 'vlan']:
  return segment[api.PHYSICAL_NETWORK] in mappings
  else:
  return False

  If network_type is 'flat' and segment[api.PHYSICAL_NETWORK] is not in
  mappings, it returns False; this causes all the other problems.

  Proposal: add some kind of WARNING in this place to let the administrator
  know that no matching mapping was found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404943] [NEW] 'Error: Invalid service catalog service: volume' if no volume service is defined

2014-12-22 Thread George Shuklin
Public bug reported:

If an OpenStack installation has no cinder service in the endpoint list,
horizon reports 'Error: Invalid service catalog service: volume' many
times (after login, and each time the dialog for a new instance is opened).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1404943

Title:
  'Error: Invalid service catalog service: volume' if no volume service
  is defined

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an OpenStack installation has no cinder service in the endpoint list,
  horizon reports 'Error: Invalid service catalog service: volume' many
  times (after login, and each time the dialog for a new instance is opened).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1404943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1396677] [NEW] Heavy use of metering labels/rules cause memory leak in neutron server

2014-11-26 Thread George Shuklin
Public bug reported:

We found that a large number of metering labels and rules causes a memory
leak in the neutron server. The problem is multiplied by the number of workers
(10 workers - 10x memory leak).

In our case we have 657 metering-labels and 122399 metering-label-rules.

If anyone queries them, the neutron-server worker that picked up the request
eats +400MB of memory and keeps it until restart. If more requests are sent,
they land on different workers, causing each of them to bloat up.

The same problem happens if neutron-plugin-metering-agent is running (it sends
requests to neutron-server with the same effect).

If neutron-server hits 100% CPU, it starts to consume even more memory (in our
case up to 1.4GB per neutron-server worker).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1396677

Title:
  Heavy use of metering labels/rules cause memory leak in neutron server

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We found that a large number of metering labels and rules causes a memory
  leak in the neutron server. The problem is multiplied by the number of
  workers (10 workers - 10x memory leak).

  In our case we have 657 metering-labels and 122399 metering-label-rules.

  If anyone queries them, the neutron-server worker that picked up the request
  eats +400MB of memory and keeps it until restart. If more requests are sent,
  they land on different workers, causing each of them to bloat up.

  The same problem happens if neutron-plugin-metering-agent is running (it
  sends requests to neutron-server with the same effect).

  If neutron-server hits 100% CPU, it starts to consume even more memory
  (in our case up to 1.4GB per neutron-server worker).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1396677/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1392921] [NEW] host ssh key has been changed after full installation reboot

2014-11-14 Thread George Shuklin
Public bug reported:

We had a planned outage for the whole OpenStack installation, and after
booting back (plus a few reboots of hosts and instances during that process)
many (maybe all) instances changed their ssh keys.
OS: havana@ubuntu
cloud-init:
cloud-init 0.7.2-3~bpo70+1
cloud-initramfs-growroot   0.18.debian5~bpo70+1  

cloud-init.log in attachment.

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.log"
   
https://bugs.launchpad.net/bugs/1392921/+attachment/4260843/+files/cloud-init.log

** Description changed:

  We've has a planned outage for whole OS installation, and after booting
  back (+few reboots of hosts and instances during that process) many (may
  be all) instances changed their ssh keys.
  
  OS: havana@ubuntu
- cloud-init: 
- ii  cloud-init 0.7.2-3~bpo70+1   all  
initialization system for infrastructure cloud instances
- ii  cloud-initramfs-growroot   0.18.debian5~bpo70+1  all  
automatically resize the root partition on first boot
- 
+ cloud-init:
+ cloud-init 0.7.2-3~bpo70+1
+ cloud-initramfs-growroot   0.18.debian5~bpo70+1  
  
  cloud-init.log in attachment.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1392921

Title:
  host ssh key has been changed after full installation reboot

Status in Init scripts for use on cloud images:
  New

Bug description:
  We had a planned outage for the whole OpenStack installation, and after
  booting back (plus a few reboots of hosts and instances during that process)
  many (maybe all) instances changed their ssh keys.

  OS: havana@ubuntu
  cloud-init:
  cloud-init 0.7.2-3~bpo70+1
  cloud-initramfs-growroot   0.18.debian5~bpo70+1  

  cloud-init.log in attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1392921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358147] [NEW] ProgrammingError: You have an error in your SQL syntax 'INSERT INTO meteringlabels'

2014-08-18 Thread George Shuklin
Public bug reported:

The installation had been working for a few months when we got this message in
logstash. It happened once, with no surrounding activity (no requests to the
API).

Havana, ubuntu-cloud-archive, 2013.2.3-0ubuntu1.1

Aug 17 21:48:59 api1 neutron.openstack.common.db.sqlalchemy.session[12400]:
ERROR DB exception wrapped.
Traceback (most recent call last):
  File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 524, in _wrap
return f(*args, **kwargs)
  File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 718, in flush
return super(Session, self).flush(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, 
in flush
self._flush(objects)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, 
in _flush
transaction.rollback(_capture_exception=True)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
58, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1900, 
in _flush
flush_context.execute()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
372, in execute
rec.execute(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 
525, in execute
uow
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
64, in save_obj
table, insert)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
541, in _emit_insert_statements
execute(statement, multiparams)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, 
in execute
params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, 
in _execute_clauseelement
compiled_sql, distilled_params
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, 
in _execute_context
context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, 
in _handle_dbapi_exception
exc_info
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 195, 
in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, 
in _execute_context

context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
324, in do_execute
cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in 
execute
self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
raise errorclass, errorvalue

ProgrammingError: (ProgrammingError) (1064, 'You have an error in your
SQL syntax; check the manual that corresponds to your MySQL server
version for the right syntax to use near \':
"\'eaa6e8e248ce4f3784282b0c3f51384a\'", u\'created_at\':
"\'2014-08-05T06:34:28+00:\' at line 1') 'INSERT INTO meteringlabels
(tenant_id, id, name, description) VALUES (%s, %s, %s, %s)'
({u'keystone_tenant_id': u'eaa6e8e248ce4f3784282b0c3f51384a',
u'created_at': u'2014-08-05T06:34:28+00:00', u'updated_at':
u'2014-08-05T06:34:28+00:00', u'admin_blocked': False, u'id': 817,
u'blocked': False}, '81070432-d514-4606-b564-e7f76138d985', 'billable',
'')

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: neutron (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  Installation works about few months, got this message in logstash.
  Happens once, there is no surrounding activity (no requests to API).
  
+ Havanna, ubuntu-cloud-archive, 2013.2.3-0ubuntu1.1
+ 
  Aug 17 21:48:59 api1 neutron.openstack.common.db.sqlalchemy.session[12400]:   
 ERROR DB exception wrapped.
  Traceback (most recent call last):
-   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 524, in _wrap
- return f(*args, **kwargs)
-   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py",
 line 718, in flush
- return super(Session, self).flush(*args, **kwargs)
-   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
1818, in flush
- self._flush(objects)
-   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
1936, in _flush
- transaction.rollback(_capture_exception=True)
-   File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", 
line 58, in __exit__
- compat.reraise(exc_type, exc_value, exc_tb)
-   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
1900, 

[Yahoo-eng-team] [Bug 1329313] [NEW] server migration fails if it image in glance was deleted

2014-06-12 Thread George Shuklin
Public bug reported:

If instance is migrated from hypervisor by 'nova host-servers-migrate'
and it image was deleted, instance fails to start with message

{u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be
found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

Steps to reproduce:
1. Create instance
2. Delete image that instance starts from.
3. Run nova host-servers-migrate on host where instance running

Expected behavior:
The instance migrates successfully.

Actual behavior:
The instance is transferred to the new hypervisor but fails to start with the message:

status: ERROR
fault: {u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be 
found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

nova-compute at destination hypervisor:


Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3162, 
in finish_resize
disk_info, image)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3130, 
in _finish_resize
block_device_info, power_on)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4605, in finish_migration
block_device_info=None, inject_files=False)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2389, in _create_image
project_id=instance['project_id'])
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 179, in cache
*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 336, in create_image
prepare_template(target=base, max_size=size, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 246, in inner
return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", 
line 167, in call_if_not_exists
fetch_func(target=target, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 645, 
in fetch_image
max_size=max_size)
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 196, in 
fetch_to_raw
max_size=max_size)
  File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 190, in 
fetch
image_service.download(context, image_id, dst_path=path)
  File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 349, in 
download 
_reraise_translated_image_exception(image_id)
  File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 347, in 
download 
image_chunks = self._client.call(context, 1, 'data', image_id)
  File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 212, in 
call
return getattr(client.images, method)(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 127, 
in data
% urllib.quote(str(image_id)))
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
272, in raw_request
return self._http_request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 
233, in _http_request
raise exc.from_response(resp, body_str)
ImageNotFound: Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be found.


Version: havana,  1:2013.2.3-0ubuntu1~cloud0 (ubuntu)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329313

Title:
  server migration fails if it image in glance was deleted

Status in OpenStack Compute (Nova):
  New

Bug description:
  If an instance is migrated off a hypervisor by 'nova host-servers-migrate'
  and its image was deleted, the instance fails to start with the message

  {u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be
  found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

  Steps to reproduce:
  1. Create instance
  2. Delete image that instance starts from.
  3. Run nova host-servers-migrate on host where instance running

  Expected behavior:
  The instance migrates successfully.

  Actual behavior:
  The instance is transferred to the new hypervisor but fails to start with the message:

  status: ERROR
  fault: {u'message': u'Image d2ab45e6-3db0-450b-b5aa-8b0646e063a2 could not be 
found.', u'code': 404, u'created': u'2014-06-12T12:39:27Z'}

  nova-compute at destination hypervisor:

  
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3162, 
in finish_resize
  disk_info, image)
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3130, 
in _finish_resize
  block_device_info, power_on)
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
4605, in finish_migration
  block_device_info=None, inject_files=False)
File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
2389, in _create_image
  proj

[Yahoo-eng-team] [Bug 1323383] [NEW] Ubuntu source package for neutron can not be rebuild

2014-05-26 Thread George Shuklin
Public bug reported:

Ubuntu's source package for neutron cannot be rebuilt twice:

1. There is no proper clean target.
2. neutron.egg-info is included in neutron_2013.2.3.orig.tar.gz (despite the
.gitignore in the original git).

That causes problems when the package is built twice from the same source. The
1st build is fine, the 2nd causes the following errors:

(each type of error cited once)
1. dpkg-source: warning: newly created empty file 
'build/lib.linux-x86_64-2.7/neutron/openstack/common/__init__.py' will not be 
represented in diff
2. dpkg-source: error: cannot represent change to neutron/__init__.pyc: binary 
file contents changed

3. dpkg-source: info: local changes detected, the modified files are:
 neutron-2013.2.3/neutron.egg-info/entry_points.txt
 neutron-2013.2.3/neutron.egg-info/requires.txt

1 and 2 are caused by the lack of a clean target.

The 3rd error is more problematic:
tar -tzvf neutron_2013.2.3.orig.tar.gz|grep egg
drwxrwxr-x jenkins/jenkins  0 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/
-rw-rw-r-- jenkins/jenkins   1800 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/PKG-INFO
-rw-rw-r-- jenkins/jenkins  1 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/dependency_links.txt
-rw-rw-r-- jenkins/jenkins 16 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/top_level.txt
-rw-rw-r-- jenkins/jenkins  52753 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/SOURCES.txt
-rw-rw-r-- jenkins/jenkins   3654 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/entry_points.txt
-rw-rw-r-- jenkins/jenkins  1 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/not-zip-safe
-rw-rw-r-- jenkins/jenkins406 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/requires.txt

But the git repository states it should not be included in the source:
https://github.com/openstack/neutron/blob/stable/havana/.gitignore
(neutron.egg-info/).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323383

Title:
  Ubuntu source package for neutron can not be rebuild

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Ubuntu's source package for neutron cannot be rebuilt twice:

  1. There is no proper clean target.
  2. neutron.egg-info is included in neutron_2013.2.3.orig.tar.gz (despite
  the .gitignore in the original git).

  That causes problems when the package is built twice from the same source.
  The 1st build is fine, the 2nd causes the following errors:

  (each type of error cited once)
  1. dpkg-source: warning: newly created empty file 
'build/lib.linux-x86_64-2.7/neutron/openstack/common/__init__.py' will not be 
represented in diff
  2. dpkg-source: error: cannot represent change to neutron/__init__.pyc: 
binary file contents changed

  3. dpkg-source: info: local changes detected, the modified files are:
   neutron-2013.2.3/neutron.egg-info/entry_points.txt
   neutron-2013.2.3/neutron.egg-info/requires.txt

  1 and 2 are caused by the lack of a clean target.

  The 3rd error is more problematic:
  tar -tzvf neutron_2013.2.3.orig.tar.gz|grep egg
  drwxrwxr-x jenkins/jenkins  0 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/
  -rw-rw-r-- jenkins/jenkins   1800 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/PKG-INFO
  -rw-rw-r-- jenkins/jenkins  1 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/dependency_links.txt
  -rw-rw-r-- jenkins/jenkins 16 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/top_level.txt
  -rw-rw-r-- jenkins/jenkins  52753 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/SOURCES.txt
  -rw-rw-r-- jenkins/jenkins   3654 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/entry_points.txt
  -rw-rw-r-- jenkins/jenkins  1 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/not-zip-safe
  -rw-rw-r-- jenkins/jenkins406 2014-04-03 20:49 
neutron-2013.2.3/neutron.egg-info/requires.txt

  But the git repository states it should not be included in the source:
  https://github.com/openstack/neutron/blob/stable/havana/.gitignore
  (neutron.egg-info/).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310571] [NEW] ovs pluging floods auth.log (~200Mb/day)

2014-04-21 Thread George Shuklin
Public bug reported:

The ovs plugin floods auth.log with repetitive messages:

Apr 20 06:25:20 pp3 sudo:  neutron : TTY=unknown ; PWD=/ ; USER=root ; 
COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovs-vsctl 
--timeout=2 --format=json -- --columns=name,external_ids list Interface
Apr 20 06:25:20 pp3 sudo: pam_unix(sudo:session): session opened for user root 
by (uid=108)
Apr 20 06:25:20 pp3 sudo: pam_unix(sudo:session): session closed for user root

Those messages have no meaning; I think they should be disabled in the
rsyslog configuration.

The same bug was fixed by Cisco here: https://bugs.launchpad.net/openstack-
cisco/+bug/1197428

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: ubuntu
 Importance: Undecided
 Status: New

** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1310571

Title:
  ovs pluging floods auth.log (~200Mb/day)

Status in OpenStack Neutron (virtual network service):
  New
Status in Ubuntu:
  New

Bug description:
  The ovs plugin floods auth.log with repetitive messages:

  Apr 20 06:25:20 pp3 sudo:  neutron : TTY=unknown ; PWD=/ ; USER=root ; 
COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovs-vsctl 
--timeout=2 --format=json -- --columns=name,external_ids list Interface
  Apr 20 06:25:20 pp3 sudo: pam_unix(sudo:session): session opened for user 
root by (uid=108)
  Apr 20 06:25:20 pp3 sudo: pam_unix(sudo:session): session closed for user root

  Those messages have no meaning; I think they should be disabled in the
  rsyslog configuration.

  The same bug was fixed by Cisco here: https://bugs.launchpad.net
  /openstack-cisco/+bug/1197428

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1310571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297920] [NEW] Completely disabled availability zone cause horizon to trace at availability zones list

2014-03-26 Thread George Shuklin
Public bug reported:

If all compute nodes in some availability zone are disabled, horizon
raises a traceback on the availability zones list.

Steps to reproduce:

1. Create a host aggregate and availability zone (nova aggregate-create some some)
2. Add some (at least one) host to that host aggregate (nova aggregate-add-host some compute_host)
3. Disable the service on all compute_hosts (nova service-disable compute_host nova-compute)
4. Go to (in dashboard) Admin -> System Info -> Availability Zones

Expected result:

Output with the list of availability zones

Actual Result:


TemplateSyntaxError at /admin/info/

'NoneType' object has no attribute 'items'

Request Method: GET
Request URL:http://78.140.137.204/horizon/admin/info/
Django Version: 1.5.4
Exception Type: TemplateSyntaxError
Exception Value:

'NoneType' object has no attribute 'items'

Exception Location: 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/info/tables.py
 in get_hosts, line 60
Python Executable:  /usr/bin/python
Python Version: 2.7.3
Python Path:

['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
 '/usr/lib/python2.7',
 '/usr/lib/python2.7/plat-linux2',
 '/usr/lib/python2.7/lib-tk',
 '/usr/lib/python2.7/lib-old',
 '/usr/lib/python2.7/lib-dynload',
 '/usr/local/lib/python2.7/dist-packages',
 '/usr/lib/python2.7/dist-packages',
 '/usr/share/openstack-dashboard/',
 '/usr/share/openstack-dashboard/openstack_dashboard']

Server time:Wed, 26 Mar 2014 15:53:32 +


P. S.
+------------------+------+-----------+----------+-------+------------------------+-----------------+
| Binary           | Host | Zone      | Status   | State | Updated_at             | Disabled Reason |
+------------------+------+-----------+----------+-------+------------------------+-----------------+
| nova-scheduler   | pp1  | internal  | enabled  | up    | 2014-03-26T15:54:34.00 | None            |
| nova-consoleauth | pp1  | internal  | enabled  | up    | 2014-03-26T15:54:32.00 | None            |
| nova-conductor   | pp1  | internal  | enabled  | up    | 2014-03-26T15:54:39.00 | None            |
| nova-cert        | pp1  | internal  | enabled  | up    | 2014-03-26T15:54:35.00 | None            |
| nova-compute     | pp7  | test,nova | disabled | up    | 2014-03-26T15:54:34.00 | None            |
| nova-compute     | pp4  | nova      | enabled  | up    | 2014-03-26T15:54:33.00 | None            |
| nova-compute     | pp3  | nova      | enabled  | up    | 2014-03-26T15:54:39.00 | None            |
+------------------+------+-----------+----------+-------+------------------------+-----------------+
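
The exception location points at get_hosts in the admin info table; a
defensive sketch of a fix (hypothetical, the real method body may differ)
would tolerate a zone whose hosts attribute is None because every host in it
is disabled:

```
# Hypothetical defensive rewrite of the failing accessor; zone.hosts is
# assumed to be None when all hosts in the availability zone are disabled.
def get_hosts(zone):
    hosts = zone.hosts or {}  # avoids "'NoneType' object has no attribute 'items'"
    return list(hosts.items())
```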

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1297920

Title:
  Completely disabled availability zone cause horizon to trace at
  availability zones list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If all compute nodes in some availability zone are disabled, horizon
  raises a traceback on the availability zones list.

  Steps to reproduce:

  1. Create a host aggregate and availability zone (nova aggregate-create some some)
  2. Add some (at least one) host to that host aggregate (nova aggregate-add-host some compute_host)
  3. Disable the service on all compute_hosts (nova service-disable compute_host nova-compute)
  4. Go to (in dashboard) Admin -> System Info -> Availability Zones

  Expected result:

  Output with the list of availability zones

  Actual Result:

  
  TemplateSyntaxError at /admin/info/

  'NoneType' object has no attribute 'items'

  Request Method:   GET
  Request URL:  http://78.140.137.204/horizon/admin/info/
  Django Version:   1.5.4
  Exception Type:   TemplateSyntaxError
  Exception Value:  

  'NoneType' object has no attribute 'items'

  Exception Location:   
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/admin/info/tables.py
 in get_hosts, line 60
  Python Executable:/usr/bin/python
  Python Version:   2.7.3
  Python Path:  

  ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
   '/usr/lib/python2.7',
   '/usr/lib/python2.7/plat-linux2',
   '/usr/lib/python2.7/lib-tk',
   '/usr/lib/python2.7/lib-old',
   '/usr/lib/python2.7/lib-dynload',
   '/usr/local/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages',
   '/usr/share/openstack-dashboard/',
   '/usr/share/openstack-dashboard/openstack_dashboard']

  Server time:Wed, 26 Mar 2014 15:53:32 +

  
  P. S.
  
+--+--+---+--+---++-+
  | Binary   | Host | Zone  | Status   | State | Updated_at 
| Disabled Reason |
  
+

[Yahoo-eng-team] [Bug 1288859] [NEW] Load ballancer can't choose proper port in multi-network configuration

2014-03-06 Thread George Shuklin
Public bug reported:

If LBaaS functionality is enabled and instances have more than one network
interface, horizon incorrectly chooses the member ports to add to the LB
pool.

Steps to reproduce:

0. nova and neutron with LBaaS configured, plus horizon.
1. Create the 1st network (e.g. net1)
2. Create the 2nd network (e.g. net2)
3. Create a few (e.g. 6) instances attached to both networks.
4. Create an LB pool
5. Go to the members page and click 'Add members'
6. Select all instances from step 3, click Add

Expected result:
All selected interfaces are in the same network.

Actual result:
Some interfaces are selected from net1, some from net2.

And there is no way to attach an instance to the LB pool with the
proper interface via Horizon, because the 'Add member' dialog does not
allow choosing the instance's port.

Checked on Havana and icehouse-2.
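
As a workaround (not part of the original report), a member can be
created with an explicit address through the LBaaS v1 API, sidestepping
Horizon's port choice. A minimal sketch with python-neutronclient;
credentials, pool ID, and the address are placeholders:

```
from neutronclient.v2_0 import client

# Placeholder credentials -- replace with real values for your cloud.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Passing the address explicitly pins the member to the interface in
# net1, instead of letting the Horizon dialog pick an arbitrary port.
neutron.create_member({'member': {'pool_id': 'POOL_ID',
                                  'address': '10.0.1.5',
                                  'protocol_port': 80}})
```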

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: horizon => neutron

** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1288859

Title:
  Load balancer can't choose proper port in multi-network configuration

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If LBaaS functionality is enabled and instances have more than one
  network interface, Horizon incorrectly chooses the member ports to add
  to the LB pool.

  Steps to reproduce:

  0. nova and neutron with LBaaS configured, plus horizon.
  1. Create the 1st network (e.g. net1)
  2. Create the 2nd network (e.g. net2)
  3. Create a few (e.g. 6) instances attached to both networks.
  4. Create an LB pool
  5. Go to the members page and click 'Add members'
  6. Select all instances from step 3, click Add

  Expected result:
  All selected interfaces are in the same network.

  Actual result:
  Some interfaces are selected from net1, some from net2.

  And there is no way to attach an instance to the LB pool with the
  proper interface via Horizon, because the 'Add member' dialog does not
  allow choosing the instance's port.

  Checked on Havana and icehouse-2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286209] Re: unhandled trace if no namespaces in metering agent

2014-02-28 Thread George Shuklin
neutron-plugin-metering-agent 1:2013.2.1-0ubuntu1~cloud0

** Project changed: neutron => neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286209

Title:
  unhandled trace if no namespaces in metering agent

Status in “neutron” package in Ubuntu:
  New

Bug description:
  If the network node has no active routers on its l3-agent, the
  metering-agent produces a traceback:

  
  2014-02-28 17:04:51.286 1121 DEBUG 
neutron.services.metering.agents.metering_agent [-] Get router traffic counters 
_get_traffic_counters 
/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py:214
  2014-02-28 17:04:51.286 1121 DEBUG neutron.openstack.common.lockutils [-] Got 
semaphore "metering-agent" for method "_invoke_driver"... inner 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:191
  2014-02-28 17:04:51.286 1121 DEBUG neutron.common.log [-] 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
 method get_traffic_counters called with arguments 
(, [{u'status': u'ACTIVE', 
u'name': u'r', u'gw_port_id': u'86be6088-d967-45a8-bf69-8af76d956a3e', 
u'admin_state_up': True, u'tenant_id': u'1483a06525a5485e8a7dd64abaa66619', 
u'_metering_labels': [{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', 
u'direction': u'ingress', u'metering_label_id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'3991421b-50ce-46ea-b264-74bb47d09e65', u'excluded': False}, 
{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'706e55db-e2f7-4eb9-940a-67144a075a2c', u'excluded': False}], u'id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540'}], u'id': 
u'5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8'}]) {} wrapper 
/usr/lib/python2.7/dist-packages/neutron/common/log.py:33
  2014-02-28 17:04:51.286 1121 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z'] execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:43
  2014-02-28 17:04:51.291 1121 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z']
  Exit code: 1
  Stdout: ''
  Stderr: 'Cannot open network namespace: No such file or directory\n' execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:60
  2014-02-28 17:04:51.291 1121 ERROR neutron.openstack.common.loopingcall [-] 
in fixed duration looping call
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py", 
line 78, in _inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 163, in _metering_loop
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self._add_metering_infos()
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 155, in _add_metering_infos
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
accs = self._get_traffic_counters(self.context, self.routers.values())
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 215, in _get_traffic_counters
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
return self._invoke_driver(context, routers, 'get_traffic_counters')
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", 
line 247, in inner
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
retval = f(*args, **kwargs)
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 180, in _invoke_driver
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
{'driver': cfg.CONF.metering_driver,
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   
File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in 
__getattr__
  2014-02-28 17:04:51.291 1121 TRACE neutron.openstack

[Yahoo-eng-team] [Bug 1286209] [NEW] unhandled trace if no namespaces in metering agent

2014-02-28 Thread George Shuklin
Public bug reported:

If the network node has no active routers on its l3-agent, the
metering-agent produces a traceback:


2014-02-28 17:04:51.286 1121 DEBUG 
neutron.services.metering.agents.metering_agent [-] Get router traffic counters 
_get_traffic_counters 
/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py:214
2014-02-28 17:04:51.286 1121 DEBUG neutron.openstack.common.lockutils [-] Got 
semaphore "metering-agent" for method "_invoke_driver"... inner 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py:191
2014-02-28 17:04:51.286 1121 DEBUG neutron.common.log [-] 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
 method get_traffic_counters called with arguments 
(, [{u'status': u'ACTIVE', 
u'name': u'r', u'gw_port_id': u'86be6088-d967-45a8-bf69-8af76d956a3e', 
u'admin_state_up': True, u'tenant_id': u'1483a06525a5485e8a7dd64abaa66619', 
u'_metering_labels': [{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', 
u'direction': u'ingress', u'metering_label_id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'3991421b-50ce-46ea-b264-74bb47d09e65', u'excluded': False}, 
{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'19de35e4-ea99-4d84-9fbf-7b0c7a390540', u'id': 
u'706e55db-e2f7-4eb9-940a-67144a075a2c', u'excluded': False}], u'id': 
u'19de35e4-ea99-4d84-9fbf-7b0c7a390540'}], u'id': 
u'5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8'}]) {} wrapper 
/usr/lib/python2.7/dist-packages/neutron/common/log.py:33
2014-02-28 17:04:51.286 1121 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z'] execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:43
2014-02-28 17:04:51.291 1121 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', 'ip', 'netns', 'exec', 
'qrouter-5ccfe6b8-9c3b-44c4-9580-da0d74ccdcf8', 'iptables', '-t', 'filter', 
'-L', 'neutron-meter-l-19de35e4-ea9', '-n', '-v', '-x', '-Z']
Exit code: 1
Stdout: ''
Stderr: 'Cannot open network namespace: No such file or directory\n' execute 
/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:60
2014-02-28 17:04:51.291 1121 ERROR neutron.openstack.common.loopingcall [-] in 
fixed duration looping call
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
Traceback (most recent call last):
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py", 
line 78, in _inner
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self.f(*self.args, **self.kw)
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 163, in _metering_loop
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
self._add_metering_infos()
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 155, in _add_metering_infos
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
accs = self._get_traffic_counters(self.context, self.routers.values())
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 215, in _get_traffic_counters
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
return self._invoke_driver(context, routers, 'get_traffic_counters')
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", line 
247, in inner
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
retval = f(*args, **kwargs)
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/services/metering/agents/metering_agent.py",
 line 180, in _invoke_driver
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
{'driver': cfg.CONF.metering_driver,
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in __getattr__
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
raise NoSuchOptError(name)
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall 
NoSuchOptError: no such option: metering_driver
2014-02-28 17:04:51.291 1121 TRACE neutron.openstack.common.loopingcall

Having no routers is a perfectly valid state for the l3-agent, and it
should not cause errors.
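
The final NoSuchOptError is the actual crash: the error-handling path
reads cfg.CONF.metering_driver before the option has been registered. A
minimal sketch of registering the option with oslo.config so the lookup
cannot raise; the default driver path below is an assumption, not a
confirmed upstream value:

```
from oslo.config import cfg

# Registering metering_driver with a default makes the attribute lookup
# safe even when the config file omits it. The default path below is an
# assumption for illustration.
metering_opts = [
    cfg.StrOpt('metering_driver',
               default='neutron.services.metering.drivers.'
                       'noop.noop_driver.NoopMeteringDriver',
               help='Metering driver'),
]
cfg.CONF.register_opts(metering_opts)

# After registration this no longer raises NoSuchOptError.
print(cfg.CONF.metering_driver)
```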

** Affects: neutron (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notif

[Yahoo-eng-team] [Bug 1276629] [NEW] Non-working tunnels after IP change of nodes (ovs_tunnel_endpoints is not cleaned)

2014-02-05 Thread George Shuklin
Public bug reported:

If any OVS-enabled host with GRE tunnels changes its IP, Neutron does
not discard the stale entry in the ovs_tunnel_endpoints table and
recreates the gre-x interfaces in br-tun on every boot.

Expected behavior: automatic removal of entries from
ovs_tunnel_endpoints when the IP address changes.
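
Until such cleanup exists, the stale row has to be removed by hand. A
rough sketch, assuming a MySQL backend, a database named neutron, and an
ip_address column in ovs_tunnel_endpoints (all of which should be
verified against the actual schema first):

```
import MySQLdb  # assumes a MySQL backend; adjust for your deployment


def drop_stale_endpoint(old_ip):
    """Remove the tunnel endpoint row left behind by a host's former IP."""
    # Database name, credentials and column name are assumptions.
    conn = MySQLdb.connect(host='localhost', user='neutron',
                           passwd='secret', db='neutron')
    try:
        cur = conn.cursor()
        cur.execute("DELETE FROM ovs_tunnel_endpoints"
                    " WHERE ip_address = %s", (old_ip,))
        conn.commit()
    finally:
        conn.close()

# After dropping the old endpoint, restart the OVS agents so the gre-x
# ports in br-tun are rebuilt without the stale peer:
# drop_stale_endpoint('192.0.2.10')
```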

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276629

Title:
  Non-working tunnels after IP change of nodes (ovs_tunnel_endpoints
  is not cleaned)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If any OVS-enabled host with GRE tunnels changes its IP, Neutron does
  not discard the stale entry in the ovs_tunnel_endpoints table and
  recreates the gre-x interfaces in br-tun on every boot.

  Expected behavior: automatic removal of entries from
  ovs_tunnel_endpoints when the IP address changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1276629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271958] [NEW] nova compute fails to remove instance with port if network is broken

2014-01-23 Thread George Shuklin
Public bug reported:

If a user manages to create a broken network configuration, the
instance becomes undeletable. Why a user can create broken networking
is under investigation (current hypothesis: if the network (neutron) is
created in one tenant and the instance in another, and the user is
admin in both tenants, this produces the broken configuration).

But deleting such an instance causes a traceback in nova-compute:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", 
line 461, in _process_data
**args)
  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
payload)
  File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
return f(self, context, *args, **kw)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in 
decorated_function
pass
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 294, in 
decorated_function
function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in 
decorated_function
e, sys.exc_info())
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in 
decorated_function
return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1792, 
in terminate_instance
do_terminate_instance(instance, bdms)
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", 
line 246, in inner
return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1784, 
in do_terminate_instance
reservations=reservations)
  File "/usr/lib/python2.7/dist-packages/nova/hooks.py", line 105, in inner
rv = f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1757, 
in _delete_instance
user_id=user_id)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1729, 
in _delete_instance
self._shutdown_instance(context, db_inst, bdms)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1639, 
in _shutdown_instance
network_info = self._get_instance_nw_info(context, instance)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 876, in 
_get_instance_nw_info
instance)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
455, in get_instance_nw_info
result = self._get_instance_nw_info(context, instance, networks)  
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
463, in _get_instance_nw_info
nw_info = self._build_network_info_model(context, instance, networks)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
1009, in _build_network_info_model
subnets)
  File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 
962, in _nw_info_build_network
label=network_name,
UnboundLocalError: local variable 'network_name' referenced before assignment


The reason is the following code:

def _nw_info_build_network(self, port, networks, subnets):
    # NOTE(danms): This loop can't fail to find a network since we
    # filtered ports to only the ones matching networks in our parent
    for net in networks:
        if port['network_id'] == net['id']:
            network_name = net['name']
            break

(if no net is found, network_name remains undefined).

The following patch should allow instance deletion in case of
networking problems:

diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index a41924d..8a44f99 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -939,6 +939,8 @@ class API(base.Base):
             if port['network_id'] == net['id']:
                 network_name = net['name']
                 break
+        else:
+            network_name = ""
 
         bridge = None
         ovs_interfaceid = None
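
The patch relies on Python's for/else construct: the else suite runs
only when the loop finishes without hitting break, which is exactly the
"no matching network" case. A standalone illustration:

```
def find_network_name(port, networks):
    # Mirrors the patched loop: look the port's network up by id.
    for net in networks:
        if port['network_id'] == net['id']:
            network_name = net['name']
            break
    else:
        # The loop completed without break: no matching network, so
        # fall back to an empty name instead of leaving it unbound.
        network_name = ""
    return network_name

# A port pointing at a missing network no longer raises UnboundLocalError:
print(find_network_name({'network_id': 'x'}, [{'id': 'y', 'name': 'net1'}]))
```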

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "Fix deletion of instances with broken networking"
   
https://bugs.launchpad.net/bugs/1271958/+attachment/3955123/+files/nova-compute-fix-broken-net-instance-deletion.patch

** Description changed:

  If a user manages to create a broken network configuration, the instance
  becomes undeletable. Why a user can create broken networking is
  under investigation (current hypothesis: if the network (neutron) is created in
  one tenant and i

[Yahoo-eng-team] [Bug 1269394] [NEW] button "Launch Instance (quota exceeded)" does not change back if some instances are terminated

2014-01-15 Thread George Shuklin
Public bug reported:

Steps to reproduce:
1. Create the maximum number of instances allowed by quota
2. Go to Project -> Instances.
3. Terminate any instance

Expected behavior: the "Launch Instance (quota exceeded)" button becomes
enabled and changes back to "Launch Instance".
Actual behavior: the button does not change its label and remains disabled.

Note: the bug happens only when the number of instances changes from MAX
to MAX-1. Terminating further instances enables the button as expected.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1269394

Title:
  button "Launch Instance (quota exceded)" does not change back if some
  instances terminated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Create the maximum number of instances allowed by quota
  2. Go to Project -> Instances.
  3. Terminate any instance

  Expected behavior: the "Launch Instance (quota exceeded)" button becomes
  enabled and changes back to "Launch Instance".
  Actual behavior: the button does not change its label and remains disabled.

  Note: the bug happens only when the number of instances changes from
  MAX to MAX-1. Terminating further instances enables the button as
  expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1269394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp