[Yahoo-eng-team] [Bug 1463891] [NEW] VRRP: admin_state down on HA port causes management failure to the agent without a proper log

2015-06-10 Thread Roey Dekel
Public bug reported:

Tried to check how admin_state down affects HA ports.
Noticed that the management (VRRP) traffic between them stopped, causing both routers to become master, although traffic to the connected floating IP remained working.
The problem: no log on the OVS agent indicated why it was processing a port update or why it was setting the port's VLAN tag to 4095.
(06:39:44 PM) amuller: there should be an INFO level log saying something like: "Setting port admin_state to {True/False}"
(06:39:56 PM) amuller: with the port ID of course
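
A minimal way to reproduce and watch the agent (a sketch: the log path is the RDO default, and the port-list filter relies on the neutron CLI turning unknown options into query filters):

  $ neutron port-list --device_owner network:router_ha_interface   # find the HA port ID
  $ neutron port-update <HA_PORT_ID> --admin-state-up False
  $ tail -f /var/log/neutron/openvswitch-agent.log                 # nothing explains the admin_state change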

Current log:
2015-06-08 10:25:25.782 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 'ha-8e0f96c5-78' has lost its vlan tag '1'!
2015-06-08 10:25:25.783 1055 INFO neutron.agent.securitygroups_rpc [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Refresh firewall rules
2015-06-08 10:25:26.784 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 8e0f96c5-7891-46a4-8420-778454949bd0 updated. Details: {u'profile': {}, u'allowed_address_pairs': [], u'admin_state_up': False, u'network_id': u'6a5116a2-39f7-45bc-a432-3d624765d602', u'segmentation_id': 10, u'device_owner': u'network:router_ha_interface', u'physical_network': None, u'mac_address': u'fa:16:3e:02:cb:47', u'device': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'port_security_enabled': True, u'port_id': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'fixed_ips': [{u'subnet_id': u'f81913ba-328f-4374-96f2-1a7fd44d7fb1', u'ip_address': u'169.254.192.3'}], u'network_type': u'vxlan'}
2015-06-08 10:25:26.940 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Configuration for device 8e0f96c5-7891-46a4-8420-778454949bd0 completed.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463891

Title:
  VRRP: admin_state down on HA port causes management failure to the
  agent without a proper log

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tried to check how admin_state down affects HA ports.
  Noticed that the management (VRRP) traffic between them stopped, causing both routers to become master, although traffic to the connected floating IP remained working.
  The problem: no log on the OVS agent indicated why it was processing a port update or why it was setting the port's VLAN tag to 4095.
  (06:39:44 PM) amuller: there should be an INFO level log saying something like: "Setting port admin_state to {True/False}"
  (06:39:56 PM) amuller: with the port ID of course

  Current log:
  2015-06-08 10:25:25.782 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 'ha-8e0f96c5-78' has lost its vlan tag '1'!
  2015-06-08 10:25:25.783 1055 INFO neutron.agent.securitygroups_rpc [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Refresh firewall rules
  2015-06-08 10:25:26.784 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Port 8e0f96c5-7891-46a4-8420-778454949bd0 updated. Details: {u'profile': {}, u'allowed_address_pairs': [], u'admin_state_up': False, u'network_id': u'6a5116a2-39f7-45bc-a432-3d624765d602', u'segmentation_id': 10, u'device_owner': u'network:router_ha_interface', u'physical_network': None, u'mac_address': u'fa:16:3e:02:cb:47', u'device': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'port_security_enabled': True, u'port_id': u'8e0f96c5-7891-46a4-8420-778454949bd0', u'fixed_ips': [{u'subnet_id': u'f81913ba-328f-4374-96f2-1a7fd44d7fb1', u'ip_address': u'169.254.192.3'}], u'network_type': u'vxlan'}
  2015-06-08 10:25:26.940 1055 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-b5a70070-2c49-47f2-9c77-49ba88851f4b ] Configuration for device 8e0f96c5-7891-46a4-8420-778454949bd0 completed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453766] [NEW] LBaaS - can't associate monitor to pool

2015-05-11 Thread Roey Dekel
Public bug reported:

The created monitor is not shown in the "Associate Monitor" selection
for the pool (image attached).
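
To isolate whether this is a Horizon-only issue, the same association can be attempted from the neutron LBaaS CLI (a sketch; the monitor and pool IDs are placeholders):

  $ neutron lb-healthmonitor-list
  $ neutron lb-healthmonitor-associate <HEALTH_MONITOR_ID> <POOL_ID>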

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2015-05-11 14:16:59.png"
   
https://bugs.launchpad.net/bugs/1453766/+attachment/4395424/+files/Screenshot%20from%202015-05-11%2014%3A16%3A59.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453766

Title:
  LBaaS - can't associate monitor to pool

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The created monitor is not shown in the "Associate Monitor" selection
  for the pool (image attached).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451399] [NEW] subnet-create arguments order is too strict (not backward compatible)

2015-05-04 Thread Roey Dekel
Public bug reported:

Changing the argument order causes a CLI error:

[root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 10.35.178.254 10.35.178.0/24 --name public_subnet
Invalid values_specs 10.35.178.0/24

Changing order:

[root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 10.35.178.254 --name public_subnet public 10.35.178.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "10.35.178.1", "end": "10.35.178.253"} |
| cidr              | 10.35.178.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.35.178.254                                    |
| host_routes       |                                                  |
| id                | 19593f99-b13c-4624-9755-983d7406cb47             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | public_subnet                                    |
| network_id        | df7af5c2-c84b-4991-b370-f8a854c29a80             |
| subnetpool_id     |                                                  |
| tenant_id         | 7e8736e9aba546e98be4a71a92d67a77                 |
+-------------------+--------------------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451399

Title:
  subnet-create arguments order is too strict (not backward
  compatible)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Changing the argument order causes a CLI error:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create public --gateway 10.35.178.254 10.35.178.0/24 --name public_subnet
  Invalid values_specs 10.35.178.0/24

  Changing order:

  [root@puma14 ~(keystone_admin)]# neutron subnet-create --gateway 10.35.178.254 --name public_subnet public 10.35.178.0/24
  Created a new subnet:
  +-------------------+--------------------------------------------------+
  | Field             | Value                                            |
  +-------------------+--------------------------------------------------+
  | allocation_pools  | {"start": "10.35.178.1", "end": "10.35.178.253"} |
  | cidr              | 10.35.178.0/24                                   |
  | dns_nameservers   |                                                  |
  | enable_dhcp       | True                                             |
  | gateway_ip        | 10.35.178.254                                    |
  | host_routes       |                                                  |
  | id                | 19593f99-b13c-4624-9755-983d7406cb47             |
  | ip_version        | 4                                                |
  | ipv6_address_mode |                                                  |
  | ipv6_ra_mode      |                                                  |
  | name              | public_subnet                                    |
  | network_id        | df7af5c2-c84b-4991-b370-f8a854c29a80             |
  | subnetpool_id     |                                                  |
  | tenant_id         | 7e8736e9aba546e98be4a71a92d67a77                 |
  +-------------------+--------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451391] [NEW] --router:external=True syntax is invalid

2015-05-04 Thread Roey Dekel
Public bug reported:

The Kilo syntax is not backward compatible:

[root@puma14 ~(keystone_admin)]# neutron net-create public --provider:network_type vlan --provider:physical_network physnet --provider:segmentation_id 193 --router:external=True
usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
                          [--max-width ] [--prefix PREFIX]
                          [--request-format {json,xml}]
                          [--tenant-id TENANT_ID] [--admin-state-down]
                          [--shared] [--router:external]
                          [--provider:network_type ]
                          [--provider:physical_network ]
                          [--provider:segmentation_id ]
                          [--vlan-transparent {True,False}]
                          NAME
neutron net-create: error: argument --router:external: ignored explicit argument u'True'

The currently supported syntax:
[root@puma14 ~(keystone_admin)]# neutron net-create public --provider:network_type vlan --provider:physical_network physnet --provider:segmentation_id 193 --router:external
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | df7af5c2-c84b-4991-b370-f8a854c29a80 |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet                              |
| provider:segmentation_id  | 193                                  |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 7e8736e9aba546e98be4a71a92d67a77     |
+---------------------------+--------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451391

Title:
  --router:external=True syntax is invalid

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Kilo syntax is not backward compatible:

  [root@puma14 ~(keystone_admin)]# neutron net-create public --provider:network_type vlan --provider:physical_network physnet --provider:segmentation_id 193 --router:external=True
  usage: neutron net-create [-h] [-f {shell,table,value}] [-c COLUMN]
                            [--max-width ] [--prefix PREFIX]
                            [--request-format {json,xml}]
                            [--tenant-id TENANT_ID] [--admin-state-down]
                            [--shared] [--router:external]
                            [--provider:network_type ]
                            [--provider:physical_network ]
                            [--provider:segmentation_id ]
                            [--vlan-transparent {True,False}]
                            NAME
  neutron net-create: error: argument --router:external: ignored explicit argument u'True'

  The currently supported syntax:
  [root@puma14 ~(keystone_admin)]# neutron net-create public --provider:network_type vlan --provider:physical_network physnet --provider:segmentation_id 193 --router:external
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | df7af5c2-c84b-4991-b370-f8a854c29a80 |
  | mtu                       | 0                                    |
  | name                      | public                               |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | physnet                              |
  | provider:segmentation_id  | 193                                  |
  | router:external           | True                                 |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tenant_id                 | 7e8736e9aba546e98be4a71a92d67a77     |
  +---------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1451358] [NEW] admin project sees packstack-created demo network under network-topology

2015-05-04 Thread Roey Dekel
Public bug reported:

Using Kilo, I noticed that I can see the demo tenant's private network
via Horizon while logged in under the admin project.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2015-05-04 11:24:48.png"
   
https://bugs.launchpad.net/bugs/1451358/+attachment/4390552/+files/Screenshot%20from%202015-05-04%2011%3A24%3A48.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1451358

Title:
  admin project sees packstack-created demo network under network-
  topology

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using Kilo, I noticed that I can see the demo tenant's private network
  via Horizon while logged in under the admin project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1451358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440484] [NEW] ha network deletion when there are no attached ports

2015-04-05 Thread Roey Dekel
Public bug reported:

The HA network used for VRRP should be deleted once it is no longer in
use (for example, after all routers on the network have been deleted).
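
A quick way to observe the leftover network (a sketch; the router name is a placeholder, and "HA network tenant ..." is the default name neutron gives these networks):

  $ neutron router-delete router1           # delete the last HA router
  $ neutron net-list | grep "HA network"    # the tenant's HA network is still listed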

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- ha network deletion after not used
+ ha network deletion when there are no attached ports

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440484

Title:
  ha network deletion when there are no attached ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The HA network used for VRRP should be deleted once it is no longer in
  use (for example, after all routers on the network have been deleted).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424548] [NEW] neutron allows creating a network with a false physical network

2015-02-23 Thread Roey Dekel
Public bug reported:

Tried to create a new network with a wrong --provider:physical_network (a typo).
Neutron allowed the mistake, which was hard to find.
It could check that the physical network is valid (for example, against /etc/neutron/plugin.ini).
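
A reproduction sketch (the misspelled name "physnte" and the plugin.ini contents are assumptions):

  $ grep network_vlan_ranges /etc/neutron/plugin.ini
  network_vlan_ranges = physnet:190:200
  $ neutron net-create badnet --provider:network_type vlan --provider:physical_network physnte --provider:segmentation_id 193
  # succeeds, although "physnte" matches no configured physical network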

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424548

Title:
  neutron allows creating a network with a false physical network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tried to create a new network with a wrong --provider:physical_network (a typo).
  Neutron allowed the mistake, which was hard to find.
  It could check that the physical network is valid (for example, against /etc/neutron/plugin.ini).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414445] [NEW] neutron lets you remove all l3-ha router agents

2015-01-25 Thread Roey Dekel
Public bug reported:

min_l3_agents_per_router is not enforced (neither a minimum of 1, which
is needed for having a functioning router at all, nor 2, which is needed
for a valid l3-ha setup).
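
A reproduction sketch (the router name and agent ID are placeholders):

  $ neutron l3-agent-list-hosting-router router1
  $ neutron l3-agent-router-remove <L3_AGENT_ID> router1   # repeat per hosting agent; even the last removal is accepted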

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414445

Title:
  neutron lets you remove all l3-ha router agents

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  min_l3_agents_per_router is not enforced (neither a minimum of 1,
  which is needed for having a functioning router at all, nor 2, which
  is needed for a valid l3-ha setup).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408237] [NEW] Can't connect a network to existing instance

2015-01-07 Thread Roey Dekel
Public bug reported:

Tried to connect a new network to a running instance (hotplug a vif) -
it cannot be done from Horizon.
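
For comparison, the operation itself is available from the nova CLI (a sketch; the network ID and instance name are placeholders), so only the Horizon UI appears to be missing:

  $ nova interface-attach --net-id <NET_UUID> myinstance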

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1408237

Title:
  Can't connect a network to existing instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Tried to connect a new network to a running instance (hotplug a vif) -
  it cannot be done from Horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1408237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398845] [NEW] RBAC not preventing creation of a subnet via creation of a new network

2014-12-03 Thread Roey Dekel
Public bug reported:

Changing "create_subnet" to "role:admin" in neutron_policy.json is
not preventing from non-admin user the creation of a new subnet while
creating new network (new network button)
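
The policy change in question (a sketch; the file path is an assumption and varies by distribution):

  $ grep '"create_subnet"' /etc/openstack-dashboard/neutron_policy.json
  "create_subnet": "role:admin",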

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: rbac

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398845

Title:
  RBAC not preventing creation of a subnet via creation of a new network

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Changing "create_subnet" to "role:admin" in neutron_policy.json is
  not preventing from non-admin user the creation of a new subnet while
  creating new network (new network button)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395434] [NEW] Horizon RBAC - (need to) Hide tab if no permissions available

2014-11-23 Thread Roey Dekel
Public bug reported:

Assume I (as a sysadmin) want to hide LBAAS services from tenant owners;
therefore I changed every LBAAS feature (such as create pool, update
vip, delete member, etc.) in neutron_policy.json to rule:admin_only.
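
For reference, the kind of change described above (a sketch; only three of the LBAAS rules are shown, and the file path is an assumption):

  $ grep -E '"(create_pool|update_vip|delete_member)"' /etc/openstack-dashboard/neutron_policy.json
  "create_pool": "rule:admin_only",
  "update_vip": "rule:admin_only",
  "delete_member": "rule:admin_only",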

Current result: the LBAAS tab is accessible (to tenant owners) with no
content and no permissions for creating/updating/deleting.

Expected result (in my opinion): hide LBAAS tab

Comment:
LBAAS is just an example; I think that every feature's tab without
permissions should be hidden, unless it has important data to present.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon rbac

** Project changed: barbican => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1395434

Title:
  Horizon RBAC - (need to) Hide tab if no permissions available

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Assume I (as a sysadmin) want to hide LBAAS services from tenant
  owners; therefore I changed every LBAAS feature (such as create pool,
  update vip, delete member, etc.) in neutron_policy.json to
  rule:admin_only.

  Current result: the LBAAS tab is accessible (to tenant owners) with
  no content and no permissions for creating/updating/deleting.

  Expected result (in my opinion): hide the LBAAS tab.

  Comment:
  LBAAS is just an example; I think that every feature's tab without
  permissions should be hidden, unless it has important data to present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1395434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328700] [NEW] On mass deletion - some VMs stuck in ERROR due to failed connection to neutron

2014-06-10 Thread Roey Dekel
Public bug reported:

Description of problem:
When doing a mass deletion (more than 64 VMs in parallel), some VMs got stuck in the ERROR state with the error:
"Connection to neutron failed: Maximum attempts reached", "code": 500, "created": "2014-06-10T21:34:38Z"

Version-Release number of selected component (if applicable):
openstack-neutron-openvswitch-2014.1-26.el7ost.noarch
python-neutronclient-2.3.4-2.el7ost.noarch
python-neutron-2014.1-26.el7ost.noarch
openstack-neutron-ml2-2014.1-26.el7ost.noarch
openstack-neutron-2014.1-26.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Set up an environment with a lot of ACTIVE VMs (more than 64)
2. Run a mass deletion:
 for each in `nova list | grep ACTIVE | cut -d"|" -f3` ; do nova delete $each & done
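
A possible recovery for instances stuck in ERROR/deleting (a sketch; reset-state is a forceful admin operation, shown here with the stuck ID from the output below):

  $ nova reset-state --active 0731ab43-99da-409a-99b9-627287b0a80a
  $ nova delete 0731ab43-99da-409a-99b9-627287b0a80a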

Actual results:
[root@cougar16 ~(keystone_stress1)]$ nova list
| ID | Name | Status | Task State | Power State | Networks |
| 0731ab43-99da-409a-99b9-627287b0a80a | stress1-42 | ERROR | deleting | Running | private1=192.168.1.61 |
[root@cougar16 ~(keystone_stress1)]$ nova show stress1-42
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | deleting |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | 2014-06-10T21:28:53.00 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-06-10T21:27:33Z |
| fault | {"message": "Connection to neutron failed: Maximum attempts reached", "code": 500, "created": "2014-06-10T21:34:38Z"} |
| flavor | mini (0) |
| hostId | 0c295b885647eb08a3c04a15eb86f9746430dd635c5f8c6291315508 |
| id | 0731ab43-99da-409a-99b9-627287b0a80a |
| image | cirros (ae31ea8c-c5ca-4ca1-9662-3545304d8e79) |
| key_name | - |
| metadata | {} |
| name | stress1-42

[Yahoo-eng-team] [Bug 1292090] [NEW] nova compute log for not enough resources is not informative enough

2014-03-13 Thread Roey Dekel
Public bug reported:

Tried to boot 128 mini VMs (cirros 0.3.1 with 64 MB RAM and no disk),
and got the following error:

2014-03-10 11:09:11.089 3177 ERROR nova.compute.manager [req-2d77626c-6fe6-4e33-9a25-c7c43df07c5a 36feb44e6d71467aa01d46894654252c 8338e986a31b4f3cb30476126f147192] [instance: 0c06ed9e-52d2-4140-ac05-6fac7dc3b1f8] Error: Insufficient compute resources.

This error message doesn't explain which limit was reached (perhaps
some quota limit).
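
Until the message is improved, the exhausted resource can be tracked down manually (a sketch; hypervisor-stats aggregates all compute nodes, and the tenant ID is a placeholder):

  $ nova hypervisor-stats                  # compare free_ram_mb / vcpus / free_disk_gb against the flavor
  $ nova quota-show --tenant <TENANT_ID>   # rule out a per-tenant quota limit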

Reproduce:

for i in $(seq 1 128); do nova boot stress-${i} --flavor 0 --image cirros-0.3.1 --nic net-id=`neutron net-list | grep netInt | cut -d" " -f2` & done

Results:
---
( take notice of stress-i, for i in {107, 111, 119, 123, 124, 126, 13, 15, 17, 18, 19, 28, 35, 37, 4, 41, 42, 44, 48, 49, 5, 51, 54, 60, 62, 64, 65, 69, 83, 9, 91, 98} )
[root@cougar16 ~(keystone_admin)]# nova list
| ID | Name | Status | Task State | Power State | Networks |
| 7b171137-55f3-4051-a4b2-a98ac394fe39 | instance1 | ERROR | None | NOSTATE | |
| 2735176e-9109-44e2-8534-8b9e40b91fbc | stress-1 | ACTIVE | None | Running | netInt216=192.168.216.74 |
| 04fc9bed-fa6a-45b6-aea1-d2dd0c0a461e | stress-10 | ACTIVE | None | Running | netInt216=192.168.216.80 |
| 196eb4e8-4fa1-4237-abd2-dd99203de3ad | stress-100 | ACTIVE | None | Running | netInt216=192.168.216.78 |
| 4b10d616-cb5e-433c-bd70-09cc5fb6c650 | stress-101 | ACTIVE | None | Running | netInt216=192.168.216.27 |
| 4b3b3b9b-061c-432b-9c39-d765842f4760 | stress-102 | ACTIVE | None | Running | netInt216=192.168.216.45 |
| 20e66ca3-9177-4ae3-a755-0dcca5dc24ff | stress-103 | ACTIVE | None | Running | netInt216=192.168.216.6 |
| e111b279-27bf-4f96-87df-cd3407eb90ed | stress-104 | ACTIVE | None | Running | netInt216=192.168.216.38 |
| f7fb8e0b-d141-47d5-8d0f-b4ed3b853c23 | stress-105 | ACTIVE | None | Running | netInt216=192.168.216.61 |
| 3550af11-3591-4ee2-9ea8-83dcd9e77e9a | stress-106 | ACTIVE | None | Running | netInt216=192.168.216.30 |
| 0bdca75d-fe9c-49a0-b789-58a6ca9fd0fe | stress-107 | ERROR | None | NOSTATE | |
| a79317ec-e3ff-4d96-b64f-8389a3ab79b1 | stress-108 | ACTIVE | None | Running | netInt216=192.168.216.76 |
| cfca6ffd-97c9-48c9-a0ee-a887482b94be | stress-109 | ACTIVE | None | Running | netInt216=192.168.216.31 |
| b7422a03-8f3c-491b-bf41-8ac8cdd2f1d1 | stress-11 | ACTIVE | None | Running | netInt216=192.168.216.55 |
| 93ee7204-a53f-4bdc-a34a-ff7a954ad373 | stress-110 | ACTIVE | None | Running | netInt216=192.168.216.9 |
| 6e36b033-bd79-44a6-882f-a664567a69ff | stress-111 | ERROR | None | NOSTATE | |
| cc294eec-093f-4564-9ab9-e4f28fc3ac99 | stress-112 | ACTIVE | None | Running | netInt216=192.168.216.47 |
| 85892e9e-9e58-44ab-8a41-ef9532a54983 | stress-113 | ACTIVE | None | Running | netInt216=192.168.216.41 |
| 6e917473-ca22-4bf2-8ba8-b1e3761e3aba | stress-114 | ACTIVE | None | Running | netInt216=192.168.216.65 |
| af9e8c49-8d53-4bf8-b96b-82d8a89cf84a | stress-115 | ACTIVE | None | Running | netInt216=192.168.216.58 |
| 51afb7d3-4f3f-4182-bc75-6608dbf2894c | stress-116 | ACTIVE | None | Running | netInt216=192.168.216.92, 192.168.216.98 |
| 6c222d79-1d4d-4d32-aad7-10155a07f5c8 | stress-117 | ACTIVE | None | Running | netInt216=192.168.216.33 |
| 81c1f8ec-b3cf-40ac-a1ae-44d043ab6cc9 | stress-118 | ACTIVE | None | Running | netInt216=192.168.216.51 |
| c77b7e7f-2c48-4709-a64b-58ba4ef4f282 | stress-119 | ERROR | None | NOSTATE | |
| fd1f192a-dd61-4ad8-8fb8-6cf49abe3e16 | stress-12 | ACTIVE | None | Running | netInt216=192.168.216.89 |
| 5405c600-04af-457e-8a0b-560c26b4267b | stress-120 | ACTIVE | None | Running | netInt216=192.168.216.18 |
| a97c1833-9634-4a4d-bf9f-ffa915c33bdc | stress-121 | ACTIVE | None | Running | netInt216=192.168.216.39 |
| 6577d161-7311-4058-a596-e6684906d238 | stress-122 | ACTIVE | None | Running | netInt216=192.168.216.83 |
| 0c06ed9e-52d2-4140-ac05-6fac7dc3b1f8 | stress-123 | ERROR | None | NOSTATE | |
| 4529ad2c-4e10-47e7-8f28-4629e27b58c9 | stress-124 | ERROR | None | NOSTATE | |
| 4acc9354-a54d-4da6-8637-fb870f38996a | stress-125 | ACTIVE | None | Running | netInt216=192.168.216.4 |
| 371f0084-ee5b-4c0c-bfdf-2bcae240dde9 | stress-126 | ERROR | None | NOSTATE | |
| 84867daf-7686-4b60-aa04-c9b29e6fc726 | stress-127 | ACTIVE | None | Running | netInt216=192.168.216.5 |
| 5ce00c24-35d9-4891-9eea-c9df3c661657 | stress-128 | ACTIVE | None | Running | netInt216=192.168.216.2 |
| d160e306-0542-4d63-86df-7c26a9010a1d | stress-13 | ERROR | None | NOSTATE | |
| c37eac35-9940-4ed2-a672-d746093cc7ee | stress-14 | ACTIVE | None | Running | netInt216=192.168.216.21 |
| b294ba72-bb0f-4135-9755-2e2dcc0638c3 | stress-15 | ERROR | None | NOSTATE | |
| 4c74c2bc-4c8c-47c3-8d2

[Yahoo-eng-team] [Bug 1292077] [NEW] VM created with 2 IPs in the same network

2014-03-13 Thread Roey Dekel
Public bug reported:

Tried to create 128 mini VMs (cirros 0.3.1 with 64 MB RAM and no disk)
at once, after updating quotas.
Besides getting errors for not enough resources (any clue?), I got VMs
with 2 IPs in the same network.

Reproduce:

for i in $(seq 1 128); do nova boot stress-${i} --flavor 0 --image cirros-0.3.1 --nic net-id=`neutron net-list | grep netInt | cut -d" " -f2` & done
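
A quick way to spot the affected VMs (a sketch; it assumes a single netInt216 network, so any row with two comma-separated addresses is a duplicate):

  $ nova list | grep ", 192.168.216."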

Results:
---
( take notice of stress-i, for i in {116, 31, 79, 90, 92} )
[root@cougar16 ~(keystone_admin)]# nova list
| ID | Name | Status | Task State | Power State | Networks |
| 7b171137-55f3-4051-a4b2-a98ac394fe39 | instance1 | ERROR | None | NOSTATE | |
| 2735176e-9109-44e2-8534-8b9e40b91fbc | stress-1 | ACTIVE | None | Running | netInt216=192.168.216.74 |
| 04fc9bed-fa6a-45b6-aea1-d2dd0c0a461e | stress-10 | ACTIVE | None | Running | netInt216=192.168.216.80 |
| 196eb4e8-4fa1-4237-abd2-dd99203de3ad | stress-100 | ACTIVE | None | Running | netInt216=192.168.216.78 |
| 4b10d616-cb5e-433c-bd70-09cc5fb6c650 | stress-101 | ACTIVE | None | Running | netInt216=192.168.216.27 |
| 4b3b3b9b-061c-432b-9c39-d765842f4760 | stress-102 | ACTIVE | None | Running | netInt216=192.168.216.45 |
| 20e66ca3-9177-4ae3-a755-0dcca5dc24ff | stress-103 | ACTIVE | None | Running | netInt216=192.168.216.6 |
| e111b279-27bf-4f96-87df-cd3407eb90ed | stress-104 | ACTIVE | None | Running | netInt216=192.168.216.38 |
| f7fb8e0b-d141-47d5-8d0f-b4ed3b853c23 | stress-105 | ACTIVE | None | Running | netInt216=192.168.216.61 |
| 3550af11-3591-4ee2-9ea8-83dcd9e77e9a | stress-106 | ACTIVE | None | Running | netInt216=192.168.216.30 |
| 0bdca75d-fe9c-49a0-b789-58a6ca9fd0fe | stress-107 | ERROR | None | NOSTATE | |
| a79317ec-e3ff-4d96-b64f-8389a3ab79b1 | stress-108 | ACTIVE | None | Running | netInt216=192.168.216.76 |
| cfca6ffd-97c9-48c9-a0ee-a887482b94be | stress-109 | ACTIVE | None | Running | netInt216=192.168.216.31 |
| b7422a03-8f3c-491b-bf41-8ac8cdd2f1d1 | stress-11 | ACTIVE | None | Running | netInt216=192.168.216.55 |
| 93ee7204-a53f-4bdc-a34a-ff7a954ad373 | stress-110 | ACTIVE | None | Running | netInt216=192.168.216.9 |
| 6e36b033-bd79-44a6-882f-a664567a69ff | stress-111 | ERROR | None | NOSTATE | |
| cc294eec-093f-4564-9ab9-e4f28fc3ac99 | stress-112 | ACTIVE | None | Running | netInt216=192.168.216.47 |
| 85892e9e-9e58-44ab-8a41-ef9532a54983 | stress-113 | ACTIVE | None | Running | netInt216=192.168.216.41 |
| 6e917473-ca22-4bf2-8ba8-b1e3761e3aba | stress-114 | ACTIVE | None | Running | netInt216=192.168.216.65 |
| af9e8c49-8d53-4bf8-b96b-82d8a89cf84a | stress-115 | ACTIVE | None | Running | netInt216=192.168.216.58 |
| 51afb7d3-4f3f-4182-bc75-6608dbf2894c | stress-116 | ACTIVE | None | Running | netInt216=192.168.216.92, 192.168.216.98 |
| 6c222d79-1d4d-4d32-aad7-10155a07f5c8 | stress-117 | ACTIVE | None | Running | netInt216=192.168.216.33 |
| 81c1f8ec-b3cf-40ac-a1ae-44d043ab6cc9 | stress-118 | ACTIVE | None | Running | netInt216=192.168.216.51 |
| c77b7e7f-2c48-4709-a64b-58ba4ef4f282 | stress-119 | ERROR | None | NOSTATE | |
| fd1f192a-dd61-4ad8-8fb8-6cf49abe3e16 | stress-12 | ACTIVE | None | Running | netInt216=192.168.216.89 |
| 5405c600-04af-457e-8a0b-560c26b4267b | stress-120 | ACTIVE | None | Running | netInt216=192.168.216.18 |
| a97c1833-9634-4a4d-bf9f-ffa915c33bdc | stress-121 | ACTIVE | None | Running | netInt216=192.168.216.39 |
| 6577d161-7311-4058-a596-e6684906d238 | stress-122 | ACTIVE | None | Running | netInt216=192.168.216.83 |
| 0c06ed9e-52d2-4140-ac05-6fac7dc3b1f8 | stress-123 | ERROR | None | NOSTATE | |
| 4529ad2c-4e10-47e7-8f28-4629e27b58c9 | stress-124 | ERROR | None | NOSTATE | |
| 4acc9354-a54d-4da6-8637

[Yahoo-eng-team] [Bug 1290054] [NEW] Returning associated port to floating IP from admin state up false to true doesn't unblock connection via floating IP

2014-03-09 Thread Roey Dekel
Public bug reported:

I tried to see if admin_state_up false blocks the connection via a
floating IP (it worked). Then I tried to reverse it, and the connection
stayed blocked.

Reproduce:
1. boot an instance:
 174  nova boot instance1 --flavor 1 --image cirros-0.3.1 --nic net-id=`neutron net-list | grep netInt | cut -d" " -f2`
2. attach a floating ip to it:
  175  neutron floatingip-create netExt193
  176  neutron port-list
  177  neutron floatingip-associate 2b3459aa-2606-42b2-ab15-4df68733b4ad 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc
3. verify communication:
  184  ping 10.35.178.3
4. turn admin_state_up to false:
  186  neutron port-update 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc --admin-state-up false
5. verify the connection stopped:
  187  ping 10.35.178.3
6. return the admin state to up:
  188  neutron port-update 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc --admin-state-up true
7. see that there is still no connection:
  189  ping 10.35.178.3

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290054

Title:
  Returning associated port to floating IP from admin state up false to
  true doesn't unblock connection via floating IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I tried to see if admin_state_up false blocks the connection via a
  floating IP (it worked). Then I tried to reverse it, and the
  connection stayed blocked.

  Reproduce:
  1. boot an instance:
   174  nova boot instance1 --flavor 1 --image cirros-0.3.1 --nic net-id=`neutron net-list | grep netInt | cut -d" " -f2`
  2. attach a floating ip to it:
    175  neutron floatingip-create netExt193
    176  neutron port-list
    177  neutron floatingip-associate 2b3459aa-2606-42b2-ab15-4df68733b4ad 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc
  3. verify communication:
    184  ping 10.35.178.3
  4. turn admin_state_up to false:
    186  neutron port-update 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc --admin-state-up false
  5. verify the connection stopped:
    187  ping 10.35.178.3
  6. return the admin state to up:
    188  neutron port-update 4c4f6c98-a0ad-4f04-95ef-22aef0ebcbbc --admin-state-up true
  7. see that there is still no connection:
    189  ping 10.35.178.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp