[Yahoo-eng-team] [Bug 1730892] Re: Nova Image Resize Generating Errors

2017-11-08 Thread Xuanzhou Perry Dong
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730892

Title:
  Nova Image Resize Generating Errors

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  When the flavor disk size is larger than the image size, Nova will try to 
increase the image disk size to match the flavor disk size. In the process, it 
will call resize2fs to resize the image disk file system as well for the raw 
format, but this generates an error, since resize2fs should be executed on a 
partition block device instead of the whole block device (which includes the 
boot sector, partition table, etc.).
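
  (Illustrative sketch, not Nova's actual code.) The error described above comes
  from pointing filesystem-level tools at a whole-disk image. A guard of roughly
  this shape avoids running e2label/resize2fs when the raw image starts with an
  MBR partition table; the helper names and the simple boot-sector check are
  assumptions for illustration only:

  import subprocess

  def looks_like_whole_disk(path):
      """True if the raw image begins with an MBR boot-sector signature."""
      with open(path, 'rb') as f:
          sector = f.read(512)
      return len(sector) == 512 and sector[510:512] == b'\x55\xaa'

  def safe_e2label(path):
      if looks_like_whole_disk(path):
          # Whole-disk image: e2label/resize2fs would fail with
          # "Bad magic number in super-block", so skip them.
          return None
      result = subprocess.run(['e2label', path], capture_output=True, text=True)
      return result.stdout.strip() if result.returncode == 0 else None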

  Steps to Reproduce
  ==

  1. Set the following configuration for nova-compute:
  use_cow_images = False
  force_raw_images = True

  2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
  id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

  The following error logs are generated:

  Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api 
[None req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine 
label for image <LocalFileImage:{'path': 
'/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk', 
'format': 'raw'}> with error Unexpected error while running command.
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic 
number in super-block while trying to open 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't
 find valid filesystem superblock.\n". Cannot resize. {{(pid=10609) 
is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

  Expected Result
  ===
  The wrong command should not be executed, and no error logs should be generated.

  Actual Result
  =
  Error logs are generated, as shown above.

  Environment
  ===
  1. Openstack Nova
  stack@devstack01:/opt/stack/nova$ git log -1
  commit 232458ae4e83e8b218397e42435baa9f1d025b68
  Merge: 650c9f3 9d400c3
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Tue Oct 10 06:27:52 2017 +

  Merge "rp: Move RP._get|set_aggregates() to module scope"

  2. Hypervisor
  Libvirt + QEMU
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
  ii  libvirt-bin             3.6.0-1ubuntu5~cloud0                amd64  programs for the libvirt library
  ii  libvirt-clients         3.6.0-1ubuntu5~cloud0                amd64  Programs for the libvirt library
  ii  libvirt-daemon          3.6.0-1ubuntu5~cloud0                amd64  Virtualization daemon
  ii  libvirt-daemon-system   3.6.0-1ubuntu5~cloud0                amd64  Libvirt daemon configuration files
  ii  libvirt-dev:amd64       3.6.0-1ubuntu5~cloud0                amd64  development files for the libvirt library
  ii  libvirt0:amd64          3.6.0-1ubuntu5~cloud0                amd64  library for interfacing with different virtualization systems
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
  ii  ipxe-qemu               1.0.0+git-20150424.a25a16d-1ubuntu1  all    PXE boot firmware - ROM images for qemu
  ii  qemu-block-extra:amd64  1:2.10+dfsg-0ubuntu3~cloud0          amd64  extra block backend modules for qemu-system and qemu-utils
  ii  qemu-kvm                1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU Full virtualization
  ii  qemu-slof               20151103+dfsg-1ubuntu1               all    Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system             1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries
  ii  qemu-system-arm         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (arm)
  ii  qemu-system-common      1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries (common files)
  ii  qemu-system-mips        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (mips)
  ii  qemu-system-misc        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (miscellaneous)
  ii  qemu-system-ppc         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (ppc)
  ii  qemu-system-s390x  

[Yahoo-eng-team] [Bug 1730892] [NEW] Nova Image Resize Generating Errors

2017-11-07 Thread Xuanzhou Perry Dong
1:2.10+dfsg-0ubuntu3~cloud0  amd64  QEMU utilities
stack@devstack01:/opt/stack/nova$ 

3. Networking type
Neutron with Openvswitch

** Affects: nova
 Importance: Low
 Assignee: Xuanzhou Perry Dong (oss-xzdong)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730892

Title:
  Nova Image Resize Generating Errors

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When the flavor disk size is larger than the image size, Nova will try to 
increase the image disk size to match the flavor disk size. In the process, it 
will call resize2fs to resize the image disk file system as well for the raw 
format, but this generates an error, since resize2fs should be executed on a 
partition block device instead of the whole block device (which includes the 
boot sector, partition table, etc.).

  Steps to Reproduce
  ==

  1. Set the following configuration for nova-compute:
  use_cow_images = False
  force_raw_images = True

  2. nova boot --image cirros-0.3.5-x86_64-disk --nic net-
  id=6f0df6a5-8848-427b-8222-7b69d5602fe4 --flavor m1.tiny test_vm

  The following error logs are generated:

  Nov 08 14:42:51 devstack01 nova-compute[10609]: DEBUG nova.virt.disk.api 
[None req-771fa44d-46ce-4486-9ed7-7a89ddb735ed demo admin] Unable to determine 
label for image <LocalFileImage:{'path': 
'/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk', 
'format': 'raw'}> with error Unexpected error while running command.
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Command: e2label 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Exit code: 1
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stdout: u''
  Nov 08 14:42:51 devstack01 nova-compute[10609]: Stderr: u"e2label: Bad magic 
number in super-block while trying to open 
/opt/stack/data/nova/instances/73b49be7-0b5f-4a9d-a187-bbbf7fd59961/disk\nCouldn't
 find valid filesystem superblock.\n". Cannot resize. {{(pid=10609) 
is_image_extendable /opt/stack/nova/nova/virt/disk/api.py:254}}

  Expected Result
  ===
  The wrong command should not be executed, and no error logs should be generated.

  Actual Result
  =
  Error logs are generated, as shown above.

  Environment
  ===
  1. Openstack Nova
  stack@devstack01:/opt/stack/nova$ git log -1
  commit 232458ae4e83e8b218397e42435baa9f1d025b68
  Merge: 650c9f3 9d400c3
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Tue Oct 10 06:27:52 2017 +

  Merge "rp: Move RP._get|set_aggregates() to module scope"

  2. Hypervisor
  Libvirt + QEMU
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep libvirt
  ii  libvirt-bin             3.6.0-1ubuntu5~cloud0                amd64  programs for the libvirt library
  ii  libvirt-clients         3.6.0-1ubuntu5~cloud0                amd64  Programs for the libvirt library
  ii  libvirt-daemon          3.6.0-1ubuntu5~cloud0                amd64  Virtualization daemon
  ii  libvirt-daemon-system   3.6.0-1ubuntu5~cloud0                amd64  Libvirt daemon configuration files
  ii  libvirt-dev:amd64       3.6.0-1ubuntu5~cloud0                amd64  development files for the libvirt library
  ii  libvirt0:amd64          3.6.0-1ubuntu5~cloud0                amd64  library for interfacing with different virtualization systems
  stack@devstack01:/opt/stack/nova$ dpkg -l | grep qemu
  ii  ipxe-qemu               1.0.0+git-20150424.a25a16d-1ubuntu1  all    PXE boot firmware - ROM images for qemu
  ii  qemu-block-extra:amd64  1:2.10+dfsg-0ubuntu3~cloud0          amd64  extra block backend modules for qemu-system and qemu-utils
  ii  qemu-kvm                1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU Full virtualization
  ii  qemu-slof               20151103+dfsg-1ubuntu1               all    Slimline Open Firmware -- QEMU PowerPC version
  ii  qemu-system             1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries
  ii  qemu-system-arm         1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (arm)
  ii  qemu-system-common      1:2.10+dfsg-0ubuntu3~cloud0          amd64  QEMU full system emulation binaries (common files)
  ii  qemu-system-mips        1:2.10+dfsg-0ubuntu1~cloud0          amd64  QEMU full system emulation binaries (mips)
  ii  qemu-system-misc        1:2.10+dfsg-0ubuntu1~cloud0          amd64

[Yahoo-eng-team] [Bug 1714247] Re: Cleaning up deleted instances leaks resources

2017-10-11 Thread Xuanzhou Perry Dong
Thanks for the response. I have stopped and started the nova-compute
service. The restart of the nova-compute service is shown in the log (I
am not sure why the stop of the nova-compute service is not shown;
probably I should use "raw").

stack@devstack01:~/devstack$ systemctl start devstack@n-cpu.service 


==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'devstack@n-cpu.service'.
Authenticating as: stack,,, (stack)
Password: 
==== AUTHENTICATION COMPLETE ===

BR/Perry

** Changed in: nova
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  When the '_cleanup_running_deleted_instances' nova-compute manager
  periodic task cleans up an instance that still exists on the host
  although it has been deleted from the DB, the corresponding network info
  is not properly retrieved. For this reason, vif ports will not be cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well, because os-brick attempts to flush those
  inaccessible devices, which fails. As per a recent os-brick
  change, a 'force' flag must be set in order to ignore flush errors.
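
  (Illustrative sketch, not the actual nova change.) The 'force' flag mentioned
  above is passed down to the os-brick connector when detaching, so flush
  failures on an already-inaccessible device do not abort the cleanup. The
  connection_properties/device_info values below are placeholders:

  from os_brick.initiator import connector

  conn = connector.InitiatorConnector.factory(
      'iscsi', root_helper='sudo', use_multipath=False)

  connection_properties = {'target_portal': '203.0.113.10:3260',   # placeholder
                           'target_iqn': 'iqn.2010-10.org.openstack:volume-x',
                           'target_lun': 0}
  device_info = {'path': '/dev/sdX'}                               # placeholder

  # Without force=True, os-brick tries to flush the stale device, the flush
  # fails, and the volume connection is leaked.
  conn.disconnect_volume(connection_properties, device_info, force=True)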

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714247] Re: Cleaning up deleted instances leaks resources

2017-10-11 Thread Xuanzhou Perry Dong
Tested in the latest master branch using devstack:

1. vif is unplugged

See logs in: paste.openstack.org/show/623286/

2. no stale iscsi session

See logs in: http://paste.openstack.org/show/623288/

Hi, Lucian,

Could you check the logs to see if you do things differently?

BR/Perry



** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714247

Title:
  Cleaning up deleted instances leaks resources

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When the '_cleanup_running_deleted_instances' nova-compute manager
  periodic task cleans up an instance that still exists on the host
  although it has been deleted from the DB, the corresponding network info
  is not properly retrieved. For this reason, vif ports will not be cleaned up.

  In this situation there may also be stale volume connections. Those
  will be leaked as well, because os-brick attempts to flush those
  inaccessible devices, which fails. As per a recent os-brick
  change, a 'force' flag must be set in order to ignore flush errors.

  Log: http://paste.openstack.org/raw/620048/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1714247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670625] [NEW] populate fdb entries before spawning vm

2017-03-07 Thread Perry
Public bug reported:

Before spawning a VM, the neutron L2 agent has to wire up the device on the
compute node. This is controlled by provisioning_block in neutron. However,
the L2 agent on the compute node spends a lot of time syncing fdb info while
wiring up the device, which makes the VM take a long time to spawn and become
active. The fdb synchronization could instead be done while the VM is being
spawned, to improve boot performance. It also delays the next iteration of
daemon_loop, so other new taps are processed later.
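
The report proposes deferring the fdb sync; a related, purely illustrative
sketch of reducing the per-entry rootwrap cost with iproute2's batch mode is
shown below (the function and device names are assumptions, not the neutron
fix):

import subprocess
import tempfile

def batch_fdb_append(entries, device='vxlan-1'):
    """entries: iterable of (mac, remote_ip) pairs for one vxlan device."""
    with tempfile.NamedTemporaryFile('w', suffix='.fdb', delete=False) as f:
        for mac, ip in entries:
            f.write('fdb append %s dev %s dst %s\n' % (mac, device, ip))
        path = f.name
    # One privileged call instead of one rootwrap round trip per fdb entry.
    subprocess.check_call(['sudo', 'bridge', '-batch', path])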

Steps to reproduce the problem:

L2 of linuxbridge and dedicated compute nodes

1) nova boot  --image  ubuntu-guest-image-14.04  --flavor m1.medium
--nic net-name=internal --security-groups unlocked --min-count 50 --max-
count 50 free_deletable

2)observe numerous log as below in neutron-linuxbridge-agent.log.

2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
'10.160.152.141'] create_process /usr/lib/python2.7/site-
packages/neutron/agent/linux/utils.py:89

3)eventually, it takes lots of time to process new tap (6.2 seconds) in
neutron-linuxbridge-agent.log

2017-03-07 05:01:44.654 25336 DEBUG
neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
-903d-32577187d288 - - - - -] Loop iteration exceeded interval (2 vs.
6.2001709938)! daemon_loop /usr/lib/python2.7/site-
packages/neutron/plugins/ml2/drivers/agent/_common_agent.py:466

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  before spawning vm, neutron l2 agent has to wire up device on compute.
- This is controlled by provision_block in neutron. However L2 agent on
+ This is controlled by provisioning_block in neutron. However L2 agent on
  compute takes much time to sync lots of fdb info during wire up device
  and which causes VM takes long time to spawn and active. The fdb
  synchronization could be done when spawning vm to improve the
  performance of booting vm. This also delays next loop of daemon_loop to
  get another new taps processed.
- 
  
  Steps to re-produce the problem:
  
  L2 of linuxbridge and dedicated compute nodes
  
  1) nova boot  --image  ubuntu-guest-image-14.04-20161130-x86_64
  --flavor m1.medium --nic net-name=internal --security-groups unlocked
  --min-count 50 --max-count 50 free_deletable
  
  2)observe numerous log as below in neutron-linuxbridge-agent.log.
  
  2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
  534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
  ['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
  'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
  '10.160.152.141'] create_process /usr/lib/python2.7/site-
  packages/neutron/agent/linux/utils.py:89
  
  3)eventually, it takes lots of time to process new tap (6.2 seconds) in
  neutron-linuxbridge-agent.log
  
  2017-03-07 05:01:44.654 25336 DEBUG
  neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
  -903d-32577187d288 - - - - -] Loop iteration exceeded interval (2 vs.
  6.2001709938)! daemon_loop /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/drivers/agent/_common_agent.py:466

** Description changed:

  before spawning vm, neutron l2 agent has to wire up device on compute.
  This is controlled by provisioning_block in neutron. However L2 agent on
  compute takes much time to sync lots of fdb info during wire up device
  and which causes VM takes long time to spawn and active. The fdb
  synchronization could be done when spawning vm to improve the
  performance of booting vm. This also delays next loop of daemon_loop to
  get another new taps processed.
  
  Steps to re-produce the problem:
  
  L2 of linuxbridge and dedicated compute nodes
  
- 1) nova boot  --image  ubuntu-guest-image-14.04-20161130-x86_64
- --flavor m1.medium --nic net-name=internal --security-groups unlocked
- --min-count 50 --max-count 50 free_deletable
+ 1) nova boot  --image  ubuntu-guest-image-14.04  --flavor m1.medium
+ --nic net-name=internal --security-groups unlocked --min-count 50 --max-
+ count 50 free_deletable
  
  2)observe numerous log as below in neutron-linuxbridge-agent.log.
  
  2017-03-07 05:01:43.220 25336 DEBUG neutron.agent.linux.utils [req-
  534e9f59-0a66-4071-8c40-977f87a6be49 - - - - -] Running command:
  ['sudo', '/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf',
  'bridge', 'fdb', 'append', '00:00:00:00:00:00', 'dev', 'vxlan-1', 'dst',
  '10.160.152.141'] create_process /usr/lib/python2.7/site-
  packages/neutron/agent/linux/utils.py:89
  
  3)eventually, it takes lots of time to process new tap (6.2 seconds) in
  neutron-linuxbridge-agent.log
  
  2017-03-07 05:01:44.654 25336 DEBUG
  neutron.plugins.ml2.drivers.agent._common_agent [req-835dd509-27b0-42e3
  -903d-32577187d288 - - - - -] Loop iteration exceeded 

[Yahoo-eng-team] [Bug 1655605] [NEW] metadata proxy won't start in dhcp namespace when network(subnet) is removed from router

2017-01-11 Thread Perry
Public bug reported:

When adding a network (subnet) to a router immediately after creating the
network (subnet), no metadata proxy process is created in the dhcp namespace
to listen on port 80. This causes a problem when the network (subnet) is
later removed from the router: the metadata service cannot be reached until
the dhcp service is restarted. Restarting the dhcp service is just a
workaround and is not acceptable as a solution.


This problem was introduced in the Newton release. When adding a network, the
dhcp agent checks whether the network has an isolated IPv4 subnet. It queries
all ports belonging to the network and checks whether any port is used as a
gateway; if so, it considers the subnet not isolated. If we add the subnet to
a router immediately after creating it, the network-creation process (which
creates the metadata proxy) and the process of adding the subnet to the
router interface run at the same time. The second process quickly creates the
gateway port, so the first process sees it, treats the subnet as not
isolated, and kills the metadata proxy it created moments earlier.
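
A small helper to detect the symptom programmatically (illustrative only; the
function name is an assumption): it checks whether anything in the qdhcp
namespace is listening on port 80, where the isolated-metadata proxy should be.

import subprocess

def metadata_proxy_listening(network_id):
    ns = 'qdhcp-%s' % network_id
    out = subprocess.check_output(
        ['sudo', 'ip', 'netns', 'exec', ns, 'netstat', '-tnlp'], text=True)
    # A healthy setup shows a listener on 169.254.169.254:80 (or 0.0.0.0:80).
    return any(':80 ' in line for line in out.splitlines())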

# /etc/neutron/dhcp_agent.ini
enable_isolated_metadata = True
enable_metadata_network = True

#execute the following commands in batch without interruption.
neutron net-create network_1
neutron subnet-create --name subnet_1 network_1 172.60.0.0/24
neutron router-interface-add default subnet_1

# there is no 80 port.
 ip netns exec qdhcp-c5791b7d-ec3e-4e96-9a32-b9d1217ed330 netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
PID/Program name   
tcp        0      0 172.16.255.2:53         0.0.0.0:*               LISTEN      
16926/dnsmasq      
tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      
16926/dnsmasq      
tcp6       0      0 fe80::f816:3eff:fe80:53 :::*                    LISTEN      
16926/dnsmasq      
udp        0      0 172.16.255.2:53         0.0.0.0:*                           
16926/dnsmasq      
udp        0      0 169.254.169.254:53      0.0.0.0:*                           
16926/dnsmasq      
udp        0      0 0.0.0.0:67              0.0.0.0:*                           
16926/dnsmasq      
udp6       0      0 :::547                  :::*                                
16926/dnsmasq      
udp6       0      0 fe80::f816:3eff:fe80:53 :::*                                
16926/dnsmasq

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655605

Title:
  metadata proxy won't start in dhcp namespace when network(subnet) is
  removed from router

Status in neutron:
  New

Bug description:
  When adding a network (subnet) to a router immediately after creating the
  network (subnet), no metadata proxy process is created in the dhcp
  namespace to listen on port 80. This causes a problem when the network
  (subnet) is later removed from the router: the metadata service cannot be
  reached until the dhcp service is restarted. Restarting the dhcp service
  is just a workaround and is not acceptable as a solution.


  This problem was introduced in the Newton release. When adding a network,
  the dhcp agent checks whether the network has an isolated IPv4 subnet. It
  queries all ports belonging to the network and checks whether any port is
  used as a gateway; if so, it considers the subnet not isolated. If we add
  the subnet to a router immediately after creating it, the network-creation
  process (which creates the metadata proxy) and the process of adding the
  subnet to the router interface run at the same time. The second process
  quickly creates the gateway port, so the first process sees it, treats the
  subnet as not isolated, and kills the metadata proxy it created moments
  earlier.

  # /etc/neutron/dhcp_agent.ini
  enable_isolated_metadata = True
  enable_metadata_network = True

  #execute the following commands in batch without interruption.
  neutron net-create network_1
  neutron subnet-create --name subnet_1 network_1 172.60.0.0/24
  neutron router-interface-add default subnet_1

  # there is no 80 port.
   ip netns exec qdhcp-c5791b7d-ec3e-4e96-9a32-b9d1217ed330 netstat -tunlp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State     
  PID/Program name   
  tcp        0      0 172.16.255.2:53         0.0.0.0:*               LISTEN    
  16926/dnsmasq      
  tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN    
  16926/dnsmasq      
  tcp6       0      0 fe80::f816:3eff:fe80:53 :::*                    LISTEN    
  16926/dnsmasq      
  udp        0      0 172.16.255.2:53         0.0.0.0:*                         
  16926/dnsmasq      
  udp        0      0 169.254.169.254:53      0.0.0.0:*                         
  16926/dnsmasq      
  udp        0      0 0.0.0.0:67              0.0.0.0:*                         
  16926/dnsmasq      
  udp6    

[Yahoo-eng-team] [Bug 1629159] [NEW] delete router with error of failed unplugging ha interface

2016-09-29 Thread Perry
Public bug reported:

When deleting a router, there are ERROR logs about failing to unplug the ha
interface. This happens in an environment with stable/mitaka. Note that the
router is still deleted successfully after the ERROR.

Reproduce steps:
neutron router-create test
neutron router-delete test
monitor log in neutron-l3-agent.log

This problem is different from existing defects: some existing defects
addressed the problem of repeatedly deleting a router, some addressed a race
between router sync and router deletion, and some have a similar symptom that
occurs in a different place, such as bug 1606801.
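
(Sketch only, not the neutron fix.) The ERROR below happens because the
qrouter namespace is deleted before the ha interface is unplugged; a guard of
roughly this shape would skip the unplug when the namespace is already gone:

import subprocess

def unplug_if_ns_exists(namespace, device):
    namespaces = subprocess.check_output(['ip', 'netns', 'list'], text=True)
    if namespace not in namespaces:
        # Namespace already cleaned up, nothing left to unplug.
        return
    subprocess.check_call(
        ['sudo', 'ip', 'netns', 'exec', namespace,
         'ip', 'link', 'delete', device])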


2016-09-29 06:57:11.744 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.835 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
2016-09-29 06:57:11.836 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '10728'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.897 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
2016-09-29 06:57:11.898 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962', 'ip', 'link', 'delete', 
'ha-e210e603-0c'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2016-09-29 06:57:11.961 6287 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-74c4a209-2f42-4f45-b409-082939df0962": No such file or directory

2016-09-29 06:57:11.962 6287 ERROR neutron.agent.linux.interface [-] Failed 
unplugging interface 'ha-e210e603-0c'
2016-09-29 06:57:11.962 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-15', '10910'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84

** Affects: neutron
 Importance: Undecided
 Assignee: Perry (panxia6679)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Perry (panxia6679)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1629159

Title:
  delete router with error of  failed unplugging ha interface

Status in neutron:
  New

Bug description:
  When deleting a router, there are ERROR logs about failing to unplug the ha
  interface. This happens in an environment with stable/mitaka. Note that the
  router is still deleted successfully after the ERROR.

  Reproduce steps:
  neutron router-create test
  neutron router-delete test
  monitor log in neutron-l3-agent.log

  This problem is different from existing defects: some existing defects
  addressed the problem of repeatedly deleting a router, some addressed a
  race between router sync and router deletion, and some have a similar
  symptom that occurs in a different place, such as bug 1606801.


  2016-09-29 06:57:11.744 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'delete', 
'qrouter-74c4a209-2f42-4f45-b409-082939df0962'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2016-09-29 06:57:11.835 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
  2016-09-29 06:57:11.836 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '10728'] create_process 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2016-09-29 06:57:11.897 6287 DEBUG neutron.agent.linux.utils [-] Exit code: 0 
execute 
/opt/bbc/openstack-2016.1-bbc234/neutron/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py:142
  2016-09-29 06:57:11.898 6287 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 

[Yahoo-eng-team] [Bug 1599688] [NEW] host.py assertion error during NOVA handling of HUP signal

2016-07-06 Thread Xuanzhou Perry Dong
Public bug reported:

Description
===
During handling of HUP signal in nova, the following exception is generated:

2016-07-07 01:36:18.012 DEBUG nova.virt.libvirt.host [-] Starting green 
dispatch thread from (pid=30178) _init_events /op
t/stack/nova/nova/virt/libvirt/host.py:341
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
listener.cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/utils.py", line 1053, in context_wrapper
return func(*args, **kwargs)
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 131, in 
_dispatch_thread
self._dispatch_events()
  File "/opt/stack/nova/nova/virt/libvirt/host.py", line 236, in 
_dispatch_events
assert _c
AssertionError


Steps to reproduce
==
1. Start a devstack with latest master branch.

2. Devstack doesn't start nova-compute as a daemon, so kill the
nova-compute started by devstack and replace it with "nohup
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf &"

3. Send a HUP signal to nova-compute process.
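
Step 3 can be scripted (illustrative; the pgrep pattern is an assumption):

import os
import signal
import subprocess

pid = int(subprocess.check_output(['pgrep', '-f', 'nova-compute']).split()[0])
os.kill(pid, signal.SIGHUP)   # exercises the config-reload path that asserts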

Expected result
===
Expect the nova-compute reloads the configuration file and no exception is 
generated.

Actual result
=
An exception is generated.

Environment
===
1. Nova version:

vagrant@vagrant-ubuntu-trusty-64:/opt/stack/nova/nova$ git log -1
commit 2d5460d085895a577734547660a8bcfc53b04de2
Merge: 51fdeaf 40ea165
Author: Jenkins <jenk...@review.openstack.org>
Date:   Wed Jun 22 06:18:23 2016 +

Merge "Publish proxy APIs deprecation in api ref doc"


Logs & Configs
======
As above.

** Affects: nova
 Importance: Medium
 Assignee: Xuanzhou Perry Dong (oss-xzdong)
     Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Xuanzhou Perry Dong (oss-xzdong)

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599688

Title:
  host.py assertion error during NOVA handling of HUP signal

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  During handling of HUP signal in nova, the following exception is generated:

  2016-07-07 01:36:18.012 DEBUG nova.virt.libvirt.host [-] Starting green 
dispatch thread from (pid=30178) _init_events /op
  t/stack/nova/nova/virt/libvirt/host.py:341
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/utils.py", line 1053, in context_wrapper
  return func(*args, **kwargs)
File "/opt/stack/nova/nova/virt/libvirt/host.py", line 131, in 
_dispatch_thread
  self._dispatch_events()
File "/opt/stack/nova/nova/virt/libvirt/host.py", line 236, in 
_dispatch_events
  assert _c
  AssertionError

  
  Steps to reproduce
  ==
  1. Start a devstack with latest master branch.

  2. Devstack doesn't start nova-compute as a daemon, so kill the
  nova-compute started by devstack and replace it with "nohup
  /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf &"

  3. Send a HUP signal to nova-compute process.

  Expected result
  ===
  Expect the nova-compute reloads the configuration file and no exception is 
generated.

  Actual result
  =
  An exception is generated.

  Environment
  ===
  1. Nova version:

  vagrant@vagrant-ubuntu-trusty-64:/opt/stack/nova/nova$ git log -1
  commit 2d5460d085895a577734547660a8bcfc53b04de2
  Merge: 51fdeaf 40ea165
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Wed Jun 22 06:18:23 2016 +

  Merge "Publish proxy APIs deprecation in api ref doc"

  
  Logs & Configs
  ==
  As above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526734] Re: Restart of nova-compute service fails

2016-07-06 Thread Xuanzhou Perry Dong
Already fixed by: https://review.openstack.org/284287

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526734

Title:
  Restart of nova-compute service fails

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Incomplete

Bug description:
  After sending HUP signal to nova-compute process we can observe trace
  in logs:

  2015-11-30 10:35:26.509 INFO oslo_service.service 
[req-ecb7f866-b041-4abb-9037-164443b8387f None None] Caught SIGHUP, exiting
  2015-11-30 10:35:31.894 DEBUG oslo_concurrency.lockutils 
[req-ecb7f866-b041-4abb-9037-164443b8387f None None] Acquired semaphore 
"singleton_lock" from (pid=24742) lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
  2015-11-30 10:35:31.900 DEBUG oslo_concurrency.lockutils 
[req-ecb7f866-b041-4abb-9037-164443b8387f None None] Releasing semaphore 
"singleton_lock" from (pid=24742) lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
  2015-11-30 10:35:31.903 ERROR nova.service 
[req-ecb7f866-b041-4abb-9037-164443b8387f None None] Service error occurred 
during cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service Traceback (most recent call last):
  2015-11-30 10:35:31.903 TRACE nova.service   File 
"/opt/stack/nova/nova/service.py", line 312, in stop
  2015-11-30 10:35:31.903 TRACE nova.service self.manager.cleanup_host()
  2015-11-30 10:35:31.903 TRACE nova.service   File 
"/opt/stack/nova/nova/compute/manager.py", line 1323, in cleanup_host
  2015-11-30 10:35:31.903 TRACE nova.service 
self.instance_events.cancel_all_events()
  2015-11-30 10:35:31.903 TRACE nova.service   File 
"/opt/stack/nova/nova/compute/manager.py", line 578, in cancel_all_events
  2015-11-30 10:35:31.903 TRACE nova.service for instance_uuid, events in 
our_events.items():
  2015-11-30 10:35:31.903 TRACE nova.service AttributeError: 'NoneType' object 
has no attribute 'items'
  2015-11-30 10:35:31.903 TRACE nova.service
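
  The traceback boils down to iterating over a value that has already been set
  to None; below is a minimal, self-contained sketch of the kind of guard
  involved (not claiming this is the merged fix referenced above):

  class InstanceEventsSketch(object):
      def __init__(self):
          self._events = {}

      def cancel_all_events(self):
          our_events, self._events = self._events, None
          if our_events is None:
              # cancel_all_events() already ran (e.g. cleanup_host invoked a
              # second time during SIGHUP handling); iterating over None is
              # what raised the AttributeError shown above.
              return
          for instance_uuid, events in our_events.items():
              print('cancelling events for %s' % instance_uuid)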

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348509] Re: the volume may leave over when we delete instance whose task_state is block_device_mapping

2016-07-04 Thread Xuanzhou Perry Dong
This bug can't be reproduced on the latest master branch. It was probably
fixed by the resource tracker lock for the instance action. I propose to
close this bug.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348509

Title:
  the volume may leave over when  we delete instance whose task_state is
  block_device_mapping

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Here, two scenarios may cause a volume to be left over when we delete an
  instance whose task_state is block_device_mapping. The first scenario is
  creating an instance from a boot volume created from an image; the other is
  creating an instance from an image together with a volume created from an
  image.

  Through analysis, we find that the volume id is not updated in the
  block_device_mapping table in the DB until a volume created from an image
  (via the Block Device Mapping v2 parameters) is completely attached to the
  instance. If we delete the instance before the volume id is written to the
  block_device_mapping table, the problem mentioned above occurs.
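
  A rough way to spot the leak after the fact (sketch only; it parses the same
  "nova list"/"cinder list" tables shown below, so the column positions are
  assumptions):

  import subprocess

  def uuid_column(cmd, column):
      """Collect 36-character UUIDs from one column of a CLI ascii table."""
      rows = subprocess.check_output(cmd, text=True).splitlines()
      cells = [[c.strip() for c in r.split('|')] for r in rows if r.startswith('|')]
      return [c[column] for c in cells if len(c) > column and len(c[column]) == 36]

  instances = set(uuid_column(['nova', 'list'], 1))
  for line in subprocess.check_output(['cinder', 'list'], text=True).splitlines():
      cells = [c.strip() for c in line.split('|')]
      # Column 7 is "Attached to" in the cinder list output below.
      if len(cells) > 7 and len(cells[7]) == 36 and cells[7] not in instances:
          print('leaked volume %s attached to deleted %s' % (cells[1], cells[7]))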

  Two examples to reproduce the problem on the latest Icehouse:
  1. The first scene
  (1)root@devstack:~# nova list
  ++--+++-+--+
  | ID | Name | Status | Task State | Power State | Networks |
  ++--+++-+--+
  ++--+++-+--+
  (2)root@devstack:~# nova boot --flavor m1.tiny --block-device 
id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vda,size=1,shutdown=removed,bootindex=0
 --nic net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95 tralon_test
  root@devstack:~# nova list
  
+--+-++--+-+---+
  | ID   | Name| Status | Task State
   | Power State | Networks  |
  
+--+-++--+-+---+
  | 57cbb39d-c93f-44eb-afda-9ce00110950d | tralon_test | BUILD  | 
block_device_mapping | NOSTATE | private=10.0.0.20 |
  
+--+-++--+-+---+
  (3)root@devstack:~# nova delete tralon_test
  root@devstack:~# nova list
  ++--+++-+--+
  | ID | Name | Status | Task State | Power State | Networks |
  ++--+++-+--+
  ++--+++-+--+
  (4) root@devstack:~# cinder list
  
+--+---+--+--+-+--+--+
  |  ID  |   Status  | Name | Size | Volume 
Type | Bootable | Attached to  |
  
+--+---+--+--+-+--+--+
  | 3e5579a9-5aac-42b6-9885-441e861f6cc0 | available | None |  1   | None   
 |  false   |  |
  | a4121322-529b-4223-ac26-0f569dc7821e | available |  |  1   | None   
 |   true   |  |
  | a7ad846b-8638-40c1-be42-f2816638a917 |   in-use  |  |  1   | None   
 |   true   | 57cbb39d-c93f-44eb-afda-9ce00110950d |
  
+--+---+--+--+-+--+--+
  we can see that the instance  57cbb39d-c93f-44eb-afda-9ce00110950d was 
deleted while the volume still exists with the "in-use" status

  2. The second scene
   (1)root@devstack:~# nova list
  ++--+++-+--+
  | ID | Name | Status | Task State | Power State | Networks |
  ++--+++-+--+
  ++--+++-+--+
  (2)root@devstack:~# nova boot --flavor m1.tiny --image 
61ebee75-5883-49a3-bf85-ad6f6c29fc1b --nic 
net-id=354ba9ac-e6a7-4fd6-a49f-6ae18a815e95  --block-device 
id=61ebee75-5883-49a3-bf85-ad6f6c29fc1b,source=image,dest=volume,device=vdb,size=1,shutdown=removed
 tralon_image_instance
  root@devstack:~# nova list
  
+--+---++--+-+---+
  | ID   | Name  | Status | 
Task State   | Power State | Networks  |
  
+--+---++--+-+---+
  | 

[Yahoo-eng-team] [Bug 1592270] [NEW] can get shared network/subnet, but fail to create port when fixed_ip is specified

2016-06-14 Thread Perry
Public bug reported:

For a user who doesn't have the admin role and isn't the shared network's
owner, he/she can see the shared network and the related subnet, but fails to
create a port when specifying fixed_ips.


Policy that allows GET but disallows creating a port when fixed_ips is specified:
#user can see share networks
"get_network": "rule:admin_or_owner or rule:shared or rule:external or 
rule:context_is_advsvc",
#user can see share subnets
"get_subnet": "rule:admin_or_owner or rule:shared",
#user won't be able to create port when specifying fixed_ips
"create_port:fixed_ips": "rule:context_is_advsvc or 
rule:admin_or_network_owner",

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592270

Title:
  can get shared network/subnet, but fail to create port when fixed_ip
  is specified

Status in neutron:
  New

Bug description:
  For a user who doesn't have the admin role and isn't the shared network's
  owner, he/she can see the shared network and the related subnet, but fails
  to create a port when specifying fixed_ips.

  
  Policy that allows GET but disallows creating a port when fixed_ips is specified:
  #user can see share networks
  "get_network": "rule:admin_or_owner or rule:shared or rule:external or 
rule:context_is_advsvc",
  #user can see share subnets
  "get_subnet": "rule:admin_or_owner or rule:shared",
  #user won't be able to create port when specifying fixed_ips
  "create_port:fixed_ips": "rule:context_is_advsvc or 
rule:admin_or_network_owner",

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564129] [NEW] Implied Roles responses lack 'links' pointing back to the API call that generated them

2016-03-30 Thread Sean Perry
Public bug reported:

For instance:

 http https://poc.example.com:35357/v3/role_inferences/ 
"X-Auth-Token:d6b832b25b5f4eafa53ebd8399d41b82"
HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 371
Content-Type: application/json
Date: Wed, 30 Mar 2016 22:22:47 GMT
Keep-Alive: timeout=5, max=100
Server: Apache/2.4.7 (Ubuntu)
Vary: X-Auth-Token
x-openstack-request-id: req-7632b6fb-1ca2-4cf3-a9c4-b2d74627e282

{
"role_inferences": [
{
"implies": [
{
"id": "edd42085d3ab472e9cf13b3cf3c362b6",
"links": {
"self": 
"https://poc.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
},
"name": "Member"
}
],
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88",
"links": {
"self": 
"https://poc.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
},
"name": "testing"
}
}
]
}

While there are 'links' on the individual roles, there is not one on the
response as a whole. This is the case with all of the Implied Roles
responses.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1564129

Title:
  Implied Roles responses lack 'links' pointing back to the API call
  that generated them

Status in OpenStack Identity (keystone):
  New

Bug description:
  For instance:

   http https://poc.example.com:35357/v3/role_inferences/ 
"X-Auth-Token:d6b832b25b5f4eafa53ebd8399d41b82"
  HTTP/1.1 200 OK
  Connection: Keep-Alive
  Content-Length: 371
  Content-Type: application/json
  Date: Wed, 30 Mar 2016 22:22:47 GMT
  Keep-Alive: timeout=5, max=100
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  x-openstack-request-id: req-7632b6fb-1ca2-4cf3-a9c4-b2d74627e282

  {
  "role_inferences": [
  {
  "implies": [
  {
  "id": "edd42085d3ab472e9cf13b3cf3c362b6",
  "links": {
  "self": 
"https://poc.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
  },
  "name": "Member"
  }
  ],
  "prior_role": {
  "id": "5a912666c3704c22a20d4c35f3068a88",
  "links": {
  "self": 
"https://poc.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
  },
  "name": "testing"
  }
  }
  ]
  }

  While there are 'links' on the individual roles, there is not one on
  the response as a whole. This is the case with all of the Implied
  Roles responses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1564129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563113] [NEW] Implied Roles responses do not match the spec

2016-03-28 Thread Sean Perry
Public bug reported:

http --pretty format PUT
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies/edd42085d3ab472e9cf13b3cf3c362b6
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": {
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
}, 
"name": "SomeRole1"
}, 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#create-role-inference-rule

{
"role_inference": {
"prior_role": {
"id": "--prior-role-id--",
"links": {
"self": "http://identity:35357/v3/roles/--prior-role-id--;
}
"name": "prior role name"
},
"implies":
{
"id": "--implied-role1-id--",
"link": {
"self": 
"http://identity:35357/v3/roles/--implied-role1-id--;
},
"name": "implied role1 name"
}
},
}

Note missing comma and s/links/link/. Also, json is usually output in
sorted order.

http --pretty format GET
https://identity.example.com:35357/v3/role_inferences "X-Auth-
Token:4879c74089b744439057581c9d85bc19"

{
"role_inferences": [
{
"implies": [
{
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
}, 
"name": "SomeRole1"
}
], 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
}, 
"name": "testing"
}
}
]
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#list-all-role-inference-rules

Again, s/link/links/. No missing comma though.

http --pretty format GET
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies/edd42085d3ab472e9cf13b3cf3c362b6
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": {
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
}, 
"name": "SomeRole1"
}, 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#get-role-inference-rule

According to the spec, there is no "role_inference" wrapper here, and there
should be a top-level "links". There is also a missing comma, but the "links"
for implies is correct (the only place this is true).

http --pretty format GET
https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88/implies
"X-Auth-Token:4879c74089b744439057581c9d85bc19"

{
"role_inference": {
"implies": [
{
"id": "edd42085d3ab472e9cf13b3cf3c362b6", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/edd42085d3ab472e9cf13b3cf3c362b6;
}, 
"name": "SomeRole1"
}
], 
"prior_role": {
"id": "5a912666c3704c22a20d4c35f3068a88", 
"links": {
"self": 
"https://identity.example.com:35357/v3/roles/5a912666c3704c22a20d4c35f3068a88;
}, 
"name": "testing"
}
}
}

https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-
api-v3.rst#list-implied-roles-for-role

This says there will also be a "links" key under role_inference (which
is wrong). Also, continued failure of s/link/links/.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 

[Yahoo-eng-team] [Bug 1070544] Re: Would like a single call to GET users having a role in a tenant

2015-09-24 Thread Sean Perry
Mark the python-keystoneclient ticket as "invalid" too.

** Changed in: python-keystoneclient
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1070544

Title:
  Would like a single call to GET users having a role in a tenant

Status in Keystone:
  Invalid
Status in python-keystoneclient:
  Invalid

Bug description:
  Feature Request:

  It would be nice to have a single HTTP call that would return user
  information for users that have an assigned role within a tenant.

  Something like:

  GET /users?role==

  would work...

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1070544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp