[Yahoo-eng-team] [Bug 1624751] Re: i18n: duplicate 'translate' marking in theme-preview.html

2016-09-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371986
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=4340fc25fe618e849771e7cbca54b8b3383df3e5
Submitter: Jenkins
Branch: master

commit 4340fc25fe618e849771e7cbca54b8b3383df3e5
Author: Akihiro Motoki 
Date:   Sat Sep 17 21:33:21 2016 +

Remove duplicated inappropriate 'translate' tag

Change-Id: Id07287e20760116285cb5fda86aac272b8d06e35
Closes-Bug: #1624751


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624751

Title:
  i18n: duplicate 'translate' marking in theme-preview.html

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There are duplicated and nested 'translate' markings in theme-
  preview.html.

  (HTML markup stripped by the list archive; the nested markings wrap
  these interpolations:)

  {$ 'Themable Option 1' | translate $}
  {$ 'Themable Option 2' | translate $}
  As a result, we have the following entries in PO files.

  {$ 'Themable Option 1' | translate $}
  {$ 'Themable Option 2' | translate $}

  Only 'Themable Option 1' and 'Themable Option 2' should be extracted,
  without the surrounding interpolation markup.
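As a toy illustration of why the nesting produces those entries (this is not Horizon's or angular-gettext's actual extractor, just a hypothetical sketch): a directive-style extractor takes the inner text of any element marked `translate` verbatim, so if that text is itself a `| translate` filter expression, the whole expression becomes the msgid.

```python
import re

# Toy directive-style extractor (hypothetical, for illustration only):
# grab the inner text of elements that carry a 'translate' attribute.
def extract_directive_msgids(html):
    return re.findall(r'<[^>]*\btranslate\b[^>]*>([^<]+)<', html)

# Nested marking: the raw interpolation, filter and all, becomes the msgid.
nested = "<option translate>{$ 'Themable Option 1' | translate $}</option>"
print(extract_directive_msgids(nested))
# → ["{$ 'Themable Option 1' | translate $}"]
```

Removing either the attribute or the filter leaves a single marking, and the extracted msgid collapses to the plain string.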

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587777] Re: Mitaka: dashboard performance

2016-09-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587777

Title:
  Mitaka: dashboard performance

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Environment: OpenStack Mitaka on top of Leap 42.1, 1 control node, 2
  compute nodes, and a 3-node Ceph cluster.

  Issue: Since switching to Mitaka, we're experiencing severe delays
  when accessing the dashboard - i.e. switching between "Compute -
  Overview" and "Compute - Instances" takes 15+ seconds, even after
  multiple invocations.

  Steps to reproduce:
  1. Install OpenStack Mitaka, including the dashboard, and navigate through the dashboard.

  Expected result:
  Browsing through the dashboard with reasonable waiting times.

  Actual result:
  Refreshing the dashboard can take up to 30 secs; switching between views
  (e.g. volumes to instances) takes about 15 secs on average.

  Additional information:
  I've had a look at the requests, the Apache logs and our control node's stats and noticed that it's a single call that's taking all the time... I see no indications of any error; it seems that once WSGI is invoked, that call simply takes its time. Intermediate curl requests are logged, so I see it's doing its work. Looking at "vmstat" I can see that it's user space taking all the load (Apache / mod_wsgi drives its CPU to 100%, while other CPUs are idle - and no i/o wait, no system space etc.).

  ---cut here---
  control1:/var/log # top
  top - 10:51:35 up 8 days, 18:16,  2 users,  load average: 2,17, 1,65, 1,48
  Tasks: 383 total,   2 running, 381 sleeping,   0 stopped,   0 zombie
  %Cpu0  : 31,7 us,  2,9 sy,  0,0 ni, 65,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu1  : 13,1 us,  0,7 sy,  0,0 ni, 86,2 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu2  : 17,2 us,  0,7 sy,  0,0 ni, 81,2 id,  1,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu3  : 69,4 us, 12,6 sy,  0,0 ni, 17,9 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu4  : 14,6 us,  1,0 sy,  0,0 ni, 84,4 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu5  : 16,9 us,  0,7 sy,  0,0 ni, 81,7 id,  0,7 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu6  : 17,3 us,  1,3 sy,  0,0 ni, 81,0 id,  0,3 wa,  0,0 hi,  0,0 si,  0,0 st
  %Cpu7  : 21,2 us,  1,3 sy,  0,0 ni, 77,5 id,  0,0 wa,  0,0 hi,  0,0 si,  0,0 st
  KiB Mem:  65943260 total, 62907676 used,  3035584 free, 1708 buffers
  KiB Swap:  2103292 total,0 used,  2103292 free. 53438560 cached Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM    TIME+ COMMAND
   6776 wwwrun    20   0  565212 184504  13352 S 100,3 0,280   0:07.83 httpd-prefork
   1130 root      20   0  399456  35760  22508 S 5,980 0,054 818:13.17 X
   1558 sddm      20   0  922744 130440  72148 S 5,316 0,198 966:03.82 sddm-greeter
  20999 nova      20   0  285888 116292   5696 S 2,658 0,176 164:27.08 nova-conductor
  21030 nova      20   0  758752 182644  16512 S 2,658 0,277  58:20.40 nova-api
  18757 heat      20   0  273912  73740   4612 S 2,326 0,112  50:48.72 heat-engine
  18759 heat      20   0  273912  73688   4612 S 2,326 0,112   4:27.54 heat-engine
  20995 nova      20   0  286236 116644   5696 S 2,326 0,177 164:38.89 nova-conductor
  21027 nova      20   0  756204 180752  16980 S 2,326 0,274  58:20.09 nova-api
  21029 nova      20   0  756536 180644  16496 S 2,326 0,274 139:46.29 nova-api
  21031 nova      20   0  756888 180920  16512 S 2,326 0,274  58:36.37 nova-api
  24771 glance    20   0 2312152 139000  17360 S 2,326 0,211  24:47.83 glance-api
  24772 glance    20   0  631672 111248   4848 S 2,326 0,169  22:59.77 glance-api
  28424 cinder    20   0  720972 108536   4968 S 2,326 0,165  28:31.42 cinder-api
  28758 neutron   20   0  317708 101812   4472 S 2,326 0,154 153:45.55 neutron-server

  #

  control1:/var/log # vmstat 1
  procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
   r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
   1  0  0 2253144   1708 5344047200 46044 11  1 88  0  0
   0  0  0 2255588   1708 5344047600 0   568 3063 7627 15  1 83  0  0
   1  0  0 2247596   1708 5344047600 0   144 3066 6803 14  2 83  0  0
   1  0  0 2156008   1708 5344047600 072 3474 7193 25  3 72  0  0
   2  0  0 2131968   1708 5344048400 0   652 3497 8565 28  2 70  0  0
   3  1  0 2134000   1708 5344051200 0 14340 3629 10644 25  2 71  2  0
   2  0  0 2136956   1708 5344058000 012 3483 10620 25  2 70  3  0
   9  1  0 2138164   1708 5344059600 0   248 3442 9980 27  1 72  0  0
   4  0  0 2105160   1708 5344062800 0   428

[Yahoo-eng-team] [Bug 1606475] Re: unable to create instance in liberty

2016-09-24 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606475

Title:
  unable to create instance in liberty

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Error launching new instance in

  Error: Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible. (HTTP 500) (Request-ID: req-6831fe1e-3236-4a30-9bd4-1b8a923dcc5b)

  
   reply to message ID 0e988d6a55e74a1c9c8c3522d7b8b428\n']
  2016-07-26 13:39:23.516 12910 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 71be54c972444d4cacdd6834f3be97c1
  2016-07-26 13:39:23.517 12910 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : fe31b3fcac8e4beea70e43ba676eaedb
  2016-07-26 13:39:23.519 12911 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 2fdc3b9accd94c64b6dc09dda7405707
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher [req-e30b0bb2-4942-460e-96fc-86ba05f5be4c - - - - -] Exception during message handling: Timed out waiting for a reply to message ID dd70d3ed54d44cb68014e489293335b1
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     executor_callback))
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     executor_callback)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 124, in _do_dispatch
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     new_args[argname] = self.serializer.deserialize_entity(ctxt, arg)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 111, in deserialize_entity
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     return self._base.deserialize_entity(context, entity)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 322, in deserialize_entity
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     entity = self._process_object(context, entity)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/objects/base.py", line 284, in _process_object
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     context, objprim, version_manifest)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 77, in object_backport_versions
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     object_versions)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 247, in object_backport_versions
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     object_versions=object_versions)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     retry=self.retry)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     timeout=timeout, retry=retry)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 431, in send
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     retry=retry)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 420, in _send
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher     result = self._waiter.wait(msg_id, timeout)
  2016-07-26 13:39:23.519 12909 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 318, in wait
  2016-07-26

[Yahoo-eng-team] [Bug 1627424] [NEW] FlushError on IPAllocation

2016-09-24 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/49/373249/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b1312ed/logs/screen-q-svc.txt.gz?level=TRACE

http://logs.openstack.org/68/347268/5/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/a82f85d/logs/

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure

** Description changed:

  http://logs.openstack.org/49/373249/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b1312ed/logs/screen-q-svc.txt.gz?level=TRACE
+ 
+ http://logs.openstack.org/68/347268/5/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/a82f85d/logs/

** Tags added: gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627424

Title:
  FlushError on IPAllocation

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/49/373249/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b1312ed/logs/screen-q-svc.txt.gz?level=TRACE

  http://logs.openstack.org/68/347268/5/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/a82f85d/logs/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627424/+subscriptions



[Yahoo-eng-team] [Bug 1627416] [NEW] Launch Instance: "Create New Volume" option should be available for "Image Snapshot"

2016-09-24 Thread Akihiro Motoki
Public bug reported:

In Nova, image snapshots and images are handled equivalently, and we can
pass an image snapshot ID as the volume source on the 'nova boot' command
line.

$ glance image-list --property-filter image_type=snapshot
+--+---+
| ID   | Name  |
+--+---+
| 37407a15-5024-4520-883d-5a9dc0e2062f | snap1 |
+--+---+
$ nova boot --flavor m1.tiny --block-device source=snapshot,dest=volume,id=37407a15-5024-4520-883d-5a9dc0e2062f,size=1,bootindex=0,shutdown=remove vm3

On the other hand, the Horizon launch instance form does not provide a
"Create New Volume" option for the "Image Snapshot" boot source. It
should be provided.

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1627416

Title:
  Launch Instance: "Create New Volume" option should be available for
  "Image Snapshot"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Nova, image snapshots and images are handled equivalently, and we can
  pass an image snapshot ID as the volume source on the 'nova boot' command line.

  $ glance image-list --property-filter image_type=snapshot
  +--+---+
  | ID   | Name  |
  +--+---+
  | 37407a15-5024-4520-883d-5a9dc0e2062f | snap1 |
  +--+---+
  $ nova boot --flavor m1.tiny --block-device source=snapshot,dest=volume,id=37407a15-5024-4520-883d-5a9dc0e2062f,size=1,bootindex=0,shutdown=remove vm3

  On the other hand, the Horizon launch instance form does not provide a
  "Create New Volume" option for the "Image Snapshot" boot source. It
  should be provided.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1627416/+subscriptions



[Yahoo-eng-team] [Bug 1626341] Re: The path "themes/default" next to the section title "Default" on Developer tab should be removed

2016-09-24 Thread Akihiro Motoki
Let's keep the current version.

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1626341

Title:
  The path "themes/default" next to the section title "Default" on
  Developer tab should be removed

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  "theme/default" is shown next to the Default section title on the developer tab.
  Since the other sections do not include paths, this one should probably be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1626341/+subscriptions



[Yahoo-eng-team] [Bug 1626306] Re: Unlocalized button label and text found in Configuration tab in Launch Instance dialog

2016-09-24 Thread Akihiro Motoki
I don't think an English browser combined with a Japanese (or other
language) version of Horizon is a usual combination.
I don't think we should fix this.


** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1626306

Title:
  Unlocalized button label and text found in Configuration tab in Launch
  Instance dialog

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Project > Instances > Launch Instance > Configuration

  Found unlocalized the following GUI button label and text on
  Configuration tab in Launch Instance dialog:

  Choose File
  No file chosen

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1626306/+subscriptions



[Yahoo-eng-team] [Bug 1627393] [NEW] Neutron-LBaaS and Octavia out of sync if TLS container secret ACLs not set up correctly

2016-09-24 Thread Stephen Balukoff
Public bug reported:

I'm hoping this is something that will go away with the neutron-lbaas
and Octavia merge.

Create a self-signed certificate like so:

openssl genrsa -des3 -out self-signed_encrypted.key 2048
openssl rsa -in self-signed_encrypted.key -out self-signed.key
openssl req -new -x509 -days 365 -key self-signed.key -out self-signed.crt

As the admin user, grant the demo user the ability to create cloud
resources on the demo project:

openstack role add --project demo --user demo creator

Now, become the demo user:

source ~/devstack/openrc demo demo

As the demo user, upload the self-signed certificate to barbican:

openstack secret store --name='test_cert' --payload-content-type='text/plain' --payload="$(cat self-signed.crt)"
openstack secret store --name='test_key' --payload-content-type='text/plain' --payload="$(cat self-signed.key)"
openstack secret container create --name='test_tls_container' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ test_cert / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ test_key / {print $2}')"

As the demo user, grant access to the above secrets BUT NOT THE
CONTAINER to the 'admin' user. In my test, the admin user has ID:
02c0db7c648c4714971219ae81817ba7

openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret list | awk '/ test_cert / {print $2}')
openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret list | awk '/ test_key / {print $2}')

Now, as the demo user, attempt to deploy a neutron-lbaas listener using
the secret container above:

neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret container list | awk '/ test_tls_container / {print $2}')

The neutron-lbaas command succeeds, but the Octavia deployment fails
since it can't access the secret container.

This is fixed if you remember to grant access to the TLS container to
the admin user like so:

openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret container list | awk '/ test_tls_container / {print $2}')

In any case, though, neutron-lbaas and Octavia should fail in the same
way when the permissions aren't set up exactly right.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: octavia
 Importance: Undecided
 Status: New


** Tags: tls

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627393

Title:
  Neutron-LBaaS and Octavia out of sync if TLS container secret ACLs not
  set up correctly

Status in neutron:
  New
Status in octavia:
  New

Bug description:
  I'm hoping this is something that will go away with the neutron-lbaas
  and Octavia merge.

  Create a self-signed certificate like so:

  openssl genrsa -des3 -out self-signed_encrypted.key 2048
  openssl rsa -in self-signed_encrypted.key -out self-signed.key
  openssl req -new -x509 -days 365 -key self-signed.key -out self-signed.crt

  As the admin user, grant the demo user the ability to create cloud
  resources on the demo project:

  openstack role add --project demo --user demo creator

  Now, become the demo user:

  source ~/devstack/openrc demo demo

  As the demo user, upload the self-signed certificate to barbican:

  openstack secret store --name='test_cert' --payload-content-type='text/plain' --payload="$(cat self-signed.crt)"
  openstack secret store --name='test_key' --payload-content-type='text/plain' --payload="$(cat self-signed.key)"
  openstack secret container create --name='test_tls_container' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ test_cert / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ test_key / {print $2}')"

  As the demo user, grant access to the above secrets BUT NOT THE
  CONTAINER to the 'admin' user. In my test, the admin user has ID:
  02c0db7c648c4714971219ae81817ba7

  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret list | awk '/ test_cert / {print $2}')
  openstack acl user add -u 02c0db7c648c4714971219ae81817ba7 $(openstack secret list | awk '/ test_key / {print $2}')

  Now, as the demo user, attempt to deploy a neutron-lbaas listener
  using the secret container above:

  neutron lbaas-loadbalancer-create --name lb1 private-subnet
  neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret container list | awk '/ test_tls_container / {print $2}')

  The neutron-lbaas command succeeds, but the Octavia deployment fails
  since it can't access the secret container.

  This is fixed if you 

[Yahoo-eng-team] [Bug 1329333] Re: BadRequest: Invalid volume: Volume status must be available or error

2016-09-24 Thread Sean McGinnis
Appears to have since been fixed indirectly.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329333

Title:
  BadRequest: Invalid volume: Volume status must be available or error

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Invalid

Bug description:
  traceback from:
  
http://logs.openstack.org/40/99540/2/check/check-grenade-dsvm/85c496c/console.html

  
  2014-06-12 13:28:15.833 | tearDownClass (tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern)
  2014-06-12 13:28:15.833 | ---
  2014-06-12 13:28:15.833 | 
  2014-06-12 13:28:15.833 | Captured traceback:
  2014-06-12 13:28:15.833 | ~~~
  2014-06-12 13:28:15.833 | Traceback (most recent call last):
  2014-06-12 13:28:15.833 |   File "tempest/scenario/manager.py", line 157, in tearDownClass
  2014-06-12 13:28:15.833 |     cls.cleanup_resource(thing, cls.__name__)
  2014-06-12 13:28:15.834 |   File "tempest/scenario/manager.py", line 119, in cleanup_resource
  2014-06-12 13:28:15.834 |     resource.delete()
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 35, in delete
  2014-06-12 13:28:15.834 |     self.manager.delete(self)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/v1/volumes.py", line 228, in delete
  2014-06-12 13:28:15.834 |     self._delete("/volumes/%s" % base.getid(volume))
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/base.py", line 162, in _delete
  2014-06-12 13:28:15.834 |     resp, body = self.api.client.delete(url)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 229, in delete
  2014-06-12 13:28:15.834 |     return self._cs_request(url, 'DELETE', **kwargs)
  2014-06-12 13:28:15.834 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 187, in _cs_request
  2014-06-12 13:28:15.835 |     **kwargs)
  2014-06-12 13:28:15.835 |   File "/opt/stack/new/python-cinderclient/cinderclient/client.py", line 170, in request
  2014-06-12 13:28:15.835 |     raise exceptions.from_response(resp, body)
  2014-06-12 13:28:15.835 | BadRequest: Invalid volume: Volume status must be available or error, but current status is: in-use (HTTP 400) (Request-ID: req-9337623a-e2b7-48a3-97ab-f7a4845f2cd8)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1329333/+subscriptions



[Yahoo-eng-team] [Bug 1441733] Re: pip install or python setup.py install should include httpd/keystone.py

2016-09-24 Thread Matt Fischer
Puppet stopped shipping this script; we get it from the packages or the
code, depending on how you install, so this bug no longer applies to us.

** Changed in: puppet-keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1441733

Title:
  pip install or python setup.py install should include
  httpd/keystone.py

Status in OpenStack Identity (keystone):
  Invalid
Status in puppet-keystone:
  Invalid

Bug description:
  Now the recommended way to install keystone is via Apache, but
  httpd/keystone.py is not included when we do 'python setup.py install'
  in keystone. It should be included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441733/+subscriptions



[Yahoo-eng-team] [Bug 1470635] Re: endpoints added with v3 are not visible with v2

2016-09-24 Thread Matt Fischer
This was fixed in keystone itself.

** Changed in: puppet-keystone
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1470635

Title:
  endpoints added with v3 are not visible with v2

Status in OpenStack Identity (keystone):
  Fix Released
Status in openstack-ansible:
  Opinion
Status in puppet-keystone:
  Won't Fix

Bug description:
  Create an endpoint with v3::

  # openstack --os-identity-api-version 3 [--admin credentials]
  endpoint create 

  try to list endpoints with v2::

  # openstack --os-identity-api-version 2 [--admin credentials]
  endpoint list

  nothing.

  We are in the process of trying to convert puppet-keystone to v3 with
  the goal of maintaining backwards compatibility.  That means, we want
  admins/operators not to have to change any existing workflow.  This
  bug causes openstack endpoint list to return nothing which breaks
  existing workflows and backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1470635/+subscriptions



[Yahoo-eng-team] [Bug 1611991] Re: [ovs firewall] Port masking adds wrong masks in several cases.

2016-09-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/353782
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=0494f212aa625a03587af3d75e823008f1198012
Submitter: Jenkins
Branch: master

commit 0494f212aa625a03587af3d75e823008f1198012
Author: Inessa Vasilevskaya 
Date:   Thu Aug 11 02:21:29 2016 +0300

ovsfw: fix troublesome port_rule_masking

In several cases port masking algorithm borrowed
from networking_ovs_dpdk didn't behave correctly.
This caused non-restricted ports to be open due to
wrong tp_src field value in resulting ovs rules.

This was fixed by alternative port masking
implementation.

Functional and unit tests to cover the bug added as well.

Co-Authored-By: Jakub Libosvar 
Co-Authored-By: IWAMOTO Toshihiro 

Closes-Bug: #1611991
Change-Id: Idfc0e9c52e0dd08852c91c17e12edb034606a361
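For context: OVS flow rules match L4 ports against a value/mask pair rather than a range, so a range like 1000-1999 must be decomposed into aligned masked blocks, and a wrong decomposition is exactly what left unrestricted ports open here. The following is my own minimal sketch of a correct decomposition, not the code merged in the patch above:

```python
# Illustrative sketch (not Neutron's merged implementation): decompose an
# inclusive L4 port range into (value, mask) pairs suitable for OVS-style
# tp_src/tp_dst matching. Each pair covers an aligned power-of-two block.
def port_rule_masking(port_min, port_max):
    rules = []
    while port_min <= port_max:
        # Largest aligned power-of-two block starting at port_min that
        # still fits inside the remaining range.
        bits = 0
        while bits < 16:
            step = 1 << (bits + 1)
            if port_min % step != 0 or port_min + step - 1 > port_max:
                break
            bits += 1
        size = 1 << bits
        rules.append((port_min, 0xffff ^ (size - 1)))
        port_min += size
    return rules

# A single port needs a full mask; a range splits into aligned blocks.
print(port_rule_masking(22, 22))  # → [(22, 65535)]
```

Any decomposition must cover exactly the requested range; the buggy variant emitted masks matching ports outside it, which is how port 23 ended up reachable when only 22 was allowed.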


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611991

Title:
  [ovs firewall] Port masking adds wrong masks in several cases.

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Seen on master devstack, ubuntu xenial.

  Steps to reproduce:

  1. Enable ovs firewall in /etc/neutron/plugins/ml2/ml2.conf

  [securitygroup]
  firewall_driver = openvswitch

  2. Create a security group with icmp, tcp to 22.

  3. Boot a VM, assign a floating ip.

  4. Check that port 23 can be accessed via tcp (telnet, nc, etc).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611991/+subscriptions



[Yahoo-eng-team] [Bug 1616938] Re: XenAPI: failed to create image from volume backed instance with glance v2

2016-09-24 Thread Matt Riedemann
** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: Confirmed => In Progress

** No longer affects: glance

** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616938

Title:
  XenAPI: failed to create image from volume backed instance with glance
  v2

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  On XenServer with the XenAPI driver, it always fails to create an image
  from a volume.

  Tempest test:
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_create_ebs_image_and_check_boot

  2016-08-24 08:13:24.636 27347 DEBUG nova.compute.api [req-fd7afac4-a2ee-41cb-b785-02766806db26 tempest-TestVolumeBootPatternV2-1645487335 tempest-TestVolumeBootPatternV2-1645487335] [instance: 5d5a8b10-655c-457e-8ad9-edbfb6ecd278] Creating snapshot from volume 9953359c-327a-41bf-abec-f3c3da416390. snapshot_volume_backed /opt/stack/new/nova/nova/compute/api.py:2445
  2016-08-24 08:13:25.964 27347 INFO os_vif [req-fd7afac4-a2ee-41cb-b785-02766806db26 tempest-TestVolumeBootPatternV2-1645487335 tempest-TestVolumeBootPatternV2-1645487335] Loaded VIF plugin class '' with name 'ovs'
  2016-08-24 08:13:25.965 27347 INFO os_vif [req-fd7afac4-a2ee-41cb-b785-02766806db26 tempest-TestVolumeBootPatternV2-1645487335 tempest-TestVolumeBootPatternV2-1645487335] Loaded VIF plugin class '' with name 'linux_bridge'
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions [req-fd7afac4-a2ee-41cb-b785-02766806db26 tempest-TestVolumeBootPatternV2-1645487335 tempest-TestVolumeBootPatternV2-1645487335] Unexpected exception in API method
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions Traceback (most recent call last):
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/common.py", line 372, in inner
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     return func(*args, **kwargs)
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1072, in _action_create_image
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     metadata)
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", line 146, in inner
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions     return
f(self, context, instance, *args, **kw)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/compute/api.py", line 2463, in 
snapshot_volume_backed^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
self.image_api.create(context, image_meta)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/api.py", line 106, in create^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
session.create(context, image_info, data=data)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/glance.py", line 626, in create^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions data, 
force_activate)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/glance.py", line 658, in _create_v2^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions 
context, 2, 'create', **sent_service_image_meta)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/glance.py", line 174, in call^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions result 
= getattr(client.images, method)(*args, **kwargs)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1564921] Re: nova rebuild fails after two rebuild requests when ironic is used

2016-09-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306010
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=54b122caec1ae2418fc3e296be604d072cb5815a
Submitter: Jenkins
Branch:master

commit 54b122caec1ae2418fc3e296be604d072cb5815a
Author: Vladyslav Drok 
Date:   Thu Apr 14 21:15:10 2016 +0300

Update instance node on rebuild only when it is recreate

When using ironic virt driver, if scheduled_node is not specified
in rebuild_instance compute manager method (as it happens in case
of instance rebuild), the first ironic node is selected:

 computes = ComputeNodeList.get_all_by_host(context, host, use_slave)
 return computes[0]

After the first rebuild, instance.node is updated to be this first
ironic node, which causes subsequent rebuilds to fail, as virt driver
tries to set instance_uuid on a newly selected ironic node and fails.

Closes-bug: #1564921
Change-Id: I2fe6e439135ba6aa4120735d030ced31081ef202
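The commit message above can be condensed into a toy sketch. This is not the actual nova code; the function and its arguments are stand-ins to illustrate the fix: only an evacuate (recreate=True) should pick a new node, while a plain rebuild keeps the node already recorded on the instance, so the ironic virt driver never tries to associate the instance with a second node.

```python
def pick_node_for_rebuild(instance_node, computes, recreate):
    """Illustrative only: node selection during rebuild_instance.

    instance_node: node name currently recorded on the instance
    computes: candidate compute-node names on this host (ironic exposes
              one ComputeNode per bare-metal node)
    recreate: True only for evacuate, where a fresh node is wanted
    """
    if recreate:
        # Evacuate: a new target node is chosen (here, naively the first).
        return computes[0]
    # Plain rebuild: stay on the node the instance already occupies.
    return instance_node
```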


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1564921

Title:
  nova rebuild fails after two rebuild requests when ironic is used

Status in Ironic:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  First nova rebuild request passes fine, but further requests fail with
  the following message:

  Instance b460e640-e601-4e68-b0e8-231e15201412 is already associated
  with a node, it cannot be associated with this other node
  10c0b922-cb39-412e-849a-27e66042d4c0 (HTTP 409)", "code": 500,
  "details": "  File \"/opt/stack/nova/nova/compute/manager.py\"

  The reason for this is that nova tries to reschedule an instance
  during rebuild, and in the case of ironic there cannot be two nodes
  associated with the same instance_uuid.

  This can be checked on devstack since change
  I0233f964d8f294f0ffd9edcb16b1aaf93486177f that introduced it with
  ironic virt driver and neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1564921/+subscriptions



[Yahoo-eng-team] [Bug 1622616] Re: delete_subnet update_port appears racey with ipam

2016-09-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/373536
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=873b5ac837220c11e204c6610782f8b86c90bf03
Submitter: Jenkins
Branch:master

commit 873b5ac837220c11e204c6610782f8b86c90bf03
Author: Armando Migliaccio 
Date:   Tue Sep 20 14:23:40 2016 -0700

Retry port update on IpAddressAllocationNotFound

If a port update and a subnet delete interleave, there is a
chance that the IPAM update operation raises this exception.
Rather than throwing that up to the user under some sort of
conflict, bubble up a retry instead; that should bring things
back to sanity.

Closes-bug: #1622616

Change-Id: Ia8cac09349d4cb722737bdf0bec6c54b9e77f31d
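The retry pattern the fix relies on can be sketched in a few lines. This is a self-contained toy, not neutron's actual retry decorator: the exception class and function names below are stand-ins, and the real code raises a retry request that oslo.db's wrapper catches rather than looping inline.

```python
class IpAddressAllocationNotFound(Exception):
    """Stand-in for the IPAM error hit when a subnet delete races a port update."""


def retry_on_ipam_race(func, attempts=3):
    """Re-run func when a concurrent subnet delete yanks the allocation."""
    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return func(*args, **kwargs)
            except IpAddressAllocationNotFound:
                # On the last attempt, give up and surface the error.
                if attempt == attempts - 1:
                    raise
                # Otherwise retry: the next run sees post-delete state.
    return wrapper
```

After the retry, the port update either succeeds against the post-delete state or fails for a genuine reason, instead of surfacing a spurious conflict to the user.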


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622616

Title:
  delete_subnet update_port appears racey with ipam

Status in neutron:
  Fix Released

Bug description:
  failure spotted in a patch on a delete_subnet call:

  
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
[req-746d769c-2388-48e0-8e09-38e4190e5364 tempest-PortsTestJSON-432635984 -] 
delete failed: Exception deleting fixed_ip from port 
862b5dea-dca2-4669-b280-867175f5f351
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 526, in delete
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 87, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 83, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 123, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
traceback.format_exc())
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 118, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
  2016-09-10 01:04:43.452 

[Yahoo-eng-team] [Bug 1627295] [NEW] test_apt_v3_mirror_search_dns fails

2016-09-24 Thread nicoo
Public bug reported:

Hi,

On a machine where the local domain is defined,
test_apt_v3_mirror_search_dns can fail: the hostname gets replaced by
hostname.${local domain}.


Best,

  nicoo
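The failure mode is ordinary search-domain qualification: an unqualified hostname picks up the resolver's local domain, so the name the test expects and the name actually resolved differ. A toy illustration (not cloud-init's actual mirror-search code):

```python
def qualify(name, domain):
    """Mimic resolver search-domain qualification of a bare hostname."""
    # Already-qualified names (or hosts with no local domain) pass through.
    if "." in name or not domain:
        return name
    return "%s.%s" % (name, domain)
```

A test that asserts on the bare hostname will break on machines where qualify() appends a domain, which is the behaviour reported here.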

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1627295

Title:
  test_apt_v3_mirror_search_dns fails

Status in cloud-init:
  New

Bug description:
  Hi,

  On a machine where the local domain is defined,
  test_apt_v3_mirror_search_dns can fail: the hostname gets replaced by
  hostname.${local domain}.

  
  Best,

nicoo

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1627295/+subscriptions



[Yahoo-eng-team] [Bug 1627293] [NEW] Update Debian source.list

2016-09-24 Thread nicoo
Public bug reported:

Hi,

The Debian source list you are shipping has wrong backport entries.
Since February, we ship the following patch in Debian, please consider 
including it.
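For context only (the attached patch itself is not reproduced here), a well-formed Debian backports stanza in sources.list uses the release codename suffixed with -backports against the regular archive, along these lines:

```
# Hypothetical example for a release codenamed "jessie"; substitute the
# mirror and codename actually in use.
deb http://deb.debian.org/debian jessie-backports main contrib
```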

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: debian

** Attachment added: "debian-sources.list.patch"
   
https://bugs.launchpad.net/bugs/1627293/+attachment/4747675/+files/debian-sources.list.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1627293

Title:
  Update Debian source.list

Status in cloud-init:
  New

Bug description:
  Hi,

  The Debian source list you are shipping has wrong backport entries.
  Since February, we ship the following patch in Debian, please consider 
including it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1627293/+subscriptions



[Yahoo-eng-team] [Bug 1616938] Re: XenAPI: failed to create image from volume backed instance with glance v2

2016-09-24 Thread Nikhil Komawar
I think the xen driver needs code to check the container and disk
formats supported by that glance installation.

A check using the schema call will do the trick

https://github.com/openstack/glance/blob/7c7dd626896d732d75c6b802a33b9462aee885fd/glance/api/v2/images.py#L980

https://github.com/openstack/glance/blob/7c7dd626896d732d75c6b802a33b9462aee885fd/glance/api/v2/images.py#L837

https://github.com/openstack/glance/blob/7c7dd626896d732d75c6b802a33b9462aee885fd/glance/api/v2/images.py#L887-L895
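A hedged sketch of the schema-based check suggested above: glance's /v2/schemas/image endpoint returns a JSON schema in which the permitted disk and container formats appear as "enum" lists under "properties". The helper name is hypothetical; a real driver would fetch the schema via glanceclient rather than take a dict.

```python
def allowed_formats(image_schema):
    """Extract permitted (disk_formats, container_formats) from a
    glance v2 image schema dict."""
    props = image_schema["properties"]
    return (props["disk_format"]["enum"],
            props["container_format"]["enum"])
```

The driver could then refuse (or convert) up front instead of failing mid-snapshot when the deployment does not accept, say, vhd.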

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616938

Title:
  XenAPI: failed to create image from volume backed instance with glance
  v2

Status in Glance:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  With the XenAPI driver on XenServer, it always fails to create an
  image from a volume.

  Tempest test:
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_create_ebs_image_and_check_boot

  2016-08-24 08:13:24.636 27347 DEBUG nova.compute.api 
[req-fd7afac4-a2ee-41cb-b785-02766806db26 
tempest-TestVolumeBootPatternV2-1645487335 
tempest-TestVolumeBootPatternV2-1645487335] [instance: 
5d5a8b10-655c-457e-8ad9-edbfb6ecd278] Creating snapshot from volume 
9953359c-327a-41bf-abec-f3c3da416390. snapshot_volume_backed 
/opt/stack/new/nova/nova/compute/api.py:2445^M
  2016-08-24 08:13:25.964 27347 INFO os_vif 
[req-fd7afac4-a2ee-41cb-b785-02766806db26 
tempest-TestVolumeBootPatternV2-1645487335 
tempest-TestVolumeBootPatternV2-1645487335] Loaded VIF plugin class '' with name 'ovs'^M
  2016-08-24 08:13:25.965 27347 INFO os_vif 
[req-fd7afac4-a2ee-41cb-b785-02766806db26 
tempest-TestVolumeBootPatternV2-1645487335 
tempest-TestVolumeBootPatternV2-1645487335] Loaded VIF plugin class '' with name 
'linux_bridge'^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions 
[req-fd7afac4-a2ee-41cb-b785-02766806db26 
tempest-TestVolumeBootPatternV2-1645487335 
tempest-TestVolumeBootPatternV2-1645487335] Unexpected exception in API method^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 338, in wrapped^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/common.py", line 372, in inner^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1072, in 
_action_create_image^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions 
metadata)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/compute/api.py", line 146, in inner^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
f(self, context, instance, *args, **kw)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/compute/api.py", line 2463, in 
snapshot_volume_backed^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
self.image_api.create(context, image_meta)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/api.py", line 106, in create^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions return 
session.create(context, image_info, data=data)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/glance.py", line 626, in create^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions data, 
force_activate)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/image/glance.py", line 658, in _create_v2^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions 
context, 2, 'create', **sent_service_image_meta)^M
  2016-08-24 08:13:26.049 27347 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1588041] Re: [2.0 rc1] juju can't access vSphere VM deployed with Xenial, cloud-init fails to set SSH keys

2016-09-24 Thread Anastasia
** Changed in: juju
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1588041

Title:
  [2.0 rc1] juju can't access vSphere VM deployed with Xenial,  cloud-
  init fails to set SSH keys

Status in cloud-init:
  New
Status in juju:
  Fix Released

Bug description:
  I tried to do a bootstrap with vsphere as a provider using vsphere 6.0 and 
juju 1.25.5.
  -
    vsphere:
  type: vsphere
  host: '**.***.*.***'
  user: 'administra...@vsphere.oil'
  password: '**'
  datacenter: 'dc0'
  bootstrap-timeout: 1800
  logging-config: 
"=DEBUG;juju=DEBUG;golxc=TRACE;juju.container.lxc=TRACE"
  agent-stream: released
  -

  Initially, I did not specify the default series, and the bootstrap VM
  deployed with Xenial; however, juju could not connect to it after
  getting the address and seemed stuck trying to connect, so I had to
  Ctrl-C:

  -
  $ juju bootstrap -e vsphere
  ERROR the "vsphere" provider is provisional in this version of Juju. To use 
it anyway, set JUJU_DEV_FEATURE_FLAGS="vsphere-provider" in your shell 
environment
  $ export JUJU_DEV_FEATURE_FLAGS="vsphere-provider"
  $ juju bootstrap -e vsphere
  Bootstrapping environment "vsphere"
  Starting new instance for initial state server
  Launching instance
   - juju-e33e5800-edd9-4af7-8654-6d59b1e98eb9-machine-0
  Installing Juju agent on bootstrap instance
  Waiting for address
  Attempting to connect to 10.245.39.94:22
  Attempting to connect to fe80::250:56ff:fead:1b03:22
  ^CInterrupt signalled: waiting for bootstrap to exit
  ERROR failed to bootstrap environment: interrupted
  -

  When I specified the default series to be trusty, it worked:
  -
    vsphere:
  type: vsphere
  host: '**.***.*.***'
  user: 'administra...@vsphere.oil'
  password: '**'
  datacenter: 'dc0'
  default-series: trusty
  bootstrap-timeout: 1800
  logging-config: 
"=DEBUG;juju=DEBUG;golxc=TRACE;juju.container.lxc=TRACE"
  agent-stream: released
  -

  This was the output:

  -
  $ juju bootstrap -e vsphere
  Bootstrapping environment "vsphere"
  Starting new instance for initial state server
  Launching instance
   - juju-b157863b-3ed4-4ae5-8c3c-82ae7629bff7-machine-0
  Installing Juju agent on bootstrap instance
  Waiting for address
  Attempting to connect to 10.245.45.153:22
  Attempting to connect to fe80::250:56ff:fead:3fa2:22
  Warning: Permanently added '10.245.45.153' (ECDSA) to the list of known hosts.
  sudo: unable to resolve host ubuntuguest
  Logging to /var/log/cloud-init-output.log on remote host
  Running apt-get update
  Running apt-get upgrade
  Installing package: curl
  Installing package: cpu-checker
  Installing package: bridge-utils
  Installing package: rsyslog-gnutls
  Installing package: cloud-utils
  Installing package: cloud-image-utils
  Installing package: tmux
  Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP 
%{http_code}; time %{time_total}s; size %{size_download} bytes; speed 
%{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz 
<[https://streams.canonical.com/juju/tools/agent/1.25.5/juju-1.25.5-trusty-amd64.tgz]>
  Bootstrapping Juju machine agent
  Starting Juju machine agent (jujud-machine-0)
  Bootstrap agent installed
  vsphere -> vsphere
  Waiting for API to become available
  Waiting for API to become available
  Waiting for API to become available
  Waiting for API to become available
  Bootstrap complete
  -

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1588041/+subscriptions
