[Yahoo-eng-team] [Bug 1260932] [NEW] add all networks to a project instance

2013-12-13 Thread wangyubo
Public bug reported:

My OpenStack version is Havana.
I created multiple nova networks in my cloud.
Then I launched an instance in a project from the dashboard.
I found that all networks were attached to the instance, even networks
that do not belong to the project.
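
As an illustration of the expected behaviour, the dashboard (or the
nova-network call behind it) should only attach networks that belong to the
project or have no owner. A minimal Python sketch, assuming the networks are
available as simple dicts with a project_id field (illustrative names, not
Horizon's actual code):

    def networks_for_project(networks, project_id):
        """Keep only networks owned by this project or shared (no owner)."""
        usable = []
        for net in networks:
            owner = net.get('project_id')
            if owner is None or owner == project_id:
                usable.append(net)
            # Networks owned by other projects should be skipped instead
            # of being attached to the new instance.
        return usable

    # Example: only net-a and the unowned network should be attached.
    nets = [{'id': 'net-a', 'project_id': 'tenant-1'},
            {'id': 'net-b', 'project_id': 'tenant-2'},
            {'id': 'shared', 'project_id': None}]
    print([n['id'] for n in networks_for_project(nets, 'tenant-1')])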

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260932

Title:
  add all networks to a project instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  My OpenStack version is Havana.
  I created multiple nova networks in my cloud.
  Then I launched an instance in a project from the dashboard.
  I found that all networks were attached to the instance, even networks
  that do not belong to the project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260919] [NEW] Select All box not functioning

2013-12-13 Thread Maithem
Public bug reported:

In the admin panel, on the Images tab, checking the corner check box to
select all images is not functioning (i.e. it does not select any images).

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260919

Title:
  Select All box not functioning

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the admin panel, on the Images tab, checking the corner check box to
  select all images is not functioning (i.e. it does not select any images).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260915] [NEW] keystone:411 keystone did not start

2013-12-13 Thread Prakash Ramchandran
Public bug reported:

Keystone fails to initialize when run via devstack and prints the
following message. It appears to be some initialization issue; I am not
sure whether it is an SSL or PKI cert issue, so a review would be
appreciated. The MySQL back end seems fine. Here is the call trace of the
failure during devstack startup (stack.sh) ... It is also not clear
whether this should be filed against devstack or OpenStack; any feedback
is welcome.

+ screen -S stack -p key -X stuff 'cd /opt/stack/keystone && 
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf 
--log-config /etc/keystone/logging.conf -d --debug || echo "key failed to 
start" | tee "/opt/stack/status/stack/key.failure"'
+ echo 'Waiting for keystone to start...'
Waiting for keystone to start...
+ timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -s 
http://10.145.90.61:5000/v2.0/ >/dev/null; do sleep 1; done'
+ die 411 'keystone did not start'
+ local exitcode=0
+ set +o xtrace
[Call Trace]
./stack.sh:874:start_keystone
/home/stack/devstack/lib/keystone:411:die
[ERROR] /home/stack/devstack/lib/keystone:411 keystone did not start
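
A quick way to tell whether keystone ever started listening (as opposed to
failing later on SSL/PKI setup) is a stand-alone port check along the lines
of the curl loop above. A small Python sketch, using the host and port from
the trace and an arbitrary 60-second timeout:

    import socket
    import time

    def wait_for_port(host, port, timeout=60):
        """Poll until a TCP connection to host:port succeeds or we give up."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                sock = socket.create_connection((host, port), timeout=1)
                sock.close()
                return True          # something is listening on the port
            except socket.error:
                time.sleep(1)        # keystone not up yet, retry
        return False

    if not wait_for_port('10.145.90.61', 5000):
        print('keystone never started listening; check the key screen log '
              'and try running keystone-all by hand with --debug')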

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260915

Title:
  keystone:411 keystone did not start

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The keystone fails to initialize using devstack, and puts following
  message, appears some initialization issue not sure if it's ssl or pki
  cert issue, if some one can review and answer. mysql back end seems
  fine. Here is call trace fail in starrup.sh of devstack ... also if
  needs to be files with devstack or openstack not clear, any feedback
  welcome.

  + screen -S stack -p key -X stuff 'cd /opt/stack/keystone && 
/opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf 
--log-config /etc/keystone/logging.conf -d --debug || echo "key failed to 
start" | tee "/opt/stack/status/stack/key.failure"'
  + echo 'Waiting for keystone to start...'
  Waiting for keystone to start...
  + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -s 
http://10.145.90.61:5000/v2.0/ >/dev/null; do sleep 1; done'
  + die 411 'keystone did not start'
  + local exitcode=0
  + set +o xtrace
  [Call Trace]
  ./stack.sh:874:start_keystone
  /home/stack/devstack/lib/keystone:411:die
  [ERROR] /home/stack/devstack/lib/keystone:411 keystone did not start

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1260915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259907] Re: check-grenade-dsvm marked as FAILED - n-api/g-api Logs have errors

2013-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/62107
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=1159e52a2c9c51976bc3be5ad504c88fb94c2fe1
Submitter: Jenkins
Branch:master

commit 1159e52a2c9c51976bc3be5ad504c88fb94c2fe1
Author: Sean Dague 
Date:   Fri Dec 13 18:46:21 2013 -0500

don't fail on dirty logs with grenade

because grenade is upgrading from old to new we might actually
expect the logs to be dirtier than in upstream tempest. The grenade
logs weren't scrubbed in the same ways during the development here
as the tempest regular runs.

Change-Id: Id1bcc2cc85e73a414d382756a65ea1d80dc10b00
Closes-Bug: #1259907
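
The idea behind the change is simply to keep reporting error lines but not
fail the job when the run is a grenade upgrade. A simplified stand-alone
sketch of that behaviour (not the actual tempest script):

    import re
    import sys

    ERROR_LINE = re.compile(r'\bERROR\b')

    def check_log(log_lines, fail_on_errors=True):
        """Report error lines; optionally fail the job when any are found."""
        errors = [line for line in log_lines if ERROR_LINE.search(line)]
        for line in errors:
            print(line.rstrip())          # always report the dirty lines
        if errors and fail_on_errors:
            sys.exit(1)                   # regular tempest runs stay strict

    # Under grenade the old->new upgrade is expected to leave dirtier logs,
    # so the check only reports instead of failing.
    running_grenade = True
    sample = ['2013-12-11 00:04:32.475 ERROR nova.image.s3 upload failed']
    check_log(sample, fail_on_errors=not running_grenade)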


** Changed in: tempest
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259907

Title:
  check-grenade-dsvm marked as FAILED - n-api/g-api Logs have errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  Incomplete
Status in OpenStack Compute (Nova):
  Triaged
Status in OpenStack Core Infrastructure:
  New
Status in Tempest:
  Fix Released

Bug description:
  Example:
  
http://logs.openstack.org/81/61281/1/check/check-grenade-dsvm/f42b658/console.html

  2013-12-11 00:14:33.892 | Log File: g-api
  2013-12-11 00:14:33.893 | 2013-12-11 00:04:32.459 9398 ERROR 
glance.api.v1.upload_utils [a954dd77-c926-4ef5-916c-0589e852bb1b 
4c3bf2863784478e8fc3dec275a7bdef 4af50376a7f44390b0d5790b0f3aa1f1] Received 
HTTP error while uploading image 88236e20-ced9-4868-b9bb-570d97edc446
  2013-12-11 00:14:33.893 | 
  2013-12-11 00:14:33.893 | 2013-12-11 00:04:32.472 9398 ERROR 
glance.api.v1.upload_utils [a954dd77-c926-4ef5-916c-0589e852bb1b 
4c3bf2863784478e8fc3dec275a7bdef 4af50376a7f44390b0d5790b0f3aa1f1] Unable to 
kill image 88236e20-ced9-4868-b9bb-570d97edc446: 
  2013-12-11 00:14:33.893 | 
  2013-12-11 00:14:34.044 | Log File: n-api
  2013-12-11 00:14:34.044 | 2013-12-11 00:04:32.475 ERROR nova.image.s3 
[req-64000d25-93ec-43d8-817b-f62ec9a17a16 demo demo] Failed to upload 
testbucket/bundle.img.manifest.xml to /tmp/tmpsS2EHo
  2013-12-11 00:14:34.045 | 
  2013-12-11 00:14:35.542 | Logs have errors
  2013-12-11 00:14:35.542 | FAILED

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1259907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260905] [NEW] Return policy error, not generic error if nova net-create/delete is forbidden by policy

2013-12-13 Thread Tushar
Public bug reported:

When nova net-create and net-delete are prohibited by policy, we should
raise a policy violation error (HTTP 403) to the user instead of the
incorrect service unavailable (HTTP 503) error.

Steps to reproduce:
1. Add the following policies to policy.json:
"network:create": "rule:admin_api",
"network:delete": "rule:admin_api"

2. As a non-admin user, run nova net-create:
$ nova net-create xyz 192.168.254.1/30
ERROR: Create networks failed (HTTP 503)

Here's the output of other forbidden commands:
$ nova baremetal-node-list
ERROR: Policy doesn't allow compute_extension:baremetal_nodes to be performed. 
(HTTP 403)
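
The fix amounts to catching the policy failure in the API layer and mapping
it to a 403 instead of letting it surface as a 503. A minimal stand-alone
sketch of that mapping (illustrative names, not the actual nova code):

    class PolicyNotAuthorized(Exception):
        """Stand-in for nova's policy-violation exception."""

    def enforce(rule, is_admin):
        # Illustrative check: "rule:admin_api" only passes for admins.
        if rule == 'rule:admin_api' and not is_admin:
            raise PolicyNotAuthorized('network:create is not allowed')

    def create_network(is_admin):
        try:
            enforce('rule:admin_api', is_admin)
            return 200, 'network created'
        except PolicyNotAuthorized as exc:
            # Map the policy failure to 403 Forbidden instead of the
            # generic 503 the user currently sees.
            return 403, str(exc)

    print(create_network(is_admin=False))   # (403, 'network:create is not allowed')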

** Affects: nova
 Importance: Undecided
 Assignee: Tushar (tkay)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Tushar (tkay)

** Description changed:

  When nova net-create and net-delete are prohibited by policy, we should
  raise policy violation error (403) to the user instead of service
  unavailable (503) error which is incorrect.
  
  Steps to reproduce:
- 1. Add the following policies to policy.json: 
+ 1. Add the following policies to policy.json:
  "network:create": "rule:admin_api",
  "network:delete": "rule:admin_api"
  
  2. As a non-admin user, run nova net-create:
- nova net-create xyz 192.168.254.1/30
- ERROR: Create networks failed (HTTP 503) 
+ $ nova net-create xyz 192.168.254.1/30
+ ERROR: Create networks failed (HTTP 503)
  
  Here's the output of other forbidden commands:
  $ nova baremetal-node-list
  ERROR: Policy doesn't allow compute_extension:baremetal_nodes to be 
performed. (HTTP 403)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260905

Title:
  Return policy error, not generic error if nova net-create/delete is
  forbidden by policy

Status in OpenStack Compute (Nova):
  New

Bug description:
  When nova net-create and net-delete are prohibited by policy, we
  should raise a policy violation error (HTTP 403) to the user instead
  of the incorrect service unavailable (HTTP 503) error.

  Steps to reproduce:
  1. Add the following policies to policy.json:
  "network:create": "rule:admin_api",
  "network:delete": "rule:admin_api"

  2. As a non-admin user, run nova net-create:
  $ nova net-create xyz 192.168.254.1/30
  ERROR: Create networks failed (HTTP 503)

  Here's the output of other forbidden commands:
  $ nova baremetal-node-list
  ERROR: Policy doesn't allow compute_extension:baremetal_nodes to be 
performed. (HTTP 403)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257756] Re: multiple services running causes failure

2013-12-13 Thread Maithem
This is not a Nova bug; stacking and unstacking are done using scripts from
devstack. You can solve this problem by killing the leftover n-api and n-cpu
processes. For example, run ps aux | grep "nova", then kill the nova-api and
nova-compute processes. I believe this problem happens when you have multiple
workers for nova-api.
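
A small Python sketch of that cleanup, using pkill -f to match the full
command line (the service names are the usual devstack ones; adjust as
needed):

    import subprocess

    def kill_leftovers(patterns=('nova-api', 'nova-compute')):
        """Terminate stray service processes left behind by a previous stack."""
        for pattern in patterns:
            # pkill returns 1 when nothing matched, which is fine here.
            subprocess.call(['pkill', '-f', pattern])

    kill_leftovers()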

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257756

Title:
  multiple services running causes failure

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Scenario: stack.sh (from devstack) is run for the first time, then for
  some reason the user unstacks OpenStack using unstack.sh and clean.sh,
  and then installs it again.

  Issue: during the second run of stack.sh, a few nova services,
  especially n-cpu and n-api, fail to run. This happens because those
  services were not killed when the previous stack was torn down.

  During the second run, the log file reports multiple services running
  on the same address (same URI); these duplicate services cause the
  current install to fail.

  The above message can also be seen by running the services manually,
  for example by running nova-api in a terminal.

  Traceback: it will show an "address already in use" ERROR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260902] [NEW] improve load distribution on rabbitmq servers

2013-12-13 Thread Ravi Chunduru
Public bug reported:

Currently, the neutron service reconnects to a rabbit server when it detects
a connection failure. It always blindly picks the first configured rabbit
server and only moves on to the next one if the connection attempt fails, so
the second rabbit server is never used while the first one succeeds.

Instead, we should distribute the connection load across the available
rabbit servers.
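
One simple way to spread the load is to randomize the order in which the
configured brokers are tried. A sketch of the idea with a plain host list
(illustration only, not the actual oslo/kombu reconnect code):

    import random

    def pick_connection_order(rabbit_hosts):
        """Return the configured brokers in a randomized order so that
        reconnecting clients do not all pile onto the first host."""
        hosts = list(rabbit_hosts)
        random.shuffle(hosts)
        return hosts

    configured = ['rabbit1:5672', 'rabbit2:5672', 'rabbit3:5672']
    for host in pick_connection_order(configured):
        print('would try', host)   # stop at the first successful connection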

** Affects: neutron
 Importance: Undecided
 Assignee: Ravi Chunduru (ravivsn)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ravi Chunduru (ravivsn)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260902

Title:
  improve load distribution on rabbitmq servers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, the neutron service reconnects to a rabbit server when it
  detects a connection failure. It always blindly picks the first
  configured rabbit server and only moves on to the next one if the
  connection attempt fails, so the second rabbit server is never used
  while the first one succeeds.

  Instead, we should distribute the connection load across the available
  rabbit servers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250680] Re: vmware: Rebooting a powered off instance puts instance in error state

2013-12-13 Thread Shawn Hartsock
This is not a valid use case. The compute API code here:
https://github.com/openstack/nova/blob/e9627002bd3df5c24fac5f0302ab683b31b4ddd6/nova/compute/api.py#L1955

makes rebooting a powered-off VM an illegal action.
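
Conceptually the API-level guard works like the sketch below: the instance
state is checked before the request ever reaches the driver, so this cannot
be "fixed" at the driver level. The state names and messages are
illustrative only; the real check lives in nova.compute.api behind the link
above.

    class InstanceInvalidState(Exception):
        pass

    def reboot(instance):
        # Only a running instance may be rebooted; a powered-off VM is
        # rejected before the virt driver is ever called.
        if instance['vm_state'] != 'active':
            raise InstanceInvalidState(
                'cannot reboot instance in vm_state %s' % instance['vm_state'])
        return 'reboot requested'

    try:
        reboot({'vm_state': 'stopped'})
    except InstanceInvalidState as exc:
        print(exc)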

** Description changed:

  Steps to reproduce:
  1. Launch an instance
  2. Shutdown instance
  3. Reboot instance
  
  Traceback: http://paste.openstack.org/show/52226/
+ 
+ Change in compute API here:
+   
https://github.com/openstack/nova/blob/e9627002bd3df5c24fac5f0302ab683b31b4ddd6/nova/compute/api.py#L1955
+ 
+ makes the action of rebooting an instance that is shutdown an invalid
+ action. So this whole bug isn't something that can be "fixed" at the
+ driver level.

** Changed in: nova
   Importance: High => Wishlist

** Changed in: openstack-vmwareapi-team
   Importance: High => Wishlist

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: openstack-vmwareapi-team
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250680

Title:
  vmware: Rebooting a powered off instance puts instance in error state

Status in OpenStack Compute (Nova):
  Won't Fix
Status in The OpenStack VMwareAPI subTeam:
  Won't Fix

Bug description:
  Steps to reproduce:
  1. Launch an instance
  2. Shutdown instance
  3. Reboot instance

  Traceback: http://paste.openstack.org/show/52226/

  Change in compute API here:

https://github.com/openstack/nova/blob/e9627002bd3df5c24fac5f0302ab683b31b4ddd6/nova/compute/api.py#L1955

  makes the action of rebooting an instance that is shutdown an invalid
  action. So this whole bug isn't something that can be "fixed" at the
  driver level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259907] Re: check-grenade-dsvm marked as FAILED - n-api/g-api Logs have errors

2013-12-13 Thread Sean Dague
Part of the problem here is that the log fail detector in tempest is
actually breaking the grenade runs when it probably shouldn't. Grenade
was not a primary use case for that script.

** Also affects: grenade
   Importance: Undecided
   Status: New

** No longer affects: grenade

** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Confirmed

** Changed in: tempest
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259907

Title:
  check-grenade-dsvm marked as FAILED - n-api/g-api Logs have errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  Incomplete
Status in OpenStack Compute (Nova):
  Triaged
Status in OpenStack Core Infrastructure:
  New
Status in Tempest:
  Confirmed

Bug description:
  Example:
  
http://logs.openstack.org/81/61281/1/check/check-grenade-dsvm/f42b658/console.html

  2013-12-11 00:14:33.892 | Log File: g-api
  2013-12-11 00:14:33.893 | 2013-12-11 00:04:32.459 9398 ERROR 
glance.api.v1.upload_utils [a954dd77-c926-4ef5-916c-0589e852bb1b 
4c3bf2863784478e8fc3dec275a7bdef 4af50376a7f44390b0d5790b0f3aa1f1] Received 
HTTP error while uploading image 88236e20-ced9-4868-b9bb-570d97edc446
  2013-12-11 00:14:33.893 | 
  2013-12-11 00:14:33.893 | 2013-12-11 00:04:32.472 9398 ERROR 
glance.api.v1.upload_utils [a954dd77-c926-4ef5-916c-0589e852bb1b 
4c3bf2863784478e8fc3dec275a7bdef 4af50376a7f44390b0d5790b0f3aa1f1] Unable to 
kill image 88236e20-ced9-4868-b9bb-570d97edc446: 
  2013-12-11 00:14:33.893 | 
  2013-12-11 00:14:34.044 | Log File: n-api
  2013-12-11 00:14:34.044 | 2013-12-11 00:04:32.475 ERROR nova.image.s3 
[req-64000d25-93ec-43d8-817b-f62ec9a17a16 demo demo] Failed to upload 
testbucket/bundle.img.manifest.xml to /tmp/tmpsS2EHo
  2013-12-11 00:14:34.045 | 
  2013-12-11 00:14:35.542 | Logs have errors
  2013-12-11 00:14:35.542 | FAILED

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1259907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260892] [NEW] DistributionNotFound: No distributions at all found for oslo.messaging>=1.2.0a11

2013-12-13 Thread Joshua Harlow
Public bug reported:

When building using:

$ ./smithy -a prepare -p conf/personas/in-a-box/basic-all.yaml

It appears that pip-download can not find (oslo.messaging>=1.2.0a11) -
which is not on pypi (yet) but is needed by various projects.

** Affects: anvil
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1260892

Title:
  DistributionNotFound: No distributions at all found for
  oslo.messaging>=1.2.0a11

Status in ANVIL for forging OpenStack.:
  New

Bug description:
  When building using:

  $ ./smithy -a prepare -p conf/personas/in-a-box/basic-all.yaml

  It appears that pip-download can not find (oslo.messaging>=1.2.0a11) -
  which is not on pypi (yet) but is needed by various projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1260892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254991] Re: python-glance-tests not installable

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1254991

Title:
  python-glance-tests not installable

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When building glance via anvil with packaging tests enabled (the
  default), python-glance-tests package is not installable. When one
  tries to install it with yum, the following error occurs:

  # yum install python-glance-tests
  [... skipped ...]
  --> Finished Dependency Resolution
  Error: Package: 2:python-glance-tests-2014.1.dev141.g8a9cf72-1.el6.noarch 
(anvil)
 Requires: pysendfile = 2
 Available: pysendfile-2.0.0-3.el6.x86_64 (epel)
 pysendfile = 2.0.0-3.el6
   You could try using --skip-broken to work around the problem
   You could try running: rpm -Va --nofiles --nodigest

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1254991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1087131] Re: g-api unable to run

2013-12-13 Thread Dean Troyer
** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1087131

Title:
  g-api unable to run

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  I am using the latest devstack and glance code. This is the error
  message when I run g-api using devstack.

  
==g-api=
  2012-12-06 13:28:08 11089 WARNING glance.api.v2.images [-] Could not find 
schema properties file schema-image.json. Continuing without custom properties
  2012-12-06 13:28:08 11089 INFO glance.db.sqlalchemy.api [-] not auto-creating 
glance registry DB
  2012-12-06 13:28:08 11089 DEBUG glance.notifier [-] Converted strategy alias 
rabbit to glance.notifier.notify_kombu.RabbitStrategy __init__ /opt/stack/g
  lance/glance/notifier/__init__.py:55
  2012-12-06 13:28:08 11089 INFO glance.notifier.notify_kombu [-] Connecting to 
AMQP server on localhost:5672
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
97, in wait
  readers.get(fileno, noop).cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
192, in main
  result = function(*args, **kwargs)
File 
"/usr/local/lib/python2.7/dist-packages/amqplib/client_0_8/channel.py", line 
843, in exchange_declare
  (40, 11),# Channel.exchange_declare_ok   
File 
"/usr/local/lib/python2.7/dist-packages/amqplib/client_0_8/abstract_channel.py",
 line 105, in wait
  return amqp_method(self, args)
File 
"/usr/local/lib/python2.7/dist-packages/amqplib/client_0_8/channel.py", line 
273, in _close
  (class_id, method_id))
  AMQPChannelException: (406, u"PRECONDITION_FAILED - cannot redeclare exchange 
'glance' in vhost '/' with different type, durable, internal or autodelete
   value", (40, 10), 'Channel.exchange_declare')   
  Removing descriptor: 6
  2012-12-06 13:28:09 11089 CRITICAL glance [-] (406, u"PRECONDITION_FAILED - 
cannot redeclare exchange 'glance' in vhost '/' with different type, durable
  , internal or autodelete value", (40, 10), 'Channel.exchange_declare')
  2012-12-06 13:28:09 11089 TRACE glance Traceback (most recent call last):
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/bin/glance-api", line 60, in <module>
  2012-12-06 13:28:09 11089 TRACE glance 
server.start(config.load_paste_app, default_port=9292)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/glance/common/wsgi.py", line 208, in start
  2012-12-06 13:28:09 11089 TRACE glance self.run_child()
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/glance/common/wsgi.py", line 259, in run_child
  2012-12-06 13:28:09 11089 TRACE glance self.run_server()
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/glance/common/wsgi.py", line 279, in run_server
  2012-12-06 13:28:09 11089 TRACE glance self.app_func(),
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/glance/common/config.py", line 187, in load_paste_app
  2012-12-06 13:28:09 11089 TRACE glance app = deploy.loadapp("config:%s" % 
conf_file, name=app_name)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2012-12-06 13:28:09 11089 TRACE glance return loadobj(APP, uri, 
name=name, **kw)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2012-12-06 13:28:09 11089 TRACE glance return context.create()
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2012-12-06 13:28:09 11089 TRACE glance return 
self.object_type.invoke(self)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 203, in invoke
  2012-12-06 13:28:09 11089 TRACE glance app = context.app_context.create()
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2012-12-06 13:28:09 11089 TRACE glance return 
self.object_type.invoke(self)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
  2012-12-06 13:28:09 11089 TRACE glance return fix_call(context.object, 
context.global_conf, **context.local_conf)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
  2012-12-06 13:28:09 11089 TRACE glance val = callable(*args, **kw)
  2012-12-06 13:28:09 11089 TRACE glance   File 
"/opt/stack/glance/gl

[Yahoo-eng-team] [Bug 985786] Re: qemu error: chardev opening backend file failed

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/985786

Title:
  qemu error: chardev opening backend file failed

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Spawning of instances fails with the QEMU error "internal error Process
  exited while reading console log output: chardev: opening backend
  "file" failed". It looks like a permission issue: DevstackPy uses the
  user's home directory, and during instantiation the qemu/kvm user may
  not be able to access the files there. Please check.

  --
  2012-04-13 19:39:58 DEBUG nova.utils 
[req-85fd790e-fdad-4496-9295-cc2037857704 31713aa0b6e3496cb43a48c4f654c03d 
89f940ceb1c847dab0134a30b5ff5629] Running cmd (subprocess): qemu-img create -f 
qcow2 -o 
cluster_size=2M,backing_file=/home/sumitsen/openstack/nova/instances/_base/b32d67e926138471cb0af63eb128961e27ec17f9
 /home/sumitsen/openstack/nova/instances/instance-000d/disk from (pid=857) 
execute /home/sumitsen/openstack/nova/app/nova/utils.py:220
  libvir: QEMU error : internal error Process exited while reading console log 
output: chardev: opening backend "file" failed
  2012-04-13 19:39:59 ERROR nova.compute.manager 
[req-85fd790e-fdad-4496-9295-cc2037857704 31713aa0b6e3496cb43a48c4f654c03d 
89f940ceb1c847dab0134a30b5ff5629] [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] Instance failed to spawn
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] Traceback (most recent call last):
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] File 
"/home/sumitsen/openstack/nova/app/nova/compute/manager.py", line 594, in _spawn
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] self._legacy_nw_info(network_info), 
block_device_info)
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] File 
"/home/sumitsen/openstack/nova/app/nova/exception.py", line 113, in wrapped
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] return f(*args, **kw)
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] File 
"/home/sumitsen/openstack/nova/app/nova/virt/libvirt/connection.py", line 898, 
in spawn
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] self._create_new_domain(xml)
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] File 
"/home/sumitsen/openstack/nova/app/nova/virt/libvirt/connection.py", line 1713, 
in _create_new_domain
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] domain.createWithFlags(launch_flags)
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] File 
"/usr/lib64/python2.6/site-packages/libvirt.py", line 541, in createWithFlags
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2012-04-13 19:39:59 TRACE nova.compute.manager [instance: 
8f547416-17d0-4482-8b09-b559d41355bf] libvirtError: internal error Process 
exited while reading console log output: chardev: opening backend "file" failed

  --

  [sumitsen@sorrygate-dr DevstackPy]$ ll 
~/openstack/nova/instances/instance-000d/
  total 4108
  -rw-rw 1 root root 0 Apr 13 19:39 console.log
  -rw-r--r-- 1 root root 8388608 Apr 13 19:39 disk
  -rw-r--r-- 1 root root 1192 Apr 13 19:39 libvirt.xml

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/985786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161165] Re: Move keystone auth to nova.conf.sample

2013-12-13 Thread Dean Troyer
Resolved by https://review.openstack.org/52258 via bug
https://bugs.launchpad.net/nova/+bug/1240753

** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161165

Title:
  Move keystone auth to nova.conf.sample

Status in devstack - openstack dev environments:
  Invalid
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  For a while, nova has supported specifying the keystone authentication
  info in a [keystone_authtoken] section in nova.conf, instead of a
  [filter:authtoken] section of api-paste.ini. However, in the nova
  repository, the example keystone authentication info is still in api-
  paste.ini instead of nova.conf.sample.

  It is easier on operators if only the nova.conf file needs to be edited, and 
seeing the content in etc/nova/api-paste.ini can be
  confusing.

  I recommend moving the content from api-paste.ini to nova.conf.sample.
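
  For illustration, the sample would carry a section along these lines
  (the option names are the usual auth_token middleware ones of that
  era; the values are placeholders, not recommended settings):

    [keystone_authtoken]
    auth_host = 127.0.0.1
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = %SERVICE_PASSWORD%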

  Note that this requires modifying devstack since removing these lines
  from api-paste.ini will cause devstack to fail, and it's used to gate
  nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1161165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1186448] Re: Install not completed

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1186448

Title:
  Install not completed

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Trying to run the install action after preparing got me the following.
  It seems like py2rpm shouldn't be getting called to build more rpms,
  since the prepare stage was supposed to do this. Is this supposed to
  happen?

  sudo ./smithy -a install

  Password: 
   _  _______   _  ___   _  _  ____   _
  (  _  )(  _`\ (  _`\ ( ) ( )(  _`\(_   _)(  _  )(  _`\ ( ) ( )
  | ( ) || |_) )| (_(_)| `\| || (_(_) | |  | (_) || ( (_)| |/'/'
  | | | || ,__/'|  _)_ | , ` |`\__ \  | |  |  _  || |  _ | , <
  | (_) || || (_( )| |`\ |( )_) | | |  | | | || (_( )| |\`\
  (_)(_)(/'(_) (_)`\) (_)  (_) (_)(/'(_) (_)
  Anvil: | 2013.1-dev | 
   Let us get on with the show! 
  Action Runner-
  INFO: @anvil.distro : Matched distro rhel for platform 
Linux-2.6.32-220.23.1.el6.YAHOO.20120713.x86_64-x86_64-with-redhat-6.2-Santiago
  INFO: @anvil : Starting action install on 2013-05-31T23:20:26.548723 for 
distro: rhel
  INFO: @anvil : Using persona: conf/personas/in-a-box/basic.yaml
  INFO: @anvil : In root directory: /home/harlowja/openstack
  INFO: @anvil.actions.base : Processing components for action install.
  INFO: @anvil.actions.base : Activating in the following order:
  INFO: @anvil.actions.base : |-- general
  INFO: @anvil.actions.base : |-- db
  INFO: @anvil.actions.base : |-- rabbit-mq
  INFO: @anvil.actions.base : |-- oslo-config
  INFO: @anvil.actions.base : |-- keystone
  INFO: @anvil.actions.base : |-- keystone-client
  INFO: @anvil.actions.base : |-- glance
  INFO: @anvil.actions.base : |-- glance-client
  INFO: @anvil.actions.base : |-- cinder-client
  INFO: @anvil.actions.base : |-- quantum-client
  INFO: @anvil.actions.base : |-- nova
  INFO: @anvil.actions.base : |-- nova-client
  INFO: @anvil.actions.base : Booting up your components.
  INFO: @anvil.actions.base : Reading passwords using a unencrypted keyring @ 
/etc/anvil/passwords.cfg
  INFO: @anvil.actions.base : Verifying that the components are ready to 
rock-n-roll.
  INFO: @anvil.actions.base : Warming up component configurations.
  INFO: @anvil.actions.install : Configuring general.
  INFO: @anvil.actions.install : Configuring db.
  INFO: @anvil.actions.install : Configuring rabbit-mq.
  INFO: @anvil.actions.install : Configuring oslo-config.
  INFO: @anvil.actions.install : Configuring keystone.
  INFO: @anvil.components.base_install : Configuring 3 files:
  INFO: @anvil.components.base_install : |-- keystone.conf
  INFO: @anvil.components.base_install : |-- logging.conf
  INFO: @anvil.components.base_install : |-- policy.json
  INFO: @anvil.components.base_install : Creating 3 sym-links:
  INFO: @anvil.components.base_install : |-- /etc/keystone/policy.json => 
/home/harlowja/openstack/keystone/config/policy.json
  INFO: @anvil.components.base_install : |-- /etc/keystone/logging.conf => 
/home/harlowja/openstack/keystone/config/logging.conf
  INFO: @anvil.components.base_install : |-- /etc/keystone/keystone.conf => 
/home/harlowja/openstack/keystone/config/keystone.conf
  INFO: @anvil.actions.install : Configuring keystone-client.
  INFO: @anvil.actions.install : Configuring glance.
  INFO: @anvil.components.base_install : Configuring 6 files:
  INFO: @anvil.components.base_install : |-- glance-api.conf
  INFO: @anvil.components.base_install : |-- glance-registry.conf
  INFO: @anvil.components.base_install : |-- glance-api-paste.ini
  INFO: @anvil.components.base_install : |-- glance-registry-paste.ini
  INFO: @anvil.components.base_install : |-- policy.json
  INFO: @anvil.components.base_install : |-- logging.conf
  INFO: @anvil.components.base_install : Creating 6 sym-links:
  INFO: @anvil.components.base_install : |-- /etc/glance/policy.json => 
/home/harlowja/openstack/glance/config/policy.json
  INFO: @anvil.components.base_install : |-- /etc/glance/logging.conf => 
/home/harlowja/openstack/glance/config/logging.conf
  INFO: @anvil.components.base_install : |-- /etc/glance/glance-registry.conf 
=> /home/harlowja/openstack/glance/config/glance-registry.conf
  INFO: @anvil.components.base_install : |-- 
/etc/glance/glance-registry-paste.ini => 
/home/harlowja/openstack/glance/config/glance-registry-paste.ini
  INFO: @anvil.components.base_install : |-- /etc/glance/glance-api.conf => 
/home/harlowja/openstack/glance/config/glance-api.conf
  INFO: @anvil.components.base_install : |-- /etc/glance/glance-api-paste.ini 
=> /home/harlowja/openstack/glance/config/glance-api-paste.ini
  INFO: @anvil.actions.install : Configuring glance-client.
  INFO: @anvil.actions.install

[Yahoo-eng-team] [Bug 1179747] Re: install action fails with pip-build directory error with pip > 1.2.1

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1179747

Title:
  install action fails with pip-build directory error with pip > 1.2.1

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  I get this error output with pip 1.3.1 installed; the problem goes away
  with pip 1.2.1.

  Related change: https://github.com/pypa/pip/pull/780/files
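
  Until the fix landed, a workaround was to remove the stale, wrongly-owned
  build directory before invoking pip. A small Python sketch (the path
  pattern comes from the error message below; Unix only):

    import getpass
    import os
    import shutil

    def clean_pip_build_dir(tmpdir='/tmp'):
        """Remove a leftover /tmp/pip-build-<user> directory if possible."""
        build_dir = os.path.join(tmpdir, 'pip-build-%s' % getpass.getuser())
        if not os.path.isdir(build_dir):
            return
        if os.stat(build_dir).st_uid != os.getuid():
            # Owned by someone else (e.g. created under sudo): pip 1.3.1
            # refuses to use it, and we cannot delete it ourselves either.
            raise RuntimeError('remove %s by hand; it is owned by another user'
                               % build_dir)
        shutil.rmtree(build_dir)

    clean_pip_build_dir()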

  ...
  ...
  INFO: @anvil.actions.install : Installing general.
  INFO: @anvil.components : Setting up 25 distribution packages:
  INFO: @anvil.components : |-- iputils
  INFO: @anvil.components : |-- sudo
  INFO: @anvil.components : |-- mlocate
  INFO: @anvil.components : |-- curl
  INFO: @anvil.components : |-- git
  INFO: @anvil.components : |-- coreutils
  INFO: @anvil.components : |-- python-devel
  INFO: @anvil.components : |-- python
  INFO: @anvil.components : |-- tcpdump
  INFO: @anvil.components : |-- python-distutils-extra
  INFO: @anvil.components : |-- python-setuptools
  INFO: @anvil.components : |-- unzip
  INFO: @anvil.components : |-- openssh-server
  INFO: @anvil.components : |-- gawk
  INFO: @anvil.components : |-- python-paste-deploy1.5
  INFO: @anvil.components : |-- wget
  INFO: @anvil.components : |-- libxslt-devel
  INFO: @anvil.components : |-- python-sphinx10
  INFO: @anvil.components : |-- dnsmasq-utils
  INFO: @anvil.components : |-- python-webob1.0
  INFO: @anvil.components : |-- python-routes1.12
  INFO: @anvil.components : |-- libxml2-devel
  INFO: @anvil.components : |-- psmisc
  INFO: @anvil.components : |-- python-nose1.1
  INFO: @anvil.components : |-- lsof
  Installing: 100% 
|#|
 Time: 00:00:00
  INFO: @anvil.components : Setting up 20 python packages:
  INFO: @anvil.components : |-- nose-exclude
  INFO: @anvil.components : |-- python-subunit
  INFO: @anvil.components : |-- openstack.nose_plugin
  INFO: @anvil.components : |-- lxml
  INFO: @anvil.components : |-- testtools
  INFO: @anvil.components : |-- testrepository
  INFO: @anvil.components : |-- pycrypto
  INFO: @anvil.components : |-- prettytable
  INFO: @anvil.components : |-- pylint
  INFO: @anvil.components : |-- distribute
  INFO: @anvil.components : |-- keyring
  INFO: @anvil.components : |-- pep8
  INFO: @anvil.components : |-- nosehtmloutput
  INFO: @anvil.components : |-- sqlalchemy
  INFO: @anvil.components : |-- coverage
  INFO: @anvil.components : |-- sqlalchemy-migrate
  INFO: @anvil.components : |-- requests
  INFO: @anvil.components : |-- fixtures
  INFO: @anvil.components : |-- pysqlite
  INFO: @anvil.components : |-- cliff
  Installing: 100% 
|#|
 Time: 00:00:00
   ___
  / She turned me \
  \ into a newt!  /
   ---
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  ProcessExecutionError: Unexpected error while running command.
  Command: pip-python install -q cliff
  Exit code: 1
  Stdout: 'The temporary folder for building (/tmp/pip-build-melwitt) is not 
owned by your user!\npip will not work until the temporary folder is either 
deleted or owned by your user account.\n'
  Stderr: 'Traceback (most recent call last):\n  File "/usr/bin/pip-python", 
line 9, in \nload_entry_point(\'pip==1.3.1\', \'console_scripts\', 
\'pip\')()\n  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 
299, in load_entry_point\nreturn 
get_distribution(dist).load_entry_point(group, name)\n  File 
"/usr/lib/python2.6/site-packages/pkg_resources.py", line 2229, in 
load_entry_point\nreturn ep.load()\n  File 
"/usr/lib/python2.6/site-packages/pkg_resources.py", line 1948, in load\n
entry = __import__(self.module_name, globals(),globals(), [\'__name__\'])\n  
File "/usr/lib/python2.6/site-packages/pip/__init__.py", line 9, in \n  
  from pip.util import get_installed_distributions, get_prog\n  File 
"/usr/lib/python2.6/site-packages/pip/util.py", line 15, in \nfrom 
pip.locations import site_packages, running_under_virtualenv, 
virtualenv_no_global\n  File 
"/usr/lib/python2.6/site-packages/pip/locations.py", line 64, in \nb
 uild_prefix = _get_build_prefix()\n  File 
"/usr/lib/python2.6/site-packages/pip/locations.py", line 54, in 
_get_build_prefix\nraise 
pip.exceptions.InstallationError(msg)\npip.exceptions.InstallationError:

[Yahoo-eng-team] [Bug 1212165] Re: Openvswitch missing and still using even with linuxbridge when using Neutron persona

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1212165

Title:
  Openvswitch missing and still using even with linuxbridge when using
  Neutron persona

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  As discussed here : http://lists.openstack.org/pipermail/openstack-
  dev/2013-August/013468.html

  there are two distinct cases:
   - one could want to use linuxbridge as the core plugin, and then
openstack-neutron-openvswitch should not be built and installed
   - one could want to use openvswitch, and then anvil should use the RDO repo

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1212165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260891] [NEW] network_model tests mix IP() and FixedIP

2013-12-13 Thread Aaron Rosen
Public bug reported:

network_model tests mix IP() and FixedIP

** Affects: nova
 Importance: Medium
 Assignee: Aaron Rosen (arosen)
 Status: In Progress


** Tags: network

** Changed in: nova
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: nova
   Importance: Undecided => Medium

** Tags added: network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260891

Title:
  network_model tests mix IP() and FixedIP

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  network_model tests mix IP() and FixedIP

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1157871] Re: Add advanced dependency checking

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1157871

Title:
  Add advanced dependency checking

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Currently there exists a mapping of pip-requires dependencies to
  package ones, but there is not any strong inter-component dependency
  checking.

  That is, we should be able to error out when, for example, keystone
  requires one version, nova requires another, and the two are
  incompatible...

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1157871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1187106] Re: Lost changelog when building openstack specs

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1187106

Title:
  Lost changelog when building openstack specs

Status in ANVIL for forging OpenStack.:
  Invalid

Bug description:
  It seems like the changelog is lost when building the openstack spec
  files. Previously it was being created from the git change log, which
  was useful for knowing exactly what changes went into the spec file.
  It seems to have gone missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1187106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1198041] Re: Quantum -> neutron

2013-12-13 Thread Joshua Harlow
** Changed in: anvil/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1198041

Title:
  Quantum -> neutron

Status in ANVIL for forging OpenStack.:
  Fix Released
Status in anvil havana series:
  Fix Released

Bug description:
  Quantum is now called neutron. This affects our internal names in
  anvil as well.

  We need to update to this new naming convention (and ensure that the
  rpm for neutron/neutronclient obsoletes the quantum one). Updating
  docs and examples and file and yaml names and such.

  It appears that the move to neutron hasn't been fully completed but we
  should start adjusting as soon as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1198041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193085] Re: Allow for disabling init.d inclusion/generation

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1193085

Title:
  Allow for disabling init.d inclusion/generation

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Add a boolean to configuration that allows for turning off the
  creation and inclusion of the init.d scripts.

  Default this to off would be fine ;)

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1193085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193088] Re: Allow for disabling rpm config inclusion

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1193088

Title:
  Allow for disabling rpm config inclusion

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Create a boolean that allows for disabling the rpm configuration file
  inclusion/creation.

  Default this to off would be fine ;)

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1193088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218122] Re: pip failed with Destination path already exists

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1218122

Title:
  pip failed with Destination path already exists

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  I am trying to set up OpenStack on a VM instance, with image
  6.2.1533-18.

  I ran through the following steps

  sudo yum install git
  git clone https://github.com/stackforge/anvil.git
  cd anvil/
  git checkout stable/grizzly
  sudo ./smithy --bootstrap
  ./smithy -a prepare -d ~/openstack

  The following error pops up

  INFO: @anvil.packaging.base : pip failed
   ___
  < You have been borked. >
   ---
    \ ||   ||
  \__ ||-mm||
    \ (  )/_)//
  (oo)/
  v--v
  Error: Destination path 
'/home/weidu/openstack/deps/download/pysqlite-2.6.3.tar.gz' already exists

  Hope this can be fixed.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1218122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1195842] Re: Cinder building failing?

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1195842

Title:
  Cinder building failing?

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Installing build requirements for anvil-source
  Build requirements are installed
  make: *** [openstack-cinder-2013.2.a252.g066a676-1.el6.src.rpm.mark] Error 1
  make: *** Waiting for unfinished jobs
  openstack-glance-2013.2.a60.g95adbb4-1.el6.src.rpm is processed
   ___
  / She turned me \
  \ into a newt!  /
   ---
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  ProcessExecutionError: Unexpected error while running command.
  Command: make '-f' '/home/harlowja/openstack/deps/binary-anvil.mk' '-j' 2
  Exit code: 2
  Stdout: "', mode 'w' at 0x7f057c255150>>"
  Stderr: "', mode 'w' at 0x7f057c2551e0>>"

  ---Verbose mode

  DEBUG: @anvil.shell : Writing to file 
'/home/harlowja/openstack/deps/binary-anvil.mk' (1030 bytes) (flush=True)
  DEBUG: @anvil.shell : > SRC_REPO_DIR := 
/home/harlowja/openstack/repo/anvil-source
  LOGS_DIR := /home/harlowja/openstack/deps/output
  RPMBUILD := rpmbuild
  RPMBUILD_FLAGS := --rebuild --define '_topdir 
/home/harlowja/openstack/deps/rpmbuild'

  YUM_BUILDDEP := yum-builddep
  YUM_BUILDDEP_FLAGS := -q -y

  REPO_NAME := $(shell basename $(SRC_REPO_DIR))
  BUILDDEP_MARK := builddep-$(REPO_NAME).mark
  MARKS := $(foreach archive,$(shell (cd $(SRC_REPO_DIR) && echo 
*src.rpm)),$(archive).mark)

  
  all: $(MARKS)

  
  # NOTE(aababilov): yum-builddep is buggy and can fail when several
  # package names are given, so, pass them one by one
  $(BUILDDEP_MARK):
@echo "Installing build requirements for $(REPO_NAME)"
@for pkg in $(SRC_REPO_DIR)/*; do\
$(YUM_BUILDDEP) $(YUM_BUILDDEP_FLAGS) $$pkg; \
done &> $(LOGS_DIR)/yum-builddep-$(REPO_NAME).log
@touch "$@"
@echo "Build requirements are installed"

  
  %.mark: $(SRC_REPO_DIR)/% $(BUILDDEP_MARK)
@$(RPMBUILD) $(RPMBUILD_FLAGS) -- $< &> $(LOGS_DIR)/rpmbuild-$*.log
@touch "$@"
@echo "$* is processed"

  DEBUG: @anvil.shell : Appending to file '/home/harlowja/openstack/deps.trace' 
(61 bytes) (flush=True)
  DEBUG: @anvil.shell : >> FILE_TOUCHED - 
/home/harlowja/openstack/deps/binary-anvil.mk

  DEBUG: @anvil.shell : Creating directory 
'/home/harlowja/openstack/deps/rpmbuild'
  DEBUG: @anvil.shell : Creating directory 
'/home/harlowja/openstack/deps/rpmbuild/SPECS'
  DEBUG: @anvil.shell : Creating directory 
'/home/harlowja/openstack/deps/rpmbuild/SOURCES'
  DEBUG: @anvil.shell : Running cmd: ['make', '-f', 
'/home/harlowja/openstack/deps/binary-anvil.mk', '-j', '2']
  DEBUG: @anvil.shell : In working directory: 
'/home/harlowja/openstack/deps/marks-binary'
  make: *** [openstack-cinder-2013.2.a252.g066a676-1.el6.src.rpm.mark] Error 1
  make: *** Waiting for unfinished jobs
  openstack-keystone-2013.2.b1.134.g911c315-1.el6.src.rpm is processed
  DEBUG: @anvil.shell : Recursively deleting directory tree starting at 
'/home/harlowja/openstack/deps/rpmbuild'
   __
  / We used to dream  \
  | of living in a|
  \ corridor! /
   ---
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  Traceback (most recent call last):
File "/home/harlowja/anvil/anvil/__main__.py", line 219, in main
  run(args)
File "/home/harlowja/anvil/anvil/__main__.py", line 123, in run
  runner.run(persona_obj)
File "/home/harlowja/anvil/anvil/actions/base.py", line 341, in run
  self._run(persona, component_order, instances)
File "/home/harlowja/anvil/anvil/actions/build.py", line 42, in _run
  dependency_handler.build_binary()
File "/home/harlowja/anvil/anvil/packaging/yum.py", line 213, in 
build_binary
  self._execute_make(makefile_name, marks_dir)
File "/home/harlowja/anvil/anvil/packaging/yum.py", line 222, in 
_execute_make
  stdout_fh=sys.stdout, stderr_fh=sys.stderr)
File "/home/harlowja/anvil/anvil/shell.py", line 168, in execute
  stderr=stderr, cmd=str_cmd)
  ProcessExecutionError: Unexpected error while running command.
  Command: make '-f' '/home/harlowja/openstack/deps/binary-anvil.mk' '-j' 2
  Exit code: 2
  Stdout: "', mode 'w' at 0x7fd10621b150>>"

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1195842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1208335] Re: multipip not detecting same version conflicts

2013-12-13 Thread Joshua Harlow
** Changed in: anvil/havana
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1208335

Title:
  multipip not detecting same version conflicts

Status in ANVIL for forging OpenStack.:
  Fix Released
Status in anvil havana series:
  Fix Released

Bug description:
  It appears that multipip is not detecting same version conflicts.

  $ ./multipip 'x>1' 'x<=1'
  x>1

  $ ./multipip 'x<=1' 'x>1'
  x>1

  I would have expected this to report a conflict instead of picking one
  of the two incompatible choices.
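
  For two single-operator constraints the check is just an interval test.
  A simplified stand-alone sketch of the idea (versions are treated as
  plain floats for brevity; real requirement parsing is more involved):

    import re

    def parse(req):
        """Split 'name<op>version' into its parts; version kept as a float."""
        name, op, ver = re.match(r'(\w+)\s*(>=|<=|==|>|<)\s*(\S+)', req).groups()
        return name, op, float(ver)

    def bounds(op, ver):
        """Return (low, low_inclusive, high, high_inclusive) for one constraint."""
        if op == '>':
            return ver, False, float('inf'), True
        if op == '>=':
            return ver, True, float('inf'), True
        if op == '<':
            return float('-inf'), True, ver, False
        if op == '<=':
            return float('-inf'), True, ver, True
        return ver, True, ver, True          # '=='

    def conflict(req_a, req_b):
        """True when no single version can satisfy both requirements."""
        name_a, op_a, ver_a = parse(req_a)
        name_b, op_b, ver_b = parse(req_b)
        if name_a != name_b:
            return False
        lo_a, loi_a, hi_a, hii_a = bounds(op_a, ver_a)
        lo_b, loi_b, hi_b, hii_b = bounds(op_b, ver_b)
        # Intersect the two allowed ranges.
        if lo_a > lo_b:
            lo, lo_incl = lo_a, loi_a
        elif lo_b > lo_a:
            lo, lo_incl = lo_b, loi_b
        else:
            lo, lo_incl = lo_a, loi_a and loi_b
        if hi_a < hi_b:
            hi, hi_incl = hi_a, hii_a
        elif hi_b < hi_a:
            hi, hi_incl = hi_b, hii_b
        else:
            hi, hi_incl = hi_a, hii_a and hii_b
        return lo > hi or (lo == hi and not (lo_incl and hi_incl))

    print(conflict('x>1', 'x<=1'))    # True: the ranges do not overlap
    print(conflict('x>=1', 'x<=1'))   # False: x == 1 satisfies both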

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1208335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192664] Re: novaclient not installed on RHEL6.2, stable/grizzly

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192664

Title:
  novaclient not installed on RHEL6.2, stable/grizzly

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Using anvil, stable/grizzly branch, in RHEL6.2

  After doing anvil bootstrap, prepare, install, and start, I tried to do a 
nova command. But nova didn't exist:
  [rloo@csp0091 ~]$ nova list
  -bash: nova: command not found

  I looked and novaclient was not installed. However, a 'yum ls | grep
  novaclient' shows:

  python-novaclient.noarch        2:2.13.0-1.el6        anvil
  python-novaclient-doc.noarch    1:2.10.0-2.el6        epel

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1200722] Re: Anvil bootstrap stage failing because of logilab-astng and logilab-common conflict

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1200722

Title:
  Anvil bootstrap stage failing because of logilab-astng and logilab-
  common conflict

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Here is the error bootsrap throws for anvil (stable/grizzly):

  Bootstrapping RHEL 6.2
  Please wait...
  Downloading epel-release-6-8.noarch.rpm to /tmp...
  Installing packages: gcc git patch python python-devel createrepo yum-utils 
yum-plugin-remove-with-leaves PyYAML rpm-build python-pip python-argparse 
python-setuptools
  Package patch-2.6-6.el6.x86_64 already installed and latest version
  Package PyYAML-3.10-3.el6.x86_64 already installed and latest version
  Package python-setuptools is obsoleted by python-distribute, trying to 
install python-distribute-0.6.10-0.y.1.0.0.x86_64 instead
  Installing packages: pyflakes = 0.7.2 pylint = 0.25.2 python-argparse 
python-cheetah >= 2.4.4 python-d2to1 >= 0.2.10 python-flake8 = 2 
python-iniparse python-iso8601 >= 0.1.4 python-keyring python-netifaces >= 0.5 
python-ordereddict python-pbr >= 0.5.16 python-pep8 = 1.4.5 python-progressbar 
python-psutil python-termcolor
  Package python-argparse-1.2.1-2.el6.noarch already installed and latest 
version
  Package python-iniparse-0.3.1-2.1.el6.noarch already installed and latest 
version
  Downloading Python requirements: cheetah>=2.4.4 flake8==2.0 pbr>=0.5.16 
pep8==1.4.5 pyflakes==0.7.2 pylint==0.25.2 termcolor
  Building RPMs for  cheetah>=2.4.4 flake8==2.0 pbr>=0.5.16 pep8==1.4.5 
pyflakes==0.7.2 pylint==0.25.2 termcolor
  Cannot install package python-setuptools-0.7.3-0.el6.noarch. It is obsoleted 
by installed package python-distribute-0.6.10-0.y.1.0.0.x86_64
  Cannot install package python-setuptools-0.8-0.el6.noarch. It is obsoleted by 
installed package python-distribute-0.6.10-0.y.1.0.0.x86_64

  
  Transaction Check Error:
file /usr/lib/python2.6/site-packages/logilab/__init__.pyc conflicts 
between attempted installs of python-logilab-astng-0.24.3-0.el6.noarch and 
python-logilab-common-0.59.1-0.el6.noarch
file /usr/lib/python2.6/site-packages/logilab/__init__.pyo conflicts 
between attempted installs of python-logilab-astng-0.24.3-0.el6.noarch and 
python-logilab-common-0.59.1-0.el6.noarch

  Error Summary
  -

  Bootstrapping RHEL 6.2 failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1200722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189553] Re: PBR is breaking anvil rpmbuilds

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1189553

Title:
  PBR is breaking anvil rpmbuilds

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Can't build anything that uses PBR due to it behaving **badly** (shame
  on it)

  Link to other bugs...

  - https://review.openstack.org/#/c/32237/
  - https://bugs.launchpad.net/pbr/+bug/1189068
  - https://bugs.launchpad.net/pbr/+bug/1189935

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1189553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194905] Re: Horizon is not built

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: New => Fix Released

** Changed in: anvil/grizzly
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1194905

Title:
  Horizon is not built

Status in ANVIL for forging OpenStack.:
  Fix Released
Status in anvil grizzly series:
  Fix Released

Bug description:
  Horizon binary RPM build exits with an error:

  # rpmbuild python-django-horizon.spec -bb
  ...
  + /usr/bin/python manage.py collectstatic --noinput 
--pythonpath=../../lib/python2.7/site-packages/
  /usr/lib/python2.6/site-packages/cinderclient/openstack/common/version.py:21: 
UserWarning: Module openstack_dashboard was already imported from 
/home/stack/rpmbuild/BUILDROOT/python-django-horizon-2013.1-1.el6.x86_64/usr/share/openstack-dashboard/openstack_dashboard/__init__.pyc,
 but /home/stack/rpmbuild/BUILD/horizon-2013.1 is being added to sys.path
import pkg_resources
  Unknown command: 'collectstatic'
  Type 'manage.py help' for usage.
  error: Bad exit status from /var/tmp/rpm-tmp.W3btlt (%install)

  
  RPM build errors:
  Bad exit status from /var/tmp/rpm-tmp.W3btlt (%install)

  Horizon's manage.py depends on OpenStack client packages because
  Horizon settings.py depends on them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1194905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194187] Re: add default value for patches:download and patches:package

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1194187

Title:
  add default value for patches:download and patches:package

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  If no value is provided in a component's config for patches:download or
  patches:package, fall back to conf/patches/$comp/download and
  conf/patches/$comp/package respectively.
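
  A minimal sketch of the requested fallback; the helper name and the flat
  'patches:download'/'patches:package' keys are assumptions for illustration,
  not anvil's real config API:

  import os

  def patch_dirs(component, cfg):
      """Fall back to conf/patches/$comp/{download,package} when unset."""
      base = os.path.join('conf', 'patches', component)
      return {
          'download': cfg.get('patches:download') or os.path.join(base, 'download'),
          'package': cfg.get('patches:package') or os.path.join(base, 'package'),
      }

  # With an empty config, the conf/patches/nova/... defaults are used.
  print(patch_dirs('nova', {}))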

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1194187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1211647] Re: Anvil removing keyring

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1211647

Title:
  Anvil removing keyring

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  On install, anvil installs a newer keyring, and on uninstall it removes
  that package again, even though the package (or its newer version) is an
  anvil runtime dependency. We need to ensure that anvil runtime
  dependencies are never removed on uninstall, even if they were upgraded
  during install.
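
  A hypothetical sketch of the guard being asked for; the runtime dependency
  set below is illustrative, not anvil's actual requirement list:

  ANVIL_RUNTIME_DEPS = set(['python-keyring', 'python-pip', 'PyYAML'])

  def removable(installed_by_anvil):
      """Drop anvil's own runtime dependencies from the uninstall list,
      even if anvil upgraded them during install."""
      return [pkg for pkg in installed_by_anvil
              if pkg.split('==')[0] not in ANVIL_RUNTIME_DEPS]

  print(removable(['python-keyring==3.2', 'openstack-nova-api']))
  # -> ['openstack-nova-api']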

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1211647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194488] Re: kombu-requirement.patch breaks Anvil master

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1194488

Title:
  kombu-requirement.patch breaks Anvil master

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  kombu-requirement.patch is not appropriate for Havana Quantum. Drop
  the patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1194488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1194213] Re: Keystone epoch is hardcoded as 1

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1194213

Title:
  Keystone epoch is hardcoded as 1

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  The epoch in openstack-keystone.spec is hardcoded as 1. Therefore, yum
  can end up choosing the EPEL keystone package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1194213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203094] Re: post-download patches break packaging

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1203094

Title:
  post-download patches break packaging

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  If a post-download patch removes a file, './smithy -a package' dies,
  complaining that the file cannot be found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1203094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1212213] Re: Unable to run build action twice on the same system

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1212213

Title:
  Unable to run build action twice on the same system

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Due to overly broad globbing in the binary.mk template, './smithy -a build'
  fails when the source repository already has metadata generated.
  The corresponding log file ends with the line:

  No such package(s): /root/openstack/repo/anvil-source/repodata
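
  A sketch of a narrower glob that would sidestep the problem; the directory
  path and pattern are assumptions based on the log line above, not the actual
  binary.mk contents:

  import glob
  import os

  def source_packages(repo_dir):
      """Only pick up *.src.rpm files so a repodata/ directory left behind by
      createrepo on a previous run is never treated as a package."""
      pattern = os.path.join(repo_dir, '*.src.rpm')  # instead of a bare '*'
      return [path for path in glob.glob(pattern) if os.path.isfile(path)]

  print(source_packages('/root/openstack/repo/anvil-source'))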

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1212213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1191042] Re: logilab-astng conflicts with logilab-common

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1191042

Title:
  logilab-astng conflicts with logilab-common

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  logilab-common and logilab-astng cannot be installed with anvil
  simultaneously: they conflict on the site-packages/logilab/__init__.py
  file.

  Both packages are required by pylint 0.25.2, which is required by the
  OpenStack tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1191042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1203818] Re: rpmbuild error: error: line 17: Illegal char '-' in: Version: 2.1-1-g2d82221

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1203818

Title:
  rpmbuild error: error: line 17: Illegal char '-' in: Version:
  2.1-1-g2d82221

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  rpmbuild produces an error on spec files generated by anvil. It looks like
  this is connected to the commit-after-download-patch code, which changes
  the version returned by 'setup.py --version'.
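
  A minimal sketch of one way to turn such a string into legal RPM fields by
  splitting the git-describe style version into Version and Release; this is
  only an illustration, not the fix anvil shipped:

  def rpm_version_release(raw):
      """'-' is illegal in both Version and Release, so keep the part before
      the first '-' as Version and fold the rest into Release."""
      version, _, rest = raw.partition('-')
      return version, (rest.replace('-', '.') or '1')

  print(rpm_version_release('2.1-1-g2d82221'))  # -> ('2.1', '1.g2d82221')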

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1203818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192675] Re: smithy -a start doesn't detect errors

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192675

Title:
  smithy -a start doesn't detect errors

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Using anvil, stable/grizzly branch, RHEL6.2.

  When I ran ./smithy -a start, I got the happy cow (or whatever it was
  at the end).

  However, the output actually showed several errors:

  INFO: @anvil.components.base_runtime : Starting program api-metadata under 
component nova.
  ERROR: @anvil.components.base_runtime : Failed to start program api-metadata 
under component nova.
  INFO: @anvil.components.base_runtime : Starting program compute under 
component nova.
  INFO: @anvil.components.base_runtime : Starting program network under 
component nova.
  INFO: @anvil.components.base_runtime : Starting program conductor under 
component nova.
  INFO: @anvil.components.base_runtime : Starting program api-ec2 under 
component nova.
  ERROR: @anvil.components.base_runtime : Failed to start program api-ec2 under 
component nova.
  INFO: @anvil.components.base_runtime : Starting program scheduler under 
component nova.
  INFO: @anvil.components.base_runtime : Starting program api-os-compute under 
component nova.
  ERROR: @anvil.components.base_runtime : Failed to start program 
api-os-compute under component nova.

  The above are serious errors.

  This one, however, is fine:

  ERROR: @anvil.components.helpers.glance : Installing 
'http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img' fail
  ed due to: Image named cirros-0.3.1-x86_64-disk already exists.
  Traceback (most recent call last):
File "/home/rloo/anvil/anvil/components/helpers/glance.py", line 451, in 
install
  (name, img_id) = img_handle.install()
File "/home/rloo/anvil/anvil/components/helpers/glance.py", line 399, in 
install
  img_id = self._register(tgt_image_name, unpack_info)
File "/home/rloo/anvil/anvil/components/helpers/glance.py", line 322, in 
_register
  self._check_name(image_name)
File "/home/rloo/anvil/anvil/components/helpers/glance.py", line 280, in 
_check_name
  raise IOError("Image named %s already exists." % (name))
  IOError: Image named cirros-0.3.1-x86_64-disk already exists.

  I guess if it isn't easy to flag certain ERRORs, maybe it should spit
  out something like 'Cannot determine if everything started properly;
  check this output for ERRORs'.
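
  A sketch of that suggestion, assuming the start output is available as lines
  of text; the helper name is made up and the log format is copied from the
  lines above:

  import re

  def count_start_failures(log_lines):
      """Make serious start failures hard to miss."""
      errors = [line for line in log_lines
                if re.search(r'ERROR: @anvil\.components\.base_runtime', line)]
      if errors:
          print("%d program(s) failed to start; check this output for ERRORs"
                % len(errors))
      return len(errors)

  sample = [
      "ERROR: @anvil.components.base_runtime : Failed to start program api-ec2",
      "INFO: @anvil.components.base_runtime : Starting program scheduler",
  ]
  count_start_failures(sample)  # prints: 1 program(s) failed to start; ...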

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188335] Re: Building more rpms than required?

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1188335

Title:
  Building more rpms than required?

Status in ANVIL for forging OpenStack.:
  Invalid

Bug description:
  It appears that when selecting the python dependencies to download,
  pip also downloads the dependencies of those dependencies and includes
  them in the download directory. When py2rpm is then pointed at that
  directory, it builds rpms for all of these extra files as well. Perhaps
  we should filter the download/build directory?

  Example:

  INFO: @anvil.packaging.base : Downloading Python dependencies:
  INFO: @anvil.packaging.base : |-- WebOb==1.2.3
  INFO: @anvil.packaging.base : |-- babel>=0.9.6
  INFO: @anvil.packaging.base : |-- cliff-tablib>=1.0
  INFO: @anvil.packaging.base : |-- cliff>=1.3.2
  INFO: @anvil.packaging.base : |-- coverage>=3.6
  INFO: @anvil.packaging.base : |-- discover
  INFO: @anvil.packaging.base : |-- flake8==2.0
  INFO: @anvil.packaging.base : |-- hacking>=0.5.3,<0.6
  INFO: @anvil.packaging.base : |-- hp3parclient>=1.0.0
  INFO: @anvil.packaging.base : |-- jsonschema>=0.7,<2
  INFO: @anvil.packaging.base : |-- lxml>=2.3
  INFO: @anvil.packaging.base : |-- netaddr>=0.7.6
  INFO: @anvil.packaging.base : |-- nose-exclude
  INFO: @anvil.packaging.base : |-- nosehtmloutput>=0.0.3
  INFO: @anvil.packaging.base : |-- nosexcover
  INFO: @anvil.packaging.base : |-- openstack.nose-plugin>=0.7
  INFO: @anvil.packaging.base : |-- pam>=0.1.4
  INFO: @anvil.packaging.base : |-- pastedeploy>=1.5.0
  INFO: @anvil.packaging.base : |-- pbr>=0.5.10,<0.6
  INFO: @anvil.packaging.base : |-- pep8==1.4.5
  INFO: @anvil.packaging.base : |-- pyflakes==0.7.2
  INFO: @anvil.packaging.base : |-- pylint==0.25.2
  INFO: @anvil.packaging.base : |-- pysqlite
  INFO: @anvil.packaging.base : |-- python-ldap==2.3.13
  INFO: @anvil.packaging.base : |-- python-subunit
  INFO: @anvil.packaging.base : |-- qpid-python
  INFO: @anvil.packaging.base : |-- routes>=1.12.3
  INFO: @anvil.packaging.base : |-- setuptools-git>=0.4
  INFO: @anvil.packaging.base : |-- sphinx>=1.1.2
  INFO: @anvil.packaging.base : |-- sqlalchemy-migrate>=0.7.2
  INFO: @anvil.packaging.base : |-- sqlalchemy>=0.7.8,<=0.7.9
  INFO: @anvil.packaging.base : |-- testrepository>=0.0.13
  INFO: @anvil.packaging.base : |-- testscenarios<0.5
  INFO: @anvil.packaging.base : |-- warlock>=0.7.0,<2
  INFO: @anvil.packaging.base : |-- wsgiref>=0.1.2
  INFO: @anvil.packaging.base : |-- xattr>=0.6.0

  What was rpm(ized) - which is much bigger than the previous list.

  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Babel-0.9.6.zip
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Jinja2-2.7.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/MarkupSafe-0.18.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/PasteDeploy-1.5.0.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Pygments-1.6.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Routes-1.13.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/SQLAlchemy-0.7.9.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Sphinx-1.2b1.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/Tempita-0.5.1.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/WebOb-1.2.3.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/cliff-1.3.3.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/cliff-tablib-1.0.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/cmd2-0.6.5.1.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/colorama-0.2.5.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/coverage-3.6.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/d2to1-0.2.10.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/decorator-3.4.0.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/discover-0.4.0.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/distribute-0.6.45.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/docutils-0.10.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/extras-0.0.3.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/fixtures-0.3.12.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/download/flake8-2.0.tar.gz
  INFO: @anvil.packaging.yum : |-- 
/home/harlowja/openstack/deps/down

[Yahoo-eng-team] [Bug 1219125] Re: Not installing clients

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1219125

Title:
  Not installing clients

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  It appears on grizzly the nova client package is not being installed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1219125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250926] Re: havana-1 install broken

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1250926

Title:
  havana-1 install broken

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When building and installing openstack with -o
  conf/origins/havana-1.yaml, the install action fails:

  ProcessExecutionError: Unexpected error while running command.
  Command: '/root/anvil/tools/yyoom' '--verbose' transaction '--install' 
'MySQL-python' '--install' mysql '--install' 'mysql-server' '--install' 
'openstack-cinder' '--install' 'openstack-cinder==2013.2' '--install' 
'openstack-glance' '--install' 'openstack-glance==2013.2' '--install' 
'openstack-keystone' '--install' 'openstack-keystone==2013.2' '--install' 
'openstack-nova-api' '--install' 'openstack-nova-cert' '--install' 
'openstack-nova-compute' '--install' 'openstack-nova-conductor' '--install' 
'openstack-nova-network' '--install' 'openstack-nova-scheduler' '--install' 
'openstack-nova==2013.2' '--install' 'python-cinderclient==1.0.6' '--install' 
'python-glanceclient==0.11.0' '--install' 'python-keystoneclient==0.4.1' 
'--install' 'python-neutronclient==2.3.1' '--install' 
'python-novaclient==2.15.0' '--install' 'python-oslo-config==1.2.1' '--install' 
'python-swiftclient==1.8.0' '--install' 'rabbitmq-server' '--prefer-repo' 
'anvil-deps' '--prefer-repo' anvil
  Exit code: 1
  Stdout: ''
  Stderr: ">"

  # tail -n 20 /root/openstack/deps/output/yyoom-transaction-install.log
  YYOOM INFO:   - python-glanceclient==0.11.0
  YYOOM INFO:   - python-keystoneclient==0.4.1
  YYOOM INFO:   - python-neutronclient==2.3.1
  YYOOM INFO:   - python-novaclient==2.15.0
  YYOOM INFO:   - python-oslo-config==1.2.1
  YYOOM INFO:   - python-swiftclient==1.8.0
  YYOOM INFO:   - rabbitmq-server
  YYOOM INFO: 2:python-nova-2013.2-1.el6.noarch requires python-six < 1.4
  YYOOM ERROR: Building Transaction failed
  YYOOM ERROR: Transaction failed
  Traceback (most recent call last):
File "/root/anvil/tools/yyoom", line 452, in main
  return options.func(yum_base, options) or 0
File "/root/anvil/tools/yyoom", line 279, in _run
  yum_base.install(max(matches))
File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
  self.gen.next()
File "/root/anvil/tools/yyoom", line 431, in _transaction
  raise RuntimeError("Transaction failed: %s" % code)
  RuntimeError: Transaction failed: 1

  
  Package python-six of version 1.3 is available and installed at the moment:

  [root@donkey017 anvil]# yum list --showduplicates python-six
  Loaded plugins: fastestmirror, security
  Loading mirror speeds from cached hostfile
   * epel: ftp.tlk-l.net
  Installed Packages
  python-six.noarch        1.3-0.el6        @anvil-deps
  Available Packages
  python-six.noarch        1.3-0.el6        anvil-deps
  python-six.noarch        1.4.1-1.el6      epel

  
  yum fails to install python-nova, too:

  # yum install python-nova
  [...]
  --> Finished Dependency Resolution
  Error: Package: 2:python-nova-2013.2-1.el6.noarch (anvil)
 Requires: python-six < 1.4
 Removing: python-six-1.3-0.el6.noarch (@anvil-deps)
 python-six = 1.3-0.el6
 Updated By: python-six-1.4.1-1.el6.noarch (epel)
 python-six = 1.4.1-1.el6
   You could try using --skip-broken to work around the problem
   You could try running: rpm -Va --nofiles --nodigest

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1250926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223893] Re: Neutron: package metering agent and vpn agent

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1223893

Title:
  Neutron: package metering agent and vpn agent

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  With the basic-neutron persona on a clean CentOS 6.4, the build action
  fails because rpmbuild fails for openstack-neutron:

  RPM build errors:
  Installed (but unpackaged) file(s) found:
     /etc/neutron/metering_agent.ini
     /etc/neutron/vpn_agent.ini
     /usr/bin/neutron-metering-agent
     /usr/bin/neutron-vpn-agent

  The VPN agent and the metering agent should be packaged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1223893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250921] Re: Service for neutron-metadata-agent

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1250921

Title:
  Service for neutron-metadata-agent

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  The Neutron metadata agent needs a service file in /etc/init.d. Moving it
  to a separate package (together with the metadata proxy) also looks like a
  good idea.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1250921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250917] Re: Invalid neutron lock_path

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1250917

Title:
  Invalid neutron lock_path

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  For some reason, the lock_path option for neutron is set to a directory
  inside $HOME/openstack/, so the neutron server and agents don't have
  access to it. This breaks at least the l3-agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1250917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237476] Re: keystone init script has an "-all" after it.

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1237476

Title:
  keystone init script has an "-all" after it.

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  The keystone service init script is openstack-nova-keystone-all
  instead of just openstack-nova-keystone. This is causing some problems
  when trying to use the openstack puppet modules to manage the keystone
  service, since they correctly expect the init script to be
  openstack-nova-keystone. I believe this is because the binary for
  keystone is keystone-all and the init scripts are generated in a
  generic fashion of: openstack-nova-.
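
  A sketch of the naming rule the puppet modules expect, assuming init script
  names are derived from the daemon binary names; this is an illustration, not
  anvil's actual generator:

  def service_name(component, binary):
      """Drop a trailing '-all' so keystone-all maps to openstack-keystone."""
      name = binary[:-len('-all')] if binary.endswith('-all') else binary
      if not name.startswith(component):
          name = '%s-%s' % (component, name)
      return 'openstack-%s' % name

  print(service_name('keystone', 'keystone-all'))  # -> openstack-keystone
  print(service_name('nova', 'api'))               # -> openstack-nova-api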

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1237476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218229] Re: Build fails when there are no dependencies to build

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1218229

Title:
  Build fails when there are no dependencies to build

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When all needed dependencies are already present in the repos, the
  anvil-deps-sources directory is not created, and the build action fails:

  [...]
   __
  / We'd better not risk \
  | another frontal assault, |
  \ that rabbit's dynamite.  /
   --
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  Traceback (most recent call last):
File "/home/melnikov/anvil/anvil/__main__.py", line 217, in main
  run(args)
File "/home/melnikov/anvil/anvil/__main__.py", line 121, in run
  runner.run(persona_obj)
File "/home/melnikov/anvil/anvil/actions/base.py", line 341, in run
  self._run(persona, component_order, instances)
File "/home/melnikov/anvil/anvil/actions/build.py", line 40, in _run
  dependency_handler.build_binary()
File "/home/melnikov/anvil/anvil/packaging/yum.py", line 172, in 
build_binary
  files_only=True):
File "/home/melnikov/anvil/anvil/shell.py", line 242, in listdir
  all_contents = os.listdir(path)
  OSError: [Errno 2] No such file or directory: 
'/home/melnikov/openstack/repo/anvil-deps-sources'
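
  A sketch of the guard that would avoid the OSError above, treating a missing
  sources directory as "nothing left to build"; the helper name is
  illustrative:

  import os

  def listdir_safe(path):
      """Return directory contents, or an empty list when the directory was
      never created because there were no dependencies to build."""
      if not os.path.isdir(path):
          return []
      return os.listdir(path)

  # On a machine without that directory this simply prints [].
  print(listdir_safe('/home/melnikov/openstack/repo/anvil-deps-sources'))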

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1218229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243154] Re: error during component install from tag

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Won't Fix

** Changed in: anvil
   Status: Won't Fix => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1243154

Title:
  error during component install from tag

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  A ProcessExecutionError occurs when trying to install a component from a
  tag. For instance, use some old tag for the 'nova-client' component:
  ...
  get_from: "git://github.com/openstack/python-novaclient.git?tag=2.9.0"
  ...

  And run './smithy -a prepare':

  ProcessExecutionError: Unexpected error while running command.
  Command: '/root/anvil/tools/specprint' '-f' 
'/root/openstack/deps/rpmbuild/SPECS/python-novaclient.spec'
  Exit code: 1
  Stdout: ''
  Stderr: 'error: line 11: Illegal char \'-\' in: Version:  
2.10.0-2.11.0-2.11.1-2.12.0-2.13.0-2.14.0-2.14.1-2.15.0-2.9.0
  Traceback (most recent call last):
  File "/root/anvil/tools/specprint", line 84, in 
  print(json.dumps(analyze_spec(options.file), sort_keys=True, indent=4))
File "/root/anvil/tools/specprint", line 50, in analyze_spec
  raise IOError(str(e).strip() + ": " + spec_filename)
  IOError: can\'t parse specfile: 
/root/openstack/deps/rpmbuild/SPECS/python-novaclient.spec'

  Further exploration showed that this is possibly a pip related problem:
  > git clone https://github.com/openstack/python-novaclient.git
  > cd python-novaclient
  > git checkout -b test 2.9.0
  > python
  >>> from pip import req
  >>> r = req.InstallRequirement.from_line('./')
  >>> r.source_dir = './'
  >>> r.run_egg_info()
  >>> r.installed_version
  >>> '2.10.0-2.11.0-2.11.1-2.12.0-2.13.0-2.14.0-2.14.1-2.15.0-2.9.0' <- yes, 
not what was expected.

  Pip version: 1.4.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1243154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218728] Re: Anvil install openstack fails

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1218728

Title:
  Anvil install openstack fails

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  These are the steps I executed on an OpenStack VM with image
  6.2.1533-18.

  $ sudo yum install git
  $ git clone https://github.com/stackforge/anvil.git
  $ cd anvil/
  $ git checkout stable/grizzly
  $ sudo ./smithy --bootstrap
  $ ./smithy -a prepare -d ~/openstack
  $ sudo ./smithy -a build -d ~/openstack
  $ sudo ./smithy -a install -d ~/openstack

  The error is as follows

  YYOOM INFO: Running yum cleanup
  warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: 
NOKEY
  Importing GPG key 0x0608B895:
   Userid : EPEL (6) 
   Package: epel-release-6-8.noarch (@/epel-release-6-8.noarch)
   From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
  Failed: Didn't install any keys
   __
  / What is the airspeed \
  | velocity of an   |
  \ unladen swallow? /
   --
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  ProcessExecutionError: Unexpected error while running command.
  Command: '/home/weidu/anvil/tools/yyoom' transaction '--install' 
'python-routes' '--install' 'openstack-nova-api' '--install' 'python-suds' 
'--install' 'python-webob' '--install' 'python-amqplib' '--install' 
'python-ldap' '--install' 'python-nosehtmloutput' '--install' 'python-xattr' 
'--install' 'python-mock' '--install' 'python-eventlet' '--install' 
'openstack-nova-conductor' '--install' 'python-openstack-nose-plugin' 
'--install' 'python-sqlalchemy' '--install' 'python-paste' '--install' 
'MySQL-python' '--install' 'python-pysqlite' '--install' 'python-psycopg2' 
'--install' 'python-boto' '--install' 'python-lxml' '--install' 'python-pyasn1' 
'--install' pyparsing '--install' 'python-paramiko' '--install' 'python-kombu' 
'--install' 'python-sphinx' '--install' 'python-subunit' '--install' 
'python-migrate' '--install' 'python-mox' '--install' 'python-netifaces' 
'--install' 'python-wsgiref' '--install' 'openstack-nova-scheduler' '--install' 
'mysql-server' '--install' 'openstack-nova-network' '--install' 
'python-coverage' '--install' 'python-setuptools-git' '--install' 
'openstack-glance' '--install' 'python-prettytable' '--install' 
'python-jsonschema' '--install' mysql '--install' 'python-nose-exclude' 
'--install' 'python-setuptools' '--install' 'python-passlib' '--install' 
'python-greenlet' '--install' pyflakes '--install' 'python-warlock' '--install' 
pylint '--install' 'python-websockify' '--install' 'python-keyring' '--install' 
'python-cliff-tablib' '--install' 'python-fixtures' '--install' 
'python-cheetah' '--install' 'python-httplib2' '--install' 'python-lockfile' 
'--install' 'python-testtools' '--install' 'python-simplejson' '--install' 
'python-nosexcover' '--install' 'python-anyjson' '--install' 'python-stevedore' 
'--install' 'python-hp3parclient' '--install' 'python-feedparser' '--install' 
'python-discover' '--install' 'python-pam' '--install' 'python-paste-deploy' 
'--install' 'python-importlib' '--install' 'openstack-cinder' '--install' 
'rabbitmq-server' '--install' 'openstack-nova-cert' '--install' 'python-cliff' 
'--install' pyOpenSSL '--install' 'python-testrepository' '--install' 
'python-unittest2' '--install' 'python-netaddr' '--install' 'python-babel' 
'--install' 'python-requests' '--install' 'python-argparse' '--install' 
'openstack-keystone' '--install' 'openstack-nova-compute' '--install' 
'python-webtest' '--install' 'python-memcached' '--install' 'python-pep8' 
'--install' 'python-iso8601' '--install' 'python-crypto' '--install' 
'python-nose'
  Exit code: 1
  Stdout: ''
  Stderr: "', mode 'w' at 0x7f0e72abc1e0>>"

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1218728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189710] Re: On uninstall anvil removes the rpms and also the source.

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1189710

Title:
  On uninstall anvil removes the rpms and also the source.

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When uninstalling, anvil seems to remove the rpms (that's good) and also
  the source trees (that's bad). This means that when running install after
  uninstalling, the source trees are gone (that's bad) and configuration
  files can't be found there (that's also bad), and then smithy blows up.
  Likely we need a new action called remove that does the full uninstall,
  while the previous uninstall just removes the packages plus the
  configuration files that were copied/configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1189710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190432] Re: Applying release number while building openstack rpms

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1190432

Title:
  Applying release number while building openstack rpms

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  In stable/folsom, under conf/components/nova.yaml, I could specify

  release: "1-1"

  While building rpms, that version would be picked up and a
  nova-2013.1.1-1 rpm would be built instead of nova-2013.1.
  In stable/grizzly, I specify the release variable in components/*.yaml,
  but it is not picked up.

  Is there a way to add a release number while building rpms?

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1190432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1188899] Re: Downloading python dependencies & building packages is not resumable

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1188899

Title:
  Downloading python dependencies & building packages is not resumable

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  The new downloading via pip is nice and so is the package building.

  Except when one package breaks the build, right at the end of the
  package building process.

  This forces anvil to re-download all dependencies and start building
  packages completely over.

  It would be nice to use the tracewriter or other functionality to be
  able to pick up where we left off, instead of having to start
  completely over (which can be a slow process).

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1188899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190429] Re: No patches applied while building openstack rpms

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1190429

Title:
  No patches applied while building openstack rpms

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  I have a bunch of patches under the conf/patches directory for nova,
  keystone and nova-client. But after the rpms are built, I do not see any
  of the patches applied to the nova or keystone python files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1190429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190588] Re: Anvil fails because of broken version by openstack.common.setup

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1190588

Title:
  Anvil fails because of broken version by openstack.common.setup

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  openstack.common.setup can report the version incorrectly (several
  versions instead of one):

  $ python setup.py --version
  2.2.1
  2.2.2a

  This happens when several tags contain the commit:

  $ git tag --contains HEAD
  2.2.1
  2.2.2a

  PBR has fixed this problem, but it is buggy itself
  (https://bugs.launchpad.net/pbr/+bug/1189935,
  https://bugs.launchpad.net/anvil/+bug/1189553).

  Anyway, we cannot use the new PBR because we want to build an old
  program that is buggy.
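
  A sketch of a possible workaround, assuming the build runs inside the git
  checkout: report only the highest tag that contains HEAD. The function name
  is illustrative and this is not what anvil or PBR actually does:

  import subprocess
  from pkg_resources import parse_version

  def single_version():
      """Pick one version even when several tags contain HEAD."""
      proc = subprocess.Popen(['git', 'tag', '--contains', 'HEAD'],
                              stdout=subprocess.PIPE)
      out, _ = proc.communicate()
      tags = [t.strip() for t in out.decode('utf-8').splitlines() if t.strip()]
      # e.g. ['2.2.1', '2.2.2a'] -> '2.2.2a'
      return max(tags, key=parse_version) if tags else '0.0.0'

  print(single_version())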

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1190588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192327] Re: mysqld not being started in stable/grizzly

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192327

Title:
  mysqld not being started in stable/grizzly

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Using anvil, stable/grizzly branch.

  When I run 'sudo ./smithy -a install --no-prompt-passwords -v', I get
  an error. It turns out mysqld was never started. The fix (from Josh) was
  to modify anvil/component/db.py as follows:

  [rloo@csp0091 components]$ diff db.py.orig db.py
  137c137
  < class DBRuntime(bruntime.ProgramRuntime):
  ---
  > class DBRuntime(bruntime.ServiceRuntime):

  DEBUG: @anvil.shell : Applying chmod: '/etc/my.cnf' to 644
  INFO: @anvil.components.db : Attempting to set your db password just incase 
it wasn't set previously.
  INFO: @anvil.components.db : Ensuring your database is started before we 
operate on it.
  DEBUG: @anvil.shell : Running cmd: ['mysql', '--user=root', '--password=', 
'-e', "USE mysql; UPDATE user SET password=PASSWORD('0d999e26efabdfb097aa') 
WHERE User='root';  FLUSH PRIVILEGES;"]
  WARNING: @anvil.components.db : Couldn't set your db password. It might have 
already been set by a previous process.
  INFO: @anvil.components.helpers.db : Ensuring the database is started.
  INFO: @anvil.components.helpers.db : Giving user root full control of all 
databases.
  DEBUG: @anvil.shell : Running cmd: ['mysql', '--user=root', 
'--password=0d999e26efabdfb097aa', '-e', "GRANT ALL PRIVILEGES ON *.* TO 
'root'@'%' IDENTIFIED BY '0d999e26efabdfb097aa'; FLUSH PRIVILEGES;"]
   ___
  / She turned me \
  \ into a newt!  /
   ---
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  Traceback (most recent call last):
File "/home/rloo/anvil/anvil/__main__.py", line 217, in main
  run(args)
File "/home/rloo/anvil/anvil/__main__.py", line 121, in run
  runner.run(persona_obj)
File "/home/rloo/anvil/anvil/actions/base.py", line 341, in run
  self._run(persona, component_order, instances)
File "/home/rloo/anvil/anvil/actions/install.py", line 132, in _run
  *removals
File "/home/rloo/anvil/anvil/actions/base.py", line 323, in _run_phase
  result = functors.run(instance)
File "/home/rloo/anvil/anvil/actions/install.py", line 126, in 
  run=lambda i: i.post_install(),
File "/home/rloo/anvil/anvil/components/db.py", line 134, in post_install
  **dbhelper.get_shared_passwords(self))
File "/home/rloo/anvil/anvil/components/helpers/db.py", line 93, in 
grant_permissions
  utils.execute_template(*cmds, params=params)
File "/home/rloo/anvil/anvil/utils.py", line 293, in execute_template
  **kargs)
File "/home/rloo/anvil/anvil/shell.py", line 164, in execute
  stderr=stderr, cmd=str_cmd)
  ProcessExecutionError: Unexpected error while running command.
  Command: mysql '--user=root' '--password=0d999e26efabdfb097aa' '-e' 'GRANT 
ALL PRIVILEGES ON *.* TO '\''root'\''@'\''%'\'' IDENTIFIED BY 
'\''0d999e26efabdfb097aa'\''; FLUSH PRIVILEGES;'
  Exit code: 1
  Stdout: ''
  Stderr: "ERROR 2002 (HY000): Can't connect to local MySQL server through 
socket '/var/lib/mysql/mysql.sock' (2)\n"

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210657] Re: Didn't install any keys

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1210657

Title:
  Didn't install any keys

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  The following is happening during ./smithy -a build

  INFO: @anvil.packaging.yum : Installing build requirements:
  INFO: @anvil.packaging.yum : |-- mysql-devel
  INFO: @anvil.packaging.yum : |-- python-setuptools
  INFO: @anvil.packaging.yum : |-- unzip
  INFO: @anvil.packaging.yum : |-- sqlite-devel
  INFO: @anvil.packaging.yum : |-- python
  INFO: @anvil.packaging.yum : |-- libxml2-devel
  INFO: @anvil.packaging.yum : |-- sudo
  INFO: @anvil.packaging.yum : |-- python-devel
  INFO: @anvil.packaging.yum : |-- psmisc
  INFO: @anvil.packaging.yum : |-- postgresql-devel
  INFO: @anvil.packaging.yum : |-- openldap-devel
  INFO: @anvil.packaging.yum : |-- wget
  INFO: @anvil.packaging.yum : |-- libxslt-devel
  INFO: @anvil.packaging.yum : |-- tcpdump
  INFO: @anvil.packaging.yum : |-- python-distutils-extra
  INFO: @anvil.shell : You can watch progress in another terminal with:
  INFO: @anvil.shell : tail -f 
/home/harlowja/openstack/deps/output/yyoom-transaction-install.log
   
  / It's time for the  \
  | penguin on top of  |
  | your television to |
  \ explode.   /
   
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  ProcessExecutionError: Unexpected error while running command.
  Command: '/home/harlowja/anvil/tools/yyoom' '--verbose' transaction 
'--install' 'mysql-devel' '--install' 'python-setuptools' '--install' unzip 
'--install' 'sqlite-devel' '--install' python '--install' 'libxml2-devel' 
'--install' sudo '--install' 'python-devel' '--install' psmisc '--install' 
'postgresql-devel' '--install' 'openldap-devel' '--install' wget '--install' 
'libxslt-devel' '--install' tcpdump '--install' 'python-distutils-extra'

  ... LOG OUTPUT ...

  YYOOM DEBUG: Success - deps resolved
  warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: 
NOKEY
  Importing GPG key 0x0608B895:
   Userid : EPEL (6) 
   Package: epel-release-6-8.noarch (@/epel-release-6-8.noarch)
   From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
  Traceback (most recent call last):
File "/home/harlowja/anvil/tools/yyoom", line 343, in 
  sys.exit(main(sys.argv))
File "/home/harlowja/anvil/tools/yyoom", line 335, in main
  return options.func(_get_yum_base(), options) or 0
File "/home/harlowja/anvil/tools/yyoom", line 174, in _run
  yum_base.install(**pkg)
File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
  self.gen.next()
File "/home/harlowja/anvil/tools/yyoom", line 311, in _transaction
  rpmDisplay=callback)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 4989, in 
processTransaction
  self._checkSignatures(pkgs,callback)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 5032, in 
_checkSignatures
  self.getKeyForPackage(po, self._askForGPGKeyImport)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 4754, in 
getKeyForPackage
  raise Errors.YumBaseError, _("Didn't install any keys")

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1210657/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192329] Re: MySQL-python not installed on RHEL6.2, stable/grizzly

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192329

Title:
  MySQL-python not installed on RHEL6.2, stable/grizzly

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Using anvil, stable/grizzly branch, in RHEL 6.2.

  When running "sudo ./smithy -a install --no-prompt-passwords -v', I
  get an error because MySQL-python wasn't installed. I fixed it with
  "yum install MySQL-python".

  The error:

  INFO: @anvil.actions.install : Post-installing keystone.
  INFO: @anvil.components.helpers.db : Dropping mysql database: keystone
  DEBUG: @anvil.shell : Running cmd: ['mysql', '--user=root', 
'--password=0d999e26efabdfb097aa', '-e', 'DROP DATABASE IF EXISTS keystone;']
  INFO: @anvil.components.helpers.db : Creating mysql database: keystone (utf8)
  DEBUG: @anvil.shell : Running cmd: ['mysql', '--user=root', 
'--password=0d999e26efabdfb097aa', '-e', 'CREATE DATABASE keystone CHARACTER 
SET utf8;']
  INFO: @anvil.components.keystone : Syncing keystone to database: keystone
  DEBUG: @anvil.shell : Running cmd: ['sudo', '-u', 'keystone', 
'/usr/bin/keystone-manage', '--debug', '-v', 'db_sync']
  DEBUG: @anvil.shell : In working directory: '/usr/bin'
   ___
  < You have been borked. >
   ---
\ ||   ||
  \__ ||-mm||
\ (  )/_)//
  (oo)/
  v--v
  Traceback (most recent call last):
File "/home/rloo/anvil/anvil/__main__.py", line 217, in main
  run(args)
File "/home/rloo/anvil/anvil/__main__.py", line 121, in run
  runner.run(persona_obj)
File "/home/rloo/anvil/anvil/actions/base.py", line 341, in run
  self._run(persona, component_order, instances)
File "/home/rloo/anvil/anvil/actions/install.py", line 132, in _run
  *removals
File "/home/rloo/anvil/anvil/actions/base.py", line 323, in _run_phase
  result = functors.run(instance)
File "/home/rloo/anvil/anvil/actions/install.py", line 126, in 
  run=lambda i: i.post_install(),
File "/home/rloo/anvil/anvil/components/keystone.py", line 58, in 
post_install
  self._sync_db()
File "/home/rloo/anvil/anvil/components/keystone.py", line 66, in _sync_db
  utils.execute_template(*cmds, cwd=self.bin_dir, 
params=self.config_params(None))
File "/home/rloo/anvil/anvil/utils.py", line 293, in execute_template
  **kargs)
File "/home/rloo/anvil/anvil/shell.py", line 164, in execute
  stderr=stderr, cmd=str_cmd)
  ProcessExecutionError: Unexpected error while running command.
  Command: sudo '-u' keystone '/usr/bin/keystone-manage' '--debug' '-v' 
'db_sync'
  Exit code: 1
  Stdout: ''
  Stderr: 'Traceback (most recent call last):\n  File 
"/usr/bin/keystone-manage", line 28, in \ncli.main(argv=sys.argv, 
config_files=config_files)\n  File 
"/usr/lib/python2.6/site-packages/keystone/cli.py", line 175, in main\n
CONF.command.cmd_class.main()\n  File 
"/usr/lib/python2.6/site-packages/keystone/cli.py", line 54, in main\n
driver.db_sync()\n  File 
"/usr/lib/python2.6/site-packages/keystone/identity/backends/sql.py", line 156, 
in db_sync\nmigration.db_sync()\n  File 
"/usr/lib/python2.6/site-packages/keystone/common/sql/migration.py", line 49, 
in db_sync\ncurrent_version = db_version()\n  File 
"/usr/lib/python2.6/site-packages/keystone/common/sql/migration.py", line 61, 
in db_version\nreturn versioning_api.db_version(CONF.sql.connection, 
repo_path)\n  File "", line 2, in db_version\n  File 
"/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 
155, in with_engine\nengine = construct_engine(url, **kw)\n  File "/u
 sr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 140, 
in construct_engine\nreturn create_engine(engine, **kwargs)\n  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/engine/__init__.py", line 338, 
in create_engine\nreturn strategy.create(*args, **kwargs)\n  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/engine/strategies.py", line 64, 
in create\ndbapi = dialect_cls.dbapi(**dbapi_args)\n  File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/connectors/mysqldb.py", line 52, 
in dbapi\nreturn __import__(\'MySQLdb\')\nImportError: No module named 
MySQLdb\n'
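
  The error means the MySQLdb driver that sqlalchemy's mysql:// URLs use is
  missing on the box running db_sync. As a quick pre-flight check -- a hedged
  sketch, not part of anvil -- something like this confirms the fix took:

      try:
          import MySQLdb  # provided by the MySQL-python package on RHEL/CentOS
          print("MySQLdb %s is available" % MySQLdb.__version__)
      except ImportError:
          raise SystemExit("MySQLdb missing -- install MySQL-python, then rerun db_sync")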

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192379] Re: Not uninstalling client packages

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192379

Title:
  Not uninstalling client packages

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  It appears that with the new daemon_to_package handling and so forth, we
  are now only removing the packages listed in daemon_to_packages and not
  the client packages and such. That seems broken...

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1191047] Re: Anvil continues working if pip download failed

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1191047

Title:
  Anvil continues working if pip download failed

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  If all pip download attempts fail, anvil continues building
  packages and then surely cannot install OpenStack because of missing
  dependencies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1191047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244579] Re: keystone component starting fails

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1244579

Title:
  keystone component starting fails

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Keystone component starting fails when trying to start all components.
  Looks like this is broken after "Rename keystone-all daemon to keystone" 
commit [64625bd1bbfa07c8f644a63e676b1b15cc7c2b94].

  See command output:

  > smithy -a start -v
  ...
  INFO: @anvil.actions.start : Starting keystone.
  DEBUG: @anvil.shell : Running shell cmd: u"service 'openstack-keystone-all' 
status"
  INFO: @anvil.components.base_runtime : Starting program all under component 
keystone.
  DEBUG: @anvil.shell : Running shell cmd: u"service 'openstack-keystone-all' 
start"
  ERROR: @anvil.components.base_runtime : Failed to start program all under 
component keystone.
  ...
  Traceback (most recent call last):
    File "/root/anvil/anvil/__main__.py", line 212, in main
  run(args)
    File "/root/anvil/anvil/__main__.py", line 117, in run
  runner.run(persona_obj)
    File "/root/anvil/anvil/actions/base.py", line 337, in run
  self._run(persona, component_order, instances)
    File "/root/anvil/anvil/actions/start.py", line 53, in _run
  *removals
    File "/root/anvil/anvil/actions/base.py", line 319, in _run_phase
  result = functors.run(instance)
    File "/root/anvil/anvil/actions/start.py", line 47, in 
  run=lambda i: i.start(),
    File "/root/anvil/anvil/components/base_runtime.py", line 176, in start
  self.name))
  RuntimeError: Failed to start all for component keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1244579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189716] Re: Not correctly removing past phases

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1189716

Title:
  Not correctly removing past phases

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When installing action finishes, we need to remove any previous
  uninstall 'tracking' data.

  Same with package uninstalling (should remove the package install
  'tracking data) and so on...

  If this doesn't occur then phases + actions will be skipped if
  repeated.

  To see this try the following.

  Prepare->install->uninstall->install (it doesn't work since the
  uninstall didn't remove the install tracking mark/data)
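
  As a rough sketch of the idea (file-based marks with illustrative names, not
  anvil's real trace API), completing one action should clear the opposite
  action's mark so the next run is not skipped:

      import os

      def mark_done(trace_dir, action):
          open(os.path.join(trace_dir, action + '.mark'), 'w').close()

      def clear_mark(trace_dir, action):
          path = os.path.join(trace_dir, action + '.mark')
          if os.path.exists(path):
              os.unlink(path)

      def finish_install(trace_dir):
          mark_done(trace_dir, 'install')
          # Without this, a prepare->install->uninstall->install cycle would
          # skip the second install because a stale mark is still present.
          clear_mark(trace_dir, 'uninstall')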

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1189716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189708] Re: Keystone missing keystone-paste.ini

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1189708

Title:
  Keystone missing keystone-paste.ini

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  It seems like keystone now splits the paste and regular config. We need to
  make sure we install that paste config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1189708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189707] Re: Ensure anvil doesn't remove packages it depends on

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1189707

Title:
  Ensure anvil doesn't remove packages it depends on

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  When uninstalling, it seems like anvil can remove its own packages (or
  other packages it requires). That's not so good.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1189707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192766] Re: Config files now in rpms (being automatically removed)

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1192766

Title:
  Config files now in rpms (being automatically removed)

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Due to how y! integration works, the inclusion of RPMs having
  configuration files now means that the removal of said RPMs triggers
  the removal of the configuration files. This seems like new behavior
  that we may not want for the time being.

  Thoughts??

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1192766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1196623] Re: Enable pylint/pep8 gating checks.

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1196623

Title:
  Enable pylint/pep8 gating checks.

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Enable pylint/pep8 gating checks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1196623/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1208288] Re: Hacking checks are not really enabled

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1208288

Title:
  Hacking checks are not really enabled

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  Hacking checks are not really enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1208288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1227311] Re: smithy start fails due to keyring dependency

2013-12-13 Thread Joshua Harlow
** Changed in: anvil
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1227311

Title:
  smithy start fails due to keyring dependency

Status in ANVIL for forging OpenStack.:
  Fix Released

Bug description:
  As per
  
https://github.com/openstack/requirements/commit/d1c6021aae9b0892e7e9f8d7660476ba13a75071
  in openstack's global requirements, keyring should be pinned to < 2.0.

  Getting a version greater than 2.0 results in:

  sudo ./smithy -a start
  Traceback (most recent call last):
File "/usr/lib64/python2.6/runpy.py", line 122, in _run_module_as_main
  "__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.6/runpy.py", line 34, in _run_code
  exec code in run_globals
File "/home/vagrant/anvil/anvil/__main__.py", line 29, in 
  from anvil import actions
File "/home/vagrant/anvil/anvil/actions/__init__.py", line 17, in 
  from anvil.actions import build
File "/home/vagrant/anvil/anvil/actions/build.py", line 18, in 
  from anvil.actions import base as action
File "/home/vagrant/anvil/anvil/actions/base.py", line 26, in 
  from anvil import passwords as pw
File "/home/vagrant/anvil/anvil/passwords.py", line 22, in 
  from keyring.backend import CryptedFileKeyring
  ImportError: cannot import name CryptedFileKeyring
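
  For illustration only -- the actual resolution was pinning keyring < 2.0 in
  the requirements -- a version-tolerant import would look roughly like the
  sketch below; the keyring 2.x class name here is an assumption:

      try:
          # keyring < 2.0 layout, as used in anvil/passwords.py
          from keyring.backend import CryptedFileKeyring
      except ImportError:
          # keyring >= 2.0 moved the file-based backends (assumed location)
          from keyring.backends.file import EncryptedKeyring as CryptedFileKeyring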

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1227311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245861] Re: server not paused after pause request

2013-12-13 Thread Matt Riedemann
*** This bug is a duplicate of bug 1226412 ***
https://bugs.launchpad.net/bugs/1226412

There are a few tests failing with the same error: the instance isn't
reaching the PAUSED state within the timeout, so those are being tracked in
bug 1226412.

** This bug has been marked a duplicate of bug 1226412
   guest doesn't reach PAUSED state within 200s in the gate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245861

Title:
  server not paused after pause request

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_pause_unpause_server

  2013-10-29 10:01:25.476 | Traceback (most recent call last):
  2013-10-29 10:01:25.476 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 232, in 
test_pause_unpause_server
  2013-10-29 10:01:25.477 | 
self.client.wait_for_server_status(self.server_id, 'PAUSED')
  2013-10-29 10:01:25.477 |   File 
"tempest/services/compute/json/servers_client.py", line 159, in 
wait_for_server_status
  2013-10-29 10:01:25.477 | return waiters.wait_for_server_status(self, 
server_id, status)
  2013-10-29 10:01:25.478 |   File "tempest/common/waiters.py", line 80, in 
wait_for_server_status
  2013-10-29 10:01:25.478 | raise exceptions.TimeoutException(message)
  2013-10-29 10:01:25.478 | TimeoutException: Request timed out
  2013-10-29 10:01:25.479 | Details: Server 
4f67710d-9ef4-4215-8212-e90ab9eea55a failed to reach PAUSED status within the 
required time (400 s). Current status: ACTIVE.

  Related api and cpu events:
  
http://logs.openstack.org/22/50122/9/check/check-tempest-devstack-vm-postgres-full/fc0e7c3/logs/screen-n-api.txt.gz#_2013-10-29_09_38_05_153
  
http://logs.openstack.org/22/50122/9/check/check-tempest-devstack-vm-postgres-full/fc0e7c3/logs/screen-n-cpu.txt.gz#_2013-10-29_09_38_05_262

  No exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1245861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260876] [NEW] VMware Soft Reboot not honored

2013-12-13 Thread Shawn Hartsock
Public bug reported:

When a soft reboot is requested by the user, the VMware driver issues a
hard boot instead.

** Affects: nova
 Importance: Undecided
 Assignee: Shawn Hartsock (hartsock)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260876

Title:
  VMware Soft Reboot not honored

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a soft reboot is requested by the user, the VMware driver issues
  a hard boot instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260864] [NEW] Hard to determine current location in containers panel with pseudo-folders

2013-12-13 Thread Rob Raymond
Public bug reported:

As a user navigates inside a hierarchy of pseudo-folders, it is
difficult to know where they are. As a result, users may upload an object
but not be able to find where in the hierarchy it was uploaded.

Adding a hyperlinked breadcrumb would remind user of location and give
them a way to navigate.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Raymond (rob-raymond)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Raymond (rob-raymond)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260864

Title:
  Hard to determine current location in containers panel with pseudo-
  folders

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As a user navigates inside a hierarchy of pseudo-folders, it is
  difficult to know where they are. As a result, users may upload an
  object but not be able to find where in the hierarchy it was uploaded.

  Adding a hyperlinked breadcrumb would remind user of location and give
  them a way to navigate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260864/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260859] [NEW] hard for user to distinguish pseudo-folders from objects

2013-12-13 Thread Rob Raymond
Public bug reported:

Currently, when we show contents in the containers panel, it is hard for the user to 
tell what is a pseudo-folder and what is an object.
We could use the column that is currently blank for pseudo-folders to indicate 
what it is. This column shows the size for objects, so there would be no conflict.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Raymond (rob-raymond)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Raymond (rob-raymond)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260859

Title:
  hard for user to distinguish pseudo-folders from objects

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently, when we show contents in the containers panel, it is hard for the user 
to tell what is a pseudo-folder and what is an object.
  We could use the column that is currently blank for pseudo-folders to 
indicate what it is. This column shows the size for objects, so there would be no 
conflict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260853] [NEW] EC2 Client Tokens aren't reported in DescribeInstances

2013-12-13 Thread Burt Holzman
Public bug reported:

RunInstances now supports client tokens to allow idempotent RunInstance
calls, but DescribeInstances doesn't return any information on them.
According to the EC2 API examples, it should be wrapped in 
tags.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-
query-DescribeInstances.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ec2

** Tags added: ec2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260853

Title:
  EC2 Client Tokens aren't reported in DescribeInstances

Status in OpenStack Compute (Nova):
  New

Bug description:
  RunInstances now supports client tokens to allow idempotent
  RunInstance calls, but DescribeInstances doesn't return any
  information on them. According to the EC2 API examples, it should be
  wrapped in  tags.

  http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-
  query-DescribeInstances.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245629] Re: keystone is ignoring debug=True

2013-12-13 Thread Brant Knudson
** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: oslo
 Assignee: (unassigned) => Brant Knudson (blk-u)

** Changed in: keystone
Milestone: None => icehouse-2

** Changed in: keystone
   Status: New => In Progress

** Changed in: oslo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1245629

Title:
  keystone is ignoring debug=True

Status in OpenStack Identity (Keystone):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  even after setting debug = True and verbose = True in keystone.conf,
  default_log_levels["keystone"] stays on INFO, preventing (for example)
  the identity drivers from producing any debug output, thus making it
  impossible to track problems with them.
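
  The symptom can be illustrated with plain Python logging; this is only a
  sketch of the default_log_levels behaviour described above, not keystone code:

      import logging

      # Handlers are set up at DEBUG (what debug = True should give you) ...
      logging.basicConfig(level=logging.DEBUG)
      # ... but a default_log_levels-style override pins 'keystone' to INFO,
      logging.getLogger('keystone').setLevel(logging.INFO)

      # so driver debug output is dropped while info output still appears.
      logging.getLogger('keystone.identity.backends.sql').debug('suppressed')
      logging.getLogger('keystone.identity.backends.sql').info('visible')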

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1245629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260806] [NEW] Defaulting device names fails to update the database

2013-12-13 Thread Nikola Đipanov
Public bug reported:

_default_block_device_names method of the compute manager, would call
the conductor block_device_mapping_update method with the wrong
arguments, causing a TypeError and ultimately the instance to fail.

This bug happens only when using a driver that does not provide its own
implementation of default_device_names_for_instance (currently only the
libvirt driver does this).

Also affects havana since https://review.openstack.org/#/c/40229/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260806

Title:
  Defaulting device names fails to update the database

Status in OpenStack Compute (Nova):
  New

Bug description:
  _default_block_device_names method of the compute manager, would call
  the conductor block_device_mapping_update method with the wrong
  arguments, causing a TypeError and ultimately the instance to fail.

  This bug happens only when using a driver that does not provide its
  own implementation of default_device_names_for_instance (currently
  only the libvirt driver does this).

  Also affects havana since https://review.openstack.org/#/c/40229/
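
  As an illustration of the failure mode (the signature below is a stand-in,
  not the real conductor API), a call with the wrong arguments fails with a
  TypeError before any database update can happen:

      def block_device_mapping_update(context, bdm_id, values):
          """Stand-in with an assumed three-argument signature."""
          return values

      try:
          # The caller passes the values dict where bdm_id is expected
          # and nothing else, as the report describes.
          block_device_mapping_update({}, {'device_name': '/dev/vda1'})
      except TypeError as exc:
          print(exc)  # e.g. "takes exactly 3 arguments (2 given)"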

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260791] [NEW] ovs agents flapping

2013-12-13 Thread Robert Pothier
Public bug reported:

During deployment of instances using nova boot commands, I noticed that
the alive status of the Open vSwitch agents on the compute nodes flips from
xxx to :-) and back at random, for all the compute nodes.

Getting the below traceback, the issue seems to be similar to a NEC agent bug.
https://bugs.launchpad.net/neutron/+bug/1235106

2013-11-26 20:07:00.941 16044 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 
438, in _process_data
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
**args)
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
102, in security_groups_provider_updated
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
self.sg_agent.security_groups_provider_updated()
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
151, in security_groups_provider_updated
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
self.refresh_firewall()
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
175, in refresh_firewall
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
self.context, device_ids)
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
58, in security_group_rules_for_devices
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
topic=self.topic)
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/proxy.py", line 
130, in call
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp 
exc.info, real_topic, msg.get('method'))
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp Timeout: 
Timeout while waiting on RPC response - topic: "q-plugin", RPC method: 
"security_group_rules_for_devices" info: ""
2013-11-26 20:07:00.941 16044 TRACE neutron.openstack.common.rpc.amqp
2013-11-26 20:07:00.942 16044 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 
438, in _process_data
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
**args)
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
102, in security_groups_provider_updated
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
self.sg_agent.security_groups_provider_updated()
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
151, in security_groups_provider_updated
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
self.refresh_firewall()
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
175, in refresh_firewall
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
self.context, device_ids)
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/securitygroups_rpc.py", line 
58, in security_group_rules_for_devices
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp 
topic=self.topic)
2013-11-26 20:07:00.942 16044 TRACE neutron.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/proxy.py

[Yahoo-eng-team] [Bug 1260782] [NEW] continuous rpc errors

2013-12-13 Thread Robert Pothier
Public bug reported:

Seeing continuous RPC errors with Havana.


2013-12-13 15:34:37.621 8619 ERROR neutron.openstack.common.rpc.common [-] 
Failed to publish message to topic 'reply_e4c0169d68c64218a798b508002f32ee': 
[Errno 104] Connection reset by peer
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
Traceback (most recent call last):
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 565, in ensure
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
return method(*args, **kwargs)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 676, in _publish
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
publisher = cls(self.conf, self.channel, topic, **kwargs)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 332, in __init__
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
type='direct', **options)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 298, in __init__
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self.reconnect(channel)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/impl_kombu.py", 
line 306, in reconnect
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
routing_key=self.routing_key)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 82, in 
__init__
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self.revive(self._channel)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 216, in revive
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self.declare()
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 102, in 
declare
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self.exchange.declare()
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
nowait=nowait, passive=passive,
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 604, in 
exchange_declare
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self._send_method((40, 10), args)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 62, in 
_send_method
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
self.channel_id, method_sig, args, content,
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 227, in 
write_method
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
write_frame(1, channel, payload)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/local/lib/python2.7/dist-packages/amqp/transport.py", line 183, in 
write_frame
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
frame_type, channel, size, payload, 0xce,
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 307, in sendall
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common tail 
= self.send(data, flags)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenio.py", line 293, in send
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common 
total_sent += fd.send(data[total_sent:], flags)
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common error: 
[Errno 104] Connection reset by peer
2013-12-13 15:34:37.621 8619 TRACE neutron.openstack.common.rpc.common

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260782

Title:
  continuous rpc errors

Status in Op

[Yahoo-eng-team] [Bug 1260771] [NEW] Fake compute driver cannot deploy image with hypervisor_type attribute

2013-12-13 Thread Cedric Brandily
Public bug reported:

The fake compute driver does not provide the supported_instances attribute to 
the ImagePropertiesFilter scheduler filter, so ImagePropertiesFilter refuses to 
deploy images with the hypervisor_type=fake property on fake computes.


Consequently, fake computes cannot be used in deployments that mix hypervisor 
types, because in that case the hypervisor_type property on images is mandatory 
to avoid scheduling an image for one hypervisor_type onto a compute of another 
hypervisor_type.

** Affects: nova
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: grizzly-backport-potential havana-backport-potential

** Changed in: nova
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260771

Title:
  Fake compute driver cannot deploy image with hypervisor_type attribute

Status in OpenStack Compute (Nova):
  New

Bug description:
  The fake compute driver does not provide the supported_instances attribute to 
the ImagePropertiesFilter scheduler filter, so ImagePropertiesFilter refuses to 
deploy images with the hypervisor_type=fake property on fake computes.

  
  Consequently, fake computes cannot be used in deployments that mix hypervisor 
types, because in that case the hypervisor_type property on images is mandatory 
to avoid scheduling an image for one hypervisor_type onto a compute of another 
hypervisor_type.
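
  A hedged sketch of what the report implies is missing -- the key name follows
  the scheduler filter's expectations, and this is not nova's actual fake
  driver code:

      # ImagePropertiesFilter matches an image's hypervisor_type against the
      # (arch, hypervisor_type, vm_mode) tuples a driver advertises.
      FAKE_SUPPORTED_INSTANCES = [('x86_64', 'fake', 'hvm'), ('i686', 'fake', 'hvm')]

      def get_available_resource(nodename):
          # Only the key relevant to the filter is shown; a real driver also
          # reports memory, vcpus, local disk and so on.
          return {'supported_instances': FAKE_SUPPORTED_INSTANCES}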

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260717] Re: Notifications are not sent from Glance

2013-12-13 Thread Nadya Privalova
** Project changed: glance => ceilometer

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1260717

Title:
  Notifications are not sent from Glance

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When we were testing Ceilometer it was found that there are no notifications 
from Glance. 
  We found out that before moving to oslo.messaging everything was fine. 

  Ceilometer was excluded from the test completely: we just monitored the Rabbit 
queue and no messages appeared.
  Configs in glance-api.conf:
  notifier_strategy = rabbit

  From the code I see that notifier_strategy is deprecated in favor of
  notification_driver, but currently notifier_strategy = rabbit
  <=> notification_driver = messaging anyway.

  One more note: only the reply_%UUID% topic appeared in Rabbit when the image
  was created. And no messages at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1260717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1213215] Re: Volumes left in Error state prevent tear down of nova compute resource

2013-12-13 Thread John Griffith
The issue from the perspective of the Cinder delete is that the tempest
min scenario test doesn't bother to deal with things like failures in
its sequence.  What's happening here is that the ssh check raises a
timeout exception which is not handled and blows things up.  So we dump
out of the scenario test and try to do cleanup, which is fine, but we left
the instance in its current state with a volume attached.

From the volume perspective, just catch the exception and do some proper
cleanup.  I'll put a patch up in tempest in a moment to at least
address that portion of it.
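
A generic sketch of that suggestion (the client object and its methods are
placeholders, not tempest's real helpers): catch the timeout, detach and delete
explicitly, then re-raise so the test still fails without leaving an attached
volume behind.

    def run_scenario(client, server, volume):
        client.attach_volume(server, volume)
        try:
            client.check_ssh(server)  # the step that can raise a timeout
        except Exception:
            # Clean up in reverse order before re-raising, so teardown never
            # has to delete a volume that is still attached.
            client.detach_volume(server, volume)
            client.delete_server(server)
            raise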

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213215

Title:
  Volumes left in Error state prevent tear down of nova compute resource

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Confirmed

Bug description:
  Occasionally running tempest in parallel will fail several tests with
  timeout errors. The only nontimeout failure message is that the
  ServerRescueTest failed to delete a volume because it was still marked
  as in use. My guess is that the leftover volume is somehow interfering
  with the other tests causing them to timeout. But, I haven't looked at
  the logs in detail so it's just a wild guess.

  
  2013-08-16 14:11:42.074 | 
==
  2013-08-16 14:11:42.075 | FAIL: 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_rebuild_server_with_auto_disk_config[gate]
  2013-08-16 14:11:42.075 | 
--
  2013-08-16 14:11:42.075 | _StringException: Empty attachments:
  2013-08-16 14:11:42.075 |   stderr
  2013-08-16 14:11:42.076 |   stdout
  2013-08-16 14:11:42.076 | 
  2013-08-16 14:11:42.076 | Traceback (most recent call last):
  2013-08-16 14:11:42.076 |   File 
"tempest/api/compute/servers/test_disk_config.py", line 64, in 
test_rebuild_server_with_auto_disk_config
  2013-08-16 14:11:42.076 | wait_until='ACTIVE')
  2013-08-16 14:11:42.076 |   File "tempest/api/compute/base.py", line 140, in 
create_server
  2013-08-16 14:11:42.076 | server['id'], kwargs['wait_until'])
  2013-08-16 14:11:42.077 |   File 
"tempest/services/compute/json/servers_client.py", line 160, in 
wait_for_server_status
  2013-08-16 14:11:42.077 | time.sleep(self.build_interval)
  2013-08-16 14:11:42.077 |   File 
"/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 
52, in signal_handler
  2013-08-16 14:11:42.077 | raise TimeoutException()
  2013-08-16 14:11:42.077 | TimeoutException
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.077 | 
  2013-08-16 14:11:42.078 | 
==
  2013-08-16 14:11:42.078 | FAIL: setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | setUpClass 
(tempest.api.compute.images.test_image_metadata.ImagesMetadataTestXML)
  2013-08-16 14:11:42.078 | 
--
  2013-08-16 14:11:42.078 | _StringException: Traceback (most recent call last):
  2013-08-16 14:11:42.078 |   File 
"tempest/api/compute/images/test_image_metadata.py", line 46, in setUpClass
  2013-08-16 14:11:42.078 | cls.client.wait_for_image_status(cls.image_id, 
'ACTIVE')
  2013-08-16 14:11:42.079 |   File 
"tempest/services/compute/xml/images_client.py", line 167, in 
wait_for_image_status
  2013-08-16 14:11:42.079 | raise exceptions.TimeoutException
  2013-08-16 14:11:42.079 | TimeoutException: Request timed out
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
  2013-08-16 14:11:42.079 | 
==
  2013-08-16 14:11:42.079 | FAIL: 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_detach_volume[gate,negative]
  2013-08-16 14:11:42.080 | 
--
  2013-08-16 14:11:42.080 | _StringException: Empty attachments:
  2013-08-16 14:11:42.080 |   stderr
  2013-08-16 14:11:42.080 |   stdout
  2013-08-16 14:11:42.080 | 
  2013-08-16 14:11:42.081 | Traceback (most recent call last):
  2013-08-16 14:11:42.081 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 184, in 
test_rescued_vm_detach_volume
  2013-08-16 14:11:42.081 | 
self.servers_client.wait_for_server_status(self.server_id, 'RESCUE')
  2013-08-16 14:11:42.081 |   File 
"tempest/services/compute/json/serv

[Yahoo-eng-team] [Bug 1259440] Re: Cannot get info of a trust use admin_token

2013-12-13 Thread Dolph Mathews
The admin_token does not represent a user and carries no explicit
authorization that can be delegated. It's just a magical hack for
bootstrapping keystone and should be removed from the wsgi pipeline
after that.

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1259440

Title:
  Cannot get info of a trust use admin_token

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Hi all.

  When I tested the latest code on master, I found the following bug:
  If I use the admin_token which is in keystone.conf to access some V3 APIs of 
trust, I will get this error message: '{"error": {"message": "Could not find 
token, tokentoken.", "code": 404, "title": "Not Found"}}'.

  The apis are as follow:
  curl -H "X-Auth-Token:tokentoken" -H "Content-Type:application/json" 
http://127.0.0.1:35357/v3/OS-TRUST/trusts/2a096d2e24f54429a744e388fa292c12
  curl -X DELETE -H "X-Auth-Token:tokentoken" -H 
"Content-Type:application/json" 
http://127.0.0.1:35357/v3/OS-TRUST/trusts/2a096d2e24f54429a744e388fa292c12
  curl -X HEAD -H "X-Auth-Token:tokentoken" -H "Content-Type:application/json" 
http://127.0.0.1:35357/v3/OS-TRUST/trusts/2a096d2e24f54429a744e388fa292c12/roles/4238a14d2fd34cf68e8f1ae7f2fb8f8a
  curl -i -X GET -H "X-Auth-Token:tokentoken" -H 
"Content-Type:application/json" 
http://127.0.0.1:35357/v3/OS-TRUST/trusts/2a096d2e24f54429a744e388fa292c12/roles

  The reason is that the function named "_get_user_id" in the file of
  "trust.controllers.py" will try to get token info from context.
  Because admin_token is not in db, a 404 exception will be raised.

  def _get_user_id(self, context):
      if 'token_id' in context:
          token_id = context['token_id']
          token = self.token_api.get_token(token_id)
          user_id = token['user']['id']
          return user_id
      return None
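
  For illustration, here is a hedged variant of that helper: the bootstrap
  admin_token is never persisted in the token backend, so get_token raises a
  not-found error that surfaces as the 404 above. The exception class is an
  assumption and this is not keystone's actual code.

      from keystone import exception  # assumed import path

      def _get_user_id(self, context):
          token_id = context.get('token_id')
          if token_id is None:
              return None
          try:
              token = self.token_api.get_token(token_id)
          except exception.TokenNotFound:
              # e.g. the bootstrap admin_token, which has no backing token
              return None
          return token['user']['id']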

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1259440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239956] Re: Remove obsolete code of urlparse importing

2013-12-13 Thread Dolph Mathews
Yep! It looks like the move to six resolved this.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1239956

Title:
  Remove obsolete code of urlparse importing

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Python 2.5 is not supported anymore, and in
  keystoneclient/httpclient.py, urlparse is useless.

  keystoneclient is depended on by a dozen projects, and "import urlparse"
  will cause py33 issues. So, clean it up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1239956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257648] Re: keystone multi processes can not work

2013-12-13 Thread Dolph Mathews
Keystone does not yet support multiple workers. There's a patch from
late in the havana cycle to introduce it, however.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257648

Title:
  keystone multi processes can not work

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  In havana keystone, I found the code for running multiple processes, but it
  is never called, so keystone still cannot run with multiple processes.

  I found the code in /keystone/openstack/common/service.py, such as the
  classes ProcessLauncher and Launcher.  But there is nowhere to call them.
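
  For reference, a hedged sketch of how that unused machinery would be driven
  if it were wired up (the dummy service and worker count are assumptions, not
  keystone code):

      from keystone.openstack.common import service

      class DummyService(service.Service):
          """Placeholder standing in for keystone's WSGI service."""

      launcher = service.ProcessLauncher()
      launcher.launch_service(DummyService(), workers=4)  # fork four workers
      launcher.wait()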

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1257648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260637] Re: Missing network tests in tempest

2013-12-13 Thread Eugene Nikanorov
** Project changed: neutron => tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260637

Title:
  Missing network tests in tempest

Status in Tempest:
  New

Bug description:
  Missing tempest tests are as follows:

  Creation of shared networks by an admin user

  Creation of a network by an admin user setting a tenant id

  Negative: update a non-existent network

  Negative: update forbidden attributes: status, tenant_id, id

  Delete a subnet by deleting the network it is associated with

  Negative: delete a non-existent network

  Negative: delete a network associated with a subnet with active ports

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1260637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250331] Re: Tempest Running hang on case test_load_balancer.LoadBalancerXML.test_create_update_delete_pool_vip

2013-12-13 Thread Sean Dague
Realistically the fact that your qpid crashes on powerpc means that I
think you have much deeper issues with your environment. This might be a
neutron bug, but it doesn't seem like there is enough info for that team
to dive into it.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1250331

Title:
  Tempest Running hang on case
  test_load_balancer.LoadBalancerXML.test_create_update_delete_pool_vip

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  Invalid

Bug description:
  When we run the tempest test bucket against our OpenStack env on ppc64, 
sometimes it will hang on the case 
tempest.api.network.test_load_balancer.LoadBalancerXML.test_create_update_delete_pool_vip
 for a long time.
  We have to restart the qpidd and neutron services, then the env goes back to 
normal.
  Note that the issue doesn't happen every time, but we have hit it several times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1250331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260738] [NEW] image is creating with option size= a negative number

2013-12-13 Thread anju Tiwari
Public bug reported:

I just tried to create an image by giving size = -1,
and the image was created successfully.


 glance image-create --name cirros --is-public true --container-format
bare --disk-format qcow2 --location
https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk
--size -1


+--+--+
| Property | Value|
+--+--+
| checksum | None |
| container_format | bare |
| created_at   | 2013-12-13T13:48:07  |
| deleted  | False|
| deleted_at   | None |
| disk_format  | qcow2|
| id   | 2da4e4f9-5f1a-4c8d-a67c-272588e2efbc |
| is_public| True |
| min_disk | 0|
| min_ram  | 0|
| name | cirros   |
| owner| 6a2db75adb964c5b84010fa22b464715 |
| protected| False|
| size | -1   |
| status   | active   |
| updated_at   | 2013-12-13T13:48:38  |
+--+--+

** Affects: glance
 Importance: Undecided
 Assignee: anju Tiwari (anjutiwari5)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => anju Tiwari (anjutiwari5)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1260738

Title:
  image is creating with option size= a negative number

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I just tried to create an image by giving size = -1,
  and the image was created successfully.


   glance image-create --name cirros --is-public true --container-format
  bare --disk-format qcow2 --location
  https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk
  --size -1


  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2013-12-13T13:48:07  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | qcow2|
  | id   | 2da4e4f9-5f1a-4c8d-a67c-272588e2efbc |
  | is_public| True |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros   |
  | owner| 6a2db75adb964c5b84010fa22b464715 |
  | protected| False|
  | size | -1   |
  | status   | active   |
  | updated_at   | 2013-12-13T13:48:38  |
  +--+--+
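
  A minimal sketch of the kind of check the report implies is missing (not
  glance's actual validation code):

      def validate_image_size(size):
          """Reject negative sizes before the image record is created."""
          if size is None:
              return None
          size = int(size)
          if size < 0:
              raise ValueError("size must be a non-negative integer, got %d" % size)
          return size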

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1260738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260644] Re: ServerRescueTest may fail due to RESCUE taking too long

2013-12-13 Thread Sean Dague
This looks like the root cause is Nova exploding on the transition. I'm
going to mark the Tempest side invalid.

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260644

Title:
  ServerRescueTest may fail due to RESCUE taking too long

Status in OpenStack Compute (Nova):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  In the grenade test [0] for a bp I'm working on, ServerRescueTestXML
  rescue_unrescue test failed because the VM did not get into RESCUE
  state in time. It seems that the test is flaky.

  From the tempest log [1] I see the sequence VM ACTIVE, RESCUE issued,
  WAIT, timeout, DELETE VM.

  From the nova cpu log [2], following request ID req-6c20654c-
  c00c-4932-87ad-8cfec9866399, I see that the RESCUE RPC is received
  immediately by n-cpu; however, the request then starves for 3 minutes
  waiting for a "compute_resources" lock.

  The VM is then deleted by the test, and when nova tries to process the
  RESCUE it throws an exception as the VM is not there:

  bc-b27a-83c39b7566c8] Traceback (most recent call last):
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/compute/manager.py", 
line 2664, in rescue_instance
  bc-b27a-83c39b7566c8] rescue_image_meta, admin_password)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2109, in rescue
  bc-b27a-83c39b7566c8] write_to_disk=True)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3236, in to_xml
  bc-b27a-83c39b7566c8] libvirt_utils.write_to_file(xml_path, xml)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 494, in write_to_file
  bc-b27a-83c39b7566c8] with open(path, 'w') as f:
  bc-b27a-83c39b7566c8] IOError: [Errno 2] No such file or directory: 
u'/opt/stack/data/nova/instances/a5099beb-f4a2-47bc-b27a-83c39b7566c8/libvirt.xml'
  bc-b27a-83c39b7566c8] 

  There may be a problem in nova as well, as RESCUE is held for 3
  minutes waiting on a lock.

  [0] https://review.openstack.org/#/c/60434/
  [1] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/tempest.txt.gz
  [2] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/new/screen-n-cpu.txt.gz?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2013-12-13 Thread Alan Pevec
** No longer affects: glance/folsom

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance grizzly series:
  In Progress
Status in Glance havana series:
  Fix Committed

Bug description:
  Now we're running into the Jenkins failure due to below test case
  failure:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py",
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260731] [NEW] On an external network dhcp create a port which is not isolated

2013-12-13 Thread Sylvain Afchain
Public bug reported:

The DHCP port on an external network is not isolated/protected, thus an
external IP is able to query the dnsmasq DNS. If there is a resolver
in the dhcp_conf file (dns_dnsmasq_server), an external user is able to
access this server through the dnsmasq port. This could be a security
issue.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1260731

Title:
  On an external network dhcp create a port which is not isolated

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The DHCP port on an external network is not isolated/protected, so an
  external IP is able to query the dnsmasq DNS service. If a resolver is
  configured in the dhcp_conf file (dns_dnsmasq_server), an external user
  is able to reach that server through the dnsmasq port. This could be a
  security issue.
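
  As a possible mitigation sketch (an assumption on my part, not the
  project's agreed fix): when a dnsmasq instance only needs to serve DHCP,
  its DNS listener can be disabled entirely, which keeps any configured
  resolver from being reachable from the outside. In dnsmasq's own
  configuration syntax that would be:

  # dnsmasq option sketch: port=0 disables the DNS function completely,
  # leaving only DHCP (and TFTP) service
  port=0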

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1260731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260717] [NEW] Notifications are not sent from Glance

2013-12-13 Thread Nadya Privalova
Public bug reported:

While testing Ceilometer we found that no notifications are sent from
Glance.
We found out that before moving to oslo.messaging everything was fine.

Ceilometer was excluded from the test completely: we just monitored the
Rabbit queue and no messages appeared.
Configs in glance-api.conf:
notifier_strategy = rabbit

From the code I see that notifier_strategy is deprecated in favor of
notification_driver, but currently notifier_strategy = rabbit is
equivalent to notification_driver = messaging.

One more note: only a reply_%UUID% topic appeared in Rabbit when an image
was created, and no messages at all.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1260717

Title:
  Notifications are not sent from Glance

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  While testing Ceilometer we found that no notifications are sent from
  Glance.
  We found out that before moving to oslo.messaging everything was fine.

  Ceilometer was excluded from the test completely: we just monitored the
  Rabbit queue and no messages appeared.
  Configs in glance-api.conf:
  notifier_strategy = rabbit

  From the code I see that notifier_strategy is deprecated in favor of
  notification_driver, but currently notifier_strategy = rabbit is
  equivalent to notification_driver = messaging.

  One more note: only a reply_%UUID% topic appeared in Rabbit when an
  image was created, and no messages at all.
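
  For reference, a minimal glance-api.conf sketch of the two settings
  discussed above; the rabbit_host value is only an illustrative assumption
  for a local broker:

  [DEFAULT]
  # deprecated option, still read by Havana-era code
  notifier_strategy = rabbit
  # oslo.messaging-style equivalent that should select the same driver
  notification_driver = messaging
  # hypothetical broker location, adjust to the deployment
  rabbit_host = localhost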

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1260717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255419] Re: jenkins tests fail for neutron/grizzly due to iso8601 version requirement conflict

2013-12-13 Thread Alan Pevec
*** This bug is a duplicate of bug 1242501 ***
https://bugs.launchpad.net/bugs/1242501

** This bug has been marked a duplicate of bug 1242501
   Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1255419

Title:
  jenkins tests fail for neutron/grizzly due to iso8601 version
  requirement conflict

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Glance grizzly series:
  In Progress
Status in OpenStack Core Infrastructure:
  Invalid
Status in Tempest:
  Invalid
Status in tempest grizzly series:
  Invalid

Bug description:
  2013-11-27 02:51:09.989 | 2013-11-27 02:51:09 Installed /opt/stack/new/neutron
  2013-11-27 02:51:09.990 | 2013-11-27 02:51:09 Processing dependencies for 
quantum==2013.1.5.a1.g666826a
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 error: Installed distribution 
iso8601 0.1.4 conflicts with requirement iso8601>=0.1.8
  2013-11-27 02:51:09.991 | 2013-11-27 02:51:09 ++ failed
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 ++ local r=1
  2013-11-27 02:51:09.993 | 2013-11-27 02:51:09 +++ jobs -p
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ kill
  2013-11-27 02:51:09.994 | 2013-11-27 02:51:09 ++ set +o xtrace
  2013-11-27 02:51:09.995 | 2013-11-27 02:51:09 stack.sh failed: full log in 
/opt/stack/new/devstacklog.txt.2013-11-27-024805

  full log https://jenkins02.openstack.org/job/periodic-tempest-
  devstack-vm-neutron-stable-grizzly/43/console

  the root cause is that iso8601 was recently updated to >=0.1.8 and
  python-novaclient was updated to match, but the stable glance
  requirement is iso8601<=0.1.4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1255419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242501] Re: Jenkins failed due to TestGlanceAPI.test_get_details_filter_changes_since

2013-12-13 Thread Alan Pevec
** Changed in: glance/folsom
   Status: In Progress => Won't Fix

** Changed in: glance/folsom
 Assignee: Alan Pevec (apevec) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1242501

Title:
  Jenkins failed due to
  TestGlanceAPI.test_get_details_filter_changes_since

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance folsom series:
  Won't Fix
Status in Glance grizzly series:
  In Progress
Status in Glance havana series:
  Fix Committed

Bug description:
  Now we're running into the Jenkins failure due to below test case
  failure:

  2013-10-20 06:12:31.930 | 
==
  2013-10-20 06:12:31.930 | FAIL: 
glance.tests.unit.v1.test_api.TestGlanceAPI.test_get_details_filter_changes_since
  2013-10-20 06:12:31.930 | 
--
  2013-10-20 06:12:31.930 | _StringException: Traceback (most recent call last):
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/glance/tests/unit/v1/test_api.py",
 line 1358, in test_get_details_filter_changes_since
  2013-10-20 06:12:31.931 | self.assertEquals(res.status_int, 400)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 322, in assertEqual
  2013-10-20 06:12:31.931 | self.assertThat(observed, matcher, message)
  2013-10-20 06:12:31.931 |   File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertThat
  2013-10-20 06:12:31.931 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-20 06:12:31.931 | MismatchError: 200 != 400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1242501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260697] [NEW] network_device_mtu configuration does not apply on LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

2013-12-13 Thread Daniel Gollub
Public bug reported:

Due to this missing functionality the MTU cannot be increased/adapted to
specific requirements, for example configuring compute node VIFs to make
use of jumbo frames.


LinuxOVSInterfaceDriver and LinuxBridgeInterfaceDriver use
network_device_mtu to configure the MTU of some of the created VIFs.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260697

Title:
  network_device_mtu configuration does not apply on
  LibvirtHybridOVSBridgeDriver to OVS VIF ports and VETH pairs

Status in OpenStack Compute (Nova):
  New

Bug description:
  Due to this missing functionality the MTU cannot be increased/adapted
  to specific requirements, for example configuring compute node VIFs to
  make use of jumbo frames.

  
  LinuxOVSInterfaceDriver and LinuxBridgeInterfaceDriver use
  network_device_mtu to configure the MTU of some of the created VIFs.
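
  For illustration, a minimal nova.conf sketch of the option in question;
  the 9000-byte value is only an assumed jumbo-frame example, and per this
  report it is currently ignored for the OVS VIF ports and veth pairs
  created by LibvirtHybridOVSBridgeDriver:

  [DEFAULT]
  # honoured by LinuxOVSInterfaceDriver / LinuxBridgeInterfaceDriver,
  # but not by the hybrid OVS VIF driver
  network_device_mtu = 9000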

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260692] [NEW] LBaaS: stacktraces in q-lbaas while running tempest lbaas api tests

2013-12-13 Thread Oleg Bondarev
Public bug reported:

Following stacktraces appeared in q-lbaas after merging
https://review.openstack.org/#/c/40381:

2013-12-10 14:02:10.655 6158 ERROR 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
[req-0316f127-ac85-4d14-b6c6-57d58272f9e6 c7798670e00d40f7809a65f04536b969 
f21028acd29f43dc935c5aeb3950bb06] Create member 
b4b0bd55-a70e-48f5-8816-fdbcba372cc1 failed on device driver haproxy_ns
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Traceback (most 
recent call last):
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/agent_manager.py",
 line 276, in create_member
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
driver.create_member(member)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 308, in create_member
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self._refresh_device(member['pool_id'])
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 283, in _refresh_device
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self.deploy_instance(logical_config)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 279, in deploy_instance
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self.create(logical_config)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 90, in create
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self._plug(namespace, logical_config['vip']['port'])
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 246, in _plug
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
namespace=namespace
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/interface.py", line 186, in plug
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
ns_dev.link.set_address(mac_address)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 230, in set_address
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
self._as_root('set', self.name, 'address', mac_address)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 217, in _as_root
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
kwargs.get('use_root_namespace', False))
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 70, in _as_root
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager namespace)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 81, in _execute
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager 
root_helper=root_helper)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 75, in execute
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager raise 
RuntimeError(m)
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager RuntimeError: 
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalancer.drivers.haproxy.agent_manager Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 
'set', 'tapc8872a60-80', 'address', 'fa:16:3e:fe:37:64']
2013-12-10 14:02:10.655 6158 TRACE 
neutron.services.loadbalanc

[Yahoo-eng-team] [Bug 1260682] [NEW] LBaaS: stacktraces in q-svc while running tempest lbaas api tests

2013-12-13 Thread Oleg Bondarev
Public bug reported:

Following stacktraces appeared after merging
https://review.openstack.org/#/c/40381:

2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py", line 438, in 
_process_data
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp **args)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/common/rpc.py", line 45, in dispatch
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py", line 172, 
in dispatch
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/plugin_driver.py",
 line 171, in update_status
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp 
context, obj_id['monitor_id'], obj_id['pool_id'], status)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 651, 
in update_pool_health_monitor
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp assoc 
= self._get_pool_health_monitor(context, id, pool_id)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 635, 
in _get_pool_health_monitor
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp 
monitor_id=id, pool_id=pool_id)
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp 
PoolMonitorAssociationNotFound: Monitor 7cea505d-d5cb-4d3f-958a-6dd14edb65e1 is 
not associated with Pool 2ce72926-fcbb-442e-b9e6-7724ec6c472c
2013-12-12 14:34:39.961 6522 TRACE neutron.openstack.common.rpc.amqp 
2013-12-12 14:34:39.963 6522 ERROR neutron.openstack.common.rpc.common 
[req-bc48bc7a-b9dd-429f-a522-ab6ff56adfc9 None None] Returning exception 
Monitor 7cea505d-d5cb-4d3f-958a-6dd14edb65e1 is not associated with Pool 
2ce72926-fcbb-442e-b9e6-7724ec6c472c to caller
2013-12-12 14:34:39.963 6522 ERROR neutron.openstack.common.rpc.common 
[req-bc48bc7a-b9dd-429f-a522-ab6ff56adfc9 None None] ['Traceback (most recent 
call last):\n', '  File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py", line 438, in 
_process_data\n**args)\n', '  File 
"/opt/stack/new/neutron/neutron/common/rpc.py", line 45, in dispatch\n
neutron_ctxt, version, method, namespace, **kwargs)\n', '  File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py", line 172, 
in dispatch\nresult = getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/plugin_driver.py",
 line 171, in update_status\ncontext, obj_id[\'monitor_id\'], 
obj_id[\'pool_id\'], status)\n', '  File 
"/opt/stack/new/neutron/neutron/db/loadbalancer/loadbalancer_db.py", line 651, 
in update_pool_health_monitor\nassoc = 
self._get_pool_health_monitor(context, id, pool_id)\n', '  File 
"/opt/stack/new/neutron/neutron/
 db/loadbalancer/loadbalancer_db.py", line 635, in _get_pool_health_monitor\n   
 monitor_id=id, pool_id=pool_id)\n', 'PoolMonitorAssociationNotFound: Monitor 
7cea505d-d5cb-4d3f-958a-6dd14edb65e1 is not associated with Pool 
2ce72926-fcbb-442e-b9e6-7724ec6c472c\n']
2013-12-12 14:34:40.265 6522 ERROR neutron.openstack.common.rpc.amqp [-] 
Exception during message handling
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py", line 438, in 
_process_data
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp **args)
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/common/rpc.py", line 45, in dispatch
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py", line 172, 
in dispatch
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-12 14:34:40.265 6522 TRACE neutron.openstack.common.rpc.amqp   File 
"/opt/stack/new/neutron/neutron/services/loadbalancer/drivers/haproxy/plugin_driver.py",
 line 174

[Yahoo-eng-team] [Bug 1260675] [NEW] horizon less variables not available to custom dashboard less styles

2013-12-13 Thread Jiri Tomasek
Public bug reported:

Right now, it is possible to include custom stylesheets for custom dashboards 
into Horizon. (See [1])
But the way it works, it is not possible to use horizon (and, through that,
bootstrap) less variables in those custom stylesheets.

In short, the solution is to import the custom stylesheets at the end of
the horizon.less file.

Implementation:

At the end of horizon.less, @import a dashboards.less file. dashboards.less
is a generated less file that imports the list of less files named after the
dashboards that horizon includes, e.g.:
dashboards.less:
@import infrastructure.less
...

The problem is how to generate the dashboards.less file. Could it somehow
be achieved by using django-compressor?

[1] http://docs.openstack.org/developer/horizon/topics/customizing.html
#custom-stylesheets

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  Right now, it is possible to include custom stylesheets for custom dashboards 
into Horizon. (See [1])
  But the way it works, it is not possible to use horizon (and through that 
also bootstrap) less variables in those custom stylesheets.
  
  In short, solution is to import custom stylesheets at the end of
  horizon.less file.
  
  Implementation:
  
- At the end of horizon.less, @import dashboards.less file, dashboards.less is 
generated less file, that includes imports of the list of less files with the 
same name as dashboards that horizon include. eg: 
+ At the end of horizon.less, @import dashboards.less file, dashboards.less is 
generated less file, that includes imports of the list of less files with the 
same name as dashboards that horizon include. eg:
  dashboards.less:
  @import infrastructure.less
  ...
  
  Problem is how to generate dashboards.less file. Could it be somehow
- achieved by using django-compress?
+ achieved by using django-compressor?
  
  [1] http://docs.openstack.org/developer/horizon/topics/customizing.html
  #custom-stylesheets

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1260675

Title:
  horizon less variables not available to custom dashboard less styles

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Right now, it is possible to include custom stylesheets for custom dashboards 
into Horizon. (See [1])
  But the way it works, it is not possible to use horizon (and, through
  that, bootstrap) less variables in those custom stylesheets.

  In short, the solution is to import the custom stylesheets at the end of
  the horizon.less file.

  Implementation:

  At the end of horizon.less, @import a dashboards.less file.
  dashboards.less is a generated less file that imports the list of less
  files named after the dashboards that horizon includes, e.g.:
  dashboards.less:
  @import infrastructure.less
  ...

  The problem is how to generate the dashboards.less file. Could it
  somehow be achieved by using django-compressor?
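
  A minimal sketch of one way dashboards.less could be generated; the
  function name, the dashboard list and the output location are
  assumptions for illustration only, not an agreed implementation:

  import os

  def write_dashboards_less(dashboard_names, output_path):
      # emit one @import line per installed dashboard, e.g.
      #   @import "infrastructure.less";
      lines = ['@import "%s.less";' % name for name in dashboard_names]
      with open(output_path, 'w') as handle:
          handle.write('\n'.join(lines) + '\n')

  if __name__ == '__main__':
      # hypothetical location; in Horizon this would live next to
      # horizon.less so it can be @import-ed at the end of that file
      target = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                            'dashboards.less')
      write_dashboards_less(['infrastructure'], target)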

  [1]
  http://docs.openstack.org/developer/horizon/topics/customizing.html
  #custom-stylesheets

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1260675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260667] [NEW] Tox fails to build environment because of MySQL-Python version

2013-12-13 Thread sahid
Public bug reported:

While tox builds its environment, it tries to install the MySQL-python
package version 1.2.4, but the build fails with the error:

Traceback (most recent call last):
  File "", line 16, in 
  File "/opt/stack/nova/.tox/py27/build/MySQL-python/setup.py", line 18, in 

metadata, options = get_config()
  File "setup_posix.py", line 43, in get_config
libs = mysql_config("libs_r")
  File "setup_posix.py", line 25, in mysql_config
raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260667

Title:
  Tox fails to build environment because of MySQL-Python version

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  While tox builds its environment, it tries to install the MySQL-python
  package version 1.2.4, but the build fails with the error:

  Traceback (most recent call last):
File "", line 16, in 
File "/opt/stack/nova/.tox/py27/build/MySQL-python/setup.py", line 18, in 

  metadata, options = get_config()
File "setup_posix.py", line 43, in get_config
  libs = mysql_config("libs_r")
File "setup_posix.py", line 25, in mysql_config
  raise EnvironmentError("%s not found" % (mysql_config.path,))
  EnvironmentError: mysql_config not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260618] Re: nova-api, nova-cert, nova-network not shown after installation. please guide.

2013-12-13 Thread Abhishek Chanda
This is a question more suitable for ask.openstack.org

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260618

Title:
  nova-api,nova-cert,nova-network not shown after installation. please
  guide.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Please help me - issue description: I installed Havana (with nova-network,
as I am very new to it) on 1 controller node and 1 compute node. I used the
same topology and script to install Grizzly, which installed fine, so only
very minor changes were made to the script. For Havana I made a few minor
changes in the conf files -
  Nova.conf - [database] connection is used rather than sql_connection
  Cinder - Same as above.

  Now when I launch the script it runs, and the services shown on the
controller using nova-manage service list are:
  Binary   Host Zone
 Status State Updated_At
  nova-conductor control  internal 
enabled :-)2013-12-13 06:36:41
  nova-consoleauth control   internal 
enabled :-)   2013-12-13 06:36:41
  nova-scheduler  control  internal 
enabled :-)2013-12-13 06:36:42
  root@control:/etc# ps aux | grep nova
  nova  1471  0.0  1.6  67772 ?Ss   Dec12   0:35 /usr/bin/python 
/usr/bin/nova-consoleauth --config-file=/etc/nova/nova.conf
  nova  1472  0.0  1.6  67808 ?Ss   Dec12   0:35 /usr/bin/python 
/usr/bin/nova-conductor --config-file=/etc/nova/nova.conf
  nova  1474  0.0  1.6  68416 ?Ss   Dec12   0:40 /usr/bin/python 
/usr/bin/nova-scheduler --config-file=/etc/nova/nova.conf
  nova  1476  0.0  0.7  32276 ?Ss   Dec12   0:12 /usr/bin/python 
/usr/bin/nova-novncproxy --config-file=/etc/nova/nova.conf
  root  4479  0.0  0.0  13588   912 pts/2S+   12:08   0:00 grep 
--color=auto nova

  In the same way, on the compute node the script runs well but nova-compute,
nova-network and nova-metadata-api do not start. When the services are
restarted on the controller node the result is as below -
  root@control:/etc# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service 
$i restart; done
  stop: Unknown instance: 
  nova-api start/running, process 4497
  stop: Unknown instance: 
  nova-cert start/running, process 4508
  nova-conductor stop/waiting
  nova-conductor start/running, process 4519
  nova-consoleauth stop/waiting
  nova-consoleauth start/running, process 4530
  nova-novncproxy stop/waiting
  nova-novncproxy start/running, process 4541
  nova-scheduler stop/waiting
  nova-scheduler start/running, process 4556
  So I find the nova-api, nova-cert, nova-network and nova-metadata-api
services are not running. Now the log and conf files - logs first:

  1. nova-api -
  2013-12-12 16:27:43.068 11954 INFO nova.wsgi [-] osapi_compute listening on 
0.0.0.0:8774
  2013-12-12 16:27:43.068 11954 INFO nova.openstack.common.service [-] Starting 
1 workers
  2013-12-12 16:27:43.070 11954 INFO nova.openstack.common.service [-] Started 
child 12085
  2013-12-12 16:27:43.082 11954 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
  2013-12-12 16:27:43.088 11954 INFO nova.wsgi [-] metadata listening on 
0.0.0.0:8775
  2013-12-12 16:27:43.093 11954 INFO nova.openstack.common.service [-] Starting 
1 workers
  2013-12-12 16:27:43.095 11954 INFO nova.openstack.common.service [-] Started 
child 12086
  2013-12-12 16:27:43.075 12085 INFO nova.osapi_compute.wsgi.server [-] (12085) 
wsgi starting up on http://0.0.0.0:8774/
  2013-12-12 16:27:44.005 12086 INFO nova.metadata.wsgi.server [-] (12086) wsgi 
starting up on http://0.0.0.0:8775/
  2013-12-12 16:29:32.864 12036 INFO nova.openstack.common.service [-] Caught 
SIGTERM, exiting
  2013-12-12 16:29:32.864 12086 INFO nova.openstack.common.service [-] Caught 
SIGTERM, exiting
  2013-12-12 16:29:32.864 12085 INFO nova.openstack.common.service [-] Caught 
SIGTERM, exiting
  2013-12-12 16:29:32.864 12036 INFO nova.wsgi [-] Stopping WSGI server.
  2013-12-12 16:29:32.864 12086 INFO nova.wsgi [-] Stopping WSGI server.
  2013-12-12 16:29:32.865 12085 INFO nova.wsgi [-] Stopping WSGI server.
  2013-12-12 16:29:32.867 11954 INFO nova.openstack.common.service [-] Caught 
SIGTERM, stopping children
  2013-12-12 16:29:32.867 11954 INFO nova.openstack.common.service [-] Waiting 
on 3 children to exit
  2013-12-12 16:29:32.868 11954 INFO nova.openstack.common.service [-] Child 
12086 exited with status 1
  2013-12-12 16:29:32.868 11954 INFO nova.openstack.common.service [-] Child 
12036 exited with status 1
  2013-12-12 16:29:32.869 11954 INFO nova.openstack.common.service [-] Child 
12085 exited with status 1

  2. Nova-cert-
  2013-12-12 16:27:42.600 11994 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _peri

[Yahoo-eng-team] [Bug 1226469] Re: deleted image requested in setUpClass (tempest.api.compute.images.test_list_image_filters:ListImageFiltersTestJSON)

2013-12-13 Thread Attila Fazekas
** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1226469

Title:
  deleted image requested in setUpClass
  (tempest.api.compute.images.test_list_image_filters:ListImageFiltersTestJSON)

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Object Storage (Swift):
  New
Status in Tempest:
  Incomplete

Bug description:
  This occurred in: http://logs.openstack.org/79/46879/1/check/gate-
  tempest-devstack-vm-full/6d0c9aa/

  The relevant part of the glance-registry log:

  2013-09-17 05:45:51.023 22288 INFO glance.registry.api.v1.images 
[c85dd8b4-ec3b-40e9-91dd-7346681aa947 6efac1abd8d3460382749a65a04e179e 
02a9c1884e0943d4ba4d9f414700590f] Successfully retrieved image 
cabbbaef-e97e-4ded-a8d6-c277983fbc3c
  2013-09-17 05:45:51.026 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token __call__ 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:532
  2013-09-17 05:45:51.027 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 _remove_auth_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:591
  2013-09-17 05:45:51.027 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Returning cached token a1c0f2b2085517948ad0a0ba27808c76 _cache_get 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:982
  2013-09-17 05:45:51.028 22288 DEBUG glance.api.policy [-] Loaded policy 
rules: {u'context_is_admin': 'role:admin', u'default': '@', 
u'manage_image_cache': 'role:admin'} load_rules 
/opt/stack/new/glance/glance/api/policy.py:75
  2013-09-17 05:45:51.029 22288 DEBUG routes.middleware [-] Matched PUT 
/images/cabbbaef-e97e-4ded-a8d6-c277983fbc3c __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2013-09-17 05:45:51.029 22288 DEBUG routes.middleware [-] Route path: 
'/images/{id}', defaults: {'action': u'update', 'controller': 
} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2013-09-17 05:45:51.029 22288 DEBUG routes.middleware [-] Match dict: 
{'action': u'update', 'controller': , 'id': u'cabbbaef-e97e-4ded-a8d6-c277983fbc3c'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2013-09-17 05:45:51.030 22288 DEBUG glance.registry.api.v1.images 
[a26f57b5-2f1d-4b76-b0f2-65d2df1a9469 6efac1abd8d3460382749a65a04e179e 
02a9c1884e0943d4ba4d9f414700590f] Updating image 
cabbbaef-e97e-4ded-a8d6-c277983fbc3c with metadata: {u'status': u'deleted'} 
update /opt/stack/new/glance/glance/registry/api/v1/images.py:436
  2013-09-17 05:45:51.049 22288 INFO glance.registry.api.v1.images 
[a26f57b5-2f1d-4b76-b0f2-65d2df1a9469 6efac1abd8d3460382749a65a04e179e 
02a9c1884e0943d4ba4d9f414700590f] Updating metadata for image 
cabbbaef-e97e-4ded-a8d6-c277983fbc3c
  2013-09-17 05:45:51.052 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token __call__ 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:532
  2013-09-17 05:45:51.052 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 _remove_auth_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:591
  2013-09-17 05:45:51.052 22288 DEBUG keystoneclient.middleware.auth_token [-] 
Returning cached token a1c0f2b2085517948ad0a0ba27808c76 _cache_get 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:982
  2013-09-17 05:45:51.053 22288 DEBUG glance.api.policy [-] Loaded policy 
rules: {u'context_is_admin': 'role:admin', u'default': '@', 
u'manage_image_cache': 'role:admin'} load_rules 
/opt/stack/new/glance/glance/api/policy.py:75
  2013-09-17 05:45:51.053 22288 DEBUG routes.middleware [-] Matched DELETE 
/images/cabbbaef-e97e-4ded-a8d6-c277983fbc3c __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2013-09-17 05:45:51.053 22288 DEBUG routes.middleware [-] Route path: 
'/images/{id}', defaults: {'action': u'delete', 'controller': 
} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2013-09-17 05:45:51.053 22288 DEBUG routes.middleware [-] Match dict: 
{'action': u'delete', 'controller': , 'id': u'cabbbaef-e97e-4ded-a8d6-c277983f

[Yahoo-eng-team] [Bug 1260644] [NEW] ServerRescueTest may fail due to RESCUE taking too long

2013-12-13 Thread Andrea Frittoli
Public bug reported:

In the grenade test [0] for a bp I'm working on, the ServerRescueTestXML
rescue_unrescue test failed because the VM did not get into the RESCUE state
in time. It seems that the test is flaky.

From the tempest log [1] I see the sequence VM ACTIVE, RESCUE issued,
WAIT, timeout, DELETE VM.

From the nova cpu log [2], following request ID req-6c20654c-c00c-4932
-87ad-8cfec9866399, I see that the RESCUE RPC is received immediately by
n-cpu, but the request then starves for 3 minutes waiting for a
"compute_resources" lock.

The VM is then deleted by the test, and when nova tries to process the
RESCUE it throws an exception as the VM is not there:

bc-b27a-83c39b7566c8] Traceback (most recent call last):
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/compute/manager.py", 
line 2664, in rescue_instance
bc-b27a-83c39b7566c8] rescue_image_meta, admin_password)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", 
line 2109, in rescue
bc-b27a-83c39b7566c8] write_to_disk=True)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", 
line 3236, in to_xml
bc-b27a-83c39b7566c8] libvirt_utils.write_to_file(xml_path, xml)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/utils.py", 
line 494, in write_to_file
bc-b27a-83c39b7566c8] with open(path, 'w') as f:
bc-b27a-83c39b7566c8] IOError: [Errno 2] No such file or directory: 
u'/opt/stack/data/nova/instances/a5099beb-f4a2-47bc-b27a-83c39b7566c8/libvirt.xml'
bc-b27a-83c39b7566c8] 

There may be a problem in nova as well, as RESCUE is held for 3 minutes
waiting on a lock.

[0] https://review.openstack.org/#/c/60434/
[1] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/tempest.txt.gz
[2] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/new/screen-n-cpu.txt.gz?

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260644

Title:
  ServerRescueTest may fail due to RESCUE taking too long

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  In the grenade test [0] for a bp I'm working on, the ServerRescueTestXML
  rescue_unrescue test failed because the VM did not get into the RESCUE
  state in time. It seems that the test is flaky.

  From the tempest log [1] I see the sequence VM ACTIVE, RESCUE issued,
  WAIT, timeout, DELETE VM.

  From the nova cpu log [2], following request ID req-6c20654c-
  c00c-4932-87ad-8cfec9866399, I see that the RESCUE RPC is received
  immediately by n-cpu, but the request then starves for 3 minutes
  waiting for a "compute_resources" lock.

  The VM is then deleted by the test, and when nova tries to process the
  RESCUE it throws an exception as the VM is not there:

  bc-b27a-83c39b7566c8] Traceback (most recent call last):
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/compute/manager.py", 
line 2664, in rescue_instance
  bc-b27a-83c39b7566c8] rescue_image_meta, admin_password)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2109, in rescue
  bc-b27a-83c39b7566c8] write_to_disk=True)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3236, in to_xml
  bc-b27a-83c39b7566c8] libvirt_utils.write_to_file(xml_path, xml)
  bc-b27a-83c39b7566c8]   File 
"/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 494, in write_to_file
  bc-b27a-83c39b7566c8] with open(path, 'w') as f:
  bc-b27a-83c39b7566c8] IOError: [Errno 2] No such file or directory: 
u'/opt/stack/data/nova/instances/a5099beb-f4a2-47bc-b27a-83c39b7566c8/libvirt.xml'
  bc-b27a-83c39b7566c8] 

  There may be a problem in nova as well, as RESCUE is held for 3
  minutes waiting on a lock.

  [0] https://review.openstack.org/#/c/60434/
  [1] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/tempest.txt.gz
  [2] 
http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/new/screen-n-cpu.txt.gz?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260588] Re: Change retry to attempt for retry filter logic

2013-12-13 Thread Jay Lau
Thanks Zhongyue for the comments.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260588

Title:
  Change retry to attempt for retry filter logic

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  After the patch Ia355810b106fee14a55f48081301a310979befac, the retry
  filter was renamed to IgnoreAttemptedHostsFilter and its variable
  retry was changed to attempt, so it is better to update the nova
  scheduler and compute logic by replacing retry with attempt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

