[Yahoo-eng-team] [Bug 1359590] [NEW] Getty's on serial consoles need to be consistent
Public bug reported: In our cloud images today we launch a getty on ttyS0, as long as it's not in a container. We don't launch a getty on ttyS1-n even if they exist. In MAAS, which also uses cloud images, it would often be useful to put gettys on the serial port that is mapped to remote serial access, such as IPMI SOL. It is however difficult to know which is the correct getty. Broadly speaking I think we should have a consistent approach to gettys. That might mean:
* launch a getty on each ttySn that passes an stty test (as per ttyS0 currently)
* allow cloud-init to prevent some of those, via vendordata or userdata or default behaviour per-cloud
or
* launch no gettys on ttyS, but
* allow cloud-init to create them based on vendordata or userdata or default behaviour per-cloud
** Affects: cloud-init Importance: Undecided Status: New ** Affects: util-linux (Ubuntu) Importance: Undecided Status: New ** Also affects: cloud-init Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1359590 Title: Getty's on serial consoles need to be consistent Status in Init scripts for use on cloud images: New Status in “util-linux” package in Ubuntu: New Bug description: In our cloud images today we launch a getty on ttyS0, as long as it's not in a container. We don't launch a getty on ttyS1-n even if they exist. In MAAS, which also uses cloud images, it would often be useful to put gettys on the serial port that is mapped to remote serial access, such as IPMI SOL. It is however difficult to know which is the correct getty. Broadly speaking I think we should have a consistent approach to gettys.
That might mean:
* launch a getty on each ttySn that passes an stty test (as per ttyS0 currently)
* allow cloud-init to prevent some of those, via vendordata or userdata or default behaviour per-cloud
or
* launch no gettys on ttyS, but
* allow cloud-init to create them based on vendordata or userdata or default behaviour per-cloud
To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1359590/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
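For illustration, the "stty test" mentioned above could be approximated in Python like this. This is a minimal sketch, not the actual cloud-image upstart logic; the function name and candidate device list are hypothetical:

```python
import os
import termios

def usable_serial_ports(candidates=("/dev/ttyS0", "/dev/ttyS1")):
    """Return the candidate ports that answer a termios query --
    roughly what the stty check behind the ttyS0 getty does today."""
    usable = []
    for dev in candidates:
        try:
            fd = os.open(dev, os.O_RDONLY | os.O_NONBLOCK)
        except OSError:
            continue  # device node doesn't exist at all
        try:
            termios.tcgetattr(fd)  # raises termios.error if no real UART backs it
            usable.append(dev)
        except termios.error:
            pass
        finally:
            os.close(fd)
    return usable
```

A getty could then be launched (or offered to cloud-init) for each port this returns.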
[Yahoo-eng-team] [Bug 1359596] [NEW] Objects should be able to backport related objects automatically
Public bug reported: The following change https://review.openstack.org/#/c/114594 adds checking for the versions of related objects. This is imho wrong because it will require unnecessary versioning code to be written by developers. A better way to do this would be to declare the version on the ObjectField and then do all the necessary backports automatically, as the code is always: primitive['field_name'] = ( objects.RelatedObject().object_make_compatible( primitive, field_version)) This can thus be done in the superclass in a generic way with a little bit of tweaking of the ObjectField to know its expected version, stopping the proliferation of boilerplate that can be an easy source of bugs. Furthermore it will stop the unnecessary proliferation of versions of all related objects. We would need to bump the version of the object that owns another object only when we require new functionality from the owned object. ** Affects: nova Importance: High Status: Confirmed ** Tags: unified-objects -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1359596 Title: Objects should be able to backport related objects automatically Status in OpenStack Compute (Nova): Confirmed Bug description: The following change https://review.openstack.org/#/c/114594 adds checking for the versions of related objects. This is imho wrong because it will require unnecessary versioning code to be written by developers.
A better way to do this would be to declare the version on the ObjectField and then do all the necessary backports automatically, as the code is always: primitive['field_name'] = ( objects.RelatedObject().object_make_compatible( primitive, field_version)) This can thus be done in the superclass in a generic way with a little bit of tweaking of the ObjectField to know its expected version, stopping the proliferation of boilerplate that can be an easy source of bugs. Furthermore it will stop the unnecessary proliferation of versions of all related objects. We would need to bump the version of the object that owns another object only when we require new functionality from the owned object. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1359596/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
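The generic superclass loop proposed above might look roughly like this. This is a hypothetical sketch, not Nova's actual API; `child_versions` stands in for the per-field version the proposal would declare on ObjectField:

```python
def backport_children(primitive, child_versions):
    """Downgrade each child object's primitive to the version the parent
    declares for that field -- the generic loop the report argues should
    replace hand-written per-parent backport code.

    child_versions maps field name -> (child_class, target_version); in
    the proposal this mapping would live on ObjectField itself.
    """
    for name, (child_cls, version) in child_versions.items():
        if name in primitive:
            # Mirrors the repeated pattern quoted in the report.
            child_cls().obj_make_compatible(primitive[name], version)
    return primitive
```

With this in place, a parent object would only bump its own version when it genuinely needs new functionality from a child, as the report suggests.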
[Yahoo-eng-team] [Bug 1359608] [NEW] Abstract driver signatures for update catalog entities are wrong
Public bug reported: In catalog/core.py, the abstract signatures for a number of the update methods are incorrect and don't match what is actually implemented in the driver ** Affects: keystone Importance: Low Assignee: Henry Nash (henry-nash) Status: New ** Changed in: keystone Assignee: (unassigned) = Henry Nash (henry-nash) ** Changed in: keystone Importance: Undecided = Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1359608 Title: Abstract driver signatures for update catalog entities are wrong Status in OpenStack Identity (Keystone): New Bug description: In catalog/core.py, the abstract signatures for a number of the update methods are incorrect and don't match what is actually implemented in the driver To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1359608/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359617] [NEW] libvirt: driver calls volume connect twice for every volume on boot
Public bug reported: The libvirt driver will attempt to connect the volume on the hypervisor twice for every volume provided to the instance when booting. If you examine the libvirt driver's spawn() method, both _get_guest_xml (by means of get_guest_storage_config) and _create_domain_and_network will call the _connect_volume method, which works out the volume driver and then dispatches the connect logic. This is especially bad in the iscsi volume driver case, where we do 2 rootwrapped calls in the best case, one of which is the target rescan, which can in theory add and remove devices in the kernel. I suspect that fixing this will make a number of races related to the volume not being present on the hypervisor when expected less likely to happen, in addition to making the boot process with volumes more performant. An example of a race condition that may be caused or made worse by this is: https://bugs.launchpad.net/cinder/+bug/1357677 ** Affects: nova Importance: High Status: Confirmed ** Tags: libvirt volumes -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1359617 Title: libvirt: driver calls volume connect twice for every volume on boot Status in OpenStack Compute (Nova): Confirmed Bug description: The libvirt driver will attempt to connect the volume on the hypervisor twice for every volume provided to the instance when booting. If you examine the libvirt driver's spawn() method, both _get_guest_xml (by means of get_guest_storage_config) and _create_domain_and_network will call the _connect_volume method, which works out the volume driver and then dispatches the connect logic. This is especially bad in the iscsi volume driver case, where we do 2 rootwrapped calls in the best case, one of which is the target rescan, which can in theory add and remove devices in the kernel.
I suspect that fixing this will make a number of races related to the volume not being present on the hypervisor when expected less likely to happen, in addition to making the boot process with volumes more performant. An example of a race condition that may be caused or made worse by this is: https://bugs.launchpad.net/cinder/+bug/1357677 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1359617/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
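One way to avoid the double dispatch is to make the second connect call a no-op. This is only a sketch of the idea, not Nova's actual fix; the class and key names are invented:

```python
class DedupedConnector:
    """Wraps a libvirt-style driver so _connect_volume runs at most once
    per volume during a single boot, whichever code path asks first."""

    def __init__(self, driver):
        self._driver = driver
        self._seen = set()

    def connect_volume(self, connection_info, disk_info):
        key = connection_info['serial']  # the volume id; assumed unique per boot
        if key in self._seen:
            return  # already connected by the other spawn() code path
        self._driver._connect_volume(connection_info, disk_info)
        self._seen.add(key)
```

Both _get_guest_xml and _create_domain_and_network would then go through the same connector instance, so the iscsi rescan happens once.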
[Yahoo-eng-team] [Bug 1359637] [NEW] Many CSS logic in inline-editing
Public bug reported: We have far too much styling logic in our JavaScript files. This bug focuses particularly on the inline-editing tables. We need to isolate this logic and implement it in a CSS file, where it belongs. ** Affects: horizon Importance: Undecided Assignee: Thai Tran (tqtran) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359637 Title: Many CSS logic in inline-editing Status in OpenStack Dashboard (Horizon): In Progress Bug description: We have far too much styling logic in our JavaScript files. This bug focuses particularly on the inline-editing tables. We need to isolate this logic and implement it in a CSS file, where it belongs. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359637/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359651] [NEW] xenapi: still get MAP_DUPLICATE_KEY in some edge cases
Public bug reported: Older versions of XenServer require us to keep the live copy of xenstore in sync with the copy of xenstore recorded in the xenapi metadata for that VM. Code inspection has shown that we don't consistently keep those two copies up to date. While it's hard to reproduce these errors (add_ip_address_to_vm seems particularly likely to hit issues), it seems best to tidy up the xenstore writing code so we consistently add/remove keys from both the live copy and the copy in xenapi. ** Affects: nova Importance: Medium Assignee: John Garbutt (johngarbutt) Status: Triaged ** Tags: xenserver -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1359651 Title: xenapi: still get MAP_DUPLICATE_KEY in some edge cases Status in OpenStack Compute (Nova): Triaged Bug description: Older versions of XenServer require us to keep the live copy of xenstore in sync with the copy of xenstore recorded in the xenapi metadata for that VM. Code inspection has shown that we don't consistently keep those two copies up to date. While it's hard to reproduce these errors (add_ip_address_to_vm seems particularly likely to hit issues), it seems best to tidy up the xenstore writing code so we consistently add/remove keys from both the live copy and the copy in xenapi. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1359651/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359649] [NEW] Level 1 and level 2 links don’t work when keypair is created
Public bug reported: Description: Level 1 and level 2 links don’t work when a keypair is created. Steps to Execute:
1. Login to Horizon.
2. Click on Access & Security and click the “Create Key” button.
3. Give the name test and press the “CreateKeypair” button.
4. The Download Keypair page will appear and the key is downloaded automatically.
5. Now click on a level 1 or level 2 link on the left side (Project, Compute, Network, Orchestration, Murano).
Expected Result: The tree should expand or contract. Actual Result: The links become unresponsive. IMP: Now click on any level 3 link; all links will start working. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359649 Title: Level 1 and level 2 links don’t work when keypair is created Status in OpenStack Dashboard (Horizon): New Bug description: Description: Level 1 and level 2 links don’t work when a keypair is created. Steps to Execute:
1. Login to Horizon.
2. Click on Access & Security and click the “Create Key” button.
3. Give the name test and press the “CreateKeypair” button.
4. The Download Keypair page will appear and the key is downloaded automatically.
5. Now click on a level 1 or level 2 link on the left side (Project, Compute, Network, Orchestration, Murano).
Expected Result: The tree should expand or contract. Actual Result: The links become unresponsive. IMP: Now click on any level 3 link; all links will start working. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359649/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359658] [NEW] resource id links not working properly in heat resource tab view
Public bug reported: On the stack details page of the Horizon dashboard, the Resources tab shows a table listing all the resources available for the stack, with a hyperlink on each resource name. Upon clicking on the name we are redirected to another page that displays a tab with all the information regarding the resource, under the header Resource Detail: Resource_Name. In that tab all the details about the resource are listed, including its id with a hyperlink. The bug is that when we click on that hyperlink, an error page is displayed saying the url is not found. I am attaching the necessary screenshots to demonstrate it. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: resource_id not found.png https://bugs.launchpad.net/bugs/1359658/+attachment/4183247/+files/resource_id%20not%20found.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359658 Title: resource id links not working properly in heat resource tab view Status in OpenStack Dashboard (Horizon): New Bug description: On the stack details page of the Horizon dashboard, the Resources tab shows a table listing all the resources available for the stack, with a hyperlink on each resource name. Upon clicking on the name we are redirected to another page that displays a tab with all the information regarding the resource, under the header Resource Detail: Resource_Name. In that tab all the details about the resource are listed, including its id with a hyperlink. The bug is that when we click on that hyperlink, an error page is displayed saying the url is not found. I am attaching the necessary screenshots to demonstrate it.
To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359658/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1354087] Re: [UI] 'dropdown' config types displays as checkboxes
** Also affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1354087 Title: [UI] 'dropdown' config types displays as checkboxes Status in OpenStack Dashboard (Horizon): New Status in OpenStack Data Processing (Sahara, ex. Savanna): In Progress Bug description: provisioning configs returned from a plugin whose config_type attribute equals 'dropdown' display on the dashboard as checkboxes, not as a dropdown list To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1354087/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359690] [NEW] Fix timezones in line chart
Public bug reported: Ceilometer gives us samples with UTC datetimes and we are just passing those to the line chart, so it should recognise the UTC. The hover detail should be presented in the user's local time. There might be more enhancements to this, like configurable timezones, but we need at least a quick fix that will work in J. In the future we need to figure out how to configure timezones in Ceilometer and how to configure the same in Horizon. ** Affects: horizon Importance: Undecided Assignee: Ladislav Smola (lsmola) Status: In Progress ** Tags: ceilometer -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359690 Title: Fix timezones in line chart Status in OpenStack Dashboard (Horizon): In Progress Bug description: Ceilometer gives us samples with UTC datetimes and we are just passing those to the line chart, so it should recognise the UTC. The hover detail should be presented in the user's local time. There might be more enhancements to this, like configurable timezones, but we need at least a quick fix that will work in J. In the future we need to figure out how to configure timezones in Ceilometer and how to configure the same in Horizon. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359690/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
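The quick fix amounts to marking the naive timestamps as UTC before they reach the chart. A minimal sketch, assuming the ISO-style format Ceilometer samples carry (the function name is hypothetical):

```python
from datetime import datetime, timezone

def sample_to_aware_utc(iso_naive):
    """Ceilometer emits naive UTC timestamps; attach tzinfo so the chart
    (or .astimezone()) can render them in the viewer's local zone."""
    dt = datetime.strptime(iso_naive, "%Y-%m-%dT%H:%M:%S")
    return dt.replace(tzinfo=timezone.utc)
```

A hover label would then call .astimezone() on the returned value to present local time, which is the behaviour the report asks for.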
[Yahoo-eng-team] [Bug 1332058] Re: keystone behavior when one memcache backend is down
@Dolph, when I try to use backend=dogpile.cache.pylibmc and backend_argument=behaviors:tcp_nodelay:False I receive an error from keystone: ERROR: __init__() got an unexpected keyword argument 'behaviors' (HTTP 400) ** Changed in: keystone Status: Invalid = New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1332058 Title: keystone behavior when one memcache backend is down Status in OpenStack Identity (Keystone): New Status in Mirantis OpenStack: Confirmed Bug description: Hi, Our implementation uses dogpile.cache.memcached as a backend for tokens. Recently, I found interesting behavior when one of the memcache regions went down: there is a 3-6 second delay when I try to get a token. If I have 2 backends, the delay is 6-12 seconds. It's very easy to test.
Test the connection using: for i in {1..20}; do (time keystone token-get log2) 2>&1 | grep real | awk '{print $2}'; done
Block one memcache backend using: iptables -I INPUT -p tcp --dport 11211 -j DROP (simulating a power outage of the node)
Test the speed using: for i in {1..20}; do (time keystone token-get log2) 2>&1 | grep real | awk '{print $2}'; done
Also I straced the keystone process with strace -tt -s 512 -o /root/log1 -f -p PID and got: 26872 connect(9, {sa_family=AF_INET, sin_port=htons(11211), sin_addr=inet_addr("10.108.2.3")}, 16) = -1 EINPROGRESS (Operation now in progress) even though this IP is down. Also I checked the code https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L210-L237 https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L285-L289 https://github.com/openstack/keystone/blob/master/keystone/common/kvs/backends/memcached.py#L96 and was not able to find any details of how keystone treats a backend when it's down. There should be logic which temporarily blocks a backend when it's not accessible.
After a timeout period, the backend should be probed (without blocking get/set operations on the current backends) and, if the connection is successful, it should be added back into operation. Here is a sample of how it could be implemented: http://dogpilecache.readthedocs.org/en/latest/usage.html#changing-backend-behavior To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1332058/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
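The "temporarily block a dead backend" logic the reporter asks for can be sketched independently of dogpile, as a small circuit breaker. The class and its attributes are hypothetical, not keystone code:

```python
import time

class BackendBreaker:
    """Skip a dead memcache backend for `cooldown` seconds after a
    failure, instead of paying the connect timeout on every token call."""

    def __init__(self, backend, cooldown=30.0, clock=time.monotonic):
        self._backend = backend
        self._cooldown = cooldown
        self._clock = clock
        self._blocked_until = 0.0

    def get(self, key):
        if self._clock() < self._blocked_until:
            return None  # backend is in its cooldown window: treat as a miss
        try:
            return self._backend.get(key)
        except OSError:  # connection refused / timed out
            self._blocked_until = self._clock() + self._cooldown
            return None
```

In dogpile terms this would be implemented as a ProxyBackend wrapper, as the linked "changing backend behavior" documentation describes.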
[Yahoo-eng-team] [Bug 1354512] Re: Anonymous user can download public image through Swift
** Information type changed from Private Security to Public ** Also affects: ossn Importance: Undecided Status: New ** Changed in: ossa Status: Incomplete = Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1354512 Title: Anonymous user can download public image through Swift Status in OpenStack Image Registry and Delivery Service (Glance): New Status in OpenStack Security Advisories: Won't Fix Status in OpenStack Security Notes: New Bug description: When Glance uses Swift as a backend, and Swift uses the delay_auth_decision feature (for temporary urls, for example), anyone can anonymously download public images from Swift by direct url. Steps to reproduce:
1. Set delay_auth_decision = 1 in Swift's proxy-server.conf. Set default_store = swift, swift_store_multi_tenant = True and swift_store_create_container_on_put = True in Glance's glance-api.conf.
2. Create a public image: glance image-create --name fake_image --file some_text_file_name --is-public True You may use a text file to reproduce the error for descriptive reasons. Use the returned image id in the next step.
3. Download the created image with curl: curl swift_endpoint/glance_image_id/image_id See your file in the output. If swift_store_container in your glance-api.conf is not 'glance', use the appropriate prefix in the command above.
Glance sets the read ACL to '.r:*,.rlistings' for all public images. Thus, since anyone has access to Swift (due to the delay_auth_decision parameter), anyone can download a public image. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1354512/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359757] [NEW] DynamicSelectWidget is split into 2 lines after migration to bootstrap 3
Public bug reported: Also, previously it was a button with a '+' glyph; now it's a simple link without a button. ** Affects: horizon Importance: Undecided Assignee: Timur Sufiev (tsufiev-x) Status: New ** Tags: bootstrap ** Attachment added: dynamic_select_widget_broken.png https://bugs.launchpad.net/bugs/1359757/+attachment/4183495/+files/dynamic_select_widget_broken.png ** Changed in: horizon Assignee: (unassigned) = Timur Sufiev (tsufiev-x) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359757 Title: DynamicSelectWidget is split into 2 lines after migration to bootstrap 3 Status in OpenStack Dashboard (Horizon): New Bug description: Also, previously it was a button with a '+' glyph; now it's a simple link without a button. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359757/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1315321] Re: image_size_cap not checked in v2 (CVE-2014-5356)
** Changed in: ossa Status: In Progress = Fix Committed ** Summary changed: - image_size_cap not checked in v2 (CVE-2014-5356) + [OSSA 2014-028] image_size_cap not checked in v2 (CVE-2014-5356) ** Changed in: ossa Status: Fix Committed = Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1315321 Title: [OSSA 2014-028] image_size_cap not checked in v2 (CVE-2014-5356) Status in OpenStack Image Registry and Delivery Service (Glance): Fix Committed Status in Glance havana series: Fix Committed Status in Glance icehouse series: Fix Committed Status in OpenStack Security Advisories: Fix Released Bug description: To reproduce (using devstack): create an image upload image data larger than image_size_cap This should result in an error, but doesn't To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1315321/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359774] [NEW] No way to specify initial services region during login
Public bug reported: If keystone is set up with multiple service regions, the initial service region is selected for you. This is done by searching the service catalog for the first non-identity service and then selecting the region for the first endpoint. This is an inconvenience when the user knows exactly which service region they want to use first. ** Affects: django-openstack-auth Importance: Undecided Assignee: Justin Pomeroy (jpomero) Status: New ** Affects: horizon Importance: Undecided Assignee: Justin Pomeroy (jpomero) Status: New ** Changed in: horizon Assignee: (unassigned) = Justin Pomeroy (jpomero) ** Also affects: django-openstack-auth Importance: Undecided Status: New ** Changed in: django-openstack-auth Assignee: (unassigned) = Justin Pomeroy (jpomero) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359774 Title: No way to specify initial services region during login Status in Django OpenStack Auth: New Status in OpenStack Dashboard (Horizon): New Bug description: If keystone is set up with multiple service regions, the initial service region is selected for you. This is done by searching the service catalog for the first non-identity service and then selecting the region for the first endpoint. This is an inconvenience when the user knows exactly which service region they want to use first. To manage notifications about this bug go to: https://bugs.launchpad.net/django-openstack-auth/+bug/1359774/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
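The selection rule described above, extended with the user preference the report asks for, could be sketched like this. The function is hypothetical; the catalog shape loosely follows a keystone service catalog:

```python
def initial_region(catalog, preferred=None):
    """Pick the initial services region: the first region of the first
    non-identity endpoint, unless the user named a preferred region
    that actually exists in the catalog."""
    regions = [ep.get('region')
               for svc in catalog if svc.get('type') != 'identity'
               for ep in svc.get('endpoints', [])]
    if preferred and preferred in regions:
        return preferred
    return regions[0] if regions else None
```

Passing the user's choice (e.g. from the login form) as `preferred` would remove the inconvenience described above while keeping today's behaviour as the fallback.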
[Yahoo-eng-team] [Bug 1359805] [NEW] 'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full
Public bug reported: I received the following error from the check-tempest-dsvm-neutron-full test suite after submitting a nova patch:
2014-08-21 14:11:25.059 | Captured traceback:
2014-08-21 14:11:25.059 | ~~~
2014-08-21 14:11:25.059 | Traceback (most recent call last):
2014-08-21 14:11:25.059 |   File "tempest/api/compute/servers/test_server_actions.py", line 407, in test_suspend_resume_server
2014-08-21 14:11:25.059 |     self.client.wait_for_server_status(self.server_id, 'SUSPENDED')
2014-08-21 14:11:25.059 |   File "tempest/services/compute/xml/servers_client.py", line 390, in wait_for_server_status
2014-08-21 14:11:25.059 |     raise_on_error=raise_on_error)
2014-08-21 14:11:25.059 |   File "tempest/common/waiters.py", line 77, in wait_for_server_status
2014-08-21 14:11:25.059 |     server_id=server_id)
2014-08-21 14:11:25.059 | BuildErrorException: Server a29ec7be-be83-4247-b7db-49bd4727d206 failed to build and is in ERROR status
2014-08-21 14:11:25.059 | Details: {'message': 'Requested operation is not valid: domain is not running', 'code': '500', 'details': 'None', 'created': '2014-08-21T13:49:49Z'}
** Affects: neutron Importance: Undecided Status: New ** Attachment added: check-tempest-dsvm-neutron-full-console.txt https://bugs.launchpad.net/bugs/1359805/+attachment/4183601/+files/check-tempest-dsvm-neutron-full-console.txt -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359805 Title: 'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full Status in OpenStack Neutron (virtual network service): New Bug description: I received the following error from the check-tempest-dsvm-neutron-full test suite after submitting a nova patch:
2014-08-21 14:11:25.059 | Captured traceback:
2014-08-21 14:11:25.059 | ~~~
2014-08-21 14:11:25.059 | Traceback (most recent call last):
2014-08-21 14:11:25.059 |   File "tempest/api/compute/servers/test_server_actions.py", line 407, in test_suspend_resume_server
2014-08-21 14:11:25.059 |     self.client.wait_for_server_status(self.server_id, 'SUSPENDED')
2014-08-21 14:11:25.059 |   File "tempest/services/compute/xml/servers_client.py", line 390, in wait_for_server_status
2014-08-21 14:11:25.059 |     raise_on_error=raise_on_error)
2014-08-21 14:11:25.059 |   File "tempest/common/waiters.py", line 77, in wait_for_server_status
2014-08-21 14:11:25.059 |     server_id=server_id)
2014-08-21 14:11:25.059 | BuildErrorException: Server a29ec7be-be83-4247-b7db-49bd4727d206 failed to build and is in ERROR status
2014-08-21 14:11:25.059 | Details: {'message': 'Requested operation is not valid: domain is not running', 'code': '500', 'details': 'None', 'created': '2014-08-21T13:49:49Z'}
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1359805/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359808] [NEW] extended_volumes slows down the nova instance list by 40..50%
Public bug reported: When listing ~4096 instances, the nova API (n-api) service has high CPU usage (100%) because it does an individual SELECT for every server's block_device_mapping. Please use a more efficient way of getting the block_device_mapping when multiple instances are queried. This line initiates the individual select: https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32 ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1359808 Title: extended_volumes slows down the nova instance list by 40..50% Status in OpenStack Compute (Nova): New Bug description: When listing ~4096 instances, the nova API (n-api) service has high CPU usage (100%) because it does an individual SELECT for every server's block_device_mapping. Please use a more efficient way of getting the block_device_mapping when multiple instances are queried. This line initiates the individual select: https://github.com/openstack/nova/blob/4b414adce745c07fbf2003ec25a5e554e634c8b7/nova/api/openstack/compute/contrib/extended_volumes.py#L32 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1359808/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
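The efficient approach amounts to one bulk block_device_mapping query grouped in Python, instead of one SELECT per server. A sketch of the grouping step; the row shape is assumed, not Nova's actual DB API:

```python
from collections import defaultdict

def volumes_by_instance(bdm_rows):
    """Group the rows of a single bulk block_device_mapping query by
    instance_uuid, so N servers can be annotated with one SELECT."""
    grouped = defaultdict(list)
    for row in bdm_rows:
        grouped[row['instance_uuid']].append(row['volume_id'])
    return grouped
```

The extension would then issue one query for all listed instance uuids and look each server up in the returned mapping, turning the ~4096 SELECTs into one.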
[Yahoo-eng-team] [Bug 1359805] Re: 'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Incomplete

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359805

Title:
  'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full

Status in OpenStack Neutron (virtual network service): Incomplete
Status in Tempest: New

Bug description:
  I received the following error from the check-tempest-dsvm-neutron-full test suite after submitting a nova patch:

  2014-08-21 14:11:25.059 | Captured traceback:
  2014-08-21 14:11:25.059 | ~~~~~~~~~~~~~~~~~~~
  2014-08-21 14:11:25.059 |     Traceback (most recent call last):
  2014-08-21 14:11:25.059 |       File "tempest/api/compute/servers/test_server_actions.py", line 407, in test_suspend_resume_server
  2014-08-21 14:11:25.059 |         self.client.wait_for_server_status(self.server_id, 'SUSPENDED')
  2014-08-21 14:11:25.059 |       File "tempest/services/compute/xml/servers_client.py", line 390, in wait_for_server_status
  2014-08-21 14:11:25.059 |         raise_on_error=raise_on_error)
  2014-08-21 14:11:25.059 |       File "tempest/common/waiters.py", line 77, in wait_for_server_status
  2014-08-21 14:11:25.059 |         server_id=server_id)
  2014-08-21 14:11:25.059 |     BuildErrorException: Server a29ec7be-be83-4247-b7db-49bd4727d206 failed to build and is in ERROR status
  2014-08-21 14:11:25.059 |     Details: {'message': 'Requested operation is not valid: domain is not running', 'code': '500', 'details': 'None', 'created': '2014-08-21T13:49:49Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359805/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1337902] Re: do not show option for taking volume snapshots with --force is not supported by policy
** Changed in: horizon
   Status: Incomplete => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1337902

Title:
  do not show option for taking volume snapshots with --force is not supported by policy

Status in OpenStack Dashboard (Horizon): Invalid

Bug description:
  1. log in with user demo
  2. launch an instance from an image using "create new volume"
  3. go to volumes - create snapshot

  snapshot is created with status error because --force option is not supported by cinder

  2014-07-04 18:01:58.042 2434 ERROR oslo.messaging.rpc.dispatcher [req-f3e31ee1-60b4-4033-be63-acc07e9b9a32 5a67ce69c6824e17b44bf15003ccc29f d22192179d3042a587ebd06bd6fd48d1 - - -] Exception during message handling: Policy doesn't allow compute_extension:os-assisted-volume-snapshots:create to be performed. (HTTP 403) (Request-ID: req-5ad5aa61-a6f0-4919-b008-9cb9ff4c5a40)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 484, in create_snapshot
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     {'status': 'error'})
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/openstack/common/excutils.py", line 68, in __exit__
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 475, in create_snapshot
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     model_update = self.driver.create_snapshot(snapshot_ref)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     retval = f(*args, **kwargs)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/glusterfs.py", line 310, in create_snapshot
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     return self._create_snapshot(snapshot)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/glusterfs.py", line 428, in _create_snapshot
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher     raise e
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher Forbidden: Policy doesn't allow compute_extension:os-assisted-volume-snapshots:create to be performed. (HTTP 403) (Request-ID: req-5ad5aa61-a6f0-4919-b008-9cb9ff4c5a40)
  2014-07-04 18:01:58.042 2434 TRACE oslo.messaging.rpc.dispatcher
  2014-07-04 18:01:58.043 2434 ERROR oslo.messaging._drivers.common [req-f3e31ee1-60b4-4033-be63-acc07e9b9a32 5a67ce69c6824e17b44bf15003ccc29f d22192179d3042a587ebd06bd6fd48d1 - - -] Returning exception Policy doesn't allow compute_extension:os-assisted-volume-snapshots:create to be performed. (HTTP 403) (Request-ID: req-5ad5aa61-a6f0-4919-b008-9cb9ff4c5a40) to caller
  2014-07-04 18:01:58.043 2434 ERROR oslo.messaging._drivers.common [req-f3e31ee1-60b4-4033-be63-acc07e9b9a32 5a67ce69c6824e17b44bf15003ccc29f d22192179d3042a587ebd06bd6fd48d1 - - -] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n    incoming.message))\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '
[Yahoo-eng-team] [Bug 1359835] [NEW] select_destinations should send start/end notifications
Public bug reported: In the filter scheduler, schedule_run_instance sends notifications, but select_destinations does not. This is inconsistent, and we should send start/end notifications from both code paths. ** Affects: nova Importance: Medium Assignee: John Garbutt (johngarbutt) Status: Triaged ** Tags: scheduler -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1359835 Title: select_destinations should send start/end notifications Status in OpenStack Compute (Nova): Triaged Bug description: In the filter scheduler, schedule_run_instance sends notifications, but select_destinations does not. This is inconsistent, and we should send start/end notifications from both code paths. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1359835/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
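The requested start/end notification pattern could look roughly like the sketch below. This is a minimal illustration, not nova's actual notifier API: `notify` stands in for the real oslo notifier, and the event names are only indicative of the scheduler.select_destinations convention.

```python
# Minimal sketch of emitting paired start/end notifications around a
# scheduling call. NOTIFICATIONS collects events in place of a real bus.

NOTIFICATIONS = []

def notify(event):
    NOTIFICATIONS.append(event)

def select_destinations(request_spec):
    notify("scheduler.select_destinations.start")
    try:
        # Stand-in for the real host-selection logic.
        return ["host-%d" % i for i in range(request_spec["num_instances"])]
    finally:
        # try/finally guarantees the .end event even if selection raises,
        # matching the paired-notification behaviour the report asks for.
        notify("scheduler.select_destinations.end")

hosts = select_destinations({"num_instances": 2})
assert NOTIFICATIONS == ["scheduler.select_destinations.start",
                         "scheduler.select_destinations.end"]
```

The try/finally shape is the design point: consumers that bill or trace on start events need a matching end event on every code path.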
[Yahoo-eng-team] [Bug 1014647] Re: Tempest has no test for soft reboot
I think I'll draft up a nova blueprint spec for Kilo to go over some of the ideas. ** Also affects: nova Importance: Undecided Status: New ** Changed in: nova Status: New => Triaged ** Tags added: api ** Changed in: nova Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1014647 Title: Tempest has no test for soft reboot Status in OpenStack Compute (Nova): Triaged Status in Tempest: Confirmed Bug description: 1. soft reboot requires support from the guest to operate. The current nova implementation tells the guest to reboot and then waits. If the soft reboot did not happen, it triggers a hard reboot, but after a default wait of 2 minutes. Solution: Provide a new soft_reboot_image_ref, defaulting to None, that is used for soft reboot tests. If the value is None then the test is skipped. 2. Because of (1), we should only use soft reboot when we are actually testing that feature. 3. The current soft reboot test does not check that a soft reboot was done rather than hard. It should check for the server state of REBOOT. Same issue for the hard reboot test. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1014647/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
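The proposed soft_reboot_image_ref handling (skip the test when no soft-reboot-capable guest image is configured) can be sketched with stock unittest machinery. The config name below mirrors the proposal but is illustrative, not a real tempest option:

```python
# Sketch of skip-if-unconfigured test behaviour, assuming a
# soft_reboot_image_ref setting that defaults to None.
import unittest

CONF = {"soft_reboot_image_ref": None}  # default: feature not testable

class RebootTest(unittest.TestCase):
    def test_soft_reboot(self):
        image_ref = CONF["soft_reboot_image_ref"]
        if image_ref is None:
            self.skipTest("no soft-reboot-capable image configured")
        # ...boot image_ref, request a SOFT reboot, then assert the
        # server passes through the REBOOT state (point 3 above)...

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RebootTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert len(result.skipped) == 1
```

Skipping, rather than passing vacuously, keeps the gate honest: the test result records that soft reboot was not exercised on that configuration.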
[Yahoo-eng-team] [Bug 1359857] [NEW] Reveal/Hide password icon is not visible
Public bug reported: And cannot be pressed. ** Affects: horizon Importance: Undecided Status: New ** Tags: bootstrap ** Attachment added: no_reveal_password_control.png https://bugs.launchpad.net/bugs/1359857/+attachment/4183749/+files/no_reveal_password_control.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359857 Title: Reveal/Hide password icon is not visible Status in OpenStack Dashboard (Horizon): New Bug description: And cannot be pressed. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359857/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1297173] Re: Cannot view object details with Ceph backend
Marking invalid per the above recommendation and the fact that this is being tracked as a bug in Ceph. ** Changed in: horizon Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1297173 Title: Cannot view object details with Ceph backend Status in OpenStack Dashboard (Horizon): Invalid Bug description: Steps to reproduce: With Ceph configured as the Object Storage backend, navigate to Object Store - Containers. Click on a Container in the list of containers. For an Object in the Objects panel, click on More - View Details. Expected Result: Details of the object should be displayed in a pop-up modal. Actual Result: An error pop-up appears with the text "Error: An error occurred. Please try again later." I am able to retrieve object details using the swiftclient: $ swift list container1 $ swift stat container1 object1 Account: v1 Container: container1 Object: functions Content Type: binary/octet-stream Content Length: 59177 Last Modified: Tue, 25 Mar 2014 09:46:45 GMT ETag: a837e4f4ac61417e6385a896cf4ba409 Meta Mtime: 1394204154.954066 Accept-Ranges: bytes Server: Apache/2.2.22 (Ubuntu) To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1297173/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1358026] Re: smartos unit tests not isolated
*** This bug is a duplicate of bug 1316597 ***
    https://bugs.launchpad.net/bugs/1316597

** This bug has been marked a duplicate of bug 1316597
   test_smartos fails in a chroot

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1358026

Title:
  smartos unit tests not isolated

Status in Init scripts for use on cloud images: New

Bug description:
  The unit tests for SmartOS appear to actually want to run on a SmartOS system rather than mock that layer out. I would propose that the tests get mocked, or moved to a functional test suite that isn't part of the normal test execution.

  Below is the output from the failing tests using cloud-init-0.7.5 (untarred, with dependencies installed via the system), but let me know if any other details would be helpful or required.

  make test
  Running tests...
  F.
  ======================================================================
  FAIL: test_b64_keys (tests.unittests.test_datasource.test_smartos.TestSmartOSDataSource)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/site-packages/mocker.py", line 149, in test_method_wrapper
      result = test_method()
    File "/var/tmp/portage/app-emulation/cloud-init-0.7.5-r2/work/cloud-init-0.7.5/tests/unittests/test_datasource/test_smartos.py", line 270, in test_b64_keys
      self.assertTrue(ret)
  AssertionError: False is not true
  -------------------- >> begin captured logging << --------------------
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_list'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_list with attributes ['Merger'] in ['cloudinit.mergers.m_list']
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_dict'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_dict with attributes ['Merger'] in ['cloudinit.mergers.m_dict']
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_str'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_str with attributes ['Merger'] in ['cloudinit.mergers.m_str']
  cloudinit.mergers: DEBUG: Merging 'dict' into 'dict' using method '_handle_unknown' of 'LookupMerger: (3)'
  cloudinit.mergers: DEBUG: Merging using located merger 'DictMerger: (method=no_replace,recurse_str=False,recurse_dict=True,recurse_array=False,allow_delete=False)' since it had method '_on_dict'
  cloudinit.sources.DataSourceSmartOS: DEBUG: Host does not appear to be on SmartOS
  --------------------- >> end captured logging << ---------------------

  ======================================================================
  FAIL: test_b64_userdata (tests.unittests.test_datasource.test_smartos.TestSmartOSDataSource)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/site-packages/mocker.py", line 149, in test_method_wrapper
      result = test_method()
    File "/var/tmp/portage/app-emulation/cloud-init-0.7.5-r2/work/cloud-init-0.7.5/tests/unittests/test_datasource/test_smartos.py", line 254, in test_b64_userdata
      self.assertTrue(ret)
  AssertionError: False is not true
  -------------------- >> begin captured logging << --------------------
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_list'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_list with attributes ['Merger'] in ['cloudinit.mergers.m_list']
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_dict'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_dict with attributes ['Merger'] in ['cloudinit.mergers.m_dict']
  cloudinit.importer: DEBUG: Looking for modules ['cloudinit.mergers.m_str'] that have attributes ['Merger']
  cloudinit.importer: DEBUG: Found m_str with attributes ['Merger'] in ['cloudinit.mergers.m_str']
  cloudinit.mergers: DEBUG: Merging 'dict' into 'dict' using method '_handle_unknown' of 'LookupMerger: (3)'
  cloudinit.mergers: DEBUG: Merging using located merger 'DictMerger: (method=no_replace,recurse_str=False,recurse_dict=True,recurse_array=False,allow_delete=False)' since it had method '_on_dict'
  cloudinit.sources.DataSourceSmartOS: DEBUG: Host does not appear to be on SmartOS
  --------------------- >> end captured logging << ---------------------

  ======================================================================
  FAIL: test_base64_all
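The isolation the reporter proposes amounts to patching out the platform check instead of requiring a SmartOS host. A minimal sketch with standard-library mocking; the probe class is illustrative, not cloud-init's real DataSourceSmartOS code:

```python
# Sketch of mocking the host-detection layer so the test passes anywhere
# (e.g. in a chroot), rather than only on a real SmartOS system.
from unittest import mock

class SmartOSProbe:
    """Stand-in for the layer that talks to the SmartOS hypervisor."""
    def on_smartos(self):
        # The real check would probe the platform / serial devices.
        raise RuntimeError("would touch real hardware")

def datasource_get_data(probe):
    # Mirrors the guard seen in the log: bail out when not on SmartOS.
    if not probe.on_smartos():
        return False
    return True

probe = SmartOSProbe()
# Isolated unit test: patch the platform check instead of requiring SmartOS.
with mock.patch.object(SmartOSProbe, "on_smartos", return_value=True):
    result = datasource_get_data(probe)
assert result is True
```

With the check mocked, the assertions in test_b64_keys and friends would exercise the metadata-parsing logic rather than failing at the "Host does not appear to be on SmartOS" guard.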
[Yahoo-eng-team] [Bug 1359871] [NEW] Cancel button too short on dialogs
Public bug reported: On modal dialogs, the cancel button is too short. It should be the same height as the Launch/Save button to its right. Steps to reproduce: 1. Go to Project Instances 2. Press Launch Instance This appears to be a simple matter of using a 12px font rather than 13px. ** Affects: horizon Importance: Low Assignee: Jeremy Moffitt (jeremy-moffitt) Status: New ** Tags: low-hanging-fruit ** Attachment added: cancel.png https://bugs.launchpad.net/bugs/1359871/+attachment/4183805/+files/cancel.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359871 Title: Cancel button too short on dialogs Status in OpenStack Dashboard (Horizon): New Bug description: On modal dialogs, the cancel button is too short. It should be the same height as the Launch/Save button to its right. Steps to reproduce: 1. Go to Project Instances 2. Press Launch Instance This appears to be a simple matter of using a 12px font rather than 13px. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359871/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359869] [NEW] Stats collection in netscaler driver does not work.
Public bug reported: Stats collection in netscaler driver does not work. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1359869 Title: Stats collection in netscaler driver does not work. Status in OpenStack Neutron (virtual network service): New Bug description: Stats collection in netscaler driver does not work. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1359869/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1292648] Re: [SRU] cloud-init should check/format Azure empheral disks each boot
This was fix-released a long time ago.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1292648

Title:
  [SRU] cloud-init should check/format Azure empheral disks each boot

Status in Init scripts for use on cloud images: Fix Released
Status in “cloud-init” package in Ubuntu: Fix Released
Status in “cloud-init” source package in Precise: Fix Released
Status in “cloud-init” source package in Saucy: Fix Released

Bug description:
  SRU Justification

  [IMPACT] Users on Windows Azure are guaranteed to have the ephemeral device as ext4 for the first boot. Subsequent boots may result in a FUSE-mounted NTFS file system.

  [TEST CASE] Defined in comment 3.

  [Regression Potential] Low. This change is scoped only to the Windows Azure datasource, and the behavior complies with the expected behavior of Windows Azure: the ephemeral disk is not durable between boots. From the Microsoft documentation: "Because data on a resource disk may not be durable across reboots, it is often used by applications and processes running in the virtual machine for transient and temporary storage of data. It is also used to store page or swap files for the operating system." (See http://www.windowsazure.com/en-us/documentation/articles/storage-windows-attach-disk/) Even so, the change to cloud-init is scoped to only replace the ephemeral disk when the disk 1) is NTFS; 2) has a label of "Temporary Storage"; and 3) has no files on it. When the disk matches, cloud-init will turn on the code paths for formatting the ephemeral disk for that boot only.

  [ORIGINAL REPORT] On Windows Azure, the ephemeral disk should be treated as ephemeral per boot, not per instance. Microsoft has informed us that under the following conditions an ephemeral disk may disappear: 1. The user resizes the instance 2. A fault causes the instance to move from one physical host to another 3. A machine is shut down and then started again. Essentially, on Azure, the ephemeral disk is extremely ephemeral. Users who hit any of the above situations are discovering that /mnt is mounted with the default NTFS file system.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: cloud-init 0.7.5~bzr964-0ubuntu1 [modified: usr/lib/python2.7/dist-packages/cloudinit/config/cc_disk_setup.py usr/lib/python2.7/dist-packages/cloudinit/config/cc_final_message.py usr/lib/python2.7/dist-packages/cloudinit/config/cc_seed_random.py usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceAzure.py usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceCloudSigma.py usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceSmartOS.py]
  ProcVersionSignature: Ubuntu 3.13.0-17.37-generic 3.13.6
  Uname: Linux 3.13.0-17-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  Date: Fri Mar 14 17:53:20 2014
  PackageArchitecture: all
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1292648/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
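The three-way guard described in the SRU justification (NTFS, "Temporary Storage" label, no files) can be sketched as a small predicate. The dict below stands in for real disk probing (blkid / mount inspection); it is an illustration of the stated conditions, not cloud-init's actual implementation:

```python
# Sketch of the Azure ephemeral-disk reformat guard, assuming the three
# conditions stated in the SRU justification.

def should_reformat(disk):
    """True only when the disk looks like the pristine NTFS resource
    disk Azure hands out after a resize, migration, or stop/start."""
    return (disk["fstype"] == "ntfs"
            and disk["label"] == "Temporary Storage"
            and disk["file_count"] == 0)

# Fresh resource disk after a migration: safe to reformat to ext4.
assert should_reformat(
    {"fstype": "ntfs", "label": "Temporary Storage", "file_count": 0})
# User data present: leave the disk alone.
assert not should_reformat(
    {"fstype": "ntfs", "label": "Temporary Storage", "file_count": 7})
# Already reformatted to ext4 on a previous boot: nothing to do.
assert not should_reformat(
    {"fstype": "ext4", "label": "Temporary Storage", "file_count": 0})
```

Requiring all three conditions is what keeps the regression potential low: a disk carrying any user files, or one that is no longer the stock NTFS image, is never touched.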
[Yahoo-eng-team] [Bug 1359889] [NEW] NetScaler LBaaS Driver: Remove SNAT port creation by driver
Public bug reported: The NetScaler driver need not create the snat port. The middleware controlcenter will be creating the snat port whenever required. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1359889 Title: NetScaler LBaaS Driver: Remove SNAT port creation by driver Status in OpenStack Neutron (virtual network service): New Bug description: The NetScaler driver need not create the snat port. The middleware controlcenter will be creating the snat port whenever required. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1359889/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359890] [NEW] NetScaler LBaaS Driver: enable pool member status refresh
Public bug reported: Pool Members status should be ACTIVE/UP when the monitor in the NetScaler backend has detected it is UP and should be INACTIVE/Down when the NetScaler backend has detected it is DOWN. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1359890 Title: NetScaler LBaaS Driver: enable pool member status refresh Status in OpenStack Neutron (virtual network service): New Bug description: Pool Members status should be ACTIVE/UP when the monitor in the NetScaler backend has detected it is UP and should be INACTIVE/Down when the NetScaler backend has detected it is DOWN. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1359890/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359893] [NEW] NetScaler LBaaS Driver: Enable async creation of VIP in the NetScaler backend
Public bug reported: VIP creation in the NetScaler backend is assumed to be synchronous. There are cases where new backend appliances get created; this requires the VIP creation call to be asynchronous. This change should enable VIP creation to be asynchronous. ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1359893 Title: NetScaler LBaaS Driver: Enable async creation of VIP in the NetScaler backend Status in OpenStack Neutron (virtual network service): New Bug description: VIP creation in the NetScaler backend is assumed to be synchronous. There are cases where new backend appliances get created; this requires the VIP creation call to be asynchronous. This change should enable VIP creation to be asynchronous. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1359893/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359901] [NEW] Volume Backups dialogs missing bootstrap treatments
Public bug reported: Cancel buttons in the Create Volume Backups dialog and the Restore Volume Backup dialog have no border. ** Affects: horizon Importance: Undecided Assignee: Julie Gravel (julie-gravel) Status: New ** Tags: bootstrap ** Changed in: horizon Assignee: (unassigned) => Julie Gravel (julie-gravel) ** Tags added: bootstrap -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359901 Title: Volume Backups dialogs missing bootstrap treatments Status in OpenStack Dashboard (Horizon): New Bug description: Cancel buttons in the Create Volume Backups dialog and the Restore Volume Backup dialog have no border. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359901/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359969] [NEW] base template includes improper html
Public bug reported: https://github.com/openstack/horizon/blob/a0f7235278cfe187b2ff31bfb787548735111c8b/horizon/templates/base.html#L40 contains a self-closing div tag. This is not valid syntax for HTML5 (see http://dev.w3.org/html5/html-author/#tags). Although I'm not aware of any specific wrong behaviors that result from this, we should certainly make sure our pages are well-formed HTML. ** Affects: horizon Importance: Undecided Assignee: Doug Fish (drfish) Status: New ** Changed in: horizon Assignee: (unassigned) => Doug Fish (drfish) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1359969 Title: base template includes improper html Status in OpenStack Dashboard (Horizon): New Bug description: https://github.com/openstack/horizon/blob/a0f7235278cfe187b2ff31bfb787548735111c8b/horizon/templates/base.html#L40 contains a self-closing div tag. This is not valid syntax for HTML5 (see http://dev.w3.org/html5/html-author/#tags). Although I'm not aware of any specific wrong behaviors that result from this, we should certainly make sure our pages are well-formed HTML. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1359969/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1326140] Re: Cannot repartition root
Sorry, I forgot all about this bug report. We fixed the problem long ago. IIRC it turned out not to be a cloud-init problem; cloud-init just changed the timing. We initially fixed it by patching a cloud-init script, but even that patch was later removed. So cloud-init is completely exonerated! ** Changed in: cloud-init Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1326140 Title: Cannot repartition root Status in Init scripts for use on cloud images: Invalid Bug description: I need to repartition the root drive. After repartitioning everything looks good. e2fsck -f /dev/sda1 works and I can make a file system and mount the new partition. However, when I reboot it fails with: "bad geometry: block count exceeds size of device". However, the exact same code works if I first apt-get purge cloud-init. I am testing in Hyper-V with an Ubuntu 14.04 image downloaded from Azure. The same code also works in VMware, and a modified version works in CentOS. I don't understand how cloud-init could be affecting things, since it fails well before cloud-init starts. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1326140/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1359995] [NEW] Tempest failed to delete user
Public bug reported:

check-tempest-dsvm-full failed on a keystone change. Here's the main log:
http://logs.openstack.org/73/111573/4/check/check-tempest-dsvm-full/c5ce3bd/console.html

The traceback shows:

  File "tempest/api/volume/test_volumes_list.py", line 80, in tearDownClass
  File "tempest/services/identity/json/identity_client.py", line 189, in delete_user
  Unauthorized: Unauthorized
  Details: {"error": {"message": "The request you have made requires authentication. (Disable debug mode to suppress these details.)", "code": 401, "title": "Unauthorized"}}

So it's trying to delete the user and it gets unauthorized. Maybe the token was expired or marked invalid for some reason.

There's something wrong here, but the keystone logs are useless for debugging now that it's running in Apache httpd. The logs don't have the request or result line, so you can't find where the request was being made.

Also, Tempest should be able to handle the token being invalidated. It should just get a new token and try with that.

** Affects: devstack
   Importance: Undecided
   Status: New

** Affects: keystone
   Importance: Undecided
   Status: New

** Affects: tempest
   Importance: Undecided
   Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

** Also affects: devstack
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1359995

Title:
  Tempest failed to delete user

Status in devstack - openstack dev environments: New
Status in OpenStack Identity (Keystone): New
Status in Tempest: New

Bug description:
  check-tempest-dsvm-full failed on a keystone change. Here's the main log:
  http://logs.openstack.org/73/111573/4/check/check-tempest-dsvm-full/c5ce3bd/console.html

  The traceback shows:

    File "tempest/api/volume/test_volumes_list.py", line 80, in tearDownClass
    File "tempest/services/identity/json/identity_client.py", line 189, in delete_user
    Unauthorized: Unauthorized
    Details: {"error": {"message": "The request you have made requires authentication. (Disable debug mode to suppress these details.)", "code": 401, "title": "Unauthorized"}}

  So it's trying to delete the user and it gets unauthorized. Maybe the token was expired or marked invalid for some reason.

  There's something wrong here, but the keystone logs are useless for debugging now that it's running in Apache httpd. The logs don't have the request or result line, so you can't find where the request was being made.

  Also, Tempest should be able to handle the token being invalidated. It should just get a new token and try with that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1359995/+subscriptions
--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
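The suggested re-authenticate-and-retry behaviour might look like the sketch below. The client here is purely illustrative, not the real tempest identity client API:

```python
# Sketch of retrying once with a fresh token after a 401, per the
# report's suggestion. Token strings stand in for real keystone auth.

class Unauthorized(Exception):
    pass

class Client:
    def __init__(self):
        self.token = "expired"
        self.calls = []          # (token, user_id) pairs, for inspection

    def _authenticate(self):
        self.token = "fresh"     # stand-in for a real token request

    def _do_delete_user(self, user_id):
        self.calls.append((self.token, user_id))
        if self.token != "fresh":
            raise Unauthorized()
        return 204

    def delete_user(self, user_id):
        try:
            return self._do_delete_user(user_id)
        except Unauthorized:
            # Token may have expired or been invalidated: re-auth once
            # and retry, instead of failing the whole tearDownClass.
            self._authenticate()
            return self._do_delete_user(user_id)

c = Client()
assert c.delete_user("demo") == 204
assert [t for t, _ in c.calls] == ["expired", "fresh"]
```

Retrying exactly once is the usual shape: it recovers from token expiry without masking a genuinely broken credential by looping forever.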
[Yahoo-eng-team] [Bug 1359999] [NEW] Identity panels show loading icon when click on it
Public bug reported: When I click on Identity in the right navigation, and then click on either Project or User, I get a brief grey backdrop + loading icon before the table shows up. Clicking on any of the other panels doesn't show this; we should be consistent. I've only seen the brief grey backdrop + loading icon when pulling up a modal. Please see the attached image. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: Untitled.png https://bugs.launchpad.net/bugs/135/+attachment/4184114/+files/Untitled.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/135 Title: Identity panels show loading icon when click on it Status in OpenStack Dashboard (Horizon): New Bug description: When I click on Identity in the right navigation, and then click on either Project or User, I get a brief grey backdrop + loading icon before the table shows up. Clicking on any of the other panels doesn't show this; we should be consistent. I've only seen the brief grey backdrop + loading icon when pulling up a modal. Please see the attached image. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/135/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1360010] [NEW] Database Terminate Instance should be red btn-danger
Public bug reported: https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/databases/tables.py#L38

    class TerminateInstance(tables.BatchAction):
        name = "terminate"
        action_present = _("Terminate")
        action_past = _("Scheduled termination of %(data_type)s")
        data_type_singular = _("Instance")
        data_type_plural = _("Instances")
        classes = ("ajax-modal",)   # <== should be 'btn-danger'
        icon = "off"

** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1360010 Title: Database Terminate Instance should be red btn-danger Status in OpenStack Dashboard (Horizon): New Bug description: https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/databases/tables.py#L38

    class TerminateInstance(tables.BatchAction):
        name = "terminate"
        action_present = _("Terminate")
        action_past = _("Scheduled termination of %(data_type)s")
        data_type_singular = _("Instance")
        data_type_plural = _("Instances")
        classes = ("ajax-modal",)   # <== should be 'btn-danger'
        icon = "off"

To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1360010/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1360011] [NEW] SSH Auth fails in AdvancedNetworkOps scenario
Public bug reported: Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu; the latter runs nova-network. In the past 7 days: 105 hits (12 in gate) (grenade: 30, neutron-standard: 1, neutron-full: 74). In the past 36 hours: 72 hits (8 in gate) (grenade: 0, neutron-standard: 1, neutron-full: 71). Something has apparently fixed the issue in the grenade test but broken the neutron tests. A Logstash query (built from the console output, as there is no clue in the logs) is available at [1]. The issue manifests as a failure to authenticate to the server (the SSH server responds); paramiko then starts returning errors like [2] until the timeout expires. [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9 [2] http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931 ** Affects: neutron Importance: High Assignee: Salvatore Orlando (salvatore-orlando) Status: New ** Tags: neutron-full-job -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1360011 Title: SSH Auth fails in AdvancedNetworkOps scenario Status in OpenStack Neutron (virtual network service): New Bug description: Affects all neutron full jobs and check-grenade-dsvm-partial-ncpu; the latter runs nova-network.
In the past 7 days: 105 hits (12 in gate) (grenade: 30, neutron-standard: 1, neutron-full: 74). In the past 36 hours: 72 hits (8 in gate) (grenade: 0, neutron-standard: 1, neutron-full: 71). Something has apparently fixed the issue in the grenade test but broken the neutron tests. A Logstash query (built from the console output, as there is no clue in the logs) is available at [1]. The issue manifests as a failure to authenticate to the server (the SSH server responds); paramiko then starts returning errors like [2] until the timeout expires. [1] http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVFJBQ0VcIiBBTkQgbWVzc2FnZTpcIlNTSEV4Y2VwdGlvbjogRXJyb3IgcmVhZGluZyBTU0ggcHJvdG9jb2wgYmFubmVyW0Vycm5vIDEwNF0gQ29ubmVjdGlvbiByZXNldCBieSBwZWVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOC0yMFQxMTo1NDoyMCswMDowMCIsInRvIjoiMjAxNC0wOC0yMVQyMzo1NDoyMCswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDA4NjY1MjkzODA2LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9 [2] http://logs.openstack.org/10/98010/5/gate/gate-tempest-dsvm-neutron-full/aca3f89/console.html#_2014-08-21_08_36_14_931 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1360011/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1360012] [NEW] Database Launch Instance should show Flavor Details
Public bug reported: The modal has a Flavor dropdown, but the user has no idea how the Flavor is defined. We should follow the same convention as the Launch Instance modal. See attached image. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1360012 Title: Database Launch Instance should show Flavor Details Status in OpenStack Dashboard (Horizon): New Bug description: The modal has a Flavor dropdown, but the user has no idea how the Flavor is defined. We should follow the same convention as the Launch Instance modal. See attached image. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1360012/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1360014] [NEW] Database Launch Instance Flavor list should be sorted
Public bug reported: It should be sorted like it is on the Launch Instance dialog and make use of CREATE_INSTANCE_FLAVOR_SORT in local_settings.py. ** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New ** Attachment added: Untitled.png https://bugs.launchpad.net/bugs/1360014/+attachment/4184160/+files/Untitled.png ** Changed in: horizon Assignee: (unassigned) => Cindy Lu (clu-m) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1360014 Title: Database Launch Instance Flavor list should be sorted Status in OpenStack Dashboard (Horizon): New Bug description: It should be sorted like it is on the Launch Instance dialog and make use of CREATE_INSTANCE_FLAVOR_SORT in local_settings.py. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1360014/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
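[Editorial note: for readers unfamiliar with the setting named in the report, a minimal local_settings.py fragment might look like the following; the key/reverse values are illustrative, only the setting name comes from the report.]

```python
# local_settings.py -- sort the flavor list in the Launch Instance dialog.
# 'key' names the flavor attribute to sort by; 'reverse' picks the order.
# (Values here are illustrative.)
CREATE_INSTANCE_FLAVOR_SORT = {
    'key': 'ram',       # sort flavors by RAM
    'reverse': False,   # ascending
}
```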
[Yahoo-eng-team] [Bug 1360022] [NEW] min_ram and min_disk is ignored when boot from volume
Public bug reported: When booting from a volume that was created from an image, the original image's min_ram and min_disk attributes are ignored; this is not good. The reason for this failure is that _check_requested_image() in compute/api.py skips the check if the source is a volume. ** Affects: nova Importance: Undecided Assignee: jiang, yunhong (yunhong-jiang) Status: New ** Changed in: nova Assignee: (unassigned) => jiang, yunhong (yunhong-jiang) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1360022 Title: min_ram and min_disk is ignored when boot from volume Status in OpenStack Compute (Nova): New Bug description: When booting from a volume that was created from an image, the original image's min_ram and min_disk attributes are ignored; this is not good. The reason for this failure is that _check_requested_image() in compute/api.py skips the check if the source is a volume. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1360022/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
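[Editorial note: the check the report says should also run for volume-backed boots can be sketched standalone as below; the function and exception names are illustrative, not Nova's actual signatures.]

```python
class ImageTooSmall(Exception):
    """Raised when a flavor does not meet an image's minimum sizes."""


def check_min_requirements(image_meta, flavor_ram_mb, flavor_disk_gb):
    """Reject a flavor smaller than the image's min_ram / min_disk.

    Sketch of the validation that should also apply when the image
    metadata comes from a volume created from an image (all names
    here are illustrative).
    """
    min_ram = int(image_meta.get('min_ram', 0))
    min_disk = int(image_meta.get('min_disk', 0))
    if flavor_ram_mb < min_ram:
        raise ImageTooSmall('flavor has %d MB RAM, image requires %d MB'
                            % (flavor_ram_mb, min_ram))
    if flavor_disk_gb < min_disk:
        raise ImageTooSmall('flavor has %d GB disk, image requires %d GB'
                            % (flavor_disk_gb, min_disk))
```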
[Yahoo-eng-team] [Bug 1360039] [NEW] on second page can't edit instances
Public bug reported: Under the Instances panel, only the first page can edit instances; if I click "More" to go to the next page, I can't edit an instance. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1360039 Title: on second page can't edit instances Status in OpenStack Dashboard (Horizon): New Bug description: Under the Instances panel, only the first page can edit instances; if I click "More" to go to the next page, I can't edit an instance. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1360039/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1328700] Re: On mass deletion - some VM's stuck on ERROR due to connection failed to neutron
[Expired for neutron because there has been no activity for 60 days.] ** Changed in: neutron Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1328700 Title: On mass deletion - some VM's stuck on ERROR due to connection failed to neutron Status in OpenStack Neutron (virtual network service): Expired Bug description: Description of problem: When doing mass deletion (more than 64 VM's in parallel), some VM's got stuck in the ERROR state with the error: Connection to neutron failed: Maximum attempts reached, code: 500, created: 2014-06-10T21:34:38Z Version-Release number of selected component (if applicable): openstack-neutron-openvswitch-2014.1-26.el7ost.noarch python-neutronclient-2.3.4-2.el7ost.noarch python-neutron-2014.1-26.el7ost.noarch openstack-neutron-ml2-2014.1-26.el7ost.noarch openstack-neutron-2014.1-26.el7ost.noarch How reproducible: 100% Steps to Reproduce: 1. Set up an environment with a lot of ACTIVE VM's (more than 64) 2.
Run mass deletion:

  for each in `nova list | grep ACTIVE | cut -d'|' -f3`; do nova delete $each; done

Actual results:

[root@cougar16 ~(keystone_stress1)]$ nova list
+--------------------------------------+------------+--------+------------+-------------+-----------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks              |
+--------------------------------------+------------+--------+------------+-------------+-----------------------+
| 0731ab43-99da-409a-99b9-627287b0a80a | stress1-42 | ERROR  | deleting   | Running     | private1=192.168.1.61 |
+--------------------------------------+------------+--------+------------+-------------+-----------------------+

[root@cougar16 ~(keystone_stress1)]$ nova show stress1-42
+------------------------------+----------------------------------------------------------------------------------------------------------------------+
| Property                     | Value |
+------------------------------+----------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig            | MANUAL |
| OS-EXT-AZ:availability_zone  | nova |
| OS-EXT-STS:power_state       | 1 |
| OS-EXT-STS:task_state        | deleting |
| OS-EXT-STS:vm_state          | error |
| OS-SRV-USG:launched_at       | 2014-06-10T21:28:53.00 |
| OS-SRV-USG:terminated_at     | - |
| accessIPv4                   | |
| accessIPv6                   | |
| config_drive                 | |
| created                      | 2014-06-10T21:27:33Z |
| fault                        | {"message": "Connection to neutron failed: Maximum attempts reached", "code": 500, "created": "2014-06-10T21:34:38Z"} |
| flavor                       | mini (0) |
| hostId                       | 0c295b885647eb08a3c04a15eb86f9746430dd635c5f8c6291315508 |
| id                           | 0731ab43-99da-409a-99b9-627287b0a80a |
| image                        | cirros (ae31ea8c-c5ca-4ca1-9662-3545304d8e79)
[Yahoo-eng-team] [Bug 1350107] Re: Could not find a version that satisfies the requirement oslo.config>=1.4.0.0a3
** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1350107 Title: Could not find a version that satisfies the requirement oslo.config>=1.4.0.0a3 Status in OpenStack Compute (Nova): Invalid Bug description: https://review.openstack.org/#/c/110101/ introduced the requirement oslo.config>=1.4.0.0a3; however, this package doesn't exist on pypi: https://pypi.python.org/simple/oslo.config/ http://pypi.openstack.org/openstack/oslo.config/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1350107/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1222682] Re: Live Migration does not work unless cache=none
** Changed in: nova Status: In Progress => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1222682 Title: Live Migration does not work unless cache=none Status in OpenStack Compute (Nova): Won't Fix Status in OpenStack Manuals: Won't Fix Bug description: If O_DIRECT is not supported, 'cache=none' is not set. Thus, live migration gives the following error: Live Migration failure: Unsafe migration: Migration may lead to data corruption if disks use cache != none However, it is OK to migrate if the storage is coherent across nodes, so Nova's live migration should support libvirt's unsafe=True flag. Ideally there should be a nova flag which can be set to pass through the libvirt 'unsafe=true' flag. My thinking is that if 'cache=writethrough' is set then the underlying storage should be safe, but perhaps it's best not to make that assumption and rather to have a nova.conf flag. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1222682/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1215012] Re: _reschedule_or_error will detach other instances' volumes on Cinder side
patch was abandoned, marking as invalid ** Changed in: nova Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1215012 Title: _reschedule_or_error will detach other instances' volumes on Cinder side Status in OpenStack Compute (Nova): Invalid Bug description: _reschedule_or_error will detach other instances' volumes on the Cinder side. The bug is related to bug #1195947: https://bugs.launchpad.net/nova/+bug/1195947. When a user creates (in error) an instance using volumes which are already in use by another instance, the error is correctly detected, but the _reschedule_or_error function will incorrectly detach all BDM volumes given in the request, regardless of whether each volume has been attached to the instance or not. It'll rewrite the record on the Cinder side, causing a conflict. So we need a protection here: the volume rollback should only detach volumes which truly belong to the instance. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1215012/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 929771] Re: No content-type in create volume type request returns 500
nova volumes does not exist anymore ** Changed in: nova Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/929771 Title: No content-type in create volume type request returns 500 Status in OpenStack Compute (Nova): Won't Fix Bug description: A 500 is returned on a POST request to create a new volume_type if you don't include a Content-Type header.

curl -H "x-auth-token: <auth token>" http://<server management url>/v1.1/<account>/os-volume-types -XPOST -T SATA.json | python -m json.tool
{
    "computeFault": {
        "code": 500,
        "message": "The server has either erred or is incapable of performing\r\nthe requested operation.\r\n"
    }
}

2012-02-09 11:46:27,337 DEBUG routes.middleware [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] Matched POST /7e6c1592093d4ffabae29c886bcf741d/os-volume-types from (pid=30322) debug /usr/lib/python2.6/dist-packages/nova/log.py:175
2012-02-09 11:46:27,338 DEBUG routes.middleware [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] Route path: '/{project_id}/os-volume-types', defaults: {'action': u'create', 'controller': <nova.api.openstack.wsgi.Resource object at 0x29c1390>} from (pid=30322) debug /usr/lib/python2.6/dist-packages/nova/log.py:175
2012-02-09 11:46:27,338 DEBUG routes.middleware [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] Match dict: {'action': u'create', 'controller': <nova.api.openstack.wsgi.Resource object at 0x29c1390>, 'project_id': u'7e6c1592093d4ffabae29c886bcf741d'} from (pid=30322) debug /usr/lib/python2.6/dist-packages/nova/log.py:175
2012-02-09 11:46:27,339 INFO nova.api.openstack.wsgi [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] POST http://z2-api1.ohthree.com:8774/v1.1/7e6c1592093d4ffabae29c886bcf741d/os-volume-types
2012-02-09 11:46:27,339 DEBUG nova.api.openstack.wsgi [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] Unrecognized Content-Type provided in request from (pid=30322) debug /usr/lib/python2.6/dist-packages/nova/log.py:175
2012-02-09 11:46:27,340 ERROR nova.api.openstack [req-91b89854-65dc-4d84-afcd-ae557977b928 a15a08c27d3b42499ef5972ce6c61a40 7e6c1592093d4ffabae29c886bcf741d] Caught error: create() takes exactly 3 non-keyword arguments (2 given)
(nova.api.openstack): TRACE: Traceback (most recent call last):
(nova.api.openstack): TRACE:   File "/usr/lib/python2.6/dist-packages/nova/api/openstack/__init__.py", line 41, in __call__
(nova.api.openstack): TRACE:     return req.get_response(self.application)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/request.py", line 919, in get_response
(nova.api.openstack): TRACE:     application, catch_exc_info=False)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/request.py", line 887, in call_application
(nova.api.openstack): TRACE:     app_iter = application(self.environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/python2.6/dist-packages/keystone/middleware/auth_token.py", line 343, in __call__
(nova.api.openstack): TRACE:     return self._forward_request(env, start_response, proxy_headers)
(nova.api.openstack): TRACE:   File "/usr/lib/python2.6/dist-packages/keystone/middleware/auth_token.py", line 576, in _forward_request
(nova.api.openstack): TRACE:     return self.app(env, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__
(nova.api.openstack): TRACE:     return resp(environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__
(nova.api.openstack): TRACE:     return resp(environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__
(nova.api.openstack): TRACE:     return resp(environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/routes/middleware.py", line 131, in __call__
(nova.api.openstack): TRACE:     response = self.app(environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 159, in __call__
(nova.api.openstack): TRACE:     return resp(environ, start_response)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 147, in __call__
(nova.api.openstack): TRACE:     resp = self.call_func(req, *args, **self.kwargs)
(nova.api.openstack): TRACE:   File "/usr/lib/pymodules/python2.6/webob/dec.py", line 208, in call_func
(nova.api.openstack):
[Yahoo-eng-team] [Bug 970409] Re: Deleting volumes with snapshots should be allowed for some backends
nova volumes does not exist anymore ** Changed in: nova Status: Triaged => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/970409 Title: Deleting volumes with snapshots should be allowed for some backends Status in Cinder: Won't Fix Status in OpenStack Compute (Nova): Won't Fix Bug description: Right now, nova-volumes does not allow volumes to be deleted that have snapshots attached. Some backends may support this, so whether volumes with snapshots may be deleted should be configurable by the administrator. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/970409/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 988557] Re: volume attach should be separated into its own extension
this already happened, nova volumes has been removed ** Changed in: nova Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/988557 Title: volume attach should be separated into its own extension Status in OpenStack Compute (Nova): Invalid Bug description: Since volumes is moving to its own API endpoint for the moment, and soon its own project, I think the volume attach functionality should be moved to its own extension, so that the rest of the os-volumes extension can be disabled (since you don't need it). To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/988557/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1005956] Re: First VM provisioning is slow for large VM images
Since this bug was filed a lot has been done to make this faster ** Changed in: nova Status: Confirmed => Won't Fix -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1005956 Title: First VM provisioning is slow for large VM images Status in OpenStack Compute (Nova): Won't Fix Bug description: To provision a VM, OpenStack: (1) copies the image over the network to the compute node's _base directory; (2) if it's a qcow2 image, converts it to raw; (3) creates a copy of the raw image using a cp operation; (4) creates qcow2 disks from the image in (3). Step (2) can be eliminated using force_to_raw=False in nova.conf. However, copying is a costly operation if the image size is large. Image size can be large due to VM snapshotting, or for Windows images. We need to eliminate it. Can we simply create qcow2 disks from the image copied over the network? Something like: qemu-img create -f qcow2 -b image_from_network new.img 10G To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1005956/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 884582] Re: EC2 API: DescribeImages does not return custom properties
** Changed in: nova Status: Confirmed => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/884582 Title: EC2 API: DescribeImages does not return custom properties Status in OpenStack Compute (Nova): Opinion Bug description: Glance supports setting custom properties, but these are then not retrievable through the EC2 API (that I can see). I think the natural way to expose them is through the tagSet collection of DescribeImagesResponseItemType. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/884582/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 925748] Re: We should call unicode() instead of str()
until we have the dependency issue addressed for python3 this isn't worth addressing IMHO ** Changed in: nova Status: Triaged => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/925748 Title: We should call unicode() instead of str() Status in OpenStack Compute (Nova): Opinion Bug description: Until we move to Python 3, we need to be careful of unicode vs ascii strings. Calling str(s) on a string which contains extended characters raises an exception. Please see related Bug #822666, which was a specific instance of this. This is potentially a huge change though, so I'm not sure how we should attack it; here are a few starting thoughts: Replace all calls to str() with unicode() (or deprecate and replace over time). Anywhere we call _() we're OK, because gettext is configured in unicode mode. If we have something like "%s %s" % (a, b), could we be in trouble there also? Other ideas: we could change the default python character encoding? We could target Python 3 in parallel to Python 2? To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/925748/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
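[Editorial note: the failure mode described in the bug (Python 2's str() implicitly encoding with the ASCII codec) and the suggested explicit-conversion fix can be sketched as below; the to_text helper name is illustrative.]

```python
def to_text(value):
    # Safe replacement for a bare str()/unicode() call: decode bytes
    # explicitly rather than relying on the implicit ASCII default codec.
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return u'%s' % (value,)


# The crash described above, reproduced explicitly: ASCII cannot
# represent u'\xe9', so encoding raises UnicodeEncodeError (this is
# what Python 2's str() did implicitly on a unicode string).
try:
    u'caf\xe9'.encode('ascii')
except UnicodeEncodeError:
    pass
```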