[Yahoo-eng-team] [Bug 1427539] [NEW] lbaasv2 old synchronous driver import fails to redirect to new path
Public bug reported:

Recently all v2 drivers were moved from the neutron_lbaas.services.loadbalancer.drivers package to the neutron_lbaas.drivers package. To maintain backwards compatibility with some of them, an import redirect was implemented for configurations that have not been updated to the new paths. The synchronous_namespace_driver.py module's redirect is broken.

** Affects: neutron
   Importance: Undecided
   Assignee: Brandon Logan (brandon-logan)
   Status: In Progress

** Tags: lbaas

** Changed in: neutron
   Assignee: (unassigned) => Brandon Logan (brandon-logan)

https://bugs.launchpad.net/bugs/1427539
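For reference, an import redirect of this kind is usually just a thin shim module left at the old path that re-exports everything from the new location. A minimal sketch follows; the exact new module path is an assumption, not the actual neutron-lbaas layout.

    # neutron_lbaas/services/loadbalancer/drivers/haproxy/synchronous_namespace_driver.py
    #
    # Hypothetical redirect shim: re-export the public names from the module's
    # new home so old configuration values keep resolving. The target path below
    # is an illustrative assumption.
    from neutron_lbaas.drivers.haproxy.synchronous_namespace_driver import *  # noqa

If the shim is missing, mistyped, or points at a module that does not exist, configurations referencing the old path fail at service startup with an ImportError.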
[Yahoo-eng-team] [Bug 1427522] [NEW] affinity server group is limited on one host
Public bug reported:

In the OpenStack Configuration Reference, the scheduling section states the following: "Are in a set of group hosts (if requested) (ServerGroupAffinityFilter)." This reads as though an affinity server group can be placed on multiple hosts, but after some trials and investigation, an affinity server group is limited to one host. When the current host does not have enough resources for the subsequent VMs, the nova scheduler returns "No Valid Host Was Found".

I don't know whether one affinity server group is intended to support multiple hosts. If so, the current implementation needs to be updated; otherwise, the document above needs to be reworded. A simplified sketch of the filter semantics is shown below.

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427522
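To illustrate why the behaviour collapses to a single host, here is a simplified sketch of what an affinity filter does; this is not Nova's actual code, just the logic the report describes.

    # Simplified illustration of ServerGroupAffinityFilter semantics (not Nova's
    # implementation): once a group has members, only hosts already used by the
    # group pass the filter.
    def host_passes(candidate_host, group_hosts):
        if not group_hosts:                    # first instance in the group: any host is fine
            return True
        return candidate_host in group_hosts   # afterwards the group is pinned

    # Example: the group already lives on 'node-1', so 'node-2' is rejected even
    # if 'node-1' has no capacity left -> "No Valid Host Was Found".
    assert host_passes('node-1', set()) is True
    assert host_passes('node-2', {'node-1'}) is False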
[Yahoo-eng-team] [Bug 1427521] [NEW] client side filter is missing in vpn tables
Public bug reported:

The client-side filter option is missing in the VPN tables.

** Affects: horizon
   Importance: Undecided
   Assignee: Masco Kaliyamoorthy (masco)
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

https://bugs.launchpad.net/bugs/1427521
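Adding a client-side filter to a Horizon table is normally just a matter of declaring a FilterAction in the table's Meta. A minimal sketch follows; the table and column names are hypothetical, not Horizon's actual VPN table code.

    # Hypothetical sketch of wiring a client-side (JavaScript) filter into a
    # Horizon table; table/column names are made up for illustration.
    from django.utils.translation import ugettext_lazy as _
    from horizon import tables

    class VPNServicesFilterAction(tables.FilterAction):
        name = "filter_vpnservices"   # no server-side filter type => client-side filtering

    class VPNServicesTable(tables.DataTable):
        name = tables.Column("name", verbose_name=_("Name"))
        status = tables.Column("status", verbose_name=_("Status"))

        class Meta(object):
            name = "vpnservices"
            verbose_name = _("VPN Services")
            table_actions = (VPNServicesFilterAction,)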
[Yahoo-eng-team] [Bug 1427520] [NEW] Language change option is not working for new panels
Public bug reported:

In Horizon I added a new panel, "My Panel", which consists of a table and a table action to upload a file. After this I navigated to Settings and changed the preferred language (to Hindi). I noticed that all the other panels were affected by the new language (Hindi), but "My Panel" is unaffected and is still in English.

** Affects: horizon
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427520
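For a custom panel to follow the language switch, its user-visible strings have to go through Django's translation machinery and compiled message catalogs must exist for the target locale. A minimal sketch of the first half follows; the panel, file, and URL names are hypothetical.

    # my_panel/tables.py -- hypothetical example: strings wrapped in
    # ugettext_lazy are translatable; hard-coded English strings will stay in
    # English regardless of the user's language setting. Message catalogs still
    # need to be generated and compiled for each locale.
    from django.utils.translation import ugettext_lazy as _
    from horizon import tables

    class UploadFile(tables.LinkAction):
        name = "upload"
        verbose_name = _("Upload File")           # translatable label
        url = "horizon:mydashboard:mypanel:upload"  # hypothetical URL name
        classes = ("ajax-modal",)

    class MyPanelTable(tables.DataTable):
        filename = tables.Column("filename", verbose_name=_("File Name"))

        class Meta(object):
            name = "mypanel"
            verbose_name = _("My Panel")
            table_actions = (UploadFile,)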
[Yahoo-eng-team] [Bug 1427517] [NEW] client side filter is missing in firewall tables
Public bug reported:

The client-side filter option is missing in all the firewall tables.

** Affects: horizon
   Importance: Undecided
   Assignee: Masco Kaliyamoorthy (masco)
   Status: In Progress

** Changed in: horizon
   Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

https://bugs.launchpad.net/bugs/1427517
[Yahoo-eng-team] [Bug 1427509] [NEW] add oauth and federation authentication to config file
Public bug reported:

Federation and OAuth support are no longer optional features. In the [auth] section of the Keystone config file, they should be indicated as valid options for authentication, and perhaps now included in the default 'methods' option.

** Affects: keystone
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427509
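As a rough illustration of what is being asked for, the [auth] 'methods' list option would need the OAuth and federation methods among its defaults. A hedged oslo.config sketch follows; the method names and default list are assumptions, not Keystone's actual option definition.

    # Illustrative only: how an [auth] "methods" option that includes OAuth and
    # federation might be registered. The default list here is an assumption.
    from oslo_config import cfg

    auth_opts = [
        cfg.ListOpt('methods',
                    default=['external', 'password', 'token', 'oauth1', 'saml2'],
                    help='Allowed authentication methods.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(auth_opts, group='auth')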
[Yahoo-eng-team] [Bug 1427485] [NEW] Fernet tokens contain a version identifier that is not integrity verified
Public bug reported:

Fernet tokens all start with a plaintext string of either "F00" or "F01", indicating either "version 0" (normal unscoped and scoped tokens) or "version 1" (trust-based tokens). That versioning lies outside of the integrity-verified portion of the token, and is thus susceptible to manipulation by end users. With only two token versions, this doesn't present any issues that I'm aware of, but to harden ourselves against the future, we should move that versioning information into the integrity-verified portion of the message. Otherwise, we carry a risk of future implementations inadvertently introducing privilege escalation vulnerabilities, a means for end users to disable authorization checks by supplying older versions, etc.

In addition, the format prefix was originally intended to make it easier for remote clients (keystonemiddleware.auth_token) to parse apart and validate tokens without going back to talk to Keystone. Since that's not the plan here (Fernet tokens must be validated with Keystone, since that's the only place where the encryption keys are accessible), the entire format prefix ("F00" / "F01") can be dropped, as long as Keystone can still validate the tokens it's issuing.

** Affects: keystone
   Importance: Medium
   Assignee: Dolph Mathews (dolph)
   Status: New

** Tags: fernet

** Tags added: fernet

** Summary changed:
- Fernet tokens contain non-integrity verified version identifier
+ Fernet tokens contain a version identifier that is not integrity verified

https://bugs.launchpad.net/bugs/1427485
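The suggested hardening amounts to carrying the version inside the encrypted-and-MAC'd payload rather than as a plaintext prefix. A generic sketch using the cryptography library's Fernet implementation follows; the payload layout is an assumption for illustration, not Keystone's actual token format.

    # Generic illustration of placing a format version inside the
    # integrity-protected Fernet payload instead of a plaintext prefix.
    import struct
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    f = Fernet(key)

    def issue(version, body):
        # the version byte is now covered by Fernet's HMAC and cannot be tampered with
        return f.encrypt(struct.pack('!B', version) + body)

    def validate(token):
        payload = f.decrypt(token)   # raises InvalidToken on any tampering
        version = struct.unpack('!B', payload[:1])[0]
        return version, payload[1:]

    token = issue(1, b'trust-scoped token data')
    print(validate(token))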
[Yahoo-eng-team] [Bug 1043886] Re: Firewall rules are not updated if you restart nova-compute
** Changed in: nova
   Status: Triaged => Fix Released

https://bugs.launchpad.net/bugs/1043886

Status in OpenStack Compute (Nova): Fix Released

Bug description:

IptablesFirewallDriver from nova/virt/firewall.py keeps a list of instances in self.instances. When nova-compute starts this is empty. It is not loaded at start in some way; instead it is filled using the prepare_instance_filter method. This method is called from the virt drivers in a few scenarios that are different on libvirt and xenapi (these are the ones I checked). On xenapi it only happens during spawn; on libvirt it also happens during hard reboot.

This means that if you have some running instances using some security group, and then for some reason restart the nova-compute service, updates to the security group (i.e. adding/removing some rule) will not be propagated to iptables correctly. On libvirt you can "fix" this by rebooting an instance hard. On xenapi you can't fix it.

I added an ugly hack to make xenapi work like I want it to (but I can see that it is not fit for inclusion). I would be happy to fix this in some less ugly way if someone gave me a helpful hint of what the core devs would consider to be a good way to solve it. To me perhaps the reasonable thing would be for IptablesFirewallDriver to treat self.instances as a cache and, if some instance is not there, check whether it is running and if so fetch the network_info and do prepare_instance_filter.

Anyway, here is my ugly hack patch, perhaps it helps someone or gives more insight into what I mean :-):

--- /home/atomia/jma_backup/nova/virt/xenapi/vmops.py 2012-06-12 15:04:56.0 +0200
+++ /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py 2012-08-30 16:37:58.226715150 +0200
@@ -32,6 +32,7 @@
 from nova.compute import api as compute
 from nova.compute import power_state
+from nova.compute import utils as compute_utils
 from nova import context as nova_context
 from nova import db
 from nova import exception
@@ -1749,6 +1750,16 @@
     def refresh_security_group_rules(self, security_group_id):
         """ recreates security group rules for every instance """
+        LOG.debug("JMA: refresh_security_group_rules for " + str(security_group_id) + ", the firewall driver is of type " + self.firewall_driver.__class__.__name__)
+
+        import nova.network
+        nw_api = nova.network.API()
+        context = nova_context.get_admin_context()
+        security_group = db.security_group_get(context, security_group_id)
+        for instance in security_group['instances']:
+            nw_info = compute_utils.legacy_network_info(nw_api.get_instance_nw_info(context, instance))
+            self.firewall_driver.prepare_instance_filter(instance, nw_info)
+
         self.firewall_driver.refresh_security_group_rules(security_group_id)

     def refresh_security_group_members(self, security_group_id):
[Yahoo-eng-team] [Bug 1427474] [NEW] IPv6 SLAAC subnet create should update ports on net
Public bug reported:

If ports are first created on a network, and then an IPv6 SLAAC or DHCPv6-stateless subnet is created on that network, the ports created prior to the subnet creation are not automatically updated (associated) with addresses from the SLAAC/DHCPv6-stateless subnet, as required.

Note that this problem was discussed in the Neutron multiple-ipv6-prefixes blueprint, but it is being addressed with a separate Neutron bug since it can potentially be backported to Juno.

** Affects: neutron
   Importance: Undecided
   Assignee: Dane LeBlanc (leblancd)
   Status: New

** Changed in: neutron
   Assignee: (unassigned) => Dane LeBlanc (leblancd)

https://bugs.launchpad.net/bugs/1427474
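A self-contained sketch of the expected behaviour follows: when a SLAAC or DHCPv6-stateless subnet appears on a network, every pre-existing port on that network should gain an address from the new subnet. The data structures and the address derivation are simplified illustrations, not Neutron's plugin code.

    # Hypothetical, simplified illustration of the behaviour the report asks for.
    # Real code would derive an EUI-64 address from each port's MAC; here we just
    # pick a deterministic host address for demonstration.
    import ipaddress

    ports = [
        {'id': 'port-1', 'network_id': 'net-1', 'fixed_ips': []},
        {'id': 'port-2', 'network_id': 'net-1', 'fixed_ips': []},
    ]

    def on_subnet_create(subnet):
        if subnet['ip_version'] != 6 or subnet['ipv6_address_mode'] not in (
                'slaac', 'dhcpv6-stateless'):
            return
        prefix = ipaddress.ip_network(subnet['cidr'])
        same_net = [p for p in ports if p['network_id'] == subnet['network_id']]
        for index, port in enumerate(same_net):
            addr = prefix[index + 2]   # placeholder for the EUI-64 derivation
            port['fixed_ips'].append({'subnet_id': subnet['id'],
                                      'ip_address': str(addr)})

    on_subnet_create({'id': 'subnet-v6', 'network_id': 'net-1', 'ip_version': 6,
                      'ipv6_address_mode': 'slaac', 'cidr': '2001:db8::/64'})
    print(ports)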
[Yahoo-eng-team] [Bug 1427467] [NEW] Oversight when copying configdrive during live migration on Hyperv
Public bug reported:

When fixing bug https://launchpad.net/bugs/1322096 there was an oversight. When copying the ISO we check whether the instance requires a config drive and whether that config drive is an ISO, by checking the value "config_drive_cdrom" from the conf. This value can change, and thus, even if the instance has an ISO config drive attached, it will be omitted in the way the fix was implemented. A better idea would be to check whether the instance has an attached ISO.

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427467
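The suggested fix is to look at what is actually attached to the instance rather than at the current config flag. A hedged sketch follows; the helper names and file layout are hypothetical, not the Hyper-V driver's API.

    # Illustrative only: prefer inspecting the instance's attached disks over
    # trusting CONF.hyperv.config_drive_cdrom, which may have changed since the
    # instance was created. Helper names and paths are assumptions.
    import os

    def get_attached_config_drive_iso(instance_dir):
        """Return the config drive ISO path if one is attached, else None."""
        candidate = os.path.join(instance_dir, 'configdrive.iso')
        return candidate if os.path.exists(candidate) else None

    def copy_configdrive_for_live_migration(instance_dir, copy_file_to_dest):
        iso_path = get_attached_config_drive_iso(instance_dir)
        if iso_path:                      # copy whatever is really attached,
            copy_file_to_dest(iso_path)   # regardless of the current conf value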
[Yahoo-eng-team] [Bug 1427465] [NEW] vArmour fwaas agent broken, unit tests skipped, CI not running
Public bug reported:

https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/services/firewall/agents/varmour/varmour_router.py#L64

The vArmour L3 agent's _router_added is calling neutron.agent.l3.router_info.RouterInfo.__init__ and it is not passing the mandatory parameters (interface_driver, agent_conf). These parameters were introduced in change IDs:

I33a23eb37678d94cea3ace8afe090935b1e70685
I0ec75d731d816955c1915e283a137abcb51ac232

The unit tests that would catch this error:

https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/tests/unit/services/firewall/drivers/varmour/test_varmour_fwaas.py#L153

are being skipped at the gate (which is essentially another bug: why do these unit tests require a REST call to succeed, and why do they fail to complete the REST call with the default configuration?):

https://github.com/openstack/neutron-fwaas/blob/master/neutron_fwaas/tests/unit/services/firewall/drivers/varmour/test_varmour_fwaas.py#L182

Finally, 3rd-party vArmour CI is not being run.

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427465
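The breakage boils down to a constructor signature change. A hedged sketch of the kind of call the vArmour agent would need follows; RouterInfo here is a stand-in for neutron.agent.l3.router_info.RouterInfo, and the agent class is hypothetical, not working vArmour code.

    # Illustration of the signature issue: RouterInfo now expects agent_conf and
    # interface_driver, so callers must pass them through from the agent.
    class RouterInfo(object):                     # stand-in for the real class
        def __init__(self, router_id, router, agent_conf, interface_driver):
            self.router_id = router_id
            self.router = router
            self.agent_conf = agent_conf
            self.driver = interface_driver

    class FakeVArmourAgent(object):               # hypothetical agent
        def __init__(self, conf, interface_driver):
            self.conf = conf
            self.driver = interface_driver

        def _router_added(self, router_id, router):
            # the broken code omitted agent_conf/interface_driver; the fixed
            # call passes them through:
            return RouterInfo(router_id, router,
                              agent_conf=self.conf,
                              interface_driver=self.driver)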
[Yahoo-eng-team] [Bug 1427459] [NEW] Pools are retrieved for monitors detail even there are no pool association
Public bug reported:

$ neutron lb-healthmonitor-show e1dbcea5-0028-4d78-a378-339b70e0d315
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 5                                    |
| id             | e1dbcea5-0028-4d78-a378-339b70e0d315 |
| max_retries    | 2                                    |
| pools          |                                      |
| tenant_id      | b82b3fa05d8041e183d93aec15527f23     |
| timeout        | 3                                    |
| type           | PING                                 |
+----------------+--------------------------------------+

The health monitor above has no pool associations, yet pools are still retrieved for the monitor's detail view; see the attachments.

** Affects: horizon
   Importance: Undecided
   Assignee: Liyingjun (liyingjun)
   Status: In Progress

** Attachment added: "Screen Shot 2015-03-03 at 9.38.36 AM.png"
   https://bugs.launchpad.net/bugs/1427459/+attachment/4332803/+files/Screen%20Shot%202015-03-03%20at%209.38.36%20AM.png

** Changed in: horizon
   Assignee: (unassigned) => Liyingjun (liyingjun)

https://bugs.launchpad.net/bugs/1427459
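A plausible Horizon-side guard is simply to skip the pool lookup when the monitor has no pool associations. A hedged sketch follows; the lookup callable stands in for the real API call and is an assumption for illustration.

    # Hypothetical illustration of the guard: only resolve pool details when the
    # health monitor actually has associations.
    def pools_for_monitor(request, monitor, pool_lookup):
        associations = monitor.get('pools') or []
        if not associations:
            return []   # no associations (as in the CLI output above): skip the lookup
        return [pool_lookup(request, assoc['pool_id']) for assoc in associations]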
[Yahoo-eng-team] [Bug 1427440] [NEW] V2-only keystone won't start - revoke not in loaded backends
Public bug reported:

2015-03-03 00:22:25.674809 mod_wsgi (pid=10468): Target WSGI script '/var/www/keystone/main' cannot be loaded as Python module.
2015-03-03 00:22:25.674835 mod_wsgi (pid=10468): Exception occurred processing WSGI script '/var/www/keystone/main'.
2015-03-03 00:22:25.674856 Traceback (most recent call last):
2015-03-03 00:22:25.674872   File "/var/www/keystone/main", line 25, in <module>
2015-03-03 00:22:25.674947     application = wsgi_server.initialize_application(name)
2015-03-03 00:22:25.674970   File "/opt/stack/keystone/keystone/server/wsgi.py", line 51, in initialize_application
2015-03-03 00:22:25.675021     startup_application_fn=loadapp)
2015-03-03 00:22:25.675041   File "/opt/stack/keystone/keystone/server/common.py", line 44, in setup_backends
2015-03-03 00:22:25.675086     drivers.update(dependency.resolve_future_dependencies())
2015-03-03 00:22:25.675098   File "/opt/stack/keystone/keystone/common/dependency.py", line 287, in resolve_future_dependencies
2015-03-03 00:22:25.675177     raise UnresolvableDependencyException(dependency, targets)
2015-03-03 00:22:25.675466 UnresolvableDependencyException: Unregistered dependency: revoke_api for [, , , ]

The revoke API should be added to load_backends in keystone/backends.py.

** Affects: keystone
   Importance: Undecided
   Assignee: Steve Martinelli (stevemar)
   Status: In Progress

https://bugs.launchpad.net/bugs/1427440
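A simplified sketch of the change described above follows; this is not Keystone's actual backends.py, and the stand-in manager class is hypothetical. The point is that the dict returned by load_backends() must contain a revoke_api entry so the dependency can be resolved even in v2-only deployments.

    # Simplified illustration: include the revoke manager among the loaded
    # backends so the revoke_api dependency is always resolvable.
    class _FakeManager(object):
        """Stand-in for the real backend managers (e.g. revoke's Manager)."""

    def load_backends():
        drivers = {
            'identity_api': _FakeManager(),
            'assignment_api': _FakeManager(),
            'catalog_api': _FakeManager(),
            'token_provider_api': _FakeManager(),
            'revoke_api': _FakeManager(),   # <-- the missing entry described above
        }
        return drivers

    assert 'revoke_api' in load_backends()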
[Yahoo-eng-team] [Bug 1427437] [NEW] LDAP debug logging during unit tests brings us close to causing jenkins to fail our tests
Public bug reported:

The Jenkins runs of our unit tests have a cap of 50 MB of log output; if we generate more than that, Jenkins will fail our tests on the assumption that something is wrong. A full run of our tests already brings us perilously close to this limit, primarily due to LDAP debug logging. We should switch off LDAP debug logging for our unit tests.

** Affects: keystone
   Importance: Critical
   Assignee: Henry Nash (henry-nash)
   Status: New

https://bugs.launchpad.net/bugs/1427437
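One generic way to keep a chatty logger out of unit-test output is to raise its level in the test base class. A hedged sketch using only the standard library follows; the logger name 'keystone.common.ldap' is an assumption about where the debug output originates, not a verified detail of the actual fix.

    # Generic illustration: cap the LDAP logger at WARNING for the duration of
    # each test, restoring the original level afterwards.
    import logging
    import unittest

    class BaseTestCase(unittest.TestCase):
        def setUp(self):
            super(BaseTestCase, self).setUp()
            ldap_logger = logging.getLogger('keystone.common.ldap')  # assumed name
            old_level = ldap_logger.level
            ldap_logger.setLevel(logging.WARNING)   # silence DEBUG noise in test logs
            self.addCleanup(ldap_logger.setLevel, old_level)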
[Yahoo-eng-team] [Bug 1427432] [NEW] lbaas related(?) check-grenade-dsvm-neutron failure
Public bug reported:

https://review.openstack.org/#/c/160523/ (purely doc-only change)

http://logs.openstack.org/23/160523/2/check/check-grenade-dsvm-neutron/6f82325/logs/new/screen-q-svc.txt.gz#_2015-03-02_23_28_04_319

2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     cls._instance = cls()
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/opt/stack/new/neutron/neutron/manager.py", line 128, in __init__
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     self._load_service_plugins()
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/opt/stack/new/neutron/neutron/manager.py", line 175, in _load_service_plugins
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     provider)
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/opt/stack/new/neutron/neutron/manager.py", line 133, in _get_plugin_instance
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     mgr = driver.DriverManager(namespace, plugin_provider)
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/usr/local/lib/python2.7/dist-packages/stevedore/driver.py", line 45, in __init__
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     verify_requirements=verify_requirements,
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/usr/local/lib/python2.7/dist-packages/stevedore/named.py", line 55, in __init__
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     verify_requirements)
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 170, in _load_plugins
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     self._on_load_failure_callback(self, ep, err)
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config   File "/usr/local/lib/python2.7/dist-packages/stevedore/driver.py", line 50, in _default_on_load_failure
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config     raise err
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config ImportError: No module named neutron_lbaas.services.loadbalancer.plugin
2015-03-02 23:28:04.319 15268 TRACE neutron.common.config

** Affects: neutron
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427432
[Yahoo-eng-team] [Bug 1427396] [NEW] lbaasv2 pool list not returning all data
Public bug reported:

Each pool returned by the API on a request to list pools should include all relevant data, such as session persistence and listeners.

** Affects: neutron
   Importance: Undecided
   Assignee: Phillip Toohill (phillip-toohill)
   Status: New

** Tags: lbaas

** Changed in: neutron
   Assignee: (unassigned) => Phillip Toohill (phillip-toohill)

https://bugs.launchpad.net/bugs/1427396
[Yahoo-eng-team] [Bug 1427391] [NEW] Serial console "cannot find instance"
Public bug reported:

In some configurations, the serial console says that the instance cannot be found for the given ID, even though the instance is available. It seems to work fine in a development environment, but not on a System z server.

** Affects: horizon
   Importance: Undecided
   Assignee: Randy Bertram (rbertram)
   Status: In Progress

** Changed in: horizon
   Assignee: (unassigned) => Randy Bertram (rbertram)

https://bugs.launchpad.net/bugs/1427391
[Yahoo-eng-team] [Bug 1420942] Re: noVNC insecure cookie allows session hijacking
This has been published as OSSN-0044:
https://wiki.openstack.org/wiki/OSSN/OSSN-0044

** Changed in: ossn
   Status: New => Fix Released

** Changed in: ossn
   Assignee: (unassigned) => Paul McMillan (paul-mcmillan)

https://bugs.launchpad.net/bugs/1420942

Status in OpenStack Compute (Nova): Invalid
Status in OpenStack Security Advisories: Won't Fix
Status in OpenStack Security Notes: Fix Released

Bug description:

This is a follow-on to https://bugs.launchpad.net/nova/+bug/1197459, where it was decided that the issues raised there were best-practice hardening, but not practically exploitable.

The noVNC websocket token cookie is not set as secure-only. This is practically exploitable by an attacker who can read user traffic. The setup is as follows: Nova and Horizon are configured to serve from HTTPS, Nova is patched to resolve #1409142, and the user is accessing the cloud through a man in the middle who controls all traffic to and from the user. [1]

user -> attacker -> cloud (https)

Here's what happens:

1) User logs into the cloud securely via https://yourcloud.com/
2) User securely accesses a server via websocket VNC and logs in. User (optionally) closes this window.
3) User opens a new browser tab to an insecure site (it can be any insecure site at all).
4) On receiving the request for the insecure site, the attacker fetches it from the source and rewrites it to include an invisible attack iframe before serving it to the user. [2]
5) The attack iframe directs the user's browser to open http://yourcloud.com:6080 inside the hidden iframe (even if you don't serve that site insecurely).
6) When the user's browser requests http://yourcloud.com:6080, the attacker logs the request, including the cookies, and responds with a blank page.
7) The attacker now has access to the auth token used to open the VNC socket (since the most recent one is sent in a cookie), and can stay connected to that socket indefinitely in any browser.

A clever attacker will cycle the iframe contents repeatedly and steal every VNC socket a user opens as the token cookies change, rather than just the most recent one. As long as the attacker stays connected to the socket, the connection stays open indefinitely.

Note that the above attack does not involve the user clicking through any TLS warnings, and does not involve them actively clicking phishing links or anything similar.

Fixing this is going to involve letting noVNC know when it is supposed to be behind TLS, and modifying cookie-setting behavior accordingly. Django's documentation on this is a good starting place for a fairly standard approach to telling an application it is receiving HTTPS traffic:
https://docs.djangoproject.com/en/1.7/ref/settings/#secure-proxy-ssl-header

-Paul

[1] As a practical aside, it is easy to become this mitm on most local network segments, so users who connect to any network with any untrusted users are vulnerable. An attacker who can only read user traffic (without the ability to block or modify it) can usually become a full mitm by spoofing DNS responses.

[2] The attacker can actually do this step at any point in the process, even before step 1.
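For the Django-style half of the fix referenced in the description, the relevant settings look roughly like the following. This is a generic sketch of standard Django hardening, not the actual noVNC/Horizon patch.

    # settings.py -- generic Django hardening referenced in the description.
    # Tell Django it sits behind a TLS-terminating proxy:
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
    # Only send session/CSRF cookies over HTTPS:
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True

The analogous change for noVNC is to mark its websocket token cookie as Secure whenever the service knows it is being served over TLS.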
[Yahoo-eng-team] [Bug 1420942] Re: noVNC insecure cookie allows session hijacking
This should be marked as public now. As Tritan mentioned in comment #8, it has already been disclosed (not to mention that we already wrote and published an OSSN).

** Information type changed from Private Security to Public Security

** Also affects: ossn
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1420942

Status in OpenStack Compute (Nova): Invalid
Status in OpenStack Security Advisories: Won't Fix
Status in OpenStack Security Notes: Fix Released
[Yahoo-eng-team] [Bug 1427379] [NEW] AttributeError: 'Assignment' object has no attribute 'get_domain_by_name'
Public bug reported:

2015-03-02 13:16:45.493 19248 CRITICAL keystone [-] AttributeError: 'Assignment' object has no attribute 'get_domain_by_name'
2015-03-02 13:16:45.493 19248 TRACE keystone Traceback (most recent call last):
2015-03-02 13:16:45.493 19248 TRACE keystone   File "/usr/bin/keystone-manage", line 44, in <module>
2015-03-02 13:16:45.493 19248 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2015-03-02 13:16:45.493 19248 TRACE keystone   File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 311, in main
2015-03-02 13:16:45.493 19248 TRACE keystone     CONF.command.cmd_class.main()
2015-03-02 13:16:45.493 19248 TRACE keystone   File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 250, in main
2015-03-02 13:16:45.493 19248 TRACE keystone     mapping['domain_id'] = get_domain_id(CONF.command.domain_name)
2015-03-02 13:16:45.493 19248 TRACE keystone   File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 236, in get_domain_id
2015-03-02 13:16:45.493 19248 TRACE keystone     return assignment_manager.driver.get_domain_by_name(name)['id']
2015-03-02 13:16:45.493 19248 TRACE keystone AttributeError: 'Assignment' object has no attribute 'get_domain_by_name'

** Affects: keystone
   Importance: Undecided
   Assignee: Matthew Edmonds (edmondsw)
   Status: New

** Changed in: keystone
   Assignee: (unassigned) => Matthew Edmonds (edmondsw)

https://bugs.launchpad.net/bugs/1427379
[Yahoo-eng-team] [Bug 1427365] [NEW] openvswitch-agent init script does not source /etc/sysconfig/neutron
Public bug reported:

The init script '/etc/init.d/openstack-neutron-openvswitch-agent' does not source /etc/sysconfig/neutron, causing the ML2 plugin configuration not to be read, as the default value for NEUTRON_PLUGIN_CONF in the init script is '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'. I resolved this problem by adding the source just after NEUTRON_PLUGIN_ARGS="" (see attached).

I'm running on openSUSE 13.1 using the SUSE Open Build Service RPM repository (https://build.opensuse.org/project/show/Cloud:OpenStack:Juno).

# rpm -qi openstack-neutron-openvswitch-agent
Name        : openstack-neutron-openvswitch-agent
Version     : 2014.2.3.dev28
Release     : 1.1
Architecture: noarch
Install Date: Mon Mar 2 11:55:57 2015
Group       : Development/Languages/Python
Size        : 14893
License     : Apache-2.0
Signature   : RSA/SHA1, Fri Feb 27 20:08:54 2015, Key ID 893a90dad85f9316
Source RPM  : openstack-neutron-2014.2.3.dev28-1.1.src.rpm
Build Date  : Fri Feb 27 20:07:52 2015
Build Host  : build24
Relocations : (not relocatable)
Vendor      : obs://build.opensuse.org/Cloud:OpenStack
URL         : https://launchpad.net/neutron
Summary     : OpenStack Network - Open vSwitch
Description : This package provides the OpenVSwitch Agent.
Distribution: Cloud:OpenStack:Juno / openSUSE_13.1

** Affects: neutron
   Importance: Undecided
   Status: New

** Patch added: "etc_init.d_openstack-neutron-openvswitch-agent.patch"
   https://bugs.launchpad.net/bugs/1427365/+attachment/4332544/+files/etc_init.d_openstack-neutron-openvswitch-agent.patch

https://bugs.launchpad.net/bugs/1427365
[Yahoo-eng-team] [Bug 1424576] Re: RuntimeError: Unable to find group for option fatal_deprecations, maybe it's defined twice in the same group?
The config generator from the incubator is deprecated in favor of the new approach in oslo.config.

** Changed in: oslo-incubator
   Status: New => Won't Fix

https://bugs.launchpad.net/bugs/1424576

Status in OpenStack Compute (Nova): In Progress
Status in The Oslo library incubator: Won't Fix

Bug description:

I tried to generate a nova.conf configuration file with the current state of the Nova repository (master) and got the following exception message:

% tox -e genconfig
genconfig create: /home/berendt/Repositories/nova/.tox/genconfig
genconfig installdeps: -r/home/berendt/Repositories/nova/requirements.txt, -r/home/berendt/Repositories/nova/test-requirements.txt
genconfig develop-inst: /home/berendt/Repositories/nova
genconfig runtests: PYTHONHASHSEED='0'
genconfig runtests: commands[0] | bash tools/config/generate_sample.sh -b . -p nova -o etc/nova
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 303, in <module>
    main()
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 300, in main
    generate(sys.argv[1:])
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 128, in generate
    for group, opts in _list_opts(mod_obj):
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 192, in _list_opts
    ret.setdefault(_guess_groups(opt, obj), []).append(opt)
  File "/home/berendt/Repositories/nova/nova/openstack/common/config/generator.py", line 172, in _guess_groups
    % opt.name
RuntimeError: Unable to find group for option fatal_deprecations, maybe it's defined twice in the same group?
[Yahoo-eng-team] [Bug 1427351] [NEW] cells: hypervisor API extension can't find compute_node services
Public bug reported:

After the conversion to use Service objects in the hypervisor API extension, the lookups for services are happening in the parent cell, not the child cells. This is due to cells redirects not being implemented in the Service object.

** Affects: nova
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1427351
[Yahoo-eng-team] [Bug 1427343] [NEW] missing entry point for cisco apic topology agent
Public bug reported:

The Cisco APIC topology agent [0] is missing its entry point.

[0] neutron.plugins.ml2.drivers.cisco.apic.apic_topology.ApicTopologyService

** Affects: neutron
   Importance: Undecided
   Assignee: Ivar Lazzaro (mmaleckk)
   Status: In Progress

** Changed in: neutron
   Assignee: (unassigned) => Ivar Lazzaro (mmaleckk)

https://bugs.launchpad.net/bugs/1427343
[Yahoo-eng-team] [Bug 1427328] [NEW] [sahara] The mechanism used to avoid duplicate script names in job binaries is fragile
Public bug reported:

Create a job binary in the internal DB with a specific script name (for example, "script_name"). If the user creates another job binary with the same script name, a unique UUID is appended so that the name is unique. But if the script name is long, the addition of the UUID can make it longer than the field limit (80 characters) and the job creation fails.

Moreover, this mechanism is counterintuitive, as it is not implemented for the job binary "name" field (an error is returned there instead).

Proposal: remove the mechanism which tries to create a unique script name, and simply return a validation error if a script name is duplicated.

** Affects: horizon
   Importance: Undecided
   Status: New

** Tags: sahara

https://bugs.launchpad.net/bugs/1427328
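The proposal amounts to replacing the rename-with-UUID behaviour with a plain validation error. A hedged Django-form-style sketch follows; the form and field names, and the source of existing names, are assumptions for illustration.

    # Hypothetical sketch of the proposed behaviour: reject duplicate script
    # names outright instead of appending a UUID (which can overflow the
    # 80-character limit).
    from django import forms

    class JobBinaryForm(forms.Form):
        script_name = forms.CharField(max_length=80)

        def __init__(self, *args, **kwargs):
            self.existing_names = kwargs.pop('existing_names', set())
            super(JobBinaryForm, self).__init__(*args, **kwargs)

        def clean_script_name(self):
            name = self.cleaned_data['script_name']
            if name in self.existing_names:
                raise forms.ValidationError(
                    'A script with this name already exists.')
            return name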
[Yahoo-eng-team] [Bug 1426544] Re: Nova switching to oslo_log blows up instance object repr in logs
** Changed in: oslo.log
   Status: Fix Committed => Fix Released

** Changed in: oslo.log
   Milestone: None => 0.4.0

** Changed in: oslo.log
   Importance: Undecided => Critical

https://bugs.launchpad.net/bugs/1426544

Status in OpenStack Compute (Nova): Confirmed
Status in Logging configuration library for OpenStack: Fix Released

Bug description:

Logging with an instance kwarg used to just log the instance uuid, but now it looks like after the change to use oslo_log we're logging the entire representation of the instance object, which blows up the logs and makes things hard to read:

http://logs.openstack.org/40/122240/19/gate/gate-tempest-dsvm-neutron-full/4ef0a02/logs/screen-n-cpu.txt.gz

For example:

2015-02-27 18:03:45.510 8433 WARNING nova.compute.manager [-] Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2015-02-27T18:03:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='ImagesTestJSON-instance-902123019',display_name='ImagesTestJSON-instance-902123019',ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(6),host='devstack-trusty-rax-dfw-1041350.slave.openstack.org',hostname='imagestestjson-instance-902123019',id=9,image_ref='801203df-0ef4-45c5-bdd2-f981358dff40',info_cache=InstanceInfoCache,instance_type_id=6,kernel_id='525a7006-cf4b-45be-b2e8-a6fd46e52d47',key_data=None,key_name=None,launch_index=0,launched_at=2015-02-27T18:03:42Z,launched_on='devstack-trusty-rax-dfw-1041350.slave.openstack.org',locked=False,locked_by=None,memory_mb=64,metadata={},new_flavor=None,node='devstack-trusty-rax-dfw-1041350.slave.openstack.org',numa_topology=,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e619583f4404465ab6d1d85c065c05c3',ramdisk_id='6949db12-e5cd-416a-b1c5-7318595d3382',reservation_id='r-l5zukbbw',root_device_name='/dev/vda',root_gb=0,scheduled_at=None,security_groups=SecurityGroupList,shutdown_terminate=False,system_metadata={image_base_image_ref='801203df-0ef4-45c5-bdd2-f981358dff40',image_container_format='ami',image_disk_format='ami',image_kernel_id='525a7006-cf4b-45be-b2e8-a6fd46e52d47',image_min_disk='0',image_min_ram='0',image_ramdisk_id='6949db12-e5cd-416a-b1c5-7318595d3382'},tags=,task_state=None,terminated_at=None,updated_at=2015-02-27T18:03:45Z,user_data=None,user_id='78901d21abe4437384eaa007da8558e0',uuid=8997322e-a1fe-4f80-ba89-2a34c54a8985,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active')Image not found during snapshot
[Yahoo-eng-team] [Bug 1427317] [NEW] Defunct plug-in configuration files
Public bug reported: After ML2, the Open vSwitch and Linux Bridge plug-ins became mechanisms/agents. However, the configuration files for these agents, particularly OVS with ovs_neutron_plugin.ini, generate confusion. Furthermore, distributions that package OpenStack take different routes for configuring these agents. With OVS in particular, some distributions add the agent configuration to ml2_conf.ini and others continue to use ovs_neutron_plugin.ini. These issues particularly impact documentation [1][2] such as the installation guide that provides step-by-step instructions on multiple distributions and the (upcoming) networking guide that attempts to provide several deployable distro-agnostic scenarios with step-by-step instructions. I suggest either using ml2_conf.ini for OVS and LB agent configuration or renaming the configuration files to something more meaningful and consistent. For example, ml2_conf_ovs.ini, ml2_conf_linuxbridge.ini, ovs_agent.ini or linuxbridge_agent.ini. [1] https://bugs.launchpad.net/openstack-manuals/+bug/1422038 [2] https://bugs.launchpad.net/openstack-manuals/+bug/1375746 ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1427317 Title: Defunct plug-in configuration files Status in OpenStack Neutron (virtual network service): New Bug description: After ML2, the Open vSwitch and Linux Bridge plug-ins became mechanisms/agents. However, the configuration files for these agents, particularly OVS with ovs_neutron_plugin.ini, generate confusion. Furthermore, distributions that package OpenStack take different routes for configuring these agents. With OVS in particular, some distributions add the agent configuration to ml2_conf.ini and others continue to use ovs_neutron_plugin.ini. These issues particularly impact documentation [1][2] such as the installation guide that provides step-by-step instructions on multiple distributions and the (upcoming) networking guide that attempts to provide several deployable distro-agnostic scenarios with step-by-step instructions. I suggest either using ml2_conf.ini for OVS and LB agent configuration or renaming the configuration files to something more meaningful and consistent. For example, ml2_conf_ovs.ini, ml2_conf_linuxbridge.ini, ovs_agent.ini or linuxbridge_agent.ini. [1] https://bugs.launchpad.net/openstack-manuals/+bug/1422038 [2] https://bugs.launchpad.net/openstack-manuals/+bug/1375746 To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1427317/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427304] [NEW] [sahara] When the job binary creating fails, the job binary data is created anyway
Public bug reported: If the user tries to create a Job Binary using the internal db as storage, and the creation fails with a validation error, the job binary data is created anyway. The job data creation code should be executed at the same time as, and in the same transaction as, the job binary creation (and rolled back if the latter fails). Found on current Horizon git master (Sahara from Juno, but the problem seems more Horizon-related). ** Affects: horizon Importance: Undecided Status: New ** Tags: sahara -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427304 Title: [sahara] When the job binary creating fails, the job binary data is created anyway Status in OpenStack Dashboard (Horizon): New Bug description: If the user tries to create a Job Binary using the internal db as storage, and the creation fails with a validation error, the job binary data is created anyway. The job data creation code should be executed at the same time as, and in the same transaction as, the job binary creation (and rolled back if the latter fails). Found on current Horizon git master (Sahara from Juno, but the problem seems more Horizon-related). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427304/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
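A rough sketch of the ordering proposed in bug 1427304 above (the client calls are hypothetical stand-ins, not the exact saharaclient API): create the internal-db data and the job binary together, and delete the data again if the binary creation fails.

    def create_job_binary_with_data(client, name, data):
        internal = client.job_binary_internals.create(name, data)   # hypothetical call
        try:
            url = "internal-db://%s" % internal.id
            return client.job_binaries.create(name=name, url=url)   # hypothetical call
        except Exception:
            # roll back the uploaded data so no orphan is left behind
            client.job_binary_internals.delete(internal.id)
            raise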
[Yahoo-eng-team] [Bug 1427295] [NEW] nova-network with multi-host and update_dns_entries crashes during instance termination
Public bug reported: I have Openstack Nova set up using nova-network in multi-host mode. I wanted all instances to be able to resolve each-other via dns, so I enabled update_dns_entries=True in nova.conf Upon terminating an instance, I get the following traceback in nova- compute.log on the compute node hosting the instance: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply incoming.message)) File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch return self._do_dispatch(endpoint, method, ctxt, args) File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch result = getattr(endpoint, method)(ctxt, **new_args) File "/usr/lib/python2.7/dist-packages/nova/network/floating_ips.py", line 187, in deallocate_for_instance super(FloatingIP, self).deallocate_for_instance(context, **kwargs) File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 568, in deallocate_for_instance network_ids = [fixed_ip.network_id for fixed_ip in fixed_ips] AttributeError: 'str' object has no attribute 'network_id' Some spelunking reveals that this was introduced in the following commit: https://github.com/openstack/nova/commit/03d34c975586788dc25249b5e0b962fc0634008c which changed the "fixed_ips" array to contain a list of string ip address, rather than fixed_ip objects, but neglected to update the code under the CONF.update_dns_entries branch below to match. ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1427295 Title: nova-network with multi-host and update_dns_entries crashes during instance termination Status in OpenStack Compute (Nova): New Bug description: I have Openstack Nova set up using nova-network in multi-host mode. I wanted all instances to be able to resolve each-other via dns, so I enabled update_dns_entries=True in nova.conf Upon terminating an instance, I get the following traceback in nova- compute.log on the compute node hosting the instance: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply incoming.message)) File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch return self._do_dispatch(endpoint, method, ctxt, args) File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch result = getattr(endpoint, method)(ctxt, **new_args) File "/usr/lib/python2.7/dist-packages/nova/network/floating_ips.py", line 187, in deallocate_for_instance super(FloatingIP, self).deallocate_for_instance(context, **kwargs) File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 568, in deallocate_for_instance network_ids = [fixed_ip.network_id for fixed_ip in fixed_ips] AttributeError: 'str' object has no attribute 'network_id' Some spelunking reveals that this was introduced in the following commit: https://github.com/openstack/nova/commit/03d34c975586788dc25249b5e0b962fc0634008c which changed the "fixed_ips" array to contain a list of string ip address, rather than fixed_ip objects, but neglected to update the code under the CONF.update_dns_entries branch below to match. 
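A simplified sketch of the mismatch described in bug 1427295 above (not the actual nova code): after the referenced commit, deallocate_for_instance receives plain IP address strings, so the update_dns_entries branch has to look the fixed-IP records up again instead of reading .network_id off a string.

    def collect_network_ids(fixed_ips, lookup_fixed_ip_by_address):
        # lookup_fixed_ip_by_address is a hypothetical helper returning an object
        # with a network_id attribute for a given address string
        network_ids = set()
        for item in fixed_ips:
            if isinstance(item, str):
                item = lookup_fixed_ip_by_address(item)
            network_ids.add(item.network_id)
        return network_ids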
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1427295/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427291] [NEW] ML2 hierarchical port binding needs additional tests
Public bug reported: Although the current unit tests cover the hierarchical port binding code reasonably well, additional tests are needed that verify the following: * Binding loops are properly avoided * Binding limit is detected * Dead-end binding attempts are handled properly ** Affects: neutron Importance: Medium Assignee: Robert Kukura (rkukura) Status: New ** Tags: ml2 -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1427291 Title: ML2 hierarchical port binding needs additional tests Status in OpenStack Neutron (virtual network service): New Bug description: Although the current unit tests cover the hierarchical port binding code reasonably well, additional tests are needed that verify the following: * Binding loops are properly avoided * Binding limit is detected * Dead-end binding attempts are handled properly To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1427291/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
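A generic illustration of the three conditions those tests need to exercise (this is a toy hierarchical binder, not ML2's real driver interface): stop on a revisited segment (loop), stop at a maximum depth (limit), and report when no further level can be bound (dead end).

    MAX_BINDING_LEVELS = 10  # hypothetical limit

    def bind(segment, next_segments, visited=None, depth=0):
        visited = visited if visited is not None else set()
        if segment in visited:
            raise RuntimeError("binding loop detected at %r" % segment)
        if depth >= MAX_BINDING_LEVELS:
            raise RuntimeError("binding level limit exceeded")
        visited.add(segment)
        children = next_segments.get(segment, [])
        if not children:
            return [segment]  # leaf reached: either fully bound or a dead end to report
        return [segment] + bind(children[0], next_segments, visited, depth + 1)

    print(bind("physnet", {"physnet": ["vlan-101"], "vlan-101": []}))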
[Yahoo-eng-team] [Bug 1427289] [NEW] [sahara] Back trace when a job binary is created using an existing script
Public bug reported: >From "Create Job Binary", add a valid name, "storage type" internal, and choose an existing job binary (neither "upload...", nor "create..." in the "Internal Binary" box. The following backtrace can be seen in Horizon logs: [02/Mar/2015 16:24:14] "POST /project/data_processing/job_binaries/create-job-binary HTTP/1.1" 200 6462 Internal Server Error: /project/data_processing/job_binaries/create-job-binary Traceback (most recent call last): File "/home/toscano/OpenStack/code/os/openstack/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 112, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 52, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 84, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view return self.dispatch(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/base.py", line 87, in dispatch return handler(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/.venv/lib/python2.7/site-packages/django/views/generic/edit.py", line 171, in post return self.form_valid(form) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/forms/views.py", line 173, in form_valid exceptions.handle(self.request) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/exceptions.py", line 364, in handle six.reraise(exc_type, exc_value, exc_traceback) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/forms/views.py", line 170, in form_valid handled = form.handle(self.request, form.cleaned_data) File "/home/toscano/OpenStack/code/os/openstack/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 183, in handle _("Unable to create job binary")) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/exceptions.py", line 364, in handle six.reraise(exc_type, exc_value, exc_traceback) File "/home/toscano/OpenStack/code/os/openstack/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 169, in handle bin_url = self.handle_internal(request, context) File "/home/toscano/OpenStack/code/os/openstack/horizon/openstack_dashboard/dashboards/project/data_processing/job_binaries/forms.py", line 229, in handle_internal bin_id = result.id AttributeError: 'str' object has no attribute 'id' [02/Mar/2015 16:25:44] "POST /project/data_processing/job_binaries/create-job-binary HTTP/1.1" 500 39108 Reproduced on current Horizon git master. ** Affects: horizon Importance: Undecided Status: Confirmed ** Tags: sahara -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). 
https://bugs.launchpad.net/bugs/1427289 Title: [sahara] Back trace when a job binary is created using an existing script Status in OpenStack Dashboard (Horizon): Confirmed Bug description: From "Create Job Binary", add a valid name, "storage type" internal, and choose an existing job binary (neither "upload...", nor "create..." in the "Internal Binary" box. The following backtrace can be seen in Horizon logs: [02/Mar/2015 16:24:14] "POST /project/data_processing/job_binaries/create-job-binary HTTP/1.1" 200 6462 Internal Server Error: /project/data_processing/job_binaries/create-job-binary Traceback (most recent call last): File "/home/toscano/OpenStack/code/os/openstack/horizon/.venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 112, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 52, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 36, in dec return view_func(request, *args, **kwargs) File "/home/toscano/OpenStack/code/os/openstack/horizon/horizon/decorators.py", line 84, in dec return view_func(request, *args, **kwargs)
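A hedged sketch of a defensive fix for the AttributeError above (not necessarily the patch Horizon landed): the "use an existing binary" path hands back the binary id as a plain string, while the upload/create paths return an object, so handle_internal should accept both shapes.

    def extract_binary_id(result):
        # works for a plain id string as well as an object exposing .id
        return getattr(result, "id", result)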
[Yahoo-eng-team] [Bug 1427277] [NEW] [sahara] Detailed error on job binary creation is not shown
Public bug reported: When a job binary with an invalid name is created (for example: too long, like currently test_script_name_a5a330ee-bce0-11e4-beaf-3c970e1836cf), Sahara returns a well-defined exception, from the logs: DEBUG sahara.utils.api [-] Validation Error occurred: error_code=400, error_message=u'test_script_name_a5a330ee-bce0-11e4-beaf-3c970e1836cf' is too long, error_name=VALIDATION_ERROR bad_request /usr/lib/python2.7/site-packages/sahara/utils/api.py:245 but Horizon just says: Error: Unable to create job binary Tested on current Horizon master, Sahara from Juno (even though this is a Horizon issue, I think). ** Affects: horizon Importance: Undecided Status: New ** Tags: sahara -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427277 Title: [sahara] Detailed error on job binary creation is not shown Status in OpenStack Dashboard (Horizon): New Bug description: When a job binary with an invalid name is created (for example: too long, like currently test_script_name_a5a330ee-bce0-11e4-beaf-3c970e1836cf), Sahara returns a well-defined exception, from the logs: DEBUG sahara.utils.api [-] Validation Error occurred: error_code=400, error_message=u'test_script_name_a5a330ee-bce0-11e4-beaf-3c970e1836cf' is too long, error_name=VALIDATION_ERROR bad_request /usr/lib/python2.7/site-packages/sahara/utils/api.py:245 but Horizon just says: Error: Unable to create job binary Tested on current Horizon master, Sahara from Juno (even though this is a Horizon issue, I think). To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427277/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
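A minimal sketch of the improvement suggested by bug 1427277 above (the error_message attribute name is an assumption about the client exception, not a confirmed API): surface the backend's validation detail instead of only the generic message.

    def friendly_create_error(exc):
        # fall back to str(exc) if the exception carries no structured detail
        detail = getattr(exc, "error_message", None) or str(exc)
        return "Unable to create job binary: %s" % detail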
[Yahoo-eng-team] [Bug 1423695] Re: gate-devstack-dsvm-cells fails attaching volume
Released in 2.22.0. ** Changed in: python-novaclient Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1423695 Title: gate-devstack-dsvm-cells fails attaching volume Status in OpenStack Compute (Nova): Invalid Status in Python client library for Nova: Fix Released Bug description: http://logs.openstack.org/07/154607/3/gate/gate-devstack-dsvm- cells/302ef5f/console.html#_2015-02-19_19_20_49_062 2015-02-19 19:20:47.642 | + nova volume-attach d16b33f6-7cf7-4e77-b149-84ba3eedf0e6 e2496f30-dac1-484b-b25c-c925a0766fa6 /dev/vdb 2015-02-19 19:20:49.034 | ERROR (NotFound): Not found (HTTP 404) (Request-ID: req-8ae38d4c-c754-4c45-92bd-0719bc2ecdd5) 2015-02-19 19:20:49.062 | + die 174 'Failure attaching volume ex-vol-f39662b7 to ex-vol-inst' 2015-02-19 19:20:49.062 | + local exitcode=1 2015-02-19 19:20:49.062 | + set +o xtrace 2015-02-19 19:20:49.062 | [Call Trace] 2015-02-19 19:20:49.063 | /opt/stack/new/devstack/exercises/volumes.sh:174:die 2015-02-19 19:20:49.065 | [ERROR] /opt/stack/new/devstack/exercises/volumes.sh:174 Failure attaching volume ex-vol-f39662b7 to ex-vol-inst Just started today: http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZXhlcmNpc2VzL3ZvbHVtZXMuc2hcIiBBTkQgbWVzc2FnZTpcIkZhaWx1cmUgYXR0YWNoaW5nIHZvbHVtZVwiIEFORCBtZXNzYWdlOlwidG8gZXgtdm9sLWluc3RcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNDM4MDc1MDI3Nn0= 35 hits in 24 hours, check and gate, all failures. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1423695/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427261] [NEW] Improve create instance without volume service
Public bug reported: Currently an error notification is raised when creating an instance on an OpenStack deployment without volume service. This change avoids the error notification as volume service is not required to boot an instance. ** Affects: horizon Importance: Undecided Assignee: Cedric Brandily (cbrandily) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427261 Title: Improve create instance without volume service Status in OpenStack Dashboard (Horizon): In Progress Bug description: Currently an error notification is raised when creating an instance on an OpenStack deployment without volume service. This change avoids the error notification as volume service is not required to boot an instance. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427261/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
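A hedged sketch of the behaviour bug 1427261 above asks for (the helper names are hypothetical, not Horizon's exact API): treat a missing or failing volume endpoint as "no volumes to offer" instead of raising an error notification while launching an instance.

    def bootable_volume_sources(request, cinder_api, is_service_enabled):
        if not is_service_enabled(request, "volume"):
            return []  # no volume service deployed: simply offer no volume sources
        try:
            return cinder_api.volume_list(request)
        except Exception:
            return []  # degrade gracefully rather than surfacing an error to the user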
[Yahoo-eng-team] [Bug 1419577] Re: when live-migrate failed, lun-id couldn't be rollback in havana
Agree that it's a vulnerability in Havana (since live-migration fails so often there). I wouldn't consider it a vulnerability in Icehouse/Juno, since you can't trigger live migration failure without administrative or physical access to the machines. It is a bug with security consequences there, and it should be fixed as soon as possible. ** Changed in: nova-project Status: New => Confirmed ** Project changed: nova-project => nova ** Changed in: nova Importance: Undecided => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1419577 Title: when live-migrate failed, lun-id couldn't be rollback in havana Status in OpenStack Compute (Nova): Confirmed Status in OpenStack Security Advisories: Incomplete Bug description: Hi, guys When live-migrate failed with error, lun-id of connection_info column in Nova's block_deivce_mapping table couldn't be rollback. and failed VM can have others volume. my test environment is following : Openstack Version : Havana ( 2013.2.3) Compute Node OS : 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Compute Node multipath : multipath-tools 0.4.9-3ubuntu7.2 test step is : 1) create 2 Compute node (host#1 and host#2) 2) create 1 VM on host#1 (vm01) 3) create 1 cinder volume (vol01) 4) attach 1 volume to vm01 (/dev/vdb) 5) live-migrate vm01 from host#1 to host#2 6) live-migrate success - please check the mapper by using multipath command in host#1 (# multipath -ll), then you can find mapper is not deleted. and the status of devices is "failed faulty" - please check the lun-id of vol01 7) Again, live-migrate vm01 from host#2 to host#1 (vm01 was migrated to host#2 at step 4) 8) live-migrate fail - please check the mapper in host#1 - please check the lun-id of vol01, then you can find the lun hav "two" igroups - please check the connection_info column in Nova's block_deivce_mapping table, then you can find lun-id couldn't be rollback This Bug is critical security issue because the failed VM can have others volume. and every backend storage of cinder-volume can have same problem because this is the bug of live-migration's rollback process. I suggest below methods to solve issue : 1) when live-migrate is complete, nova should delete mapper devices at origin host 2) when live-migrate is failed, nova should rollback lun-id in connection_info column 3) when live-migrate is failed, cinder should delete the mapping between lun and host (Netapp : igroup, EMC : storage_group ...) 4) when volume-attach is requested , cinder volume driver of vendors should make lun-id randomly for reduce of probability of mis-mapping please check this bug. Thank you. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1419577/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1384112] Re: endpoint, service, region can not be updated when using kvs driver
** Also affects: keystone/juno Importance: Undecided Status: New ** Changed in: keystone/juno Assignee: (unassigned) => wanghong (w-wanghong) ** Changed in: keystone/juno Status: New => In Progress ** Changed in: keystone/juno Importance: Undecided => Low -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1384112 Title: endpoint, service, region can not be updated when using kvs driver Status in OpenStack Identity (Keystone): Fix Released Status in Keystone juno series: In Progress Bug description: region: curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" http://192.168.70.105:35357/v3/regions/ed5ff7d3e26c48aeaf1f2f9fb2a4ad7e -d '{"region":{"description":"xxx"}}' -X PATCH {"error": {"message": "An unexpected error prevented the server from fulfilling your request: 'id' (Disable debug mode to suppress these details.)", "code": 500, "title": "Internal Server Error"}} service: curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" http://192.168.70.105:35357/v3/services/f101743b55e54d2ba9cbf71d1f3456fc -d '{"service":{"type":"yy"}}' -X PATCH {"error": {"message": "An unexpected error prevented the server from fulfilling your request: 'id' (Disable debug mode to suppress these details.)", "code": 500, "title": "Internal Server Error"}} endpoint: curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" http://192.168.70.105:35357/v3/endpoints/bbe21bf654e442edb21716cc00fb1c58 -d '{"endpoint":{"zz":"tt"}}' -X PATCH {"error": {"message": "An unexpected error prevented the server from fulfilling your request: 'region_id' (Disable debug mode to suppress these details.)", "code": 500, "title": "Internal Server Error"}} To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1384112/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1411478] Re: Any attribute that is equal to 'TRUE' or 'FALSE' is treated as boolean by LDAP drivers
Before we backport this to stable/juno, are there any legitimate use cases where people would be depending on the old behavior? Just want to ensure there's no risk to backporting. ** Also affects: keystone/juno Importance: Undecided Status: New ** Changed in: keystone/juno Status: New => Incomplete ** Changed in: keystone/juno Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1411478 Title: Any attribute that is equal to 'TRUE' or 'FALSE' is treated as boolean by LDAP drivers Status in OpenStack Identity (Keystone): Fix Committed Status in Keystone juno series: Incomplete Bug description: Our core LDAP driver makes a dangerous assumption that any attribute that is equal to the string 'TRUE' or 'FALSE' must be a boolean and will convert the value accordingly. For instance the following test: def test_hn1(self): ref = { 'name': 'TRUE', 'domain_id': CONF.identity.default_domain_id} ref = self.identity_api.create_user(ref) ref1 = self.identity_api.get_user(ref['id']) self.assertEqual(ref ,ref1) will fail (on an LDAP backend) with: MismatchError: !=: reference = {'domain_id': 'default', 'enabled': True, 'id': 'd4202d8717104d2bb2ab49fec5e7fe70', 'name': 'TRUE'} actual= {'domain_id': 'default', 'enabled': True, 'id': u'd4202d8717104d2bb2ab49fec5e7fe70', 'name': True} Ouch! Now that we have a schema for our models, perhaps we should use that to determine whether something is a boolean or not? e.g. for projects, we have: _project_properties = { 'description': validation.nullable(parameter_types.description), # NOTE(lbragstad): domain_id isn't nullable according to some backends. # The identity-api should be updated to be consistent with the # implementation. 'domain_id': parameter_types.id_string, 'enabled': parameter_types.boolean, 'parent_id': validation.nullable(parameter_types.id_string), 'name': { 'type': 'string', 'minLength': 1, 'maxLength': 64 } } For some reason the user/group ones don't exist yet, but we can fix that. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1411478/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
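A rough illustration of the failure mode and of the schema-aware alternative floated in the description above (this is not the keystone LDAP driver's actual code): only coerce 'TRUE'/'FALSE' for attributes the resource model declares as boolean.

    BOOLEAN_ATTRS = {"enabled"}  # e.g. derived from parameter_types.boolean in the schema

    def from_ldap_value(attr, value):
        if attr in BOOLEAN_ATTRS and value in ("TRUE", "FALSE"):
            return value == "TRUE"
        return value

    print(from_ldap_value("name", "TRUE"))     # stays 'TRUE' for a user literally named TRUE
    print(from_ldap_value("enabled", "TRUE"))  # True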
[Yahoo-eng-team] [Bug 1419577] [NEW] when live-migrate failed, lun-id couldn't be rollback in havana
*** This bug is a security vulnerability *** You have been subscribed to a public security bug: Hi, guys When live-migrate failed with error, lun-id of connection_info column in Nova's block_deivce_mapping table couldn't be rollback. and failed VM can have others volume. my test environment is following : Openstack Version : Havana ( 2013.2.3) Compute Node OS : 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Compute Node multipath : multipath-tools 0.4.9-3ubuntu7.2 test step is : 1) create 2 Compute node (host#1 and host#2) 2) create 1 VM on host#1 (vm01) 3) create 1 cinder volume (vol01) 4) attach 1 volume to vm01 (/dev/vdb) 5) live-migrate vm01 from host#1 to host#2 6) live-migrate success - please check the mapper by using multipath command in host#1 (# multipath -ll), then you can find mapper is not deleted. and the status of devices is "failed faulty" - please check the lun-id of vol01 7) Again, live-migrate vm01 from host#2 to host#1 (vm01 was migrated to host#2 at step 4) 8) live-migrate fail - please check the mapper in host#1 - please check the lun-id of vol01, then you can find the lun hav "two" igroups - please check the connection_info column in Nova's block_deivce_mapping table, then you can find lun-id couldn't be rollback This Bug is critical security issue because the failed VM can have others volume. and every backend storage of cinder-volume can have same problem because this is the bug of live-migration's rollback process. I suggest below methods to solve issue : 1) when live-migrate is complete, nova should delete mapper devices at origin host 2) when live-migrate is failed, nova should rollback lun-id in connection_info column 3) when live-migrate is failed, cinder should delete the mapping between lun and host (Netapp : igroup, EMC : storage_group ...) 4) when volume-attach is requested , cinder volume driver of vendors should make lun-id randomly for reduce of probability of mis-mapping please check this bug. Thank you. ** Affects: nova Importance: Undecided Status: Confirmed ** Affects: ossa Importance: Undecided Status: Incomplete -- when live-migrate failed, lun-id couldn't be rollback in havana https://bugs.launchpad.net/bugs/1419577 You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427209] Re: oslo.log doesn't log request_id, project_id, user_id in nova
** Changed in: oslo.log Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v) ** Changed in: oslo.log Status: New => Confirmed ** Also affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1427209 Title: oslo.log doesn't log request_id, project_id, user_id in nova Status in OpenStack Compute (Nova): New Status in Logging configuration library for OpenStack: Confirmed Bug description: The switch to oslo.log broke the nova logs so request_id, project_id, user_id are no longer logged. This is a critical breakage of the Nova logs, and makes them nearly useless. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1427209/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
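A generic sketch of what the context-aware format for bug 1427209 above is expected to produce (plain stdlib logging, not oslo.log itself): request_id, user_id and project_id come from the request context and are injected into every record.

    import logging

    class ContextFilter(logging.Filter):
        def __init__(self, context):
            super(ContextFilter, self).__init__()
            self.context = context
        def filter(self, record):
            record.request_id = self.context.get("request_id", "-")
            record.user_id = self.context.get("user_id", "-")
            record.project_id = self.context.get("project_id", "-")
            return True

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(message)s"))
    LOG = logging.getLogger("nova.compute.manager")
    LOG.addHandler(handler)
    LOG.addFilter(ContextFilter({"request_id": "req-example", "user_id": "demo",
                                 "project_id": "demo-project"}))
    LOG.warning("the context fields should appear in every line like this one")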
[Yahoo-eng-team] [Bug 1427158] Re: Rest call for group and user list is not working without domain_id
** Description changed: - In older build in PowerVC to get the group / user, we had used the following rest calls: - https://9.114.226.100/powervc/openstack/admin/v3/users - https://9.114.226.100/powervc/openstack/admin/v3/groups + To get the group / user, we had used the following rest calls: v3/users - For the recent builds, we are seeing the above command is not working and current working calls are : - https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/groups - https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/users + + we are seeing the above command is not working current working calls are : v3//groups So, it needs a domain id parameter to get the result. So it would definitely be nice if keystone would recognize that and not require the domain_id query parameter to make the /v3/groups and /v3/users commands work. ** Changed in: keystone Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1427158 Title: Rest call for group and user list is not working without domain_id Status in OpenStack Identity (Keystone): Invalid Bug description: To get the group / user, we had used the following rest calls: v3/users we are seeing the above command is not working current working calls are : v3//groups So, it needs a domain id parameter to get the result. So it would definitely be nice if keystone would recognize that and not require the domain_id query parameter to make the /v3/groups and /v3/users commands work. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1427158/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1424900] Re: Bootstrapping Vivid: ERROR failed to bootstrap environment, Permission denied (publickey), ci-info: no authorized ssh keys fingerprints found for user ubuntu
this was fixed in 0.7.7~bzr1067-0ubuntu1 uploaded to vivid 2015-02-26. ** Changed in: cloud-init Status: Confirmed => Fix Released ** Also affects: cloud-init (Ubuntu) Importance: Undecided Status: New ** Changed in: cloud-init Status: Fix Released => Fix Committed ** Changed in: cloud-init (Ubuntu) Status: New => Fix Released ** Changed in: cloud-init (Ubuntu) Importance: Undecided => Critical ** Changed in: cloud-init (Ubuntu) Importance: Critical => High -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1424900 Title: Bootstrapping Vivid: ERROR failed to bootstrap environment, Permission denied (publickey), ci-info: no authorized ssh keys fingerprints found for user ubuntu Status in Init scripts for use on cloud images: Fix Committed Status in cloud-init package in Ubuntu: Fix Released Bug description: Bootstrapping Vivid: ERROR failed to bootstrap environment, Permission denied (publickey), ci-info: no authorized ssh keys fingerprints found for user ubuntu # Scenario A user needs Unit 0 to be Vivid. The user manually sets default-series: vivid in ~/.juju/environments.yaml, then issues `juju bootstrap`. The bootstrap times out after the default 10 minute wait, with: "ERROR failed to bootstrap environment: waited for 10m0s without being able to connect: Permission denied (publickey)." The user cannot open an ssh session to the new instance using the local user keys, nor the keys in ~/.juju/. Juju stat and juju debug- log do not provide details as the bootstrap node cannot be contacted. Information is collected via nova console-log. The same user procedure does bootstrap successfully when default- series is set to utopic or trusty. This issue has been confirmed using two juju providers: openstack provider (serverstack) and maas provider (dellstack). It has been confirmed on juju-core 1.21.1-0ubuntu1~14.04.1~juju1 and 1.22-beta3-0ubuntu1~14.04.1~juju1. # The bootstrap node's nova console log reveals trouble: Cloud-init v. 0.7.7 running 'modules:final' at Tue, 24 Feb 2015 00:44:33 +. Up 11.03 seconds. ci-info: no authorized ssh keys fingerprints found for user ubuntu. ci-info: no authorized ssh keys fingerprints found for user ubuntu. # bootstrap output, with default-series: vivid jenkins@juju-env0-machine-13:~$ juju bootstrap Bootstrapping environment "osci-sv10-jdev" Starting new instance for initial state server Launching instance - 15abce53-fa1a-427e-83d9-551a642f4fd7 Installing Juju agent on bootstrap instance Waiting for address Attempting to connect to 172.17.110.41:22 ERROR failed to bootstrap environment: waited for 10m0s without being able to connect: Permission denied (publickey). See attachment for additional details. To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1424900/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427228] [NEW] Allow to run neutron-ns-metadata-proxy as nobody
Public bug reported: Currently neutron-ns-metadata-proxy runs with the neutron user/group permissions on the l3-agent, but we should allow running it with fewer permissions, since the neutron user is allowed to run neutron-rootwrap. We should restrict the neutron-ns-metadata-proxy permissions as much as possible, as it is reachable from VMs. ** Affects: neutron Importance: Undecided Assignee: Cedric Brandily (cbrandily) Status: New ** Tags: l3-ipam-dhcp security ** Changed in: neutron Assignee: (unassigned) => Cedric Brandily (cbrandily) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1427228 Title: Allow to run neutron-ns-metadata-proxy as nobody Status in OpenStack Neutron (virtual network service): New Bug description: Currently neutron-ns-metadata-proxy runs with the neutron user/group permissions on the l3-agent, but we should allow running it with fewer permissions, since the neutron user is allowed to run neutron-rootwrap. We should restrict the neutron-ns-metadata-proxy permissions as much as possible, as it is reachable from VMs. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1427228/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
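A generic privilege-drop sketch for bug 1427228 above (illustrative only, not the neutron change itself): once any privileged setup is done, the proxy process can switch to an unprivileged user/group such as nobody before serving requests coming from VMs.

    import grp
    import os
    import pwd

    def drop_privileges(user="nobody", group="nogroup"):
        # must run as root; 'nogroup' vs 'nobody' group naming is distro-dependent
        os.setgroups([])
        os.setgid(grp.getgrnam(group).gr_gid)
        os.setuid(pwd.getpwnam(user).pw_uid)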
[Yahoo-eng-team] [Bug 1400966] Re: [OSSA-2014-041] Glance allows users to download and delete any file in glance-api server (CVE-2014-9493)
** Changed in: openstack-ansible/icehouse Status: Fix Committed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1400966 Title: [OSSA-2014-041] Glance allows users to download and delete any file in glance-api server (CVE-2014-9493) Status in OpenStack Image Registry and Delivery Service (Glance): Fix Released Status in Glance icehouse series: Fix Committed Status in Glance juno series: Fix Released Status in Ansible playbooks for deploying OpenStack: Fix Committed Status in openstack-ansible icehouse series: Fix Released Status in openstack-ansible juno series: Fix Released Status in OpenStack Security Advisories: Fix Released Bug description: Updating image-location by update images API users can download any file for which glance-api has read permission. And the file for which glance-api has write permission will be deleted when users delete the image. For example: When users specify '/etc/passwd' as locations value of an image user can get the file by image download. When locations of an image is set with 'file:///path/to/glance- api.conf' the conf will be deleted when users delete the image. How to recreate the bug: download files: - set show_multiple_locations True in glance-api.conf - create a new image - set locations of the image's property a path you want to get such as file:///etc/passwd. - download the image delete files: - set show_multiple_locations True in glance-api.conf - create a new image - set locations of the image's property a path you want to delete such as file:///path/to/glance-api.conf - delete the image I found this bug in 2014.2 (742c898956d655affa7351505c8a3a5c72881eae). What a big A RE RE!! To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1400966/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
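A hedged sketch of the kind of check the fix for bug 1400966 above requires (the scheme list is illustrative, not Glance's exact patch): refuse user-supplied location URIs whose scheme would expose the glance-api node's local filesystem or configuration.

    FORBIDDEN_LOCATION_SCHEMES = ("file", "filesystem", "swift+config")

    def validate_location(uri):
        scheme = uri.split(":", 1)[0].lower()
        if scheme in FORBIDDEN_LOCATION_SCHEMES:
            raise ValueError("location scheme '%s' may not be set through the API" % scheme)
        return uri

    validate_location("http://example.com/image.qcow2")  # accepted
    # validate_location("file:///etc/passwd")            # rejected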
[Yahoo-eng-team] [Bug 1426524] Re: race condition prevents intance deletion
couldn't reproduce. Also I suspect that volume attach/detach led to db inconsistencies ** Changed in: nova Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1426524 Title: race condition prevents intance deletion Status in OpenStack Compute (Nova): Invalid Bug description: Version: icehouse. Though looking in to the code in the master I believe bug is still there Hypervisor: libvirt Frequency: very rare, under heavy load (stress tests) Steps to reproduce: as an operator I issue "nova delete" command. Instead of being deleted that vm gets into ERROR state. relevant nova-compute.log: http://paste.openstack.org/show/183111/ Here's probably why it happens: It's a race condition. There are two threads (coroutines if eventlet patched) - thread-1 which handles termination request (nova.compute.manager.ComputeManager.terminate_instance) and thread-2 which dispatches events from hypervisor. 1) thread-1: manager clears (deletes) all queued events for that vm and switches to thread-2 https://github.com/openstack/nova/blob/983f755562cb87a0b498af5d62be9bd2010bc999/nova/compute/manager.py#L2526 2) thread-2: hypervisor emits one more event and switches to thread-1 without dispatching event 3) thread-1: manager deletes image files, marks instance as deleted in the db. Thread finishes and exits normally 4) thread-2: manager tries to dispatch one more event. But fails as there is no such instance anymore. To be more precise - there is no InstanceInfoCache for that vm. UPD: more logs: https://www.dropbox.com/sh/r0ek3w7g95qoetw/AADTfgN9tD2Mt_fOXjB9OCzva?dl=0 - Cluster is in HA mode (three nova-api files) - debug=False - conductor-logs contain only "connecting, reconnecting to rabbitmq". - cat ... | grep -A 15 -B 15 -r 09ac0ed2-07cb-4394-b0c8-aff3ab74dcdb To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1426524/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427179] [NEW] boot from volume instance failed, because when reschedule delete the volume
Public bug reported: 1. Create a volume "nova volume-create --display-name test_volume 1" [root@controller51 nova(keystone_admin)]# nova volume-list +--+---+-+--+-+---+ | ID | Status| Display Name| Size | Volume Type | Attached to | +--+---+-+--+-+---+ | a740ca7b-6881-4e28-9fdb-eb0d80336757 | available | test_volume | 1| None| | | 1f1c19c7-a5f9-4683-a1f6-e339f02e1410 | in-use| NFVO_system_disk2 | 30 | None| 6fa391f8-bd8b-483d-9286-3cebc9a93d55 | | d868710e-30d4-4095-bd8f-fea9f16fe8ea | in-use| NFVO_data_software_disk | 30 | None| a07abdd5-07a6-4b41-a285-9b825f7b5623;6fa391f8-bd8b-483d-9286-3cebc9a93d55 | | b03a39ca-ebc1-4472-9a04-58014e67b37c | in-use| NFVO_system_disk1 | 30 | None| a07abdd5-07a6-4b41-a285-9b825f7b5623 | +--+---+-+--+-+---+ 2. use The following command will boot a new instance and attach a volume at the same time: [root@controller51 nova(keystone_admin)]# nova boot --flavor 1 --image 1736471c-3530-49f2-ad34-6ef7da285050 --block-device-mapping vdb=a740ca7b-6881-4e28-9fdb-eb0d80336757:blank:1:1 --nic net-id=31fce69e-16b9-4114-9fa9-589763e58fb0 test +--+---+ | Property | Value | +--+---+ | OS-DCF:diskConfig| MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name| instance-0082 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state| scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass| sWTuKqzrpS32 | | config_drive | | | created | 2015-03-02T11:34:29Z | | flavor | m1.tiny (1) | | hostId | | | id | 868cfd12-eb36-4140-b7b3-98cfcec627cd | | image| VMB_X86_64_LX_2.6.32_64_REL_2014_12_26.img (1736471c-3530-49f2-ad34-6ef7da285050) | | key_name | - | | metadata | {} | | name | test
[Yahoo-eng-team] [Bug 1427165] Re: unittest2 deprecated in Django
already fixed with commit https://github.com/openstack/horizon/commit/8e8c084847280f3f8e975910b498ed9fbb3a69c8 ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427165 Title: unittest2 deprecated in Django Status in OpenStack Dashboard (Horizon): Invalid Bug description: https://docs.djangoproject.com/en/1.7/topics/testing/overview /#writing-tests Python 2.7 introduced some major changes to the unittest library, adding some extremely useful features. To ensure that every Django project could benefit from these new features, Django used to ship with a copy of Python 2.7’s unittest backported for Python 2.6 compatibility. Since Django no longer supports Python versions older than 2.7, django.utils.unittest is deprecated. Simply use unittest. openstack_dashboard/test/api_tests/rest_util_tests.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427165/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427165] [NEW] unittest2 deprecated in Django
Public bug reported: https://docs.djangoproject.com/en/1.7/topics/testing/overview/#writing- tests Python 2.7 introduced some major changes to the unittest library, adding some extremely useful features. To ensure that every Django project could benefit from these new features, Django used to ship with a copy of Python 2.7’s unittest backported for Python 2.6 compatibility. Since Django no longer supports Python versions older than 2.7, django.utils.unittest is deprecated. Simply use unittest. openstack_dashboard/test/api_tests/rest_util_tests.py ** Affects: horizon Importance: Undecided Status: Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427165 Title: unittest2 deprecated in Django Status in OpenStack Dashboard (Horizon): Invalid Bug description: https://docs.djangoproject.com/en/1.7/topics/testing/overview /#writing-tests Python 2.7 introduced some major changes to the unittest library, adding some extremely useful features. To ensure that every Django project could benefit from these new features, Django used to ship with a copy of Python 2.7’s unittest backported for Python 2.6 compatibility. Since Django no longer supports Python versions older than 2.7, django.utils.unittest is deprecated. Simply use unittest. openstack_dashboard/test/api_tests/rest_util_tests.py To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427165/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
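The fix in affected test modules such as the one named above is essentially a one-line import swap; a sketch (assuming the tests only use features available in the standard unittest on Python 2.7+):

    import unittest  # instead of: from django.utils import unittest

    class RestUtilTests(unittest.TestCase):  # illustrative test class, not the real module
        def test_placeholder(self):
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()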
[Yahoo-eng-team] [Bug 1380238] Re: Instances won't obtain IPv6 address if they have additional IPv4 interface
@Ihar, I checked this for ubuntu: http://paste.openstack.org/show/184928/ and all works fine. ** Changed in: neutron Status: Confirmed => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1380238 Title: Instances won't obtain IPv6 address if they have additional IPv4 interface Status in OpenStack Neutron (virtual network service): Invalid Bug description: Description of problem: === I booted an instance with both IPv4 and IPv6 interfaces, yet that instance did not obtain any IPv6 address. In order to make sure nothing is wrong with my IPv6 configuration (which is RADVD SLAAC), I booted an additional instance with an IPv6 interface only, which obtained an IPv6 address with no issues. Version-Release number of selected component (if applicable): = openstack-neutron-2014.2-0.7.b3 How reproducible: = Always Steps to Reproduce: === 0. Prior to the test, configure the following: a. Neutron router b. IPv4 Network & Subnet c. IPv6 Network & Subnet (SLAAC in my specific case) --> Created with: --ipv6-address-mode slaac --ipv6_ra_mode slaac d. Add router interfaces with those networks. 1. Spawn an instance with both IPv4 & IPv6 interfaces. 2. Spawn an instance with an IPv6 interface only. Actual results: === 1. The instance spawned in step 1 obtained an IPv4 address and an IPv6 link-local address only. 2. The instance spawned in step 2 obtained an IPv6 address properly. Expected results: = Instances should obtain all IP addresses in both scenarios I mentioned above. Additional info: Using tcpdump from within the instances I noticed that ICMPv6 Router Advertisements did not reach the NIC. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1380238/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 977192] Re: Error message not user friendly while creating security group
** Changed in: python-novaclient Status: Invalid => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/977192 Title: Error message not user friendly while creating security group Status in OpenStack Compute (Nova): Invalid Status in Python client library for Nova: In Progress Bug description: While creating a security group with a name that starts with '-' followed by a letter, the error message is not user friendly. Steps to reproduce: nova secgroup-create -A34f ghjk Expected Result: An error message indicating that the name is not an appropriate name. Actual Result: usage: nova secgroup-create error: too few arguments To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/977192/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
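A small stdlib demonstration of why the CLI reacts this way (generic argparse behaviour shown with illustrative arguments, not novaclient's own parser): a value beginning with '-' is parsed as an option, so the positional name looks missing; a '--' separator forces it to be read as a positional value.

    import argparse

    parser = argparse.ArgumentParser(prog="nova secgroup-create")
    parser.add_argument("name")
    parser.add_argument("description")

    print(parser.parse_args(["--", "-A34f", "ghjk"]))  # name='-A34f', description='ghjk'
    # parser.parse_args(["-A34f", "ghjk"])             # exits with a usage error, as in the report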
[Yahoo-eng-team] [Bug 1424597] Re: Obscure 'No valid hosts found' if no free fixed IPs left in the network
*** This bug is a duplicate of bug 1394268 *** https://bugs.launchpad.net/bugs/1394268 ** This bug has been marked a duplicate of bug 1394268 wrong error message when no IP addresses are available -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1424597 Title: Obscure 'No valid hosts found' if no free fixed IPs left in the network Status in OpenStack Compute (Nova): In Progress Bug description: If network have no free fixed IPs, new instances failed with 'No valid hosts found' without proper explanation. Example: nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a- c2a1-4d8b-9f43-cf24d0dc8233 (There is no free IP left in network f3f2802a-c2a1-4d8b- 9f43-cf24d0dc8233) nova show fb4552e5-50cb-4701-a095-c006e4545c04 ... | status | BUILD | (few seconds later) | fault| {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\ | | | ', u' File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": " File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances | | | instances[0].uuid) | | | File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry | | | raise exception.NoValidHost(reason=msg) | | status | ERROR | Expected behaviour: Compains about 'No free IP' before attempting to schedule instance. See https://bugs.launchpad.net/nova/+bug/1424594 for similar behaviour. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1424597/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427158] [NEW] Rest call for group and user list is not working without domain_id
Public bug reported: In older PowerVC builds, to get the group / user lists we used the following REST calls: https://9.114.226.100/powervc/openstack/admin/v3/users https://9.114.226.100/powervc/openstack/admin/v3/groups In recent builds we are seeing that the above calls no longer work, and the currently working calls are: https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/groups https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/users So it needs a domain id parameter to get the result. It would definitely be nice if keystone would recognize that and not require the domain_id query parameter to make the /v3/groups and /v3/users commands work. ** Affects: keystone Importance: Undecided Status: New ** Tags: api -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1427158 Title: Rest call for group and user list is not working without domain_id Status in OpenStack Identity (Keystone): New Bug description: In older PowerVC builds, to get the group / user lists we used the following REST calls: https://9.114.226.100/powervc/openstack/admin/v3/users https://9.114.226.100/powervc/openstack/admin/v3/groups In recent builds we are seeing that the above calls no longer work, and the currently working calls are: https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/groups https://9.114.226.100/powervc/openstack/admin/v3/ibm-roles/users So it needs a domain id parameter to get the result. It would definitely be nice if keystone would recognize that and not require the domain_id query parameter to make the /v3/groups and /v3/users commands work. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1427158/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1365727] Re: N1kv tenant able to create networks for non-shared network profiles of other N1kv tenants
I wonder whether the bug should have been handled by vulnerability team. It looks like a privilege escalation problem. ** Also affects: ossa Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1365727 Title: N1kv tenant able to create networks for non-shared network profiles of other N1kv tenants Status in OpenStack Neutron (virtual network service): Fix Released Status in OpenStack Security Advisories: New Bug description: Tenants are able to create networks within network profiles that are not shared with them and belong to some other tenant. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1365727/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427148] [NEW] optparse.OptionConflictError: option -v/--verbose: conflicting option string(s): -v
Public bug reported: [mrunge@turing horizon (django18)]$ ./run_tests.sh -N -P Running Horizon application tests Traceback (most recent call last): File "/home/mrunge/work/horizon/manage.py", line 23, in execute_from_command_line(sys.argv) File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.7/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv super(Command, self).run_from_argv(argv) File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 378, in run_from_argv parser = self.create_parser(argv[0], argv[1]) File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 327, in create_parser parser.add_option(opt) File "/usr/lib64/python2.7/optparse.py", line 1021, in add_option self._check_conflict(option) File "/usr/lib64/python2.7/optparse.py", line 996, in _check_conflict option) optparse.OptionConflictError: option -v/--verbose: conflicting option string(s): -v This happens with Django-1.8 ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427148 Title: optparse.OptionConflictError: option -v/--verbose: conflicting option string(s): -v Status in OpenStack Dashboard (Horizon): New Bug description: [mrunge@turing horizon (django18)]$ ./run_tests.sh -N -P Running Horizon application tests Traceback (most recent call last): File "/home/mrunge/work/horizon/manage.py", line 23, in execute_from_command_line(sys.argv) File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line utility.execute() File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.7/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv super(Command, self).run_from_argv(argv) File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 378, in run_from_argv parser = self.create_parser(argv[0], argv[1]) File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 327, in create_parser parser.add_option(opt) File "/usr/lib64/python2.7/optparse.py", line 1021, in add_option self._check_conflict(option) File "/usr/lib64/python2.7/optparse.py", line 996, in _check_conflict option) optparse.OptionConflictError: option -v/--verbose: conflicting option string(s): -v This happens with Django-1.8 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427148/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
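For context, the exception itself is optparse refusing a second registration of the same option string. A minimal, Django-free reproduction of that error class looks like the sketch below; it does not claim to follow Django 1.8's exact code path, only to show how a duplicate -v/--verbose registration on one parser produces this traceback's final error.

    import optparse

    parser = optparse.OptionParser()
    parser.add_option('-v', '--verbose', action='store_true')

    try:
        # Registering -v a second time triggers optparse's conflict check.
        parser.add_option('-v', '--verbosity', type='int')
    except optparse.OptionConflictError as exc:
        print('reproduced:', exc)

In the report, the second -v comes from Django's own verbosity option colliding with one added by the Horizon test command when run under Django 1.8.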
[Yahoo-eng-team] [Bug 1427141] [NEW] console auth token timeout has no impact
Public bug reported: Issue = The console feature (VNC, SERIAL, ...) returns a connection with an auth token. This connection *never* times out. Steps to reproduce == The steps below are suitable for testing with the serial console but the behavior is the same with VNC. * enable the console feature in nova.conf [serial_console] enabled=True * set the token timeout value in nova.conf to a value which fits your testing (e.g.) console_token_ttl=10 * start the nova-serialproxy service (e.g. with devstack [1]) * start an instance * Connect to the serial console of that launched instance (e.g. Horizon with "console" tab or another client [2]) * Execute a command (e.g. "date") * Wait until the timespan defined by "console_token_ttl" elapsed * Execute another command (e.g. "date") Expected behavior = The command in the console is refused after the timespan elapsed. Actual behavior === The connection is kept open and each command is executed after the defined timespan. This looks weird in the case when Horizon times out but the console tab is still working. Logs & Env. === OpenStack is installed and started with devstack. The logs [3] show that the expired token gets removed when a new token is appended. The append of a new token happens only when the console tab is reopened and the old token is expired. Nova version pedebug@OS-CTRL:/opt/stack/nova$ git log --oneline -n5 017574e Merge "Added retries in 'network_set_host' function" a957d56 libvirt: Adjust Nova to support FCP on System z systems 36bae5a Merge "fake: fix public API signatures to match virt driver" 13223b5 Merge "Don't assume contents of values after aggregate_update" c4a9cc5 Merge "Fix VNC access, when reverse DNS lookups fail" References == [1] Devstack guide; Nova and devstack; http://docs.openstack.org/developer/devstack/guides/nova.html [2] larsk/novaconsole; github; https://github.com/larsks/novaconsole/ [3] http://paste.openstack.org/show/184866/ ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1427141 Title: console auth token timeout has no impact Status in OpenStack Compute (Nova): New Bug description: Issue = The console feature (VNC, SERIAL, ...) returns a connection with an auth token. This connection *never* times out. Steps to reproduce == The steps below are suitable for testing with the serial console but the behavior is the same with VNC. * enable the console feature in nova.conf [serial_console] enabled=True * set the token timeout value in nova.conf to a value which fits your testing (e.g.) console_token_ttl=10 * start the nova-serialproxy service (e.g. with devstack [1]) * start an instance * Connect to the serial console of that launched instance (e.g. Horizon with "console" tab or another client [2]) * Execute a command (e.g. "date") * Wait until the timespan defined by "console_token_ttl" elapsed * Execute another command (e.g. "date") Expected behavior = The command in the console is refused after the timespan elapsed. Actual behavior === The connection is kept open and each command is executed after the defined timespan. This looks weird in the case when Horizon times out but the console tab is still working. Logs & Env. === OpenStack is installed and started with devstack. The logs [3] show that the expired token gets removed when a new token is appended. 
The append of a new token happens only when the console tab is reopened and the old token is expired. Nova version pedebug@OS-CTRL:/opt/stack/nova$ git log --oneline -n5 017574e Merge "Added retries in 'network_set_host' function" a957d56 libvirt: Adjust Nova to support FCP on System z systems 36bae5a Merge "fake: fix public API signatures to match virt driver" 13223b5 Merge "Don't assume contents of values after aggregate_update" c4a9cc5 Merge "Fix VNC access, when reverse DNS lookups fail" References == [1] Devstack guide; Nova and devstack; http://docs.openstack.org/developer/devstack/guides/nova.html [2] larsk/novaconsole; github; https://github.com/larsks/novaconsole/ [3] http://paste.openstack.org/show/184866/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1427141/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
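To make the expected behaviour concrete, here is a minimal, self-contained sketch of the kind of TTL check one would expect a console token cache to perform. It is illustrative only and does not reflect nova-consoleauth's actual implementation or its configuration handling; the constant simply mirrors the console_token_ttl value used in the report.

    import time

    CONSOLE_TOKEN_TTL = 10  # seconds, mirroring console_token_ttl in the report

    _tokens = {}  # token -> expiry timestamp

    def add_token(token):
        _tokens[token] = time.time() + CONSOLE_TOKEN_TTL

    def check_token(token):
        """Return True only while the token is still within its TTL."""
        expiry = _tokens.get(token)
        if expiry is None or time.time() > expiry:
            _tokens.pop(token, None)   # expired tokens are rejected and purged
            return False
        return True

    add_token('abc123')
    assert check_token('abc123')
    time.sleep(CONSOLE_TOKEN_TTL + 1)
    assert not check_token('abc123')   # expected: connection refused after the TTL

The reported behaviour is that the already-established proxy connection never re-validates the token, so an expired token only matters when a new console session is opened.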
[Yahoo-eng-team] [Bug 1427135] [NEW] Neutron API reflects JavaScript/any input in error message
Public bug reported: During a security scan of the Neutron API, Nessus raises the following security alert about reflected XSS: REQUEST: cross_site_scripting.nasl API RESPONSE : HTTP/1.1 500 Internal Server Error Content-Type: text/plain Content-Length: 596 Date: Mon, 29 Dec 2014 09:50:52 GMT Connection: close File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 119, [...] "URL fragments must start with / or http:// (you gave %r)" % url) AssertionError: URL fragments must start with / or http:// (you gave 'cross_site_scripting.nasl') My proposal is to modify the API error response so that it does not reflect the original input, whether it is JavaScript or not. IMO the error message should end at the line "Connection: close". ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1427135 Title: Neutron API reflects JavaScript/any input in error message Status in OpenStack Neutron (virtual network service): New Bug description: During a security scan of the Neutron API, Nessus raises the following security alert about reflected XSS: REQUEST: cross_site_scripting.nasl API RESPONSE : HTTP/1.1 500 Internal Server Error Content-Type: text/plain Content-Length: 596 Date: Mon, 29 Dec 2014 09:50:52 GMT Connection: close File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 119, [...] "URL fragments must start with / or http:// (you gave %r)" % url) AssertionError: URL fragments must start with / or http:// (you gave 'cross_site_scripting.nasl') My proposal is to modify the API error response so that it does not reflect the original input, whether it is JavaScript or not. IMO the error message should end at the line "Connection: close". To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1427135/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
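As a hedged sketch of the proposed mitigation, and not Neutron's actual WSGI pipeline, a middleware could catch unhandled errors and return a fixed 500 body instead of echoing traceback text that contains request data back to the client:

    import sys

    def sanitize_errors(app):
        """WSGI middleware sketch: never reflect request input in error bodies."""
        def wrapper(environ, start_response):
            try:
                return app(environ, start_response)
            except Exception:
                # Log the real exception server-side; the client only sees a generic body.
                body = b'500 Internal Server Error'
                start_response('500 Internal Server Error',
                               [('Content-Type', 'text/plain'),
                                ('Content-Length', str(len(body)))],
                               sys.exc_info())
                return [body]
        return wrapper

Wrapping the API application with such a filter would make the response stop at the status and headers, which is effectively what the reporter asks for ("end at line Connection: close").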
[Yahoo-eng-team] [Bug 1427122] [NEW] dvr case with 1 subnet attaches multi routers, fail to create router netns
Public bug reported: Environment: a 1+2 env with DVR enabled; the l3-agent on all of these nodes is configured with "router_delete_namespaces = True". Create router R1 and subnets sn1 and sn2, and attach sn1 and sn4 to router R1. Create router R2 and subnets sn3 and sn4, and attach sn3 and sn4 to router R2. Boot instances: vm1 on sn1 on CN1, vm2 on sn2 on CN2, vm3 on sn3 on CN1, vm4 on sn4 on CN2. Create port p1 on sn1's network by running "neutron port-create --name p1 n1", and attach p1 to R2 by running "neutron router-interface-add R2 port=p1" (this instruction tries to connect sn1 with sn3 and sn4 via router R2). Steps to reproduce the issue: 1) delete vm1, and make sure the qrouter netns disappears on CN1 2) create vm5 on sn1 on CN1; the qrouter netns does not come back. Workaround: restart the l3-agent on CN1. In the normal case (a subnet attached to only one router), this issue does not occur. ** Affects: neutron Importance: Undecided Assignee: ZongKai LI (lzklibj) Status: New ** Changed in: neutron Assignee: (unassigned) => ZongKai LI (lzklibj) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1427122 Title: dvr case with 1 subnet attaches multi routers, fail to create router netns Status in OpenStack Neutron (virtual network service): New Bug description: Environment: a 1+2 env with DVR enabled; the l3-agent on all of these nodes is configured with "router_delete_namespaces = True". Create router R1 and subnets sn1 and sn2, and attach sn1 and sn4 to router R1. Create router R2 and subnets sn3 and sn4, and attach sn3 and sn4 to router R2. Boot instances: vm1 on sn1 on CN1, vm2 on sn2 on CN2, vm3 on sn3 on CN1, vm4 on sn4 on CN2. Create port p1 on sn1's network by running "neutron port-create --name p1 n1", and attach p1 to R2 by running "neutron router-interface-add R2 port=p1" (this instruction tries to connect sn1 with sn3 and sn4 via router R2). Steps to reproduce the issue: 1) delete vm1, and make sure the qrouter netns disappears on CN1 2) create vm5 on sn1 on CN1; the qrouter netns does not come back. Workaround: restart the l3-agent on CN1. In the normal case (a subnet attached to only one router), this issue does not occur. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1427122/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
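The two CLI commands in the reproduction map onto python-neutronclient calls roughly as in the sketch below; the client constructor arguments and the network/router IDs are placeholders, so treat this as an equivalent-setup sketch rather than the exact reproduction script.

    from neutronclient.v2_0 import client

    # Placeholder credentials and IDs.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    n1_id = '<network id of n1>'
    r2_id = '<router id of R2>'

    # neutron port-create --name p1 n1
    p1 = neutron.create_port({'port': {'network_id': n1_id, 'name': 'p1'}})['port']

    # neutron router-interface-add R2 port=p1
    neutron.add_interface_router(r2_id, {'port_id': p1['id']})

After this, sn1 is served by both R1 and R2, which is the multi-router-per-subnet situation that trips up the namespace re-creation.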
[Yahoo-eng-team] [Bug 1417515] Re: Horizon Input fields swapped when tried to Launch Stack with invalid name
Please do not set this to Fix Released until the fix is released in a milestone. ** Changed in: horizon Status: Fix Released => Fix Committed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1417515 Title: Horizon Input fields swapped when tried to Launch Stack with invalid name Status in OpenStack Dashboard (Horizon): Fix Committed Bug description: Steps: 1.Open Horizon Dashboard 2.Open Project->Orchestration->Stack 3.Click "Launch Stack" 4.Template Source -> Direct Input Use template: [heat_template_version: 2013-05-23 description: Simple template to deploy a single compute instance parameters: key_name: type: string label: Key Name description: Name of key-pair to be used for compute instance image_id: type: string label: Image ID description: Image to be used for compute instance instance_type: type: string label: Instance Type description: Type of instance (flavor) to be used resources: my_instance: type: OS::Nova::Server properties: key_name: { get_param: key_name } image: { get_param: image_id } flavor: { get_param: instance_type }] 5.Click "Next" 6.Fill in the fields: "Stack Name": [111] - invalid value "Password for user "admin" - your admin pass "Image ID": [25e3e805-1c42-4eb0-abf2-afdc9e96c62e] or any valid id "Instance Type": [m1.small] or any valid type "Key Name":[heat_key] - your Key Pair 7.Click "Launch" The stack does not launch (invalid stack name), but the "Instance Type" and "Key Name" fields are swapped devstack$ git show HEAD commit e256022a1686eb447da1bbd318c44b58f72f3e0e Merge: b9a7d3b ff72c50 Author: Jenkins Date: Sat Jan 31 00:08:29 2015 + To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1417515/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1427098] [NEW] create server ignore parameters whether match hypervisor
Public bug reported: I tested creating a server in a Juno RDO environment, and I also checked the code of the Kilo version. In an environment with different hypervisors, such as QEMU, Docker, and Xen, create server does not check some important parameters against the hypervisor type. For example, there are two compute nodes: one whose hypervisor is Docker (node A-172) and another whose hypervisor is QEMU (node B-168). I can create a Docker server that uses the Docker image "tutum/wordpress" on the QEMU node (node B-168). In this case it ignores whether the "image" parameter matches the hypervisor type, and the server eventually fails to boot with "no bootable device". I'm not sure whether there are other parameters that should be checked for the QEMU, Docker, and Xen hypervisors. My steps are as follows: [root@ ~(keystone_admin)]# glance image-show tutum/wordpress +--+--+ | Property | Value| +--+--+ | checksum | bab44a59a74878dd953c4ae5242f7c7c | | container_format | docker | | created_at | 2015-02-02T07:00:47 | | deleted | False| | disk_format | raw | | id | b8e12702-3fd1-4847-b018-ac8ba6edead7 | | is_public| True | | min_disk | 0| | min_ram | 0| | name | tutum/wordpress | | owner| 09291698b9ff44728493252e67fc6ee5 | | protected| False| | size | 517639680| | status | active | | updated_at | 2015-02-02T07:02:15 | +--+--+ [root@ ~(keystone_admin)]# nova boot --flavor 2 --image tutum/wordpress --key-name key1 --nic net-id=2510a249-1665-4184-afc8-62a2eccf6c3b --availability-zone xxx:B-168 test-image-168 2015-03-02 01:55:12.239 2857 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas 2015-03-02 01:55:12.274 2857 DEBUG nova.virt.disk.api [-] Unable to mount image /var/lib/nova/instances/e799cc70-2e9f-44da-9ebd-0f55ddc7cd13/disk with error Error mounting /var/lib/nova/instances/e799cc70-2e9f-44da-9ebd-0f55ddc7cd13/disk with libguestfs (mount_options: /dev/sda on / (options: ''): mount: /dev/sda is write-protected, mounting read-only mount: unknown filesystem type '(null)'). Cannot resize.
is_image_partitionless /usr/lib/python2.7/site-packages/nova/virt/disk/api.py:218 2015-03-02 01:55:12.276 2857 DEBUG nova.virt.libvirt.driver [-] [instance: e799cc70-2e9f-44da-9ebd-0f55ddc7cd13] Start _get_guest_xml network_info=[VIF({'profile': {}, 'ovs_interfaceid': u'563e9d58-9d41-4700-b4f2-a56a6ecfcefe', 'network': Network({'bridge': 'br-int', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'10.0.0.45'})], 'version': 4, 'meta': {'dhcp_server': u'10.0.0.3'}, 'dns': [], 'routes': [], 'cidr': u'10.0.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'10.0.0.1'})})], 'meta': {'injected': False, 'tenant_id': u'3cf2410b5f554653a93796982657984b'}, 'id': u'2510a249-1665-4184-afc8-62a2eccf6c3b', 'label': u'private'}), 'devname': u'tap563e9d58-9d', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': u'fa:16:3e:d9:b3:1f', 'active': False, 'type': u'ovs', 'id': u'563e9d58-9d41-4700-b4f2-a56a6ecfcefe', 'qbg_params': None})] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'ide', 'mapping': {'disk': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}, 'root': {'bus': 'virtio', 'boot_index': '1', 'type': 'disk', 'dev': u'vda'}}} image_meta={u'status': u'active', u'deleted': False, u'container_format': u'docker', u'min_ram': 0, u'updated_at': u'2015-02-02T07:02:15.00', u'min_disk': 0, u'owner': u'09291698b9ff44728493252e67fc6ee5', u'is_public': True, u'deleted_at': None, u'properties': {}, u'size': 517639680, u'name': u'tutum/wordpress', u'checksum': u'bab44a59a74878dd953c4ae5242f7c7c', u'created_at': u'2015-02-02T07:00:47.00', u'disk_format': u'raw', u'id': u'b8e12702-3fd1-4847-b018-ac8ba6edead7'} rescue=None block_device_info={'block_device_mapping': [], 'root_device_name': u'/dev/vda', 'ephemerals': [], 'swap': None} _get_guest_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:4147 2015-03-02 01:55:17.616 2857 DEBUG nova.compute.manager [-] [instance: e799cc70-2e9f-44da-9ebd-0f55ddc7cd13] Checking state _get_power_state /usr/lib/python2.7/site-packages/nova/compute/manager.py:1156 ** Affects: nova
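For illustration, the kind of check the reporter is asking for would compare an image's declared hypervisor against the host's hypervisor before placing the instance. The sketch below is a generic stand-alone function, not Nova's code; if I recall correctly, Nova's ImagePropertiesFilter together with a hypervisor_type image property can already steer images to matching hosts, which may be the existing answer to this report.

    def host_matches_image(image_meta, host_hypervisor_type):
        """Return True if the image's declared hypervisor matches the host.

        image_meta is a dict shaped like the image metadata in the log above,
        with optional 'properties' such as 'hypervisor_type'.
        """
        wanted = image_meta.get('properties', {}).get('hypervisor_type')
        if wanted is None:
            # No declared hypervisor: fall back to a container_format heuristic.
            if image_meta.get('container_format') == 'docker':
                wanted = 'docker'
        return wanted is None or wanted.lower() == host_hypervisor_type.lower()

    # The "tutum/wordpress" image from the report has container_format=docker
    # and no hypervisor_type property, so it should not land on a QEMU host:
    image = {'container_format': 'docker', 'properties': {}}
    print(host_matches_image(image, 'QEMU'))    # False
    print(host_matches_image(image, 'docker'))  # True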
[Yahoo-eng-team] [Bug 1427097] [NEW] Test case to create provider network
Public bug reported: Provider networks are created by the admin and map directly to an existing physical network in the data center. Useful network types in this category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider networks to be shared among tenants as part of the network creation process ** Affects: horizon Importance: Undecided Status: New ** Tags: integration-tests ** Description changed: - Provider networks are created by the admin and - map directly to an existing physical network in the data center. Useful network types in this - category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider + Provider networks are created by the admin and map directly to an existing physical network in the data center. Useful network types in this category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider networks to be shared among tenants as part of the network creation process -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1427097 Title: Test case to create provider network Status in OpenStack Dashboard (Horizon): New Bug description: Provider networks are created by the admin and map directly to an existing physical network in the data center. Useful network types in this category are flat (untagged) and VLAN (802.1Q tagged). It is possible to allow provider networks to be shared among tenants as part of the network creation process To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1427097/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
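For the integration test being requested above, the admin-side setup boils down to provider network create calls. The sketch below uses python-neutronclient with placeholder credentials, physical network name, and VLAN ID; the provider:* attributes and the 'shared' flag are the standard provider-network extension fields, but whether flat and VLAN can share the same physical network depends on the deployment's ML2 configuration.

    from neutronclient.v2_0 import client

    # Placeholder admin credentials.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # A shared VLAN (802.1Q tagged) provider network mapped to an existing physical network.
    vlan_net = neutron.create_network({'network': {
        'name': 'provider-vlan-100',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',   # placeholder mapping name
        'provider:segmentation_id': 100,
        'shared': True,
    }})['network']

    # A flat (untagged) provider network.
    flat_net = neutron.create_network({'network': {
        'name': 'provider-flat',
        'provider:network_type': 'flat',
        'provider:physical_network': 'physnet1',
        'shared': True,
    }})['network']

    print(vlan_net['id'], flat_net['id'])

A Horizon integration test would drive the same creation through the Admin -> Networks form and then assert that a non-admin tenant can see and attach to the shared networks.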