Public bug reported:
Hello,
I successfully installed neutron with XenServer 6.2, but I got an error message
in /var/log/neutron/openvswitch-agent.log at the domU compute node:
2014-06-17 11:26:52.431 1346 ERROR neutron.agent.linux.ovsdb_monitor [-] Error
received from ovsdb monitor: Traceback (most
Public bug reported:
When I tried to launch a stack, its status showed as failed. Now if I try
to delete that stack, it gives a ValueError at /project/stacks/.
** Affects: horizon
Importance: Undecided
Status: New
** Attachment added: ValueError.png
Public bug reported:
If you run the nova shelve API in a nova-cells environment, it throws the
following error:
Nova cell (n-cell-child) Logs:
2014-07-06 23:57:13.445 ERROR nova.cells.messaging
[req-a689a1a1-4634-4634-974a-7343b5554f46 admin admin] Error processing message
locally: save() got an
Public bug reported:
Description of problem:
===
I configured a load balancing pool with 2 members using round robin mechanism.
My expectation was that each request would be directed to the next available
pool member.
Meaning, the expected result was:
Req #1 - Member #1
Req #2
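The expected round-robin distribution can be sketched as a minimal illustration (not the actual LBaaS code):

```python
from itertools import cycle

# Round-robin sketch: each request is handed to the next pool member in turn.
members = ["Member #1", "Member #2"]
rr = cycle(members)

# Distribution expected for four consecutive requests.
assignments = [next(rr) for _ in range(4)]
print(assignments)  # ['Member #1', 'Member #2', 'Member #1', 'Member #2']
```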
Public bug reported:
We should catch InstanceUserDataTooLarge when we create an instance,
because compute/api.py might raise this exception.
** Affects: nova
Importance: Undecided
Assignee: jichenjc (jichenjc)
Status: New
** Tags: api
** Tags added: api
** Changed in: nova
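A sketch of the proposed handling, with stand-in classes for nova.exception.InstanceUserDataTooLarge and webob's HTTPBadRequest (the real fix lives in the nova API layer; names and the size limit here are illustrative):

```python
class InstanceUserDataTooLarge(Exception):
    """Stand-in for nova.exception.InstanceUserDataTooLarge."""

class HTTPBadRequest(Exception):
    """Stand-in for webob.exc.HTTPBadRequest."""

def create_server(user_data):
    # compute/api.py may raise this when user_data exceeds the limit
    if len(user_data) > 65535:
        raise InstanceUserDataTooLarge("user data too large")
    return {"status": "building"}

def create_handler(user_data):
    try:
        return create_server(user_data)
    except InstanceUserDataTooLarge as e:
        # catching here turns an unhandled 500 into a clean 400 for the client
        raise HTTPBadRequest(str(e))
```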
Public bug reported:
1. Create a new tenant
2. Create a network - Add subnet (10.10.10.0/24) in the network
3. Create two VMs (VM1 and VM2) in the network, in its default security group.
4. Now update the VM1 port with an allowed address pair IP (20.20.20.2):
neutron port-update
Public bug reported:
When updating network quota using the following command:
neutron quota-update --network 100
the client outputs:
Request Failed: internal server error while processing your request.
This request fails since the parameter exceeds the integer range. An
error message
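A sketch of the server-side validation that would turn an out-of-range quota value into a clean client error (assumed limit: the signed 32-bit INTEGER column that typically backs the quota value):

```python
# Bounds of a signed 32-bit SQL INTEGER column (assumption for this sketch).
INT_MIN, INT_MAX = -(2 ** 31), 2 ** 31 - 1

def validate_quota(value):
    # Reject values the database column cannot store, instead of letting
    # the insert fail deep in the server as an internal error.
    value = int(value)
    if not (INT_MIN <= value <= INT_MAX):
        raise ValueError("quota value %d is out of integer range" % value)
    return value
```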
Public bug reported:
Description of problem:
When Glance is configured to work with the rbd backend (Ceph) and the Rados
packages (python-ceph) are not installed, the error that Glance's logs show
is:
2014-07-07 11:28:27.982 TRACE glance.api.v1.upload_utils Traceback (most
recent call
Released in oslo.vmware 0.3.
** Changed in: oslo.vmware
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289627
Title:
This is not a problem any more, so I am taking this bug off the radar.
** Changed in: horizon
Status: New => Invalid
Public bug reported:
Upon pressing that button, a modal spinner appears, then disappears, and
here we are again with an empty 'Group Members' table.
Having investigated it a bit, I found that the `data` string that is
appended here
Public bug reported:
Reproduce Procedure:
1. Login
2. Do nothing and wait till session timeout
3. Login again. Horizon asks you to log in twice. The first time you log in with
the correct user/password, it shows a session timeout. You log in again, and it
enters the dashboard as expected.
The expected
Just found out it should be an image issue.
** Changed in: nova
Status: New => Invalid
https://bugs.launchpad.net/bugs/1310513
Title:
Unable to
You have been subscribed to a public bug:
I booted an instance from a volume, which booted successfully.
Now, when I try to attach another volume to the same instance, it fails.
See the stack trace:
2014-07-04 08:56:11.391 TRACE oslo.messaging.rpc.dispatcher raise
** Project changed: cinder => nova
https://bugs.launchpad.net/bugs/1337821
Title:
Volume attach fails while attaching to an instance that is booted from
Public bug reported:
From the policy.json of the V3 API:
"admin_and_matching_domain_id": "rule:admin_required and
domain_id:%(domain_id)s",
"identity:list_projects": "rule:admin_required and domain_id:%(domain_id)s",
...
"identity:list_users": "rule:cloud_admin or
Public bug reported:
When the interface-attach action is run, it may be passed in a network
(but no port identifier). Therefore, the action allocates a port on
that network. However, if the attach method fails for some reason, the
port is not cleaned up.
This behavior would be appropriate if
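The cleanup being asked for can be sketched generically (all function names below are hypothetical placeholders, not nova's actual API):

```python
def attach_interface(allocate_port, do_attach, deallocate_port, network_id):
    """Allocate a port for the attach, and release it if the attach fails."""
    port_id = allocate_port(network_id)
    try:
        return do_attach(port_id)
    except Exception:
        # Only clean up ports this call created itself; a caller-supplied
        # port would be left alone.
        deallocate_port(port_id)
        raise
```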
This should belong to nova, so I moved it to nova and set the status back to New.
** Changed in: nova
Status: Invalid => New
https://bugs.launchpad.net/bugs/1337821
Public bug reported:
First, I use the glance CLI to upload an image:
glance image-create --name myimage --disk-format=raw --container-format=bare
--file /path/to/file.img
At the same time, I use the v2 api to delete the image
curl -i -X DELETE -H "X-Auth-Token: $TOKEN_ID" -H 'Content-Type:
This bug was fixed in the package nova - 1:2014.1.1-0ubuntu1
---
nova (1:2014.1.1-0ubuntu1) trusty; urgency=medium
* Resynchronize with stable/icehouse (867341f) (LP: #1328134):
- [867341f] Fix security group race condition while listing and deleting
rules
- [ffcb176]
** Changed in: ossa
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/1331912
Title:
[OSSA 2014-022] V2 Trusts allow trustee to emulate trustor
** Information type changed from Private Security to Public
** Changed in: ossa
Status: Incomplete => Invalid
** Tags added: security
Public bug reported:
In Neutron there is no retry logic for the case where a DB deadlock occurs.
If a deadlock occurs, the operation should be retried.
** Affects: neutron
Importance: Undecided
Assignee: Rossella Sblendido (rossella-o)
Status: In Progress
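The retry logic being proposed can be sketched as a decorator (a minimal illustration with a stand-in exception class, not the eventual oslo.db implementation):

```python
import functools
import random
import time

class DBDeadlock(Exception):
    """Stand-in for the deadlock exception raised by the DB layer."""

def retry_on_deadlock(max_retries=3):
    # Decorator sketch: retry the wrapped DB operation when a deadlock is
    # detected, with a short randomized backoff between attempts.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise
                    time.sleep(random.uniform(0, 0.01) * (attempt + 1))
        return wrapper
    return decorator
```

oslo.db later grew a `wrap_db_retry` helper covering this pattern, which is the sort of facility a real fix would use.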
Public bug reported:
When setting resize_rootfs to 'noblock', cloud-init should fork a new
process and continue with its own initialization. However, it
seems that this is currently broken, as you can see from these logs that it
still blocks on it:
Jul 7 12:34:20 localhost [CLOUDINIT]
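The intended 'noblock' behavior can be sketched as follows (an illustration of the fork-and-continue idea, not cloud-init's actual code):

```python
import os

def run_in_background(fn):
    """Fork; run fn in the child, return immediately in the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: do the long-running work (e.g. the rootfs resize), then
        # exit without running the parent's cleanup handlers.
        try:
            fn()
        finally:
            os._exit(0)
    # Parent: do not wait, so initialization continues unblocked.
    return pid
```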
Public bug reported:
Integration tests using Selenium WebDriver currently run in a medium-size
window (Selenium's default size for the Firefox browser).
Maximizing Firefox's window size requires a simple change and will improve
how the tests display at run time.
TODO:
- Add
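The change amounts to one call to Selenium's `maximize_window()` right after creating the driver; a stub driver stands in below so the sketch runs without a browser:

```python
class StubDriver:
    """Stands in for selenium.webdriver.Firefox in this sketch."""
    def __init__(self):
        self.maximized = False

    def maximize_window(self):
        # Same method name as Selenium's WebDriver exposes.
        self.maximized = True

def create_driver(driver_cls=StubDriver):
    driver = driver_cls()
    # The proposed one-line change: maximize the window instead of
    # keeping Firefox's default medium size.
    driver.maximize_window()
    return driver
```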
Public bug reported:
This applies only when the nova/neutron event reporting mechanism is
enabled.
It has been observed that in some cases Nova spawns an instance without
waiting for network-vif-plugged event, even if the vif was unplugged and
then plugged again.
This happens because the status
Contrary to what is claimed in the bug description, the actual root cause
is a different one, and it is in neutron.
For events like rebuilding or rebooting an instance a VIF disappears and
reappears rather quickly.
In this case the OVS agent loop starts processing the VIF, and then it skips
Public bug reported:
The launched_at instance field should be populated with the launch time
in the compute.instance.create.end notification. Since the move to
build_and_run_instance this field is no longer populated when the
notification is sent.
** Affects: nova
Importance: Undecided
After discussing with Andrew and Thierry, I'm convinced that the
potential behavior change introduced by a backport of that mitigating
commit, when weighed against the amount of social engineering needed to
exploit this in Havana, means this bug is probably better just
documented as a known
Public bug reported:
When live migration is performed on instances with volume attached, nova
sends two initiator commands and one terminate connection. This causes
orphan access records in some storage arrays ( tested with Dell
EqualLogic Driver).
Steps to reproduce:
1. Have one controller and
Public bug reported:
It would be useful for keystone to support a healthcheck URL for
consumption by load balancers.
This middleware should provide the ability to manually disable the
service via the existence of a file on the system's local disk. This
middleware can also be extended [1] to
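A minimal WSGI sketch of the middleware being proposed (the route and the flag-file path are assumptions for illustration): report 503 when the flag file exists, so a load balancer pulls the node out of rotation, else 200.

```python
import os

class Healthcheck:
    """WSGI middleware sketch: /healthcheck, disabled by a local flag file."""

    def __init__(self, app, path="/healthcheck",
                 disable_by_file_path="/etc/keystone/healthcheck_disable"):
        self.app = app
        self.path = path
        self.disable_by_file_path = disable_by_file_path

    def __call__(self, environ, start_response):
        if environ.get("PATH_INFO") != self.path:
            return self.app(environ, start_response)
        if os.path.exists(self.disable_by_file_path):
            # Operator dropped the flag file: tell the LB to stop sending
            # traffic here without stopping the service.
            start_response("503 Service Unavailable",
                           [("Content-Type", "text/plain")])
            return [b"DISABLED BY FILE"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"OK"]
```

A design along these lines later shipped in oslo.middleware's healthcheck middleware.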
The problem described in the bug seems to be a new feature needed to increase
performance at scale.
It can't really be considered a bug, because the described behavior is as
designed.
I suggest working on this problem in the scope of an appropriate blueprint.
** Changed in: neutron
** Changed in: nova
Status: In Progress => Invalid
https://bugs.launchpad.net/bugs/1338737
Title:
nova needs to require oslotest in
It looks like swift's middleware could be moved to oslo, as there's
nothing swift-specific about it. There's nothing stopping you from
deploying that middleware in front of Keystone or swift, regardless of
whether it lives in oslo or swift.
** Description changed:
Would be useful for keystone
Public bug reported:
It takes too much time to upload to the VMware store. The bits are uploaded to
Glance, then go through vCenter, then through ESXi, to finally land on the
datastore.
The upload time is poor; in addition, uploading through vCenter adds
unnecessary load on the vCenter
I need to bring this back. Right now oslotest is a runtime dependency of
nova which is wrong since oslotest.base is only used for nova unit
tests, so it should be in test-requirements.txt.
This is especially bad for downstream packagers/deployers because the
runtime dependencies for oslotest
Public bug reported:
If I resize an instance to a flavor with more CPUs than should be
possible, even more CPUs than cpu_allocation_ratio would allow, then
nova proceeds with the resize; the instance state goes to error and
it does not exist anymore in the hypervisor.
My environment:
Nova 2014.1
Public bug reported:
In the Cisco N1KV plugin, a port gets created while launching a VM instance.
But upon a launch failure, the ports are not cleaned up in the
except block.
The issue can easily be recreated by creating a network without a subnet
and then using that network for VM creation.
**
Public bug reported:
After switching to a different cloud, Django's old sessionid cookie causes
Horizon to greet you with the 500 Internal Server Error page. Clearing browser
cookies or deleting just the sessionid cookie (e.g. in Chrome Dev Tools
Resources Cookies) and refreshing is a
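A defensive fix can be sketched as: treat an undecodable session cookie as an empty session rather than letting the exception surface as a 500 (names below are illustrative stand-ins, not Django's actual internals):

```python
class SessionDecodeError(Exception):
    """Stand-in for whatever the session backend raises on a stale cookie."""

def load_session(decode, cookie):
    # decode() is a placeholder for the session backend's decode step.
    try:
        return decode(cookie)
    except SessionDecodeError:
        # Stale sessionid left over from another cloud: start a fresh,
        # empty session instead of erroring out.
        return {}
```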
Public bug reported:
The trace for the failure is here:
http://logs.openstack.org/57/105257/4/check/check-tempest-dsvm-postgres-
full/f72b818/logs/tempest.txt.gz?level=TRACE#_2014-07-07_23_43_37_250
This is the console error:
2014-07-07 23:44:59.590 | tearDownClass
Fix proposed to branch: master
Review: https://review.openstack.org/105311
** Changed in: keystone
Status: Opinion => In Progress
** Changed in: keystone
Assignee: (unassigned) => John Dewey (retr0h)
Public bug reported:
There is a corner case that the NSX api_client code does not handle
today, where the NSX controller can return a 307 to redirect the
request to another controller. At this point neutron-server issues this
request to the redirected controller and usually this works
Public bug reported:
The Nuage plugin stores a mapping of neutron and VSD IDs for every neutron
resource.
This bug is to remove the mapping, to avoid storing redundant data and also
to avoid upgrade and out-of-sync issues.
** Affects: neutron
Importance: Undecided
Assignee: Sayaji
Public bug reported:
The help_text for create subnet's allocation_pools is garbled:
`&lt;` and `&gt;` should be converted to `<` and `>`, but they are not
in the .po files.
** Affects: horizon
Importance: Undecided
Status: New
[Expired for neutron because there has been no activity for 60 days.]
** Changed in: neutron
Status: Incomplete => Expired
https://bugs.launchpad.net/bugs/1314130
[Expired for neutron because there has been no activity for 60 days.]
** Changed in: neutron
Status: Incomplete => Expired
https://bugs.launchpad.net/bugs/1308713
[Expired for neutron because there has been no activity for 60 days.]
** Changed in: neutron
Status: Incomplete => Expired
https://bugs.launchpad.net/bugs/1312521
Public bug reported:
Even though the default policy.json restricts the creation of external
networks to admin_only, any user can update a network as external.
I could verify this with the following test (PseudoPython):
project: ProjectA
user: ProjectMemberA has Member role on project ProjectA.
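A fix along these lines would add an attribute-specific rule to neutron's policy.json so that updating router:external is also admin-only, matching the existing create rule (a sketch; the exact rule name should be checked against the policy engine):

```json
{
    "create_network:router:external": "rule:admin_only",
    "update_network:router:external": "rule:admin_only"
}
```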
Public bug reported:
We are using a non-administrator account to connect to vCenter when starting
the compute service. In vCenter we defined a separate role (you can see it in
the attachment) for this account and allowed it to access only the cluster
that is used to provision VMs, separate from the management cluster.
** Also affects: ossn
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1316822
Title:
soft reboot of instance does not ensure