Public bug reported:
Description
===
In my Ironic environment, my baremetal node has two interfaces, so we register
two ports with two MAC addresses in Ironic. When a tenant user boots a baremetal
server with one network, Nova will randomly choose one of the MACs, and the
baremetal server will randoml
Public bug reported:
Description
===
When booting a VM with an ephemeral disk, the default format is vfat, but when
the image has the metadata os_type=linux, ext4 should be used. In fact, the
ephemeral disk is still formatted as vfat.
Steps to reproduce
==
1. Do not define virt_mkfs in nov
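The behavior described above (ext4 when the image metadata says os_type=linux, vfat otherwise) can be sketched as follows. This is a minimal sketch, not Nova's actual implementation; the function name and the configured_format parameter (standing in for an explicit virt_mkfs setting) are hypothetical:

```python
# Minimal sketch of the expected default-filesystem selection for
# ephemeral disks. This is illustrative only, not Nova's real code.

def default_fs_for_ephemeral(os_type, configured_format=None):
    """Pick the mkfs format for an ephemeral disk.

    configured_format stands in for an explicit virt_mkfs setting;
    when it is unset, the image's os_type metadata should decide.
    """
    if configured_format:
        return configured_format
    if os_type == "linux":
        return "ext4"      # expected behavior per the bug report
    return "vfat"          # default fallback

print(default_fs_for_ephemeral("linux"))  # expected: ext4
```

The bug is that, in the reporter's environment, the os_type=linux branch is effectively never taken and vfat is used regardless.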
Public bug reported:
When a user creates a network with an isolated subnet, the DHCP agent may
create the metadata namespace proxy with a router id.
How to reproduce:
1. create a router: R1
2. create a network: Net1
3. create two subnetworks: Sub1, Sub2
4. attach Sub1 to R1. (do not attach Sub2)
if you deploy dhcp-agent and l
I reproduced this in Juno.
** Changed in: nova
Status: Expired => Confirmed
** Changed in: nova
Assignee: (unassigned) => Liping Mao (limao)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute
"Conflict" is better:
In neutron/extensions/metering.py:

class MeteringLabelRuleOverlaps(qexception.NotFound):
    message = _("Metering label rule with remote_ip_prefix "
                "%(remote_ip_prefix)s overlaps another")
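The suggested change can be sketched as a minimal self-contained example. The Conflict base class below is a stand-in for neutron's qexception.Conflict (which maps to HTTP 409), not the real neutron class:

```python
# Sketch of the suggested fix: derive the overlap error from a
# Conflict-style exception (HTTP 409) instead of NotFound (HTTP 404),
# since an overlapping rule is a conflict, not a missing resource.
# Conflict here is a stand-in for neutron's qexception.Conflict.

class Conflict(Exception):
    message = "Conflict"

    def __init__(self, **kwargs):
        # Interpolate keyword arguments into the message template.
        super().__init__(self.message % kwargs)

class MeteringLabelRuleOverlaps(Conflict):
    message = ("Metering label rule with remote_ip_prefix "
               "%(remote_ip_prefix)s overlaps another")

err = MeteringLabelRuleOverlaps(remote_ip_prefix="10.0.0.0/24")
print(err)  # Metering label rule with remote_ip_prefix 10.0.0.0/24 overlaps another
```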
** Affects: neutron
Importance: Undecided
Public bug reported:
We have the following config in etc/metering_agent.ini:
# driver =
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
But in code, the default value of the driver in metering_agent.py is:
cfg.StrOpt('driver',
default=
eans unlimited.')),
    cfg.IntOpt('quota_firewall_rule',
               default=-1,
               help=_('Number of firewall rules allowed per tenant. '
                      'A negative value means unlimited.')),
]
** Affects: neutron
Importance: Undecided
work node,
in the backend we will have a large number of iptables rules. This will make
the network node crash or become very slow.
So I suggest we use another number, not "-1", here.
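The "-1 means unlimited" convention the report objects to can be sketched as follows (the helper names are hypothetical, not neutron's quota code):

```python
# Sketch of the "-1 means unlimited" quota convention. With an
# unlimited quota, nothing caps how many firewall rules (and hence
# backend iptables rules) a tenant can create, which is the risk
# the report describes.

UNLIMITED = -1

def within_quota(current_count, requested, quota):
    """Return True if creating `requested` more rules is allowed."""
    if quota == UNLIMITED:   # negative value: no cap at all
        return True
    return current_count + requested <= quota

print(within_quota(100, 1, UNLIMITED))  # True: unlimited never rejects
print(within_quota(100, 1, 100))        # False: finite cap enforced
```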
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: N
Public bug reported:
When I merged the code at https://review.openstack.org/#/c/107731/ ,
I got the following LB unit test error in stable/havana:
2014-07-17 23:04:57.807 | pythonlogging:'neutron.api.extensions': {{{2014-07-17
22:59:34,750ERROR [neutron.api.extensions] Extension path 'unit/extensions
gentschedulers_db.py
mechanism_fslsdn.py
cisco_csr_mock.py
fake.py
database_stubs.py
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Liping Mao (limao)
;Maximum number of routes")),
]
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: In Progress
P and Mac Address match an address pair entry.")
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Liping Mao (limao)
PT all -- * * 0.0.0.0/0 0.0.0.0/0
MAC $MAC_ADDRESS
This rule will match all the IPs, but we should not allow all the IPs here ...
So I think we should not add this rule.
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
S
** Changed in: neutron
Status: In Progress => Invalid
https://bugs.launchpad.net/bugs/1323151
Title:
Revises error in neutron / neutron / db / migration /
alembic
Public bug reported:
In stable/havana branch :
https://github.com/openstack/neutron/blob/615cb67e0082b6a2d2ab1c91623e9b2a20ddedec/neutron/db/migration/alembic_migrations/versions/havana_release.py
We have the following code:
"""havana
Revision ID: havana
Revises: 1341ed32cc1e
Create Date: 2013-1
100.100.100.100 again.
We will find that we can't access 100.100.100.100 the second time.
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: New
** Bug watch added: Red Hat Bugzilla #963927
https://bugzilla.redhat.com/show_bug.cgi?id=9
route rules.
So the vip can't work.
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao)
Status: New
** Changed in: neutron
Assignee: (unassigned) => Liping Mao (limao)
Public bug reported:
When we use the default config in neutron.conf:
# report_interval = 4
# agent_down_time = 5
When I boot VMs, I find that sometimes the port status of one VM is DOWN while
the other VMs are working well.
I got the following log in /var/log/neutron/openvswitch.log:
2014-03-28 09:50:45.201
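With these defaults an agent is declared dead if a single state report is delayed by little more than a second. A common mitigation (my assumption, not stated in the report) is to keep agent_down_time well above report_interval, typically at least twice it:

```ini
# neutron.conf sketch: give agents more slack before they are
# declared down. The values are illustrative, not official defaults.
[DEFAULT]
report_interval = 30
agent_down_time = 75
```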
Public bug reported:
Hi all,
When neutron-lbaas-agent uses haproxy and the haproxy process itself crashes,
neutron-lbaas-agent will not restart haproxy.
I think neutron-lbaas-agent needs to restart haproxy.
** Affects: neutron
Importance: Undecided
Assignee: Liping Mao (limao
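The restart behavior the report asks for can be sketched as a liveness check plus respawn loop. This is a minimal sketch; the function names and the respawn callback are hypothetical, not neutron-lbaas-agent code:

```python
# Sketch of the requested behavior: periodically check whether the
# haproxy process is still alive and restart it if not. Illustrative
# only; not neutron-lbaas-agent's actual implementation.
import os

def process_alive(pid):
    """Return True if a process with this pid exists."""
    try:
        os.kill(pid, 0)   # signal 0: existence check, sends nothing
    except (OSError, ProcessLookupError):
        return False
    return True

def ensure_haproxy(pid, respawn):
    """Respawn haproxy (via the supplied callback) if it has died."""
    if not process_alive(pid):
        return respawn()  # e.g. re-exec haproxy with the pool's config
    return pid

print(process_alive(os.getpid()))  # True: the current process exists
```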
Public bug reported:
Version: Havana
I have two controllers in my environment and deploy glance-api on each
controller.
In nova.conf :
glance_api_servers=controller2:9292,controller2:9292
glance_num_retries = 2
When I kill glance on controller2 and then run "nova image-list", I will get
err
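The retry behavior implied by glance_num_retries can be sketched as follows. This is a minimal sketch; fake_fetch stands in for the real Glance client call, and the second server name is illustrative (the reporter's config lists controller2 twice):

```python
# Sketch of client-side retry across multiple glance-api endpoints,
# mirroring glance_api_servers / glance_num_retries. Illustrative
# only; not Nova's actual glance client code.
import itertools

def call_with_retries(api_servers, num_retries, fetch):
    """Try up to num_retries + 1 attempts, rotating through servers."""
    servers = itertools.cycle(api_servers)
    last_error = None
    for _ in range(num_retries + 1):
        server = next(servers)
        try:
            return fetch(server)
        except ConnectionError as exc:   # this server is down, rotate
            last_error = exc
    raise last_error

def fake_fetch(server):
    # Simulate controller2's glance being killed, as in the report.
    if server == "controller2:9292":
        raise ConnectionError(server)
    return ["image-1"]

print(call_with_retries(["controller2:9292", "controller1:9292"],
                        2, fake_fetch))  # ['image-1'] via controller1
```

Note that if every entry in glance_api_servers points at the same dead endpoint, every retry fails, which matches the error the reporter sees.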
Public bug reported:
My environment is :
Nova api --https--> haproxy (SSL proxy) --http--> Glance api1
                                        |--http--> Glance api2
I use CentOS + RDO RPM packages (Havana); my haproxy is 1.5_dev21.
It can work well if I configure nova.conf as follows:
glance
Public bug reported:
In neutron/agent/metadata/agent.py, we have:

url = urlparse.urlunsplit((
    'http',
    '%s:%s' % (self.conf.nova_metadata_ip,
               self.conf.nova_metadata_port),
    req.path_info,
    req.query_string,
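For context, urlunsplit assembles a URL from a 5-tuple (scheme, netloc, path, query, fragment). A standalone illustration with example values (urllib.parse is the Python 3 spelling of the urlparse module used above; the IP, port, and path are illustrative, not read from a real agent):

```python
# Standalone illustration of how urlunsplit builds the metadata URL
# from its components. Values are examples, not agent configuration.
from urllib.parse import urlunsplit

nova_metadata_ip = "169.254.169.254"
nova_metadata_port = 8775

url = urlunsplit((
    "http",
    "%s:%s" % (nova_metadata_ip, nova_metadata_port),
    "/latest/meta-data",     # plays the role of req.path_info
    "instance-id=i-000001",  # plays the role of req.query_string
    "",                      # fragment (unused)
))
print(url)  # http://169.254.169.254:8775/latest/meta-data?instance-id=i-000001
```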