nova - instance_info_caches - This looks like it is going to be a problem:
all the network info is stored in here.
Although it is called a cache, it does not seem to be updated if you delete
its contents :(
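For anyone who wants to poke at it: the cached data sits in the network_info column of that table in the nova database. A minimal way to inspect it for one instance could look like this (the UUID is a placeholder; verify column names against your Nova release's schema):

```shell
# Dump the cached network info for a single instance from the nova DB
# (replace the UUID placeholder with a real instance UUID)
mysql nova -e "SELECT network_info FROM instance_info_caches \
  WHERE instance_uuid = '<instance-uuid>' AND deleted = 0;"
```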
Anyone tried this before? :)
Cheers,
Robert van Leeuwen
>> George Shuklin
> For high CPU usage from ovs-vswitchd there is a simple solution: upgrade
> to anything like 1.11 or higher. There is a huge problem with ovs 1.4,
> 1.9, 1.10 - they suck at real-world network activity. All the new versions
> (2.0, 2.1) are much better and work great.
Well no, after updating to 2.1 we actually have increased openvswitch CPU
consumption on some machines.
> To help people help you:
> - Are you running with ML2 plugin + ovs mechanism? (yes according to
> below details)
Currently we run without ML2, but since we run Icehouse we should be able
to migrate to ML2 (and we have).
I am testing in our dev environment with ML2 enabled.
> - Do you have flat, vlan
> > Well no, after updating to 2.1 we actually have increased openvswitch cpu
> > consumption on some machines.
> What OVS kernel module are you running? How did you update to 2.1?
We build our own RPM from upstream.
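To double-check what actually ended up running after a rebuild like that, it helps to compare the userspace and kernel datapath versions; a quick sketch (assuming the kernel module is named openvswitch):

```shell
# Userspace version of the daemon
ovs-vswitchd --version
# Version of the loaded kernel datapath module
modinfo openvswitch | grep -i '^version'
```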
size.
Cheers,
Robert van Leeuwen
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
for the moment but are planning to move to
ML2 in the next month.
Not exactly looking forward to that process :-(
Cheers,
Robert van Leeuwen
because there is no useful info in it.
Looking at our Logstash grok filters I could probably make some suggestions
on what we find useful and what we do not :)
Cheers,
Robert van Leeuwen
might not get feature parity with
different drivers.
>So basically my question is - which solution would you prefer as a cloud op?
As others before me I also lean toward option one.
Cheers,
Robert van Leeuwen
's.
Maybe people from Ubuntu & Red Hat reading this could give some
suggestions (which I would be interested in ;)...
Cheers,
Robert van Leeuwen
http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/
Build the RPM, install it, and install the librbd1 package.
You need to symlink the .so file for it to work:
ln -s /usr/lib64/librbd.so.1 /usr/lib64/qemu/librbd.so.1
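As a rough sketch of those steps (the SRPM filename below is a placeholder; exact package names depend on the version you grab from that URL):

```shell
# Rebuild the qemu-kvm SRPM from the RHEV repo, then install it
rpmbuild --rebuild qemu-kvm-rhev-<version>.el6.src.rpm   # placeholder name
rpm -Uvh ~/rpmbuild/RPMS/x86_64/qemu-kvm-*.rpm
yum install librbd1
# qemu expects the library under its own lib dir, hence the symlink:
ln -s /usr/lib64/librbd.so.1 /usr/lib64/qemu/librbd.so.1
```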
Cheers,
Robert van Leeuwen
> +1 for a CentOS 6 deployment, you need to upgrade the qemu, not the kernel.
Slightly related:
Note that there are currently no Juno RDO packages for CentOS 6.
If you are building a new OpenStack deployment, I would certainly start with
CentOS/RHEL 7.
Cheers,
Robert van Leeuwen
before or have a ready-made tool to clean up these
"orphaned" interfaces?
Thx,
Robert van Leeuwen
>>I just noticed that the bridge devices and ovs devices are no longer
>>removed after instance deletion.
>>So using brctl show and ovs-vsctl show the devices are still there.
>The tool already exists; it is called neutron-ovs-cleanup.
> You will have the same issues with name spaces but we also
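For reference, a typical invocation of that cleanup tool (stop the neutron agents first; the config file paths are the usual defaults and may differ on your setup):

```shell
# Check for leftover devices first
brctl show
ovs-vsctl show
# Remove the ports the agents left behind
neutron-ovs-cleanup --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
```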
Indeed you can do encapsulation in software.
You will need network cards that can do VXLAN offloading to get to speeds
similar to hardware (10 Gbit).
Cheers,
Robert van Leeuwen
> Hi Robert,
>
>how do I check that?
I would take a look at the spec sheet of the NIC.
Since it is a pretty recent feature, it is probably not supported unless you
specifically shopped for a card with support...
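Besides the spec sheet, ethtool can usually tell you; the feature flag below is what recent kernels report for VXLAN TX offload, but the exact name can vary per driver:

```shell
# "on" means the NIC can segment VXLAN-encapsulated traffic in hardware
ethtool -k eth0 | grep tx-udp_tnl-segmentation
```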
Cheers,
Robert
> After our icehouse -> juno upgrade we are noticing sporadic but frequent
> errors from nova-metadata when trying to serve metadata requests. The
> error is the following:
>Is anyone else noticing this or frequent read timeouts when talking to
>neutron? Have you found a solution?
> What