I think these options will apply only to VIFs (logical_ports with
an empty 'type').
--
Babu
On Thursday 31 December 2015 04:34 AM, Ben Pfaff wrote:
I was hoping to keep the options column for options that are specific
to a particular type of logical port. Do you think that the QoS
option
On Mon, Dec 28, 2015 at 3:57 AM, Ofer Ben Yacov wrote:
> Add unit test for passive mode.
>
> ---
> python/ovs/db/idl.py | 18 +++---
> python/ovs/jsonrpc.py | 19 +++
> python/ovs/stream.py | 47 +++
> tests/ovsdb-idl.at
On Wed, Dec 23, 2015 at 4:23 PM, Ben Pfaff wrote:
> On Wed, Dec 23, 2015 at 03:41:04PM -0500, Russell Bryant wrote:
> > On 12/22/2015 04:17 PM, Ben Pfaff wrote:
> > > Until now, the flow table treated localnet logical ports that have a
> VLAN
> > > quite differently from those that don't. The on
On Wed, Dec 23, 2015 at 4:16 PM, Ben Pfaff wrote:
> On Wed, Dec 23, 2015 at 03:39:27PM -0500, Russell Bryant wrote:
> > Add a test case for OVN localnet ports. We set up two hypervisors
> > connected by a network. We create two ports on each hypervisor and
> > attach them to this network using
On 12/23/15, 1:34 PM, "dev on behalf of Guru Shetty" wrote:
>Hello All,
> I just looked at the OVN workflow for implementing VTEP schema (L2 only)
>and at first glance it feels wrong. There is possibly a reason for the way
>it has been implemented, but this is how I see it.
>
>The current w
I was hoping to keep the options column for options that are specific to a
particular type of logical port. Do you think that the QoS options will be
type-specific or generic?
On December 30, 2015 3:51:13 AM CST, Babu Shanmugam wrote:
>I am trying to implement the QOS APIs of openstack neutron
Hello Ilya,
I applied the patch, but I am still getting low throughput and the message
"ofproto_dpif_upcall(pmd101)|WARN|upcall_cb failure: ukey installation
fails" in the ovs log.
On 30 December 2015 at 09:59, Ilya Maximets wrote:
> As I see, this is exactly the same bug as fixed in
> commit e
As I see, this is exactly the same bug as fixed in
commit e4e74c3a2b ("dpif-netdev: Purge all ukeys when reconfigure pmd.")
but reproduced while only reconfiguring pmd threads, without restarting.
Try this patch as a workaround:
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index fe2cd4b.
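The idea behind the workaround (purge stale ukeys when pmd threads are reconfigured, as commit e4e74c3a2b does for a full restart) can be illustrated with a small standalone sketch. The class and method names below are hypothetical, not the actual dpif-netdev code:

```python
# Toy model of the bug: each pmd thread caches installed ukeys, and a
# reconfigure must purge them, or later installation attempts fail with
# the "ukey installation fails" warning seen in the log.

class UkeyCache:
    def __init__(self):
        # (pmd_id, flow) -> ukey; stands in for the per-pmd ukey maps
        self._installed = {}

    def try_install(self, pmd_id, flow):
        """Return True if the ukey was installed, False if a stale one
        already exists for this (pmd, flow)."""
        key = (pmd_id, flow)
        if key in self._installed:
            return False
        self._installed[key] = object()
        return True

    def purge_pmd(self, pmd_id):
        # What the workaround does on reconfigure: drop every ukey that
        # belongs to the reconfigured pmd thread.
        for key in [k for k in self._installed if k[0] == pmd_id]:
            del self._installed[key]

cache = UkeyCache()
assert cache.try_install(101, "udp-flow-1")
# Without a purge, the same flow handled again by pmd 101 after a
# reconfigure fails to install its ukey:
assert not cache.try_install(101, "udp-flow-1")
cache.purge_pmd(101)
assert cache.try_install(101, "udp-flow-1")
```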
Add a new field called 'protocol' to the flow tunnel structure to verify the
validity of tunnel metadata. This field avoids the need to reset and validate
the entire IPv4/IPv6 tunnel destination address, which caused a serious
performance drop.
Signed-off-by: Sugesh Chandran
---
lib/flow.c
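The intent of the patch — test one small 'protocol' marker instead of scanning the whole tunnel destination address — can be sketched in a few lines. This is a Python illustration with hypothetical names; the real change is to the C flow tunnel structure in lib/flow.c:

```python
# Sketch of the validity check the patch enables.  Previously the code had
# to test whether the entire 16-byte IPv6 (or 4-byte IPv4) destination
# address was zero; with an explicit protocol marker, validity becomes a
# single integer comparison.

ETH_P_IP = 0x0800    # real EtherType values, used here as the marker
ETH_P_IPV6 = 0x86dd

class FlowTunnel:
    def __init__(self, protocol=0, ipv6_dst=b"\x00" * 16):
        self.protocol = protocol      # 0 means "no tunnel metadata"
        self.ipv6_dst = ipv6_dst

    def valid_old(self):
        # Old approach: scan the whole destination address.
        return self.ipv6_dst != b"\x00" * 16

    def valid_new(self):
        # New approach: one integer compare.
        return self.protocol != 0

t = FlowTunnel()
assert not t.valid_old() and not t.valid_new()
t = FlowTunnel(protocol=ETH_P_IPV6, ipv6_dst=b"\x20" + b"\x00" * 15)
assert t.valid_old() and t.valid_new()
```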
I have no idea; ovs had been running for a long time when I took that data.
I restarted everything and now the main thread shows:
main thread:
emc hits:1316
megaflow hits:0
miss:681
lost:1348
polling cycles:7226622 (19.41%)
processing cycles:30002635 (80.59%)
avg cycles per
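For reference, the ratios implied by the counters above can be recomputed directly (a standalone sketch using the numbers pasted from pmd-stats-show):

```python
# Counters copied from the pmd-stats-show output above (main thread).
emc_hits = 1316
megaflow_hits = 0
misses = 681

# Share of packets classified by the exact-match cache.
processed = emc_hits + megaflow_hits + misses
emc_hit_ratio = emc_hits / processed
print(f"EMC hit ratio: {emc_hit_ratio:.1%}")   # ~65.9%

# The polling/processing split is derived the same way.
polling = 7226622
processing = 30002635
share = processing / (polling + processing)
print(f"processing share: {share:.2%}")        # 80.59%, matching the output
```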
On 30.12.2015 17:32, Mauricio Vásquez wrote:
> I just checked and the traffic is generated after everything is already set
> up, ports and flows.
And what are these 50K packets in that case?
main thread:
emc hits:20341
megaflow hits:0
miss:10193
lost:20372
>
> On 30 December 201
I just checked and the traffic is generated after everything is already set
up, ports and flows.
On 30 December 2015 at 08:50, Ilya Maximets wrote:
> The transmission starts before the addition of dpdkr4 to ovs?
>
> On 30.12.2015 16:31, Mauricio Vásquez wrote:
> > Dear Ilya,
> >
> > ovs-appct
The transmission starts before the addition of dpdkr4 to ovs?
On 30.12.2015 16:31, Mauricio Vásquez wrote:
> Dear Ilya,
>
> ovs-appctl dpif-netdev/pmd-stats-show -> http://pastebin.com/k1nnMfQZ
> ovs-appctl coverage/show -> http://pastebin.com/617CYR4n
> ovs-appctl dpctl/show -> http://pastebin.c
Dear Ilya,
ovs-appctl dpif-netdev/pmd-stats-show -> http://pastebin.com/k1nnMfQZ
ovs-appctl coverage/show -> http://pastebin.com/617CYR4n
ovs-appctl dpctl/show -> http://pastebin.com/JFCT8tgS
ovs-log -> http://pastebin.com/sJkaF20M
Thank you very much.
On 30 December 2015 at 08:05, Ilya Maximet
On 30.12.2015 15:51, Mauricio Vásquez wrote:
> Hello Ilya,
>
> The dpdkr ports involved have just one TX queue, so it should not be the
> reason in this case.
>
Please, provide output of:
ovs-appctl dpif-netdev/pmd-stats-show
ovs-appctl coverage/show
ovs-appctl dpctl/sho
Hello Ilya,
The dpdkr ports involved have just one TX queue, so it should not be the
reason in this case.
Thank you very much,
On 30 December 2015 at 07:07, Ilya Maximets wrote:
> Your 'Source' application, most likely, directs packets of the same flow
> to different TX queues. That's why
Your 'Source' application most likely directs packets of the same flow
to different TX queues. That's why most of the pmd threads can't install a
ukey and always execute misses instead of EMC hits.
Fix your 'Source'.
Best regards, Ilya Maximets.
On 29.12.2015 22:19, Mauricio Vásquez wrote:
> He
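Ilya's point — every packet of one flow must land on the same TX queue — is the usual RSS-style invariant. A minimal sketch of flow-consistent queue selection (the helper is hypothetical and uses CRC32 as a stand-in for a NIC's RSS hash, not the actual DPDK API):

```python
import zlib

N_TXQ = 4  # assumed number of TX queues on the port

def txq_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    """Pick a TX queue from the 5-tuple, so that every packet of a given
    flow maps to the same queue."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % N_TXQ

# Every packet of this flow maps to one queue...
q = txq_for_flow("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
assert all(txq_for_flow("10.0.0.1", "10.0.0.2", 1234, 80, "tcp") == q
           for _ in range(100))
# ...whereas spraying a flow round-robin across queues (what the broken
# 'Source' app effectively does) defeats the per-flow ukey/EMC state.
```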
I am trying to implement the QOS APIs of openstack neutron in the
networking-ovn plugin. I understand that I have to make the relevant
changes in OVN code as well.
I feel the 'options' field in the Logical_Port table would be a decent
candidate for bringing QoS support to OVN [1]. I could see from the
d
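As an illustration of what type-generic QoS settings in the 'options' column might look like, here is a sketch that extracts and validates QoS keys from an options map. The key names "qos_max_rate" and "qos_burst" are illustrative assumptions, not a schema that existed at the time:

```python
# Hypothetical handling of QoS-related keys in a Logical_Port 'options'
# map (string-to-string, as in the OVN northbound DB).  A sketch, not
# actual OVN code.

QOS_KEYS = {"qos_max_rate", "qos_burst"}   # illustrative key names

def extract_qos_options(options):
    """Return the parsed QoS settings, skipping the type-specific options
    that share the same column and rejecting non-numeric values."""
    qos = {}
    for key, value in options.items():
        if key not in QOS_KEYS:
            continue  # e.g. 'network_name' for localnet ports
        if not value.isdigit():
            raise ValueError(f"{key} must be an integer string, got {value!r}")
        qos[key] = int(value)
    return qos

opts = {"qos_max_rate": "100000", "qos_burst": "20000", "network_name": "phys"}
assert extract_qos_options(opts) == {"qos_max_rate": 100000,
                                     "qos_burst": 20000}
```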
Kindly ignore this email. I sent it by mistake.
On Wednesday 30 December 2015 02:06 PM, Babu Shanmugam wrote:
I am trying to implement the QOS APIs of neutron in the networking-ovn
plugin.
___
dev mailing list
dev@openvswitch.org
http://openvswitch.o