___
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
controller1 eth2 --- eth2 controller2
        |
        |
Move this function out from file scope.
Signed-off-by: Nithin Raju
---
datapath-windows/ovsext/Flow.c | 16 +++-
datapath-windows/ovsext/Flow.h | 2 ++
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/datapath-windows/ovsext/Flow.c
While testing DFW and recirc code, it was found that userspace
was calling into packet execute with the tunnel key and the
vport added as part of the execute structure. We were not passing
this along to the code that executes actions. The right thing is
to construct the key based on all of the
We'll need this for parsing nested attributes.
Signed-off-by: Nithin Raju
---
datapath-windows/ovsext/DpInternal.h | 1 +
datapath-windows/ovsext/User.c | 13 -
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git
2016-05-12 13:40 GMT-07:00 pravin shelar :
> On Thu, May 12, 2016 at 12:59 PM, Jesse Gross wrote:
> > On Thu, May 12, 2016 at 11:18 AM, pravin shelar wrote:
> >> On Tue, May 10, 2016 at 6:31 PM, Jesse Gross wrote:
> >>> I'm
>
>
> >>
> >> With respect to the other questions, I think the best approach would
> >> be to ask direct questions so that they get answered.
> >>
> >> 1) With 1000 HVs, 1000 HVs/tenant, 1 distributed router per tenant, you
> >> choose the number of gateways/tenant:
> >>
> >>
On Thu, May 12, 2016 at 4:55 PM, Guru Shetty wrote:
>
>
> On 12 May 2016 at 16:34, Darrell Ball wrote:
>
>> On Thu, May 12, 2016 at 10:54 AM, Guru Shetty wrote:
>>
>> >
>> >> I think you misunderstood - having one or more gateway per tenant does
>>
Thanks for fixing this!
Acked-by: Daniele Di Proietto
2016-05-10 15:50 GMT-07:00 Joe Stringer :
> Clang complains:
> lib/netdev-dpdk.c:1860:1: error: mutex 'dev->mutex' is not locked on every
> path
> through here [-Werror,-Wthread-safety-analysis]
>
On 12 May 2016 at 16:34, Darrell Ball wrote:
> On Thu, May 12, 2016 at 10:54 AM, Guru Shetty wrote:
>
> >
> >> I think you misunderstood - having one or more gateway per tenant does
> >> not make Transit LS better in flow scale.
> >> The size of a Transit LS
On Thu, May 12, 2016 at 10:54 AM, Guru Shetty wrote:
>
>> I think you misunderstood - having one or more gateway per tenant does
>> not make Transit LS better in flow scale.
>> The size of a Transit LS subnet and management across Transit LSs is one
>> of the 5 issues I mentioned and
>
>
>
> Completely agree that you need to go through a common point in both
> directions
> in the same chassis.
>
> Why does this require a separate gateway router?
>
The primary reason to choose a separate gateway router was to support
multiple physical gateways for k8s to which you can
See comments inline.
>To: dev@openvswitch.org
>From: Gurucharan Shetty
>Sent by: "dev"
>Date: 05/10/2016 08:10PM
>Cc: Gurucharan Shetty
>Subject: [ovs-dev] [PATCH 2/5] ovn: Introduce l3 gateway router.
>
>Currently OVN has distributed switches and routers. When a packet
>exits a
On Thu, May 12, 2016 at 12:59 PM, Jesse Gross wrote:
> On Thu, May 12, 2016 at 11:18 AM, pravin shelar wrote:
>> On Tue, May 10, 2016 at 6:31 PM, Jesse Gross wrote:
>>> I'm a little bit torn as to whether we should apply your rx checksum
>>>
On Thu, May 12, 2016 at 11:18 AM, pravin shelar wrote:
> On Tue, May 10, 2016 at 6:31 PM, Jesse Gross wrote:
>> I'm a little bit torn as to whether we should apply your rx checksum
>> offload patch in the meantime while we wait for DPDK to offer the new
>> API.
On Tue, May 10, 2016 at 6:31 PM, Jesse Gross wrote:
> On Tue, May 10, 2016 at 3:26 AM, Chandran, Sugesh
> wrote:
>>> -Original Message-
>>> From: Jesse Gross [mailto:je...@kernel.org]
>>> Sent: Friday, May 6, 2016 5:00 PM
>>> To: Chandran,
On 5/9/16 2:32 AM, Bhanuprakash Bodireddy wrote:
Add INSTALL.DPDK-ADVANCED document that is forked off from original
INSTALL.DPDK guide. This document is targeted at users looking for
optimum performance on OVS using dpdk datapath.
Thanks for this effort.
Signed-off-by: Bhanuprakash Bodireddy
>
>
> I think you misunderstood - having one or more gateway per tenant does not
> make Transit LS better in flow scale.
> The size of a Transit LS subnet and management across Transit LSs is one
> of the 5 issues I mentioned and it remains the same
> as do the other issues.
>
> Based on the example
On 5/9/16 2:32 AM, Bhanuprakash Bodireddy wrote:
Refactor the INSTALL.DPDK in to two documents named INSTALL.DPDK and
INSTALL.DPDK-ADVANCED. While INSTALL.DPDK document shall facilitate the
novice user in setting up the OVS DPDK and running it out of box, the
ADVANCED document is targeted at
On 5/9/16 2:32 AM, Bhanuprakash Bodireddy wrote:
This patchset refactors the present INSTALL.DPDK.md guide.
The INSTALL guide is split in to two documents named INSTALL.DPDK and
INSTALL.DPDK-ADVANCED. The former document is simplified with emphasis
on installation, basic testcases and targets
On Wed, May 11, 2016 at 10:13:48AM +0800, Xiao Liang wrote:
> On Wed, May 11, 2016 at 4:31 AM, Flavio Leitner wrote:
> > On Tue, May 10, 2016 at 10:31:19AM +0800, Xiao Liang wrote:
> >> On Tue, May 10, 2016 at 4:28 AM, Flavio Leitner wrote:
> >> > On Sat,
>
>
>>
>> I think one of the main discussion points was needing thousands of arp
>> flows and thousands of subnets, and it was on an incorrect logical
>> topology, I am glad that it is not an issue any more.
>>
>
> I think you misunderstood - having one or more gateway per tenant does not
> make
Thank you William and Ryan. I pushed this to master.
On 12 May 2016 at 09:32, William Tu wrote:
> Thanks for adding this, I will re-run the OVN-related valgrind tests.
>
> On Thu, May 12, 2016 at 9:14 AM, Ryan Moats wrote:
>
> >
> >
> > "dev"
Thanks for adding this, I will re-run the OVN-related valgrind tests.
On Thu, May 12, 2016 at 9:14 AM, Ryan Moats wrote:
>
>
> "dev" wrote on 05/12/2016 10:23:39 AM:
>
> > From: Gurucharan Shetty
> > To: dev@openvswitch.org
> > Cc:
On Thu, May 12, 2016 at 6:03 AM, Guru Shetty wrote:
>
>
>
> On May 11, 2016, at 10:45 PM, Darrell Ball wrote:
>
>
>
> On Wed, May 11, 2016 at 8:51 PM, Guru Shetty wrote:
>
>>
>>
>>
>>
>> > On May 11, 2016, at 8:45 PM, Darrell Ball
"dev" wrote on 05/12/2016 10:23:39 AM:
> From: Gurucharan Shetty
> To: dev@openvswitch.org
> Cc: Gurucharan Shetty
> Date: 05/12/2016 10:42 AM
> Subject: [ovs-dev] [PATCH] tests: Add valgrind targets for ovn
> utilities and daemons.
>
Hi,
OVS reports that the link state of a vhost-user port (type=dpdkvhostuser) is
DOWN, even when traffic is running through the port between a Virtual Machine
and the vSwitch.
Changing the admin state with the "ovs-ofctl mod-port up/down" command
over OpenFlow affects neither the reported link
Signed-off-by: Gurucharan Shetty
---
 tests/automake.mk | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/tests/automake.mk b/tests/automake.mk
index a5c6074..211a80d 100644
--- a/tests/automake.mk
+++ b/tests/automake.mk
@@ -152,6 +152,10 @@ check-lcov: all tests/atconfig
In future XPS implementation dpif-netdev layer will distribute
TX queues between PMD threads dynamically and netdev layer will
not know about sharing of TX queues. So, we need to lock them
always. Each tx queue still has its own lock, so, impact on
performance should be minimal.
Signed-off-by:
New appctl command to perform manual pinning of RX queues
to desired cores.
Signed-off-by: Ilya Maximets
---
INSTALL.DPDK.md| 24 +-
NEWS | 2 +
lib/dpif-netdev.c | 199 -
This command can be used to force PMD threads to reload
and apply new configuration.
Signed-off-by: Ilya Maximets
---
NEWS | 2 ++
lib/dpif-netdev.c | 41 +
vswitchd/ovs-vswitchd.8.in | 3 +++
3
Patch-set implemented on top of v9 of 'Reconfigure netdev at runtime'
from Daniele Di Proietto.
( http://openvswitch.org/pipermail/dev/2016-April/070064.html )
Manual pinning of RX queues to PMD threads is required for performance
optimisation. This will give the user the ability to achieve max.
If the number of CPUs in pmd-cpu-mask is not divisible by the number of
queues, and in a few more complex situations, there may be an unfair
distribution of TX queue-ids between PMD threads.
For example, if we have 2 ports with 4 queues and 6 CPUs in pmd-cpu-mask
such distribution is possible:
Current implementation of TX packet queueing is broken in several ways:
* TX queue flushing implemented on receive assumes that all
core_id-s are sequential and start from zero. This may lead
to a situation where packets get stuck in a queue forever and,
also,
Currently the number of tx queues is not configurable.
Fix that by introducing a new option for PMD interfaces: 'n_txq',
which specifies the maximum number of tx queues to be created for
this interface.
Example:
ovs-vsctl set Interface dpdk0 options:n_txq=64
Signed-off-by: Ilya Maximets
> On May 11, 2016, at 10:45 PM, Darrell Ball wrote:
>
>
>
>> On Wed, May 11, 2016 at 8:51 PM, Guru Shetty wrote:
>>
>>
>>
>>
>> > On May 11, 2016, at 8:45 PM, Darrell Ball wrote:
>> >
>> >> On Wed, May 11, 2016 at 4:42 PM, Guru
Add support for Jumbo Frames to DPDK-enabled port types,
using single-segment-mbufs.
Using this approach, the amount of memory allocated for each mbuf
to store frame data is increased to a value greater than 1518B
(typical Ethernet maximum frame length). The increased space
available in the mbuf
This patch constitutes a response to a request on ovs-discuss
(http://openvswitch.org/pipermail/discuss/2016-May/021261.html), and
is only for consideration in the testing scenario documented therein.
It should not be considered for review, or submission to the OVS source
code - the proposed
* Added OvsExtractLayers - populates only the layers field without
unnecessary memory operations for the flow part.
* If the flags in the STT header are 0, force packet checksum
calculation on receive.
* Ensure the correct pseudo checksum is set for LSO both on send and receive.
Linux includes the segment
Hi All,
I want to add a flow in OVS to allow ssh from a specific IP address.
Also, I want to add some rules to allow/drop access from specific
IPs.
Steps I have done so far:
* My OVS is running in an Ubuntu VM.
* Created one bridge.
* Added a port to the bridge.
* Added 2 hosts using network
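A common way to express such rules with ovs-ofctl (the bridge name `br0` and the addresses here are assumed for illustration; the priorities are arbitrary, higher matches first):

```shell
# Allow SSH (TCP/22) only from one trusted address, drop SSH from everyone else.
ovs-ofctl add-flow br0 "priority=100,tcp,nw_src=192.168.1.10,tp_dst=22,actions=normal"
ovs-ofctl add-flow br0 "priority=90,tcp,tp_dst=22,actions=drop"

# Per-IP allow/drop rules follow the same pattern.
ovs-ofctl add-flow br0 "priority=50,ip,nw_src=192.168.1.20,actions=drop"

# Inspect the installed flows.
ovs-ofctl dump-flows br0
```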