Re: [ovs-discuss] Monitor Switch to Controller Traffic

2020-03-25 Thread Brian Perry
Thank you for the quick response. I have been trying out numerous
variations of the commands to get the desired outcome.

> Sure, just specify an appropriate address.  For example, set up the
> switch to listen on an IP address ("ovs-vsctl set-controller br0 ptcp:")
> then use ovs-ofctl to connect to that ("ovs-ofctl dump-flows
> tcp:$MY_IP").

That command worked great, thanks! I noticed that the following command
combination would fail to establish a TCP connection:
ovs-vsctl set-controller br0 tcp:127.0.0.1:6633
ovs-ofctl dump-flows tcp:127.0.0.1:6633
Your solution fixed that problem: I was able to establish a TCP
connection and send OpenFlow messages through the loopback interface:
ovs-vsctl set-controller br0 ptcp:6633
ovs-ofctl dump-flows tcp:127.0.0.1:6633

I was wondering what the difference between "ptcp" and "tcp" is, and why
using ptcp:6633 as a controller allows TCP connections while
tcp:127.0.0.1:6633 does not.
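For reference, here is my current understanding of the two forms, written
as comments next to the commands (this is an assumption on my part, so
please correct me if it is wrong):
ovs-vsctl set-controller br0 ptcp:6633           # switch listens passively on 6633
ovs-ofctl dump-flows tcp:127.0.0.1:6633          # ovs-ofctl dials in to the listener
ovs-vsctl set-controller br0 tcp:127.0.0.1:6633  # switch dials out, nothing listens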

> Sure, just specify those addresses instead of 127.0.0.1.

I noticed a typo in my previous message: under "Environment 2 Results" the
IP address 192.168.56.3 should have been 10.0.0.1. I am still trying to get
the addresses to show up as 10.0.0.1 (br0) and 127.0.0.1 (controller), but I
can't seem to get it working. The closest I was able to get was using the
commands that build Environment 2 together with:
python3.6 ./bin/ryu-manager ./ryu/app/simple_switch.py
--ofp-switch-address-list 10.0.0.1:6633
The controller would establish a connection with the 10.0.0.1 (br0) address,
using 10.0.0.1 as its own address as well, but after sending the OpenFlow
hello messages it would open a new connection in which both the controller
and the switch used the 127.0.0.1 address, and that address was then used
for the remainder of the experiment.
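One variation I still plan to try, based on your suggestion to just specify
the desired addresses, is to point the switch at br0's address instead of
the loopback address so that both TCP endpoints show up as 10.0.0.1. This
is only a sketch; I am assuming Ryu's --ofp-listen-host and
--ofp-tcp-listen-port options are the right way to bind its listening
address:
ovs-vsctl set-controller br0 tcp:10.0.0.1:6633
python3.6 ./bin/ryu-manager ./ryu/app/simple_switch.py
--ofp-listen-host 10.0.0.1 --ofp-tcp-listen-port 6633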

On Mon, Mar 23, 2020 at 10:33 PM Ben Pfaff  wrote:

> On Mon, Mar 23, 2020 at 08:40:55PM -0700, Brian Perry wrote:
> > Environment 1 Results:
> > When running Wireshark on the loopback interface and the br0 interface I
> > was unable to find any OpenFlow messages when using flow table commands
> > like:
> > ovs-ofctl dump-flows br0
> >
> > Looking through various documentation eventually led me to a website
> > that states that the ovs-ofctl command uses a Unix domain socket to
> > communicate with the switch (
> > https://github.com/mininet/openflow-tutorial/wiki/Learn-Development-Tools#accessing-remote-ovs-instances-or-the-stanford-reference-switch
> > ). I also found out that Wireshark can't capture Unix domain socket
> > traffic because it isn't a network interface (
> > https://www.wireshark.org/lists/ethereal-users/200202/msg00259.html).
> >
> > Is it possible to have the ovs-ofctl commands go through an interface
> > so I can see the OpenFlow messages on Wireshark?
>
> Sure, just specify an appropriate address.  For example, set up the
> switch to listen on an IP address ("ovs-vsctl set-controller br0 ptcp:")
> then use ovs-ofctl to connect to that ("ovs-ofctl dump-flows
> tcp:$MY_IP").
>
> > Environment 2 Results:
> > When running Wireshark on the loopback interface and the br0 interface I
> > saw the OpenFlow messages being sent to and from the loopback address
> > 127.0.0.1. I initially thought the messages would be addressed from
> > br0 (192.168.56.3) to the controller (127.0.0.1), but after thinking about
> > it some more I understand why the switch br0 and the controller are both
> > addressed as 127.0.0.1: the switch and the controller are two processes
> > running on the same host OS and communicating with each other.
> >
> > But I was wondering if it is possible to configure the switch so that the
> > OpenFlow message packets address br0 as 192.168.56.3 and the controller
> > as 127.0.0.1?
>
> Sure, just specify those addresses instead of 127.0.0.1.
>


[ovs-discuss] Monitor Switch to Controller Traffic

2020-03-23 Thread Brian Perry
Hi,

I built two topologies using OvS and VirtualBox so that I can see how a
controller interacts with the switch (br0). Here are the commands used to
build the environments:
Environment 1 - Use OvS Commands to Manage the Switch's Flow Tables
ovs-vsctl add-br br0
ovs-vsctl add-port br0 p1 -- set interface p1 type=internal
ovs-vsctl add-port br0 p2 -- set interface p2 type=internal
ifconfig br0 up
ifconfig p1 up
ifconfig p2 up

Environment 2 - Remote Controller Running on the Host OS
ovs-vsctl add-br br0
ovs-vsctl add-port br0 p1 -- set interface p1 type=internal
ovs-vsctl add-port br0 p2 -- set interface p2 type=internal
ovs-vsctl set-controller br0 tcp:127.0.0.1:6633
ifconfig br0 10.0.0.1/24
ifconfig br0 up
ifconfig p1 up
ifconfig p2 up

Environment 3 - Remote Controller Running on a Guest OS
# For reference only.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 p1 -- set interface p1 type=internal
ovs-vsctl add-port br0 p2 -- set interface p2 type=internal
ovs-vsctl set-controller br0 tcp:192.168.56.2:6633
ifconfig br0 192.168.56.3/24
ifconfig br0 up
ifconfig p1 up
ifconfig p2 up

In both environments, VirtualBox runs two guest Linux OSes, each with its
own "Bridged Adapter" network interface attached to either p1 or p2. In
Environment 3 there is a third VirtualBox guest Linux OS running the
controller application, which is attached to a "Host-only" network
(192.168.56.1/24). Environment 3 is included only as a reference for a
question I have about OvS switches, which is asked after discussing
Environment 2's results.
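For reference, the controller target configured in each environment and its
connection state can be checked with standard OvS commands, for example:
ovs-vsctl get-controller br0   # show the configured controller target
ovs-vsctl list controller      # includes an is_connected field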

Environment 1 Results:
When running Wireshark on the loopback interface and the br0 interface I
was unable to find any OpenFlow messages when using flow table commands
like:
ovs-ofctl dump-flows br0

Looking through various documentation eventually led me to a website that
states that the ovs-ofctl command uses a Unix domain socket to communicate
with the switch (
https://github.com/mininet/openflow-tutorial/wiki/Learn-Development-Tools#accessing-remote-ovs-instances-or-the-stanford-reference-switch
). I also found out that Wireshark can't capture Unix domain socket traffic
because it isn't a network interface (
https://www.wireshark.org/lists/ethereal-users/200202/msg00259.html).
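If I understand the default behaviour correctly, naming the bridge makes
ovs-ofctl connect to the bridge's management socket, so the command above
should be roughly equivalent to the following (the exact path depends on
the install prefix, so treat it as an assumption on my part):
ovs-ofctl dump-flows unix:/var/run/openvswitch/br0.mgmt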

Is it possible to have the ovs-ofctl commands go through an interface so I
can see the OpenFlow messages on Wireshark?

Environment 2 Results:
When running Wireshark on the loopback interface and the br0 interface I
saw the OpenFlow messages being sent to and from the loopback address
127.0.0.1. I initially thought the messages would be addressed from
br0 (192.168.56.3) to the controller (127.0.0.1), but after thinking about
it some more I understand why the switch br0 and the controller are both
addressed as 127.0.0.1: the switch and the controller are two processes
running on the same host OS and communicating with each other.

But I was wondering if it is possible to configure the switch so that the
OpenFlow message packets address br0 as 192.168.56.3 and the controller as
127.0.0.1?

I also had a problem trying to ping br0's address (10.0.0.1) from inside
one of the guest OSes, which had an IP address of 10.0.0.2. This leads me
to my third question: is it possible to ping br0 (10.0.0.1) from a guest
OS? I realize that is an odd question, because switches are supposed to be
transparent to the end hosts.

Based on the Wireshark results mentioned above, br0's IP address is not
used in the control plane and cannot be used by an end host to ping the
switch in the data plane. This leads me to my final question: when would
you assign an IP address to a switch? Currently I can only think of two
situations: when one of the switch's interfaces is connected to a physical
interface (e.g. eth0), or when the controller can't be reached over the
loopback interface (e.g. Environment 3).

Thanks for your time.


[ovs-discuss] Source Code - Multiple Controllers Round Robin Load Balancing

2017-11-19 Thread Brian Perry
Summary: OvS last commit 52f793b
I am trying to modify the switch's code to use a round-robin scheduler for
sending asynchronous messages (especially PACKET_IN messages) to one of the
controllers in a multiple-controller SDN setup. After traversing the code I
think I found where I need to insert this round-robin scheduler: lines 4766
and 6225 of ofproto-dpif-xlate.c.

I plan on modifying this part of the code and testing it to see if I was
correct, but it would be great to have some feedback from the community on
this.
1) Am I on the right path? Do I only need to modify the code around line
4766, or do I need to modify another part of the code?
2) What is the OFPACT_CONTROLLER case for? From what I can gather, the
OFPACT_FOR_EACH macro iterates over a list of table entry actions, but the
loop can end up calling the execute_controller_action() function and I
don't know why it would.
3) What does the emit_continuation() function do? It looks like it has to
do with table lookups rather than with sending asynchronous messages to the
controller. If it only does table lookups, why would it call the
execute_controller_action() function?


Details:
I set up a simple Mininet topology with a single switch, 3 user computers,
and 2 controllers, where everything is connected to the switch. Currently,
when the switch receives a packet with no corresponding forwarding rule it
sends a request to both controllers. I would like it to send the forwarding
rule request in a round-robin fashion instead: the first forwarding rule
request is sent only to controller 1, the next only to controller 2, the
next only to controller 1, the next only to controller 2, and so on.
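For reference, outside of Mininet the equivalent multi-controller
attachment would be configured on the switch roughly like this (the
controller ports here are hypothetical):
ovs-vsctl set-controller s1 tcp:127.0.0.1:6653 tcp:127.0.0.1:6654
# With two targets configured, the switch currently delivers PACKET_IN
# messages to both controllers; this is the behaviour I want to replace
# with round-robin delivery.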

Alongside other documents, I've read a description of the Open vSwitch
architecture (https://www.slideshare.net/rajdeep/openvswitch-deep-dive) and
pages 25-28 of Pavel's master thesis
(https://mail.openvswitch.org/pipermail/ovs-discuss/2017-February/043763.html)
to get a better understanding of the internals of OvS. This information,
paired with the following post
(https://mail.openvswitch.org/pipermail/ovs-discuss/2016-March/040236.html),
told me that the source code I am looking for runs in userspace and is
located in the ofproto directory. I started looking in the ofproto
directory for the handle_openflow() function. This eventually led me to the
structure and usage of ofconn, which led me to line 1741 of connmgr.c,
where I found the ofproto_async_msg structure and the
connmgr_send_async_msg() function. Following that function, I noticed that
ofproto_async_msg.controller_id is assigned in only 2 different places:
within the execute_controller_action() and emit_continuation() functions. I
continued following the execute_controller_action() function and noticed
that it is called in only 5 different locations, all within the file
ofproto-dpif-xlate.c. Of those 5 locations, only lines 4766 and 6225 use
some sort of loop to craft and send asynchronous messages to multiple
controllers.

So my questions become:
1) Am I on the right path? Do I only need to modify the code around line
4766, or do I need to modify another part of the code?
2) What is the OFPACT_CONTROLLER case for? From what I can gather, the
OFPACT_FOR_EACH macro iterates over a list of table entry actions, but the
loop can end up calling the execute_controller_action() function and I
don't know why it would.
3) What does the emit_continuation() function do? It looks like it has to
do with table lookups rather than with sending asynchronous messages to the
controller. If it only does table lookups, why would it call the
execute_controller_action() function?