Hi Volkan,

Great post.

I'm a bit puzzled about the FlowVisor paradigm: what if FlowVisor itself
fails? My understanding is that FlowVisor runs as a separate server, and
it is not obvious how that server can be made fault tolerant.


--Junchang

On Wed, Dec 12, 2012 at 4:29 AM, Volkan YAZICI <[email protected]> wrote:

> You can set up a Linux-HA <http://www.linux-ha.org> architecture to
> achieve this without the need for a FlowVisor. That is, build a topology
> as follows.
>
>                                   OF Controller-A
>                                 /
> OF-Switch ---- Unmanaged Switch
>                                 \
>                                   OF Controller-B
>
> Say you have Linux-HA installed on each controller, with the two
> controllers sharing a virtual IP address that the OF switch is
> configured to connect to. The controllers check each other's liveness
> through heartbeats, and when one goes down, the other acquires the
> released virtual IP address. During such a transition (that is, while
> the virtual IP address migrates from one controller to the other) the
> OF switch observes a connection failure and retries; on the very first
> retry it is redirected to the newly active controller. (Right after
> acquiring the virtual IP address, the active controller sends a
> gratuitous ARP through the unmanaged switch to flush stale ARP entries.)
> If you further do not want your switches to observe even this brief
> connection failure, you can put a FlowVisor in between to perform the
> reconnection attempt for you. That is,
>
>                                                OF Controller-A
>                                               /
> OF-Switch ---- Flowvisor ---- Unmanaged Switch
>                                               \
>                                                OF Controller-B
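>
> For the plain Linux-HA leg above, a minimal sketch using classic
> Heartbeat (host names, addresses, and interface names are hypothetical;
> adapt them to your site):
>
>     # /etc/ha.d/ha.cf -- identical on both controllers
>     keepalive 2          # heartbeat interval in seconds
>     deadtime 10          # declare the peer dead after 10 s of silence
>     bcast eth0           # interface carrying the heartbeats
>     auto_failback off    # stay on the survivor when the old master returns
>     node ctrl-a
>     node ctrl-b
>
>     # /etc/ha.d/haresources -- ctrl-a preferentially owns the virtual IP;
>     # the IPaddr resource also emits gratuitous ARPs on takeover
>     ctrl-a IPaddr::10.0.0.100/24/eth0
>
> Then point the switch at the virtual IP; with Open vSwitch, for example:
>
>     ovs-vsctl set-controller br0 tcp:10.0.0.100:6633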
>
> Contrary to the SIGCOMM HotSDN'12 reviewers, who claimed that this
> architecture is impractical, we (like thousands of other Linux cluster
> users around the world) have been successfully using derivations of such
> high-availability methods to implement distributed OF
> controllers. (See Controlling a Software-Defined Network via Distributed
> Controllers <http://nem-summit.eu/files/2012/10/2012_NEM_Summit_Proceedings.pdf>
> for details.) As an example, we successfully migrate 256 switches from one
> controller to another in ~8 secs.
>
> On Tue, Dec 11, 2012 at 7:44 PM, Panagiotis Georgopoulos <
> [email protected]> wrote:
>
>> Hello,
>>
>> We are currently building an OpenFlow setup on our production network
>> and we have the following:
>>
>> HP OpenFlow switch <--> FlowVisor ------> Experimental-Floodlight-Controller
>>                                   |
>>                                   +-----> Fallback-Floodlight-Controller
>>
>> We currently have two slices on FlowVisor (each with its own flowspace
>> entry): one slice pointing to the Experimental-Floodlight controller
>> with high priority, and another slice pointing to our
>> Fallback-Floodlight controller with lower priority (see the sketch
>> below). The idea is that everything goes to the Experimental-Floodlight
>> controller, but if we take that controller down to change it,
>> everything should be forwarded to the Fallback-Floodlight controller so
>> that normal network operation continues.
>>
>> How do we properly configure the above?
>>
>> The behavior we are currently seeing is that if the
>> Experimental-Floodlight controller is down, nothing gets forwarded to
>> our Fallback-Floodlight controller. Do I understand correctly that if
>> we put both slices next to each other in one flowspace entry, each
>> request will be broadcast to both slices, so the
>> Fallback-Floodlight-Controller would respond even when the
>> Experimental-Floodlight-Controller is running?
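>>
>> For concreteness, the kind of setup we mean looks roughly like this
>> (slice names and controller hosts are placeholders, and the fvctl
>> syntax below is the old XML-RPC style, which may differ from your
>> FlowVisor version, so treat it as a sketch rather than a transcript):
>>
>>     fvctl createSlice experimental tcp:exp-ctrl.example.org:6633 [email protected]
>>     fvctl createSlice fallback tcp:fbk-ctrl.example.org:6633 [email protected]
>>     # high-priority flowspace entry handing all traffic to "experimental"
>>     fvctl addFlowSpace all 1000 any Slice:experimental=4
>>     # lower-priority entry intended for "fallback"
>>     fvctl addFlowSpace all 100 any Slice:fallback=4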
>>
>> Thanks a lot for your help,
>>
>> Panos
_______________________________________________
openflow-discuss mailing list
[email protected]
https://mailman.stanford.edu/mailman/listinfo/openflow-discuss
