I don't see any problem with them. Comments inline:

On Mon, Jul 2, 2012 at 5:36 AM, clocy <[email protected]> wrote:

> These days I have been trying the new noxcpp. It's fantastic; I love the
> builder pattern for constructing the OpenFlow messages. But I ran into a
> serious problem while trying to implement an LLDP module like the Python
> discovery component in classic NOX.
>
> 1. In openflow-datapath.cc, the function
>    void Openflow_datapath::send(const v1::ofp_msg *msg) has a line that
>    says:
>
>        if (is_sending) return;
>
>    So when I want to send a packet while this method is in the is_sending
>    state, the packet fails to send without any message.
No, the packet is written to the pending output archive before the function
returns. If the socket is busy (is_sending == true), the pending packets
(which are already written to the output archive) will be sent once the
previous call to send completes, i.e., when send_cb is invoked.

Having said that, we are not yet protecting the output archive and
is_sending from concurrent writes. The assumption was that all writes to a
single socket happen in the context of a single thread. If we later realize
that this is not an easy decision to enforce, we will change it.

> 2. Another issue is when I implement a loop like
>
>     FOR(i, 1, n) dp->send(&msg)
>
> the sending result is
>
>     ethernet header
>     ofmsg1
>     ofmsg2
>     ...
>     ofmsgn
>
> It packed all the OpenFlow messages into one large Ethernet frame, so the
> OVS does not receive independent messages.

They are independent OpenFlow messages. A correct implementation does not
assume that OpenFlow messages arrive in separate Ethernet frames, IP
packets, or TCP segments: a TCP connection is a byte stream, and the
receiver must use the length field in each OpenFlow header to delimit
messages. You can verify this using the OpenFlow dissector plugin for
Wireshark.

As a side note, with our implementation I expect ofmsg1 to be packed in a
single Ethernet frame and the rest to be batched together. The reason is
that the socket is not busy the first time, so that send request goes
through immediately, while the subsequent messages are queued until the
first send completes.

Cheers,
Amin
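P.S. In case a sketch helps with point 1: the queuing behavior described
above follows the usual "buffer while a write is in flight, flush on
completion" pattern. The class and member names below are stand-ins and the
socket write is faked with a vector so the example is self-contained; this
is not the actual noxcpp code.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Simplified mock of a datapath connection with one outstanding
// asynchronous write at a time.
class Datapath {
public:
    // Queue a message; kick off a write only if none is in flight.
    // Note the message is never dropped, only buffered.
    void send(const std::string& msg) {
        pending_.push_back(msg);
        if (is_sending_) return;   // previous write still in flight
        is_sending_ = true;
        start_write();
    }

    // In the real code this would be the socket's completion callback.
    void send_cb() {
        if (pending_.empty()) { is_sending_ = false; return; }
        start_write();             // flush everything queued meanwhile
    }

    const std::vector<std::string>& writes() const { return writes_; }

private:
    void start_write() {
        // Batch everything queued so far into one socket write.
        std::string batch;
        while (!pending_.empty()) {
            batch += pending_.front();
            pending_.pop_front();
        }
        writes_.push_back(batch);
    }

    bool is_sending_ = false;
    std::deque<std::string> pending_;
    std::vector<std::string> writes_;
};
```

Calling send three times in a row and then firing the completion callback
produces exactly two socket writes, the first message alone and the other
two batched, which matches the behavior I described above.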
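P.P.S. For point 2, here is a minimal sketch of how a receiver can delimit
OpenFlow messages in a TCP byte stream: every message starts with an 8-byte
header (version, type, 16-bit big-endian length, xid), so the reader splits
on that length field rather than on frame or segment boundaries. The helper
name and byte-vector type are hypothetical, not the OVS or NOX parsing code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Feed an arbitrary chunk of stream bytes into `buf`; return every
// complete OpenFlow message found, leaving any partial tail in `buf`.
std::vector<Bytes> extract_messages(Bytes& buf, const Bytes& chunk) {
    buf.insert(buf.end(), chunk.begin(), chunk.end());
    std::vector<Bytes> msgs;
    size_t off = 0;
    while (buf.size() - off >= 8) {                    // full header present?
        // Bytes 2-3 of the header are the total message length, big-endian.
        uint16_t len = uint16_t((buf[off + 2] << 8) | buf[off + 3]);
        if (len < 8 || buf.size() - off < len) break;  // wait for more bytes
        msgs.emplace_back(buf.begin() + off, buf.begin() + off + len);
        off += len;
    }
    buf.erase(buf.begin(), buf.begin() + off);         // keep partial tail
    return msgs;
}
```

With this, two messages arriving in one chunk come out as two independent
messages, and a message split across chunks is reassembled, regardless of
how the sender batched its writes.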
