Thanks for your detailed response; it was my misunderstanding. The new nox cpp
works great, thanks for your hard work!


I still have two more questions, though; could you take a look at them?
1. I think arp_eth_header in packets.h is missing the Ethernet header field
(ARP_ETH_HEADER_LEN should be 42); the missing field causes the check at
openflow-inl-1.0.hh line 1175 to fail. See the sketch after these questions
for the layout I expect.
2. In openflow-1.0.hh, why are the n_ports_ and ports_ fields of class
ofp_features_reply private? There don't seem to be any public getters for
them.
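
For reference, here is roughly the layout I would expect for question 1
(the field names below are just my guesses for illustration, not copied
from packets.h):

    #include <stdint.h>

    /* Sketch only: an ARP-over-Ethernet header that embeds the Ethernet
     * header, so its total size is 14 + 28 = 42 bytes
     * (= ARP_ETH_HEADER_LEN). */
    struct eth_header_sketch {
        uint8_t  eth_dst[6];
        uint8_t  eth_src[6];
        uint16_t eth_type;             /* 0x0806 for ARP */
    } __attribute__((packed));         /* 14 bytes */

    struct arp_eth_header_sketch {
        struct eth_header_sketch eth;  /* the field I think is missing */
        uint16_t ar_hrd;               /* hardware type */
        uint16_t ar_pro;               /* protocol type */
        uint8_t  ar_hln;               /* hardware address length (6) */
        uint8_t  ar_pln;               /* protocol address length (4) */
        uint16_t ar_op;                /* ARP opcode */
        uint8_t  ar_sha[6];            /* sender hardware address */
        uint32_t ar_spa;               /* sender protocol address */
        uint8_t  ar_tha[6];            /* target hardware address */
        uint32_t ar_tpa;               /* target protocol address */
    } __attribute__((packed));         /* 14 + 28 = 42 bytes */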


Regards,
Changlin Fu






------------------ Original ------------------
From:  "Amin Tootoonchian"<[email protected]>;
Date:  Wed, Jul 4, 2012 02:39 AM
To:  "clocy"<[email protected]>; 
Cc:  "nox-dev"<[email protected]>; 
Subject:  Re: [nox-dev] Noxcpp issues



I don't see any problem with them. Comments inline:

On Mon, Jul 2, 2012 at 5:36 AM, clocy <[email protected]> wrote:
> These days I have been trying the new noxcpp. It's fantastic; I love the
> builder pattern for constructing the OpenFlow messages. But I ran into a
> serious problem while trying to implement an LLDP module like the Python
> discovery component in classic NOX:
> 1. In openflow-datapath.cc, the function void Openflow_datapath::send(const
> v1::ofp_msg *msg) has a line that says:
>     if (is_sending) return;
> So when I want to send a packet while this method is in the is_sending
> state, the packet fails to send without any message.

No, the packet is written to the pending output archive before the
function returns. If the socket is busy (is_sending == true), the
pending packets (which are already written to the output archive) will
be sent upon completion of the previous call to send (when send_cb is
called).
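
If it helps, the pattern looks roughly like this (the names below, e.g.
pending_ and async_send, are placeholders for illustration, not the actual
Openflow_datapath members):

    #include <string>

    /* Illustrative sketch of the queue-then-flush behaviour described
     * above; nothing is dropped when is_sending_ is true. */
    class datapath_sketch {
    public:
        datapath_sketch() : is_sending_(false) {}

        void send(const std::string& serialized_msg) {
            pending_ += serialized_msg;  // always queued, never dropped
            if (is_sending_)             // a send is already in flight;
                return;                  // send_cb() will flush pending_
            is_sending_ = true;
            async_send();                // start the first transmission
        }

    private:
        void send_cb() {                 // completion handler
            if (pending_.empty()) {
                is_sending_ = false;     // socket is idle again
                return;
            }
            async_send();                // flush what was queued meanwhile
        }

        void async_send() {
            /* In the real code this is an asynchronous socket write that
             * invokes send_cb() on completion; elided here. */
            pending_.clear();
            send_cb();
        }

        std::string pending_;            // bytes waiting to go out
        bool is_sending_;
    };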

Having said that, we are not yet protecting the output archive and
is_sending from concurrent writes. The assumption was that writes to a
single socket happen in the context of a single thread. If we later realize
that this is not easy to enforce, we will change it.
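
If we do change it, one straightforward option would be to guard the output
archive and the flag with a mutex, roughly along these lines (purely a
sketch with placeholder names, not a proposed patch):

    #include <boost/thread/mutex.hpp>
    #include <string>

    /* Sketch: protecting the pending buffer and is_sending_ so that
     * send() could safely be called from multiple threads. */
    class guarded_datapath_sketch {
    public:
        guarded_datapath_sketch() : is_sending_(false) {}

        void send(const std::string& serialized_msg) {
            bool start = false;
            {
                boost::mutex::scoped_lock lock(mutex_);
                pending_ += serialized_msg;
                if (!is_sending_) {
                    is_sending_ = true;
                    start = true;
                }
            }
            if (start)
                async_send();   // kick off the socket write outside the lock
        }

    private:
        void async_send() { /* asynchronous socket write elided */ }

        boost::mutex mutex_;
        std::string pending_;
        bool is_sending_;
    };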

> 2. Another issue is when I implement a loop like:
> FOR(i, 1, n) dp->send(&msg)
> ------------
> The sending result is
> ethernet header
> ofmsg1
> ofmsg2
> ...
> ofmsgn
> --------------
> It packed all the OpenFlow messages into one large Ethernet frame, so OVS
> did not receive them as independent messages.

They are independent OpenFlow messages. A correct implementation does
not assume that OpenFlow messages are packed in different Ethernet
frames/IP packets/TCP segments. You can verify this using the OpenFlow
Dissector plugin for Wireshark.
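
Concretely, the receiver has to delimit messages using the length field of
the common OpenFlow header rather than relying on frame or segment
boundaries; roughly like this (header layout as in the OpenFlow 1.0 spec,
the parsing loop itself is just a sketch):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <vector>

    struct ofp_header_sketch {
        uint8_t  version;
        uint8_t  type;
        uint16_t length;   /* total message length, network byte order */
        uint32_t xid;
    };

    static void handle_message(const uint8_t* /*msg*/, size_t /*len*/)
    {
        /* dispatch to the per-message handlers; elided */
    }

    /* Split a received byte stream into individual OpenFlow messages. */
    void split_messages(const std::vector<uint8_t>& buf)
    {
        size_t off = 0;
        while (buf.size() - off >= sizeof(struct ofp_header_sketch)) {
            struct ofp_header_sketch hdr;
            memcpy(&hdr, &buf[off], sizeof hdr);
            size_t len = ntohs(hdr.length);
            if (len < sizeof hdr || buf.size() - off < len)
                break;                      /* incomplete or malformed; wait */
            handle_message(&buf[off], len); /* one complete message */
            off += len;
        }
    }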

As a side note, with our implementation I expect ofmsg1 to be packed
in a single Ethernet frame and the rest to be batched together. The
reason is that the socket is not busy the first time and the send
request goes through immediately, but the subsequent messages are
queued until the first send completes.

Cheers,
Amin
