As Martin said, in some cases cbench may significantly over-report
numbers in throughput mode (of course it depends on the controller
implementation, so not all controllers may be affected).
The cbench code sleeps for 100ms to clear out buffers after reading
the switch counters.
Random curiosity: why would jumbo frames increase replies per sec?
Regards
KK
On 15 December 2010 11:45, Amin Tootoonchian a...@cs.toronto.edu wrote:
I missed that. The single core throughput is ~250k replies/sec, two
cores ~450k replies/sec, three cores ~650k replies/sec, four cores
~800k replies/sec.
I double checked. It does slightly improve the performance (on the
order of a few thousand replies/sec). Larger MTUs decrease the CPU
workload (by decreasing the number of transfers across the bus), which
means that more CPU cycles are available to the controller to
process requests. However, I
Hi Amin,
Just to clarify, do your jumbo frames refer to the OpenFlow messages
or the frames in the datapath? If you mean OpenFlow messages, I assume
you use a TCP connection between NOX and the switches, and you are
batching the messages into jumbo frames of 9000 bytes before sending
them out.
Oh, another point: if you are batching the frames, then what about
delay? There seems to be a trade-off between delay and throughput,
and we went for the former by disabling Nagle's algorithm.
Regards
KK
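(For context: disabling Nagle's algorithm is normally done by setting the TCP_NODELAY socket option on the controller-switch connection. A minimal sketch, not the actual NOX code; the function name is made up for illustration.)

// Minimal sketch (not NOX code): disable Nagle's algorithm on a connected TCP
// socket so small OpenFlow messages go out immediately instead of being
// coalesced by the kernel, trading some throughput for lower per-message delay.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

bool disable_nagle(int sock_fd) {
    int flag = 1;
    return setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}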
On 15 December 2010 12:46, kk yap yap...@stanford.edu wrote:
Hi Amin,
Just to
I'll let Amin follow up, but from what I understand, the way he's doing
batching doesn't introduce any additional delay. Rather, if he can
write to the socket, he writes. However, if the socket is blocked for
whatever reason (e.g. waiting for an ACK, or the send buffer is full), he
buffers all of
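(My reading of that scheme, as a hedged sketch rather than Amin's actual code: write immediately while the kernel accepts data, and only queue messages when the send buffer is full, so the batching itself never adds delay. Names like send_or_buffer and message_queue are hypothetical.)

// Hedged sketch of write-when-possible batching on a TCP socket.
#include <cerrno>
#include <deque>
#include <string>
#include <sys/socket.h>

std::deque<std::string> message_queue;   // messages waiting for a writable socket

void send_or_buffer(int sock_fd, const std::string& msg) {
    if (message_queue.empty()) {
        // Nothing queued yet: try to push the message out right away.
        ssize_t n = send(sock_fd, msg.data(), msg.size(), MSG_DONTWAIT);
        if (n == (ssize_t)msg.size())
            return;                                   // sent in full, no delay added
        if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
            return;                                   // real error (handling omitted)
        size_t sent = n > 0 ? (size_t)n : 0;
        message_queue.push_back(msg.substr(sent));    // keep the unsent remainder
        return;
    }
    // The socket was blocked earlier: batch this message with the others.
    message_queue.push_back(msg);
}
// When the socket becomes writable again, the queued batch is flushed in one
// large write (not shown).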
I am talking about jumbo Ethernet frames here. By batching, I mean
batching outgoing messages together and writing to the underlying
layer, which would be the TCP write buffer. The TCP buffer is not
limited to the MTU or anything like that, so in most cases my code flushes
more than 64KB to the TCP
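(For illustration, a hedged sketch of that kind of batching, not Amin's actual code: outgoing messages are appended to one per-connection buffer and the whole batch is handed to the kernel in a single write, so the amount flushed is bounded by the TCP send buffer rather than the MTU. queue_message, flush_batch and out_buf are made-up names.)

// Hedged sketch: coalesce many small OpenFlow messages into one buffer and
// flush the batch with a single write(); TCP segments it, so the batch size
// is not limited by the MTU.
#include <string>
#include <unistd.h>

std::string out_buf;                       // hypothetical per-connection output buffer

void queue_message(const char* data, size_t len) {
    out_buf.append(data, len);             // batch outgoing messages together
}

void flush_batch(int sock_fd) {
    size_t off = 0;
    while (off < out_buf.size()) {         // may well push more than 64KB at once
        ssize_t n = write(sock_fd, out_buf.data() + off, out_buf.size() - off);
        if (n <= 0)
            break;                         // error handling omitted for brevity
        off += (size_t)n;
    }
    out_buf.erase(0, off);                 // drop what was written, keep the rest
}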
Hello,
When you receive a flow mod event, is there any way to associate it with
the xid of the original request that caused it? I'm looking for a way to
confirm that a specific request generated a specific response. For
example, if multiple components are running and they both send a packet
Hi Derek,
Are you assuming the components will tag the flow_mod with the same
xid as the packet_in? I think this is not true for verbatim NOX,
though I am not sure. Either way, what is important is that you can
make changes to make that true. So, you can definitely do this.
Regards
KK
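(As an illustration of that tagging, a component could copy the xid from the header of the incoming ofp_packet_in into the ofp_flow_mod it sends in response. A minimal OpenFlow 1.0 sketch, not existing NOX code; the header path and function name are assumptions.)

// Hedged sketch (OpenFlow 1.0 wire structs): tag the outgoing flow_mod with
// the xid of the packet_in that triggered it, so the two can be correlated.
#include "openflow/openflow.h"   // assumed location of the OpenFlow 1.0 header

void tag_flow_mod_with_packet_in_xid(const ofp_packet_in* pi, ofp_flow_mod* fm) {
    // xid is already in network byte order in the received header, so a
    // straight copy keeps both messages carrying the same value.
    fm->header.xid = pi->header.xid;
}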
On 15
I'm not sure if this is what you're asking, but flow_mods have a
'cookie' associated with them that gets returned in all sorts of
flow_mod-related messages, e.g., flow_removed messages. Maybe that is
what you're looking for.
- Rob
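(For reference, a hedged OpenFlow 1.0 sketch of the cookie mechanism Rob mentions, not code from the thread: the controller picks an opaque cookie when it installs the flow, and the switch echoes the same value in the flow_removed message. Function names are made up; htobe64/be64toh assume a Linux/glibc <endian.h>.)

// Hedged sketch (OpenFlow 1.0): set a cookie on the flow_mod and read it back
// from the flow_removed message to identify which installation it refers to.
#include "openflow/openflow.h"   // assumed location of the OpenFlow 1.0 header
#include <endian.h>              // htobe64/be64toh (Linux/glibc)
#include <netinet/in.h>          // htons

void install_with_cookie(ofp_flow_mod* fm, uint64_t my_cookie) {
    fm->command = htons(OFPFC_ADD);
    fm->cookie  = htobe64(my_cookie);           // echoed verbatim in flow_removed
    fm->flags   = htons(OFPFF_SEND_FLOW_REM);   // ask the switch to report removals
}

uint64_t cookie_of_removed_flow(const ofp_flow_removed* fr) {
    return be64toh(fr->cookie);                 // same value that was installed
}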
On Wed, Dec 15, 2010 at 10:04 PM, Derek Cormier
@KK
It turns out I made a wrong assumption. I thought that when an
ofp_flow_mod (OFPFC_ADD) message is sent, a reply with the same xid
comes back. After looking at the OF protocol, it looks like a message is
only sent back if an error occurred.
@Rob
The cookie isn't quite what I'm looking
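(Following up on that protocol point with a hedged OpenFlow 1.0 sketch, not code from the thread: since a successful OFPFC_ADD produces no reply, the only per-request feedback is an OFPT_ERROR, whose header xid should match the xid of the offending flow_mod. The bookkeeping names here are hypothetical.)

// Hedged sketch (OpenFlow 1.0): remember the xids of sent flow_mods; an
// incoming error identifies the failed request by xid, and silence is the
// only "success" signal for a plain OFPFC_ADD.
#include "openflow/openflow.h"   // assumed location of the OpenFlow 1.0 header
#include <cstdint>
#include <set>

std::set<uint32_t> pending_xids;  // xids of flow_mods sent, still unconfirmed

void on_flow_mod_sent(const ofp_flow_mod* fm) {
    pending_xids.insert(fm->header.xid);        // stored in network byte order
}

void on_error(const ofp_error_msg* err) {
    // The error's xid (same byte order) tells us which request was rejected.
    pending_xids.erase(err->header.xid);
}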