Hello,
I'm trying to use the various tables and statistics provided by emanesh
to debug a CORE/EMANE 802.11abg scenario, but even after reading the
docs and tutorials I'm unclear about the significance of some of the
statistics I'm seeing.

In my scenario I have three nodes in a chain, with a source application
on node 1 sending messages over TCP to an application running on node 2,
which in turn sends the messages on to a sink application on node 3. If
I send data as fast as I can (i.e. rely on TCP backpressure), I see very
large latencies (on the order of a few seconds). If I rate limit the
source to underutilize the channel, the latencies are on the order of
tens of milliseconds. Note that I have flow control enabled with the
standard number of tokens (10).
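For reference, the rate limiting in my source application is essentially
a token bucket along these lines (a simplified sketch; the class and
names here are just illustrative, not my actual code):

```python
import time

class TokenBucket:
    """Allow up to `rate` sends per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate             # tokens replenished per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def try_consume(self, n=1):
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

The sender calls try_consume() before each write and sleeps briefly when
it returns False.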

What I'd like to understand is whether the latency numbers I'm seeing
without rate limiting are realistic (due to e.g. increased packet loss
from contention, or TCP send socket buffering), or whether EMANE can't
keep up and is introducing significant extra delay. In a previous email
to the core-users mailing list it was suggested to use emanesh to
examine various statistics regarding the phy, mac, and virtual transport
layers (see Steven Galgano's email below). However, I'm having the
following problems:

(i) emanesh only shows me statistics for the phy and mac layers, but
nothing about the virtual transport (i.e. if I do a 'get stat * all', I
only get mac and phy statistics). Is this something I have to enable
explicitly?

(ii) I'm finding it difficult to reason about the significance of the
numbers I'm seeing. When I compare the mac and phy statistics for the
unconstrained and rate-limited cases, they don't look much different.
Should I conclude from this that EMANE isn't bottlenecked on mac or phy
processing? Presuming all of the 'delay' statistics are in microseconds,
most delays seem to be under a millisecond - does this seem reasonable?
Some example measurements for the statistics suggested by Steven for one
of the nodes are below (for both the unconstrained and rate-limited
executions). Can anyone advise if they look 'normal'?

'Unconstrained sender'
-------------------------------
[emanesh (localhost:47000)] ## get stat 3 mac
nem 3   mac  avgDownstreamProcessingDelay0 = 326.528320312
nem 3   mac  avgProcessAPIQueueDepth = 1.01472248449
nem 3   mac  avgProcessAPIQueueWait = 21.5879439619
nem 3   mac  avgTimedEventLatency = 40.8964598884
nem 3   mac  avgTimedEventLatencyRatio = 0.131895821796
nem 3   mac  avgUpstreamProcessingDelay0 = 465.360961914

[emanesh (localhost:47000)] ## get stat 3 phy
nem 3   phy  avgDownstreamProcessingDelay0 = 3.34661269188
nem 3   phy  avgProcessAPIQueueDepth = 1.00564204968
nem 3   phy  avgProcessAPIQueueWait = 32.1192521699
nem 3   phy  avgUpstreamProcessingDelay0 = 4.33834266663

'Rate limited sender'
----------------------------
[emanesh (localhost:47000)] ## get stat 3 mac
nem 3   mac  avgDownstreamProcessingDelay0 = 240.181991577
nem 3   mac  avgProcessAPIQueueDepth = 1.0200306159
nem 3   mac  avgProcessAPIQueueWait = 24.7767970557
nem 3   mac  avgTimedEventLatency = 48.3653869226
nem 3   mac  avgTimedEventLatencyRatio = 0.247179240988
nem 3   mac  avgUpstreamProcessingDelay0 = 422.160675049

[emanesh (localhost:47000)] ## get stat 3 phy
nem 3   phy  avgDownstreamProcessingDelay0 = 3.57592797279
nem 3   phy  avgProcessAPIQueueDepth = 1.01447435246
nem 3   phy  avgProcessAPIQueueWait = 47.6275234891
nem 3   phy  avgUpstreamProcessingDelay0 = 7.49943065643
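My back-of-the-envelope check, assuming the delay statistics above
really are in microseconds, suggests the per-hop processing overhead is
nowhere near the multi-second latencies I observe:

```python
# Per-hop processing overhead from the unconstrained-run statistics above,
# assuming all values are in microseconds.
mac_down = 326.5           # mac avgDownstreamProcessingDelay0
mac_up = 465.4             # mac avgUpstreamProcessingDelay0
phy_down = 3.3             # phy avgDownstreamProcessingDelay0
phy_up = 4.3               # phy avgUpstreamProcessingDelay0
queue_wait = 21.6 + 32.1   # mac + phy avgProcessAPIQueueWait

per_hop_ms = (mac_down + mac_up + phy_down + phy_up + queue_wait) / 1000.0
two_hop_ms = 2 * per_hop_ms   # node 1 -> 2 -> 3
# two_hop_ms comes out well under 2 ms, so (if the units are right) the
# seconds of latency would have to come from queueing, retransmission,
# or TCP buffering rather than per-packet processing delay.
```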

(iii) The docs say that a timed event latency ratio near 1 is bad. Does
that mean a ratio in the 0.1-0.2 range is ok?
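For what it's worth, my (possibly incorrect) reading is that the ratio
relates the average timed event latency to the timer schedule interval;
if that assumption is right, my unconstrained-run numbers would imply an
interval of roughly:

```python
# Assumption (please correct me if this is not how the ratio is defined):
#   avgTimedEventLatencyRatio = avgTimedEventLatency / timer interval
avg_latency_us = 40.8964598884   # avgTimedEventLatency, unconstrained run
ratio = 0.131895821796           # avgTimedEventLatencyRatio
implied_interval_us = avg_latency_us / ratio   # roughly 310 us
```

Can anyone confirm whether that's how the ratio is defined?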

(iv) What is the meaning of the 'Dst MAC' column of the
UnicastPacketDropTable0 table (from e.g. 'get table * mac')? I see a
large number of packet drops in that column for the unconstrained case.
From turning up the emane logging, I think most of them are due to
promiscuous mode not being enabled or the receiver sensitivity threshold
not being met. Can I ignore those drops?

Any help much appreciated,
Dan


On 23/03/2015 14:24, Steven Galgano wrote:
> Dan,
>
> Most emane 0.9 models contain a set of statistics that can be used to
> determine how your emulation is performing. These statistics aim to show
> average processing delay and timer latency. As you characterize your
> hardware and scenario, you can monitor these to determine the falloff point.
>
> Virtual Transport:
>   avgDownstreamProcessingDelay
>   avgProcessAPIQueueDepth
>   avgProcessAPIQueueWait
>   avgTimedEventLatency
>   avgTimedEventLatencyRatio
>   avgUpstreamProcessingDelay
>
> https://github.com/adjacentlink/emane/wiki/Virtual-Transport#Statistics
>
> IEEE802.11abg:
>   avgDownstreamProcessingDelay0
>   avgDownstreamProcessingDelay1
>   avgDownstreamProcessingDelay2
>   avgDownstreamProcessingDelay3
>   avgProcessAPIQueueDepth
>   avgProcessAPIQueueWait
>   avgTimedEventLatency
>   avgTimedEventLatencyRatio
>   avgUpstreamProcessingDelay0
>   avgUpstreamProcessingDelay1
>   avgUpstreamProcessingDelay2
>   avgUpstreamProcessingDelay3
>
> https://github.com/adjacentlink/emane/wiki/IEEE-802.11abg-Model#Statistics
>
> Phy:
>   avgDownstreamProcessingDelay0
>   avgProcessAPIQueueDepth
>   avgProcessAPIQueueWait
>   avgTimedEventLatency
>   avgTimedEventLatencyRatio
>   avgUpstreamProcessingDelay0
>
> https://github.com/adjacentlink/emane/wiki/Physical-Layer-Model#Statistics
>

_______________________________________________
emane-users mailing list
[email protected]
http://pf.itd.nrl.navy.mil/mailman/listinfo/emane-users
