Hi all,

Searching the NS archives did not shed much light on this
topic: is there a recommended procedure to model available computational
resources (CPU time) within NS nodes? 
I'm just wondering how one could model the following scenario in NS: a node
runs several distinct Agents. Each Agent must provide feedback on the CPU
time it consumes, and a node control unit delays incoming and/or outgoing
packets accordingly under congestion (meaning one queue+delay object
upstream and one downstream on every link).
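
To make the feedback part more concrete, here is a rough standalone sketch
(hypothetical class names, not the actual ns-2 Agent API): each agent
charges a per-packet CPU cost to a shared per-node monitor that tracks the
aggregate load over an interval.

// Standalone sketch, hypothetical names -- not real ns-2 classes.
#include <cstdio>

// Per-node control unit that aggregates CPU-time feedback from all agents.
class NodeCpuMonitor {
public:
    explicit NodeCpuMonitor(double capacity_per_sec)
        : capacity_(capacity_per_sec), consumed_(0.0) {}

    // Called by an agent after it has "processed" a packet.
    void reportCpuTime(double seconds) { consumed_ += seconds; }

    // Fraction of nominal CPU capacity in use; >= 1.0 means congested.
    double load() const { return consumed_ / capacity_; }

    // Would be driven by a timer once per simulated interval.
    void resetInterval() { consumed_ = 0.0; }

private:
    double capacity_;   // CPU seconds available per simulated second
    double consumed_;   // CPU seconds reported in the current interval
};

// Agent stand-in that charges a fixed CPU cost per handled packet.
class CpuAwareAgent {
public:
    CpuAwareAgent(NodeCpuMonitor* monitor, double cost_per_packet)
        : monitor_(monitor), cost_(cost_per_packet) {}

    void handlePacket() { monitor_->reportCpuTime(cost_); }

private:
    NodeCpuMonitor* monitor_;
    double cost_;       // CPU seconds charged per packet
};

int main() {
    NodeCpuMonitor monitor(1.0);              // 1 CPU second available per second
    CpuAwareAgent a(&monitor, 0.002), b(&monitor, 0.010);
    for (int i = 0; i < 100; ++i) { a.handlePacket(); b.handlePacket(); }
    std::printf("node CPU load this interval: %.2f\n", monitor.load());
    return 0;
}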

From an architectural point of view it is not clear to me how Agents (the
ultimate source for deciding the CPU processing time required for an
application-specific packet that is sent or received) can change the
behavior of a classifier within the node structure or, even worse, control
a queue/delay object located upstream or downstream.
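
One indirection I was considering (again only a rough standalone sketch with
hypothetical names, not the real ns-2 Connector/Queue API): the Agent never
touches the upstream/downstream object directly; the queue/delay element
simply polls the shared per-node monitor and stretches its per-packet delay
once the reported load exceeds the CPU capacity.

// Standalone sketch, hypothetical names -- not real ns-2 classes.
#include <algorithm>
#include <cstdio>

// Same role as the monitor in the previous sketch, reduced to what the
// delay element needs to read.
class NodeCpuMonitor {
public:
    void setLoad(double l) { load_ = l; }
    double load() const { return load_; }
private:
    double load_ = 0.0; // aggregate CPU load reported by the agents
};

// Delay element that maps CPU overload to extra per-packet delay.
class CpuDelayLink {
public:
    CpuDelayLink(const NodeCpuMonitor* monitor, double base_delay)
        : monitor_(monitor), base_delay_(base_delay) {}

    // Extra delay is added only when the reported load exceeds 100%.
    double packetDelay() const {
        double overload = std::max(0.0, monitor_->load() - 1.0);
        return base_delay_ * (1.0 + overload);
    }

private:
    const NodeCpuMonitor* monitor_;
    double base_delay_;  // processing delay when the CPU is idle
};

int main() {
    NodeCpuMonitor monitor;
    CpuDelayLink link(&monitor, 0.001);       // 1 ms base processing delay
    monitor.setLoad(1.5);                     // node is 50% over capacity
    std::printf("per-packet delay: %.4f s\n", link.packetDelay());
    return 0;
}

The open question for me is where such a shared monitor would live in the
existing node structure, and how the link-level queue/delay objects would
obtain a reference to it.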

Any ideas/hints on where to start searching? Are there any NS examples that
implement such behavior? Do you know of any work in this area?

thanks in advance,
br
Joachim
