The scheme is as follows:

There are two kinds of nodes: the sink and the rest. In an initial
phase, the sink sends many broadcast messages, gradually increasing its
transmission power (TP), so that every node can determine the TP it
needs in order to reach the sink.

Thus, I intended to create a loop that keeps sending the messages,
increasing the TP at each iteration.

    if (strcasecmp(argv[1], "pot_setup")==0) {
      Packet* pkt = allocpkt();
      hdr_myunequal* hdr = (hdr_myunequal*) hdr_myunequal::access(pkt);
      hdr_cmn *cmh = HDR_CMN(pkt);
      for (i=0;i<255;i++) {
        hdr->type=INCR_POT;
        cmh->addr_type()= NS_AF_INET;
        cmh->prev_hop_ = myaddr;
        cmh->direction()= hdr_cmn::UP;
        cmh->next_hop()= IP_BROADCAST;
        send(pkt,0);
      }
      return TCL_OK;
    }

Something similar to this, but with the TP-increasing operations and
possibly a delay inside the loop.


My doubts:
1 - Why does this function, when called from the Tcl script, result in
stack smashing? Must the packets be reallocated in every iteration?

2 - How do I create a delay inside my C++ code, as part of a protocol
operation? I had previously tried the sleep and usleep commands; the
code compiled successfully, but no delay occurred at all.



Thanks in Advance,
-- 
Fernando Henrique Gielow - UFPR - NR2
Computer Science graduation student.