Re: [ns] Large ad hoc simulation using FreeSpace propagation model is way too slow

2008-05-17 Thread Daniel Mahrenholz

Hi Ricardo,

the obvious answer - the number of events to process is very different.
But why?
First of all, the different propagation models result in different
transmission ranges. And greater transmission range results in a larger
number of nodes receiving an event and hence more processing time
required. You can check this by counting the number of events in the
event trace.
Second, when you start your simulation it should print a line containing
distCST_ = 550.0. This is the maximum range one node can affect
another one (carrier sense). If it is unknown you will see a really huge
number. This value is used to optimize the propagation computation
because nodes further away do not need to be considered during the
transmission of a packet. So, larger values will increase computation time.
Third, even if you set the transmission range of different models to the
same value you will get different event counts. The reason is that the
models produce different carrier-sense-to-transmission-range ratios.
That means equal transmission ranges will still result in different carrier
sense ranges, which obviously leads to different behavior of your
simulation.
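
For reference, the textbook forms of the two models are roughly as follows
(ns-2's TwoRayGround implementation additionally falls back to the
free-space form below a crossover distance):

  FreeSpace:     Pr(d) = Pt * Gt * Gr * lambda^2 / ((4 * pi * d)^2 * L)
  TwoRayGround:  Pr(d) = Pt * Gt * Gr * ht^2 * hr^2 / (d^4 * L)

With the same transmission power and receive / carrier sense thresholds, the
d^-2 decay of FreeSpace reaches much further than the d^-4 decay of
TwoRayGround, so every transmission touches far more nodes and generates
far more events.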

Daniel.

Ricardo Ulisses schrieb:
 Hi all,

 I've been doing research on large mobile ad hoc network simulation, so
 far using the Two-Ray Ground radio wave propagation model. I was achieving
 a reasonable time to complete a simulation run with 10k nodes and 3k
 connections between them over 300 seconds of simulation time (about two
 and a half days).

 Now I've switched to the FreeSpace propagation model and I've noticed
 that the time to process the simulation increased dramatically.

 Let me show some small scale network examples:

 1) Simulating 10 nodes, 3 connections between them, 300s simulation time:

 FreeSpace ...... 1m3.232s
 TwoRayGround ... 0m0.500s

 2) Simulating 100 nodes, 30 connections between them, 300s simulation time:

 FreeSpace ...... more than 30 minutes
 TwoRayGround ... 1m11.207s

 I've also made some simulations using the Shadowing model and the time
 spent was very close to the ones obtained when using the TwoRayGround
 model.

 I am really surprised with the high time consumption of the simulation
 run using FreeSpace model as it is a much less complex model than
 TwoRayGround and Shadowing.

 Does anyone have any clue about the reason why this is happening?

 Yes, I've read the FAQ, ns-problems page, and manual, and I couldn't
 find the answer there nor in any other document or website.

 Please, ask me if more information about how the simulation is
 configured is needed.

 Thanks in advance.

 Ricardo J. Ulisses Filho
 =
 Departamento de Sistemas e Computacão - DSC
 Universidade de Pernambuco - UPE
 Recife - Pernambuco - Brazil
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]

   





Re: [ns] Vectors in NS

2008-03-06 Thread Daniel Mahrenholz

Hi,

vector.erase(...) works only on iterators, e.g. 
MyVector.erase(MyVector.begin() + index) instead of MyVector.erase(index). 
Look here for more information: http://www.cplusplus.com/reference/stl/vector/erase.html

Daniel.


SS Mukaka schrieb:
 Dear All

 How do you define and manipulate vectors in C++ code?
 I have written my own protocol but I'm having problems with the vectors.

 Here is how I defined my vector:

 vector<int> MyVector;

 and this is how I manipulate it:

 MyVector.erase(index)
 MyVector.push_back(DataItem)

 This is the error that I get when I try to compile my code (using make)
  error: no matching function for call to 'std::vector<int,
 std::allocator<int> >::erase(int)'
 /usr/lib/gcc/i486-linux-gnu/4.1.2/../../../../include/c++/4.1.2/bits/vector.tcc:110:
 note: candidates are: typename std::vector<_Tp, _Alloc>::iterator
 std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename
 std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp,
 _Alloc> >) [with _Tp = int, _Alloc = std::allocator<int>]
 /usr/lib/gcc/i486-linux-gnu/4.1.2/../../../../include/c++/4.1.2/bits/vector.tcc:122:
 note: typename std::vector<_Tp, _Alloc>::iterator
 std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename
 std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp,
 _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp,
 _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp =
 int, _Alloc = std::allocator<int>]

 Regards

 SS Mukaka
   



Re: [ns] About wireless cards specifications simulation

2008-02-25 Thread Daniel Mahrenholz

Hi,

Gabrial Monty schrieb:
   You have said that :
   To answer your question: NO, it is absolutely not realistic to deduce 
 from a result of 150m transmission range to be in an indoor scenario.

   Do you mean that I cannot assume an indoor range to be 150m, or I cannot 
 assume that an infrastructured network is working in an indoor environment?
   
What I tried to say is: if your transmission range is 150m, you cannot 
assume to be in an indoor environment.
For your question:
a) you cannot assume that the transmission range will be 150m in an 
indoor environment. Usually, when someone gives a transmission range, he 
means that the transmission range can be _up to_ this value. You can 
never assume a minimum range (see the elevator example).
b) An infrastructured network is known to work in an indoor environment.

   The point that I want to get to is that the standard specifies 
 that the maximum transmit power for 802.11a cards in the range 5.725-5.825 
 GHz can reach 800 mW. Following this, can I assume a range of 150m (for 
 example) for a card using 600 mW as its transmit power, regardless of the 
 propagation model - or let's say I am working with the TwoRayGround model - 
 can I assume this, or can one argue that my simulation settings are not real?
   
In a general environment (one you have no specific knowledge about) you 
cannot assume anything (see the elevator). If you define some properties 
of the environment as prerequisites (e.g. we are in an open park area 
with clear line-of-sight and no obstacles within the first Fresnel 
zone) you can assume a transmission range with high confidence (e.g. 
150m +/- 20m). In an indoor environment this is quite difficult because 
the variation is extremely high - something like 50m +/- 50m does not 
help you much. Furthermore, even in a static indoor environment (one 
where everything remains at its position, no people moving, no doors 
opened/closed etc.) the signal strength between a pair of stationary 
WLAN transmitters will not be constant. You will find more information 
and references to other people's work in my thesis 
(http://deposit.ddb.de/cgi-bin/dokserv?idn=980478588) or in the papers 
of my colleagues who continued / extended my work 
(http://wwwivs.cs.uni-magdeburg.de/EuK/forschung/publikationen/index.shtml).

So my advice - start your work with a discussion of the properties of 
the environment, then that of your devices, and finally that of your 
protocol. From all this you can derive the performance of your 
communication / application with reasonable confidence. And don't forget 
- transmission power is a property of your device, but transmission 
range is not. It is a result of the combination of transmitter, 
receiver, environment, and signal encoding.

Daniel.
 Daniel Mahrenholz [EMAIL PROTECTED] wrote:
   Hi,

 Gabrial Monty schrieb:
   
 I have to simulate an infrastructured wireless network behavior using 
 different wireless cards' specifications. I want to test the network 
 performance using 802.11a cards with a high transmit power level (not less 
 than 600 mW). I have chosen the XtremeRange5 card, but what I have realized 
 is that its outdoor range is over 50 km and its indoor range is 150 m. During 
 simulation, if two nodes are further apart than 150 m the link throughput is 0, so 
 is it realistic to assume that I deal with indoor networks? What I mean is that I am 
 dealing with a network topology similar to wireless mesh networks; can I 
 assume this network to be indoor and apply this NIC card's specifications?
 
 Basically there is no difference if you increase the transmission power 
 or use a high gain antenna. So, apart from the transmission power, the 
 sensitivity of the transceiver is what makes the most important 
 difference between two wireless cards.

 Before you proceed with your work you should read something about 
 wireless propagation. Just as an example, an outdoor range of 50km is 
 only possible if the transmitter is placed high enough above ground. 
 Otherwise you will not have a free line of sight and Fresnel zone. If 
 I remember right, for a 50km distance the transmitter needs to be placed 
 about 80m above ground. The environment defines how the transmitted 
 signal is attenuated. Just imagine you are inside a metal elevator - 
 then you probably get an indoor range of 1m.

 To answer your question: NO, it is absolutely not realistic to deduce 
 from a result of 150m transmission range to be in an indoor scenario.

 I suggest that you start by selecting a propagation model that mimics 
 the effects experienced in an indoor environment (e.g. multipath 
 propagation, shadowing, interference ...). If you have such a 
 propagation model, you can start to investigate effects caused by the 
 transmission power, card specification, protocol ... whatever you like.

 Hope that gets you started,
 Daniel.



   




Re: [ns] About wireless cards specifications simulation

2008-02-22 Thread Daniel Mahrenholz

Hi,

Gabrial Monty schrieb:
   I have to simulate an infrastructured wireless network behavior using 
 different wireless cards' specifications. I want to test the network 
 performance using 802.11a cards with a high transmit power level (not less than 
 600 mW). I have chosen the XtremeRange5 card, but what I have realized is 
 that its outdoor range is over 50 km and its indoor range is 150 m. During 
 simulation, if two nodes are further apart than 150 m the link throughput is 0, so 
 is it realistic to assume that I deal with indoor networks? What I mean is that I am 
 dealing with a network topology similar to wireless mesh networks; can I 
 assume this network to be indoor and apply this NIC card's specifications?
Basically there is no difference if you increase the transmission power 
or use a high gain antenna. So, apart from the transmission power, the 
sensitivity of the transceiver is what makes the most important 
difference between two wireless cards.

Before you proceed with your work you should read something about 
wireless propagation. Just as an example, an outdoor range of 50km is 
only possible if the transmitter is placed high enough above ground. 
Otherwise you will not have a free line of sight and Fresnel zone. If 
I remember right, for a 50km distance the transmitter needs to be placed 
about 80m above ground. The environment defines how the transmitted 
signal is attenuated. Just imagine you are inside a metal elevator - 
then you probably get an indoor range of 1m.

To answer your question: NO, it is absolutely not realistic to deduce 
from a result of 150m transmission range to be in an indoor scenario.

I suggest that you start by selecting a propagation model that mimics 
the effects experienced in an indoor environment (e.g. multipath 
propagation, shadowing, interference ...). If you have such a 
propagation model, you can start to investigate effects caused by the 
transmission power, card specification, protocol ... whatever you like.

Hope that gets you started,
Daniel.



Re: [ns] Which is the best Operating System for NS2.27 emulation

2008-02-04 Thread Daniel Mahrenholz

M Lee schrieb:
 Hello:
  I want to know which is the best Operating System for
 NS2 (2.27--2.32), especially
 for emulation.
  Linux Red Hat 9? Linux Debian? FreeBSD?

Thanks!
   
Debian - if you use Debian, you can install the whole emulation 
extension using apt-get / aptitude. Furthermore, NSE has been developed 
and tested on Debian for a long time.

Daniel.



Re: [ns] Something wrong when I patch NS-2.27 with Emulation Extensions

2008-02-04 Thread Daniel Mahrenholz

Hi,

you need to install the kernel headers (or the complete kernel sources)
to compile this emulation extension.

Daniel.

M Lee schrieb:
 Hello:
 I have installed NS-2.27 on Linux Redhat9 and patched it with the NS-2
 Emulation Extensions:
  Distributed Clients Emulation Patch against ns-2 2.27, examples [TGZ:
 http://ivs.cs.uni-magdeburg.de/EuK/forschung/projekte/nse/ns2emu-dstapp.tgz]

  Everything is OK until I type make, then the following message appears:
  .
   /filter_core -I./asim/ -I./qs -o common/scheduler.o
 common/scheduler.cc
 common/scheduler.cc:53:21: asm/msr.h: No such file or directory
 common/scheduler.cc: In member function `void
RealTimeScheduler::sync_cputicks()':
 common/scheduler.cc:1059: `rdtscll' undeclared (first use this function)
 common/scheduler.cc:1059: (Each undeclared identifier is reported only once
 for
each function it appears in.)
 make: *** [common/scheduler.o] Error 1

  Has anyone met this before? What can I do?

  Thanks!
   



Re: [ns] why my TCL file cannot run ..?

2007-08-22 Thread Daniel Mahrenholz

AZHAR MOHD ARIS schrieb:
 The good news is I successfully installed NS2 on my Ubuntu Linux OS.
 The bad news is :D I found an error when executing the command ns
 singlehop.tcl

 [EMAIL PROTECTED] :/usr/local/ns2/projek$ ns singlehop.tcl

 num_nodes is set 3
 invalid command name Propagation/SimpleDistance
 while executing
 Propagation/SimpleDistance create _o24 
 invoked from within
 catch $className create $o $args msg
 invoked from within
 if [catch $className create $o $args msg] {
 if [string match __FAILED_SHADOW_OBJECT_ $msg] {
 delete $o
 return 
 }
 global errorInfo
 error class $...
 (procedure new line 3)
 invoked from within
 new $propType_
 (procedure _o3 line 29)
 (Simulator node-config line 29)
 invoked from within
 $ns_ node-config -mobileIP OFF \
   -adhocRouting NOAH \
   -llType LL \
   -macType Mac/802_11 \
...
 (file singlehop.tcl line 22)
   
To me it seems that you implemented a new propagation model but forgot 
to make it available on the Tcl layer.
Check the following:
- Makefile compiles your new propagation model
- the final binary contains the new model
- the new model is registered with tcl using the right name

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln
Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Movement of nodes should be modifiable during runtime of an ns-2 simulation

2007-08-22 Thread Daniel Mahrenholz

Hi Bjoern,

Schuenemann, Bjoern schrieb:
 I'd like to make a simulation with ns-2 where the movement of the nodes is 
 modifiable by another program during the runtime of the ns-2 simulation. How 
 could I realize this? It seems that modifications of trace and movement files 
 are ignored by ns-2 after the files are read in by the TCL script
I think the problem is that after reading the movement files all events 
that actually control the movement are already in the event queue.

One of my students implemented an external movement control some years 
ago for the emulation mode. Basically he added a listening socket that 
takes simulator control commands from an external program, converts them 
to Tcl code and evaluates them. Then he ran the simulation for an 
infinite time (and sent a stop command from the external controller to 
exit) and could move the nodes around as he liked.
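
For illustration, such an injected command can be as simple as an ordinary
movement call (node name and coordinates made up):

  $node_(3) setdest 150.0 200.0 5.0   ;# move node 3 towards (150,200) at 5 m/s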

But this only works in emulation mode. The reason is that in the 
normal simulation mode time jumps from event to event. And once the 
last movement event in the queue has been processed, the simulator will 
possibly jump straight to the final stop event and quit. So only emulation 
ensures that your simulation time does not run too fast.

I took a quick look but could not find the diploma thesis / code of the 
student. I will spend more time searching if you like. The title of the 
thesis was "Eine dynamische WLAN-Emulationsumgebung auf Basis des 
NS-2", Thomas Kiebel, Diplomarbeit, 2005.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] How can I set different transmission ranges

2007-08-21 Thread Daniel Mahrenholz

Phitsanu Eamsomboon schrieb:
 Dear All
  My name is Phitsanu from Thailand. I am doing research about ad
 hoc networks. In my research I have 100 mobile nodes, but I want 50
 nodes to communicate within 250 m and the other 50 nodes within 500 m.
 How can I set this in Tcl for NS-2?

   
Use antennas with different gains. Change the global settings for the 
antenna gain before you create a node. That means, set it to (for 
example) 2dBi for the first 50 nodes and 8dBi for the next 50. Use 
the propagation.cc utility to calculate the exact parameters (search the 
list for details).

But be aware, the transmission range depends on both the sender and the 
receiver. That means, if you have two kinds of nodes (A and B) you get 3 
different combinations and hence communication ranges (e.g. A-A: 250m, 
A-B: 400m, B-B: 500m).
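
A minimal Tcl sketch of the idea (this assumes the usual $ns_ node-config
setup has already been done; the Gt_/Gr_ values are linear gains, roughly
2dBi and 8dBi, and are only placeholders):

  Antenna/OmniAntenna set Gt_ 1.58
  Antenna/OmniAntenna set Gr_ 1.58
  for {set i 0} {$i < 50} {incr i} { set node_($i) [$ns_ node] }

  Antenna/OmniAntenna set Gt_ 6.31
  Antenna/OmniAntenna set Gr_ 6.31
  for {set i 50} {$i < 100} {incr i} { set node_($i) [$ns_ node] }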

Hope this helps,
Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Is there exist any IRC for ns2

2007-08-17 Thread Daniel Mahrenholz

Scottie schrieb:
 I did a Google search and could hardly find any IRC channel for ns2.
 Regards,
   Scottie
   
4th hit with Google for irc ns-2 : 
http://mailman.isi.edu/pipermail/ns-users/2005-November/052772.html

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] TCP packet size and Wireless channel delay settings?

2007-08-17 Thread Daniel Mahrenholz

Farzaneh Razavi Armaghani schrieb:
   I'm monitoring the TCP throughput over 802.11 in wireless networks in my 
 scenario. 
   When I change the TCP packet size, I cannot see any change in TCP 
 throughput. Is that alright? 
   I'm using the command Agent/TCP set packetSize_ 1460 in my TCL script and 
 I'm calculating the throughput using this equation: (total received bytes in 
 server * 8) / simulation time. Is it correct?
   Also, how can I set the wireless channel delay to 25 microseconds in my TCL 
 script?
Delay on the wireless link is defined by physical laws. However, you 
could increase the error rate to cause retransmissions on the MAC layer. 
Or you could introduce a processing delay in the agent.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Is there a data rate limitation for ns emulate?

2007-08-15 Thread Daniel Mahrenholz

jerry zhao schrieb:
 I want to build a scenario to emulate video streaming.
 My real network topology is:

  A- - - - B- - - - -C
 NS is installed on Computer B.
 The data rate is relatively high (more than 30 Mb/s). There are always
 warnings:  RealTimeScheduler:
 warning: slop ... exceed limit 
 And some packets are lost, although the bandwidth is enough to transmit the video
 data.
 Could anyone tell me if there is a data rate limitation for ns emulation? Any
 advice would be appreciated.
ns-2 defines no limitation. However, ns-2 needs to process data packets 
almost in real-time and hence is limited by the speed of your hardware / 
operating system. If you emulate a wired network ns-2 will perform quite 
well. But for a wireless network performance requirements are much 
stronger (standard ns-2 will not be able to work at all for wireless 
networks in most cases).
But even if your CPU is fast enough, the operating system can introduce 
short but significant delays. To avoid them you should try the following:
- set ns-2 to run with real-time priority
- disable tracing
- use a multi-core / CPU system and pin ns-2 to one CPU / core
- disable other applications that write to the disk
- if you need tracing, buffer output in memory (ram disk)

For high-throughput emulations of wired networks you should consider a 
different emulator like NIST Net.

This should improve your results.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] varying link capacity dynamically

2007-08-14 Thread Daniel Mahrenholz

bhaskar sardar schrieb:
  Is there any way to dynamically change the bandwidth of a wireless link. I
 mean, during first 10 sec of the simulation BW is 1 Mb, for next 10 seconds
 BW is 2 Mb and so on. This is required when a user moves very fast.
   
A wireless link does not have a link bandwidth. Maybe you mean the 
transmission data rate. If so, use the new 802.11 MAC layer with multirate 
support.
If you really need links with a defined bandwidth, you first need to 
implement a bandwidth management / admission control, which is quite 
difficult in wireless networks.

Regards,
Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Changing NS Unit disk model of radio range

2007-08-14 Thread Daniel Mahrenholz

Hi,

Faisal Aslam schrieb:
 I am using cmu extension to simulate a wireless-sensor protocol. I 
 believe NS-2 only support unit disk transmission range model. However, 
 unit disk model is not realistic.
100% true.
  I wish to have model that has following
 (1) Each node selects its transmission range (say c) using uniform 
 distribution from say [a, b].
   
Simply modify the antenna gain or transmission power before creating a node.
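
A minimal Tcl sketch of that per-node trick (variable names and the power
bounds are placeholders; to hit an exact range interval [a, b] you would
first compute the matching Pt_ values with the propagation utility):

  for {set i 0} {$i < $numNodes} {incr i} {
      # draw a uniform transmission power for this node
      Phy/WirelessPhy set Pt_ [expr 0.1 + 0.2 * rand()]
      set node_($i) [$ns_ node]
  }
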
 (2) Each node randomly selects few nodes near the border of circular 
 transmission range to be out of range. It is because in practice 
 transmission range is not exactly circular.
   
You will need to implement (maybe there is already an implementation) a 
directional antenna. Also using the Shadowing propagation model 
introduces some probabilistic effects.

To select particular nodes that are not in range you could implement a 
new propagation model that uses some kind of lookup table to determine 
the reception of a packet.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Memory problems

2007-08-14 Thread Daniel Mahrenholz

Hi,

Ángel Cuevas Rumín schrieb:
 I am having some problems with my simulations.
 I am generating quite a lot packets and at some point during the simulation,
 it finishes with this messages in the screen:

   terminate called after throwing an instance of
 'std::bad_alloc'
   what():  St9bad_alloc


 It seems there is not more free space to allocate new entities.

 I have tried to free memory by freeing packets (Packet::free(p)). I have
 also tried some other solutions but they didn't work.

 I would appreciate any kind of help from someone who had the same problem in
 the past or someone who knows which is the problem.
   
For any memory problem I suggest using valgrind (http://valgrind.org).
It has saved my day several times.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] advice (relatime protocol implementation on ns-2)

2007-08-10 Thread Daniel Mahrenholz

Hi,

koleti suresh schrieb:
 Please advise me on how to implement a real-time protocol (it should work on Linux) 
 in ns-2.
 Can anyone tell me in which applications we will use ns-2?
 Let me confirm whether this tool is used for real-time implementation of a 
 protocol or not.
 My basic doubt is: I want to implement a new protocol for Linux. Can I 
 use this tool for development purposes?
   
If you try to implement a new protocol for Linux and need to test it in 
ns-2, our extension may be a solution for you:
http://ivs.cs.uni-magdeburg.de/EuK/forschung/projekte/gea/index.shtml
http://ivs.cs.uni-magdeburg.de/EuK/forschung/publikationen/pdf/2005/gea.pdf

If you find this approach suitable for your needs, contact me to get the 
extension.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] help! how to increase the communication range to 1600 m

2007-08-08 Thread Daniel Mahrenholz

Hi,

adjusting transmission power and thresholds is not sufficient for such a 
long distance. You also need to adjust the MAC timeout values.
Citing from my own mail with the subject "Re: [ns] MAC/802.11 STA error on 
ACK":
-
The Linux driver for the madwifi cards
includes a utility to calculate all relevant timeout values. Maybe it is
a starting point for you. If you do not have access to a madwifi card I
could ask a colleague for assistance.

Look here for some information:
http://forums.wi-fiplanet.com/showthread.php?t=6488


Faisal Aslam schrieb:
 Read the section on transmission models in the NS-2 manual. You should 
 check which transmission model (e.g. FreeSpace, TwoRayGround, Shadowing) 
 is in use. You should try to adjust the RXThresh_ using threshold.cc as 
 described in the NS manual.

 regards,
 Faisal

 Phitsanu Eamsomboon wrote:
   
 My name is Phitsanu from Thailand, I use ns-2 to do my research.
 May I trouble you with a question: I try to increase the transmission
 range to 1600 m by increasing Pt, but it still can only communicate within a 500 m range.

 How can I increase the transmission range to 1600 m if I use the TwoRayGround model?
 How can I calculate this? Please advise!
 

Regards,
Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] shutting down link or node with wireless topology

2007-06-27 Thread Daniel Mahrenholz

Hi,

[EMAIL PROTECTED] schrieb:
 I am a new ns user. I am trying to simulate a wireless topology using OLSR
 as the routing protocol. I need to simulate a link failure for a short
 period of time to study the reaction of OLSR to the failure.
In a wireless scenario you do not have links, so you cannot shut them 
down. One option is to simply move a node out of the transmission range 
of another. This may become difficult for a large number of nodes. You 
could also extend the node to implement something like a MAC filter so 
that a node simply ignores messages from individual sources. But I'm not 
aware of any such solution currently existing. You could also modify the 
propagation model to create very specific transmission failures, e.g. 
drop all packets from a specific node (node failure).

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] How MAC finds out that there is no more traffic?

2007-06-18 Thread Daniel Mahrenholz

Hai T. Vu schrieb:
 I am working on a MAC protocol for wireless networks. Let's say I have node_(1)
 with CBR traffic sending to node_(0). The CBR traffic will be like this:
 start at 1.0, stop at 2.0, (re)start at 3.0 and stop again at 4.0 seconds.
 My question is: at 2.0, how does the MAC find out that the traffic is indeed
 done? For example, if node_(2) is a mobile phone and the CBR traffic is a
 voice call, then at 1.0 it needs to set up the call and after 2.0 seconds it
 needs to tear down the call.

 If you know how to do this, please help me. I appreciate any help.
   
Usually a MAC will not care about calls etc., it only handles individual 
data packets. In your terms, the MAC layer knows that there is no more 
traffic if its input buffer is empty. If you need a MAC layer that is 
aware of the duration of a call you need to implement some cross-layer 
interaction, i.e. the layer handling the voice connection needs to pass 
this information down to the MAC layer, e.g. to (de)allocate resources 
on the medium.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] help required regarding analysis of tracefile

2007-06-01 Thread Daniel Mahrenholz

Hi,

harpreet schrieb:
   
 Hi.
 I want to analyse a trace file which looks (not exactly) like the data given 
 below.
 Here is the NS Manual's example:
 + 1.84375 0 2 cbr 210 --- 0 0.0 3.1 225 610
 - 1.84375 0 2 cbr 210 --- 0 0.0 3.1 225 610
 r 1.84471 2 1 cbr 210 --- 1 3.0 1.0 195 600
 r 1.84566 2 0 ack 40 --- 2 3.2 0.1 82 602
 + 1.84566 0 2 tcp 1000 --- 2 0.1 3.2 102 611
 - 1.84566 0 2 tcp 1000 --- 2 0.1 3.2 102 611
 r 1.84609 0 2 cbr 210 --- 0 0.0 3.1 225 610
 + 1.84609 2 3 cbr 210 --- 0 0.0 3.1 225 610
 d 1.84609 2 3 cbr 210 --- 0 0.0 3.1 225 610
 - 1.8461 2 3 cbr 210 --- 0 0.0 3.1 192 511
 r 1.84612 3 2 cbr 210 --- 1 3.0 1.0 196 603
 + 1.84612 2 1 cbr 210 --- 1 3.0 1.0 196 603
 - 1.84612 2 1 cbr 210 --- 1 3.0 1.0 196 603
 + 1.84625 3 2 cbr 210 --- 1 3.0 1.0 199 612

 OK, the last field is the packet identifier, right? And it's unique.
 But in line 1 packet 610 is enqueued and in the next line dequeued. Then in line 7 
 packet 610 is received again, enqueued, and dropped.
 How can this happen? Is the packet identifier not unique?
   
Line 1: packet #610 enqueued on node 0
Line 2: packet #610 dequeued on node 0
Line 7: packet #610 received on node 2 from node 0
Line 8: packet #610 enqueued on node 2 (now with new destination / next
hop 3)
Line 9: packet #610 dropped

If you receive a packet, the receiver is still the destination.

Hope this clarifies your problem.
Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] Ns2 with threads

2007-05-02 Thread Daniel Mahrenholz

Hi,

Jose M.Herrera schrieb:
 A question:
 does anyone know if ns-2 supports threads?
Usually not - but you can get a version that uses threading internally. 
The question is what you need threading for.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] how to compute the transmission range

2007-04-18 Thread Daniel Mahrenholz

Mohammed Abu Hajar schrieb:
 Is there any utility in NS-2 that computes the transmission range of a 
 wireless node? Like in the glomosim simulator, where you type at the command line 
 radio_range config.in and it gives the range in m, like 375m.
   
Look in the indep-utils/propagation directory.

But keep in mind - in reality you do not have anything like a discrete 
transmission range. Instead you have regions with different receive 
probabilities.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] Lucents WaveLAN propagation model

2007-04-18 Thread Daniel Mahrenholz

Mohammed Abu Hajar schrieb:
   How can I implement (write) Lucent's WaveLAN propagation model in an ns-2 tcl 
 script file? Is it like: 
   set opt(prop)   Propagation/TwoRayGround, or
   set opt(prop)   Propagation/FreeSpace, or
   set opt(prop)   Propagation/Shadowing?
Don't mix the modelling of a transmitter with the modelling of the 
signal propagation.

The model of the card (see answer from Ran Ren) defines what signal is 
generated (frequency, power), the antenna characteristics (gain), and 
what signals the card is able to receive (*Thresh).

The propagation model defines how the signal changes (signal strength 
degradation, noise, etc.) between the transmitter and the receiver.

If you have more specific questions, don't hesitate to ask me.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] strange bug in ns2 - halted forever (more details) ?

2007-04-18 Thread Daniel Mahrenholz

tdinhtoan schrieb:
 I got the following bug in ns2: I run a Tcl file, after running for a
 while, it is halted forever (of course it is expected to run more), and I
 could not get the results at all.

 There is no infinite loop in my code. My simulation is a wireless mesh  
 network topology consisting of several nodes, and I run UDP flows between  
 nodes. I compute the throughput of each node and output them to the screen  
 every 5ms. The bug is like this: after outputting the results several times 
 (say 5 times, for example), it halts and does not output more 
 results, although I programmed it to output the results 10 times. Sometimes, with 
 some topologies, it works as expected - running to the end - and 
 sometimes with other topologies, it does not work.
   
I don't know this bug, but I suggest the following:
1. Check if you flush the output after you write your statistics (see the 
small example after this list). This is a mistake I made several times.
2. Look into the trace file (enable it if necessary) to see if the simulation 
generates events during the time it seems to be halted.
3. Run the simulation inside a debugger and break it when it hangs to 
see what's going on. Maybe you managed to create an infinite loop of 
events triggering each other over and over again.
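
A minimal sketch of point 1 (the file handle and variable names are placeholders):

  puts $statFile "[$ns_ now] $throughput"
  flush $statFile   ;# without this the line may sit in the stdio buffer while ns-2 keeps running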

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.



Re: [ns] Lucents WaveLAN propagation model

2007-04-18 Thread Daniel Mahrenholz

Mohammed Abu Hajar schrieb:

   Yes, really I want to simulate the performance of the DSR routing protocol using 
 NS-2. I did that and compared the results I got with a published paper called 
 "Performance Comparison of Two On-Demand Routing Protocols for Ad Hoc 
 Networks", but unfortunately there were some differences in the figures 
 related to: Normalized Routing Load, Average End-to-End Delay, Packet 
 Delivery Ratio. I am using the same parameters used in the paper, but there 
 is a statement mentioned in the paper that says:
   
You may use the same parameters - but do you use the same version of 
ns-2? Small code changes can have a significant effect.
   The radio model uses
   characteristics similar to a commercial radio interface,
   Lucent’s WaveLAN [14, 15]. WaveLAN is modeled as a
   shared-media radio with a nominal bit rate of 2 Mb/s and a
   nominal radio range of 250 m.
   I didn't understand how to write these parameters in my TCL script file, 
 so I think the differences in my results are related to this point.
   
If I remember right, these are the default values included in 
tcl/lib/ns-default.tcl or ns-node.tcl. Look there or in the tutorials 
mentioned in the ns-2 wiki for examples of how to set up wireless nodes. If you 
use a current ns-2 version, my guess is that the code makes most of the 
difference.
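
For orientation, the classic 914 MHz WaveLAN-style defaults that most
tutorials use look roughly like this (check the ns-default.tcl of your own
release, the values may differ):

  Phy/WirelessPhy set freq_ 914e+6
  Phy/WirelessPhy set bandwidth_ 2e6
  Phy/WirelessPhy set Pt_ 0.28183815
  Phy/WirelessPhy set RXThresh_ 3.652e-10   ;# ~250m with TwoRayGround
  Phy/WirelessPhy set CSThresh_ 1.559e-11   ;# ~550m carrier sense
  Phy/WirelessPhy set L_ 1.0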

Personally, I never used this setup because a) the Lucent WaveLAN 
adapter has significant differences to current WiFi cards and b) all 
models that include a fixed communication range deliver results not 
applicable to real-world scenarios / applications. My personal advice - 
at least use the Shadowing propagation model. It is not perfect, but it 
confronts you with many of the problems you have to cope with in the real world.

More about this topic and the setup used for my research can be found in 
my thesis at 
http://diglib.uni-magdeburg.de/Dissertationen/2006/danmahrenholz.htm

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] how to pass PHY layer information to upper layers?

2007-04-12 Thread Daniel Mahrenholz

sri_seeta_ram schrieb:
 I want to pass the Received Power of the packet from PHY layer to the
 protocol agent.
 how can i achieve this in ns2?

 I found the Received power in wirelessphy.cc and mac-802_11.cc i.e PHY and
 MAC layers.
 how can i pass this info to LL and then Routing agent...?
   
A Packet in ns2 contains all configured headers in parallel. So, if you 
can access information on one layer (e.g. PHY), you can also access it 
on other layers. If the information is not contained in a header you 
need to add a new header to the packet structure. In this header you can 
store your information and pass it around.

Daniel.

-- 
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] MAC/802.11 STA error on ACK

2007-03-30 Thread Daniel Mahrenholz


[EMAIL PROTECTED] schrieb:

I sent CBR packets from node 0 to node 1. The distance between the two
nodes is changing. I found that when the distance is more than 700m, the
transmitter always seems to drop the ACK coming back from the receiver and
gives an error state of STA. When the distance is below 500m,
everything is fine. I attached some trace below.


Is this caused by the long propagation delay of 700m? If so, would changing
the retransmit timeout solve the problem? How can I do that? Or is there
any other suggestion as to the reason?
  

Increasing the retransmission timeout is not enough. You will need to
increase other timeouts as well. The Linux driver for the madwifi cards
includes a utility to calculate all relevant timeout values. Maybe it is
a starting point for you. If you do not have access to a madwifi card I
could ask a colleague for assistance.

Look here for some information:
http://forums.wi-fiplanet.com/showthread.php?t=6488

Regards,
Daniel.

--
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] Doubt........plz help me

2007-03-30 Thread Daniel Mahrenholz


Shailesh Gamit schrieb:

hello to every ns user,
 I have a doubt: in an ad hoc network the links are wireless, so
when we send packets using any protocol, i.e. AODV or DSR,
the receiving is done at the node which is specified in the destination
address. But if we consider a real scenario of wireless links, then all the
packets sent are always BROADCAST and received by all the nodes. In
simulation it is not the same.
  
You are wrong. There is nothing like a wireless link. All nodes are 
attached to a Channel object that is a broadcast medium. If a node 
transmits a packet, the simulator calculates the receive power for all 
nodes attached to the channel. If the signal is strong enough, the node 
will receive the packet. But if the destination address does not match, 
it will simply ignore it.


Regards,
Daniel.

--
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] definition of sending / receiving threshold per mobile node

2007-03-08 Thread Daniel Mahrenholz


Hi Markus,

Leitner, Markus schrieb:

I am currently working on a project dealing with simulation of mobile
nodes. 
To describe relevant network topologies I am searching for a possibility

to define the receiving threshold individually per node (e.g. mobile
node A has a receiving threshold of X, while all other nodes have a
receiving threshold of Y).
 
I know how to set the receiving threshold in principle via RXThresh_ but

I unfortunately did not find any way of specifying this per node.
 
Is anybody aware how to implement this?
  
I don't know if the following works for the threshold (it did for the 
antenna gain). Just give it a try. Set the threshold before you create a 
node. The node constructor will copy the value and you can change it 
later without changing the properties of already existing nodes. Maybe 
this is a bug - but we used it as a feature.
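
A minimal Tcl sketch of that trick (the threshold values are placeholders -
compute real ones with the utility in indep-utils/propagation/):

  Phy/WirelessPhy set RXThresh_ 3.652e-10
  set node_(0) [$ns_ node]            ;# node 0 keeps this threshold
  Phy/WirelessPhy set RXThresh_ 1.0e-9
  for {set i 1} {$i < $numNodes} {incr i} {
      set node_($i) [$ns_ node]       ;# all other nodes get the new value
  }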


Daniel.

--
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

Web: www.rt-solutions.de

rt-solutions.de
networks you can trust.




Re: [ns] what is the transmission range fo mobile nodes

2007-02-08 Thread Daniel Mahrenholz


Satish B schrieb:

I want to know what the transmission range (i.e. up to what maximum distance a node can 
send/receive packets with other nodes) of the mobile nodes in ns2 is. If I want to 
set a new range, how do I set it? Please let me know.
Thank you.
  

This question has been answered multiple times.

Transmission range is NOT a parameter you can set. It is the result of
the signal propagation. To change the transmission range you can adjust
the transmission power, receive thresholds, antenna gains, and
propagation parameters. The propagation.cc utility
(indep-utils/propagation/) helps you to calculate these parameters.
Additionally, you should study some hardware specs to get realistic values.

Daniel.

--
Dr.-Ing. Daniel Mahrenholz
rt-solutions.de GmbH
Oberländer Ufer 190a
D-50968 Köln

rt-solutions.de
networks you can trust.




Re: [ns] threads in NS-2

2006-12-08 Thread Daniel Mahrenholz

Savitri P. Pandharikar schrieb:
Is it possible to create threads in NS-2? I am using the Linux platform for 
 NS-2.
   
In general - yes. But for what purpose? You cannot run the simulation 
with multiple threads. But you can off-load certain tasks like output 
compression to separate threads.

All simulations are event-based and do not need threads.

Daniel.



Re: [ns] Compilation Error ns2.1b8a

2006-11-15 Thread Daniel Mahrenholz

Mohammad Haseeb Zafar schrieb:
 Hi
 I got following error while compiling ns2.1b8a. Please help.
 Haseeb


 c++ -c  -DTCP_DELAY_BIND_ALL -DNO_TK -DNIXVECTOR -DTCLCL_CLASSINSTVAR  
 -DNDEBUG
 -DUSE_SHM -DHAVE_LIBTCLCL -DHAVE_TCLCL_H -DHAVE_LIBOTCL1_0A7 -DHAVE_OTCL_H 
 -DHAVE_LIBTK8_3 -DHAVE_TK_H -DHAVE_LIBTCL8_3 -DHAVE_TCL_H  -DHAVE_CONFIG_H 
 -I. -I/bonhome/haseeb/work/ns-allinone-2.1b8a/tclcl-1.0b11 
You should provide more information, especially the version of gcc/g++ 
you are using. NS2.1b8a is very old. So, you will need an old gcc / 
binutils to compile it.

Daniel.



Re: [ns] Simulating channel Hopping in ns2

2006-11-15 Thread Daniel Mahrenholz

Hi,
gaurav deshpande schrieb:
 Hi all,

 We intend to implement channel hopping using ns2 simulator. Has anyone tried 
 to implement hopping in ns2?

 Our approach: 

 We found that each channel (class WirelessChannel) keeps a linked list of all 
 nodes listening on that channel.  For a mobilenode to hop we wrote code to 
 dynamically  add and remove the node from this list. However, on running ns 
 we get a segmentation fault. The code we wrote looks something like:


 MobileNode* n = node that hops;
 WirelessChannel new = next channel to hop to;
 WirelessChannel current = current channel of node n;

 new.addNodeToList(n);
 current.removeNodeFromList(n);


 It would be great if someone shared their experience in implementing channel 
 hopping in ns2.
   
I cannot help with the channel hopping. But to find problems with linked 
lists etc. in the C++ part of the code it is always a very good idea to 
run ns-2 using valgrind. Simulations will take a lot more time to run, 
but every strange or wrong memory access will be recognized.

Daniel.



Re: [ns] How can i measure the bandwidth in use?

2006-10-24 Thread Daniel Mahrenholz

wang laye schrieb:
 Hi,
    I am simulating a QoS routing protocol. In this protocol, the
 available bandwidth of a node is calculated according to:
 Available Bandwidth = Total Bandwidth - Reserved Bandwidth - Bandwidth In Use
 My question is: in C++, for a mobile node, how can I measure (or
 calculate) the bandwidth in use?
 Any suggestion is appreciated!
   
You should explain your network a little more - are you using APs or an 
ad hoc network?

Determining the available bandwidth in a wireless network is quite 
complicated because it depends on the behavior of the nodes around you 
(even if they are out of communication range).

I used the following approach: instead of bandwidth I used air time 
(to account for different transmission speeds), that is the time a node 
utilizes the wireless channel in its region for a transmission. To 
reserve air time I used a beacon-based multi-hop reservation protocol. 
Additionally, to measure lost air time, that is air time than cannot 
be used because of interference from nodes out of communication range 
(or e.g. a microwave oven) I measured the time a packet to be 
transmitted spends in the output buffer of the transmitter. Changes in 
this time indicate a change in the available air time.

You see, the task you are going to solve cannot simply be accomplished 
by calculating a value out of available variables. If you have more 
questions, don't hesitate to ask me.

Daniel.



Re: [ns] Need tunctl command for cygwin

2006-10-19 Thread Daniel Mahrenholz

Emin Gencpinar schrieb:
 We installed ns-2 emulation (nse) both over Linux and (by cygwin) Windows
 machines. NSE uses tunctl command for tun / tap interfaces to manage the
 network. We have successfully installed the tunctl command source code for
 Linux under sbin directory, but we could not find any tunctl command source
 code valid for cygwin. Compilation error for the same source code in cygwin.
   
tunctl is a tool to manage the TUN/TAP driver of the Linux kernel. There 
is a TUN/TAP driver for Windows developed by the OpenVPN folks, but I 
suspect that it is configured in a different way.

If you are using the nse extension for wireless emulation you should 
give up on Cygwin; you have to run it on Linux. But it is possible to 
integrate Windows clients in such a setup.

Daniel.



Re: [ns] Multiple source files

2006-10-11 Thread Daniel Mahrenholz

Keita Rose schrieb:
 I am trying to run a mobile scenario using multiple source files. How do I go 
 about implementing that in NS-2, if it is at all possible?
   
I'm not sure I got you right. But I did the following in order to 
simulate different topologies with the same set of applications. I first 
build a core simulation script, e.g. one that creates the simulator 
objects, configures everything, etc. This core script then sources the 
script that creates the topology (given by a command line parameter). The 
remaining part of the core script configures the 
applications and runs the simulation.

Now, in order to investigate different topologies, I wrote a control 
script that iterates over the list of available topologies and calls the 
simulation script with the name of the topology script as a parameter. 
You should be able to do something similar.
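
A minimal sketch of that structure (all names are placeholders):

  # core.tcl
  set topoFile [lindex $argv 0]
  set ns_ [new Simulator]
  # ... channel / node-config / node creation ...
  source $topoFile        ;# places the nodes / defines their movement
  # ... application setup ...
  $ns_ run

The control script can then be a simple shell loop, e.g.
for t in topo*.tcl; do ns core.tcl $t; done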

Daniel.



Re: [ns] how to detemine channel used by 802.11 nodes

2006-08-10 Thread Daniel Mahrenholz

m w schrieb:

  I was just wondering how we can know which channel an 
 802.11 node is using. Is it possible to find this information about the 
 channel in use somewhere? 802.11b uses three channels, so which channel a 
 specific node is using - how can we get to know that? Please help me at your 
 earliest convenience. I am pressed for time.
   

Wireless communication in ns-2 uses one or more channel objects. All
channel objects are independent from each other. That means, if you use
different channels you will not have cross-channel noise, interference
and such things. Some time ago someone asked how to model such
cross-channel effects and I proposed to derive a new channel object that
knows about its neighbor channels and then introduces such errors. But I
don't know how this evolved. If this is of interest to you, you should
search the mailing list archive and contact the author.
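
A minimal sketch of how nodes end up on separate channel objects (the same
pattern as in tcl/ex/wireless-mitf.tcl; node names are placeholders):

  set chan_1 [new Channel/WirelessChannel]
  set chan_2 [new Channel/WirelessChannel]
  $ns_ node-config -channel $chan_1 ;# ...plus the usual other options
  set n0 [$ns_ node]
  set n1 [$ns_ node]                ;# n0 and n1 share channel 1
  $ns_ node-config -channel $chan_2
  set n2 [$ns_ node]                ;# n2 is on channel 2, isolated from n0/n1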

Daniel.



Re: [ns] TX Range Problem Still not Clear

2006-07-11 Thread Daniel Mahrenholz

Abdulaziz Barnawi schrieb:
 I have been looking for a solution on how to set a TX range for a Base 
 Station that is different (e.g. much larger) from that of the other nodes in the network. 
 Unfortunately all the solutions I have tried (including ones already posted 
 on this list) set the same TX range for all nodes, including the BS.
   

If you change the node-config during the creation of nodes (e.g. by 
setting different antenna gains) you will get different transmission 
ranges. You can safely assume that the antenna of the BS has a higher 
gain than that of the mobile stations.

Daniel.



Re: [ns] Performing several runs of same simulation

2006-07-10 Thread Daniel Mahrenholz

[EMAIL PROTECTED] schrieb:
 we are trying to perform several runs of the same simulation and can't find a hint in
 the manual on how to manage this.
 Is there a possibility to do it with a for (; numberOfRuns;) loop or another 
 tcl method?
 Or should we simply write a shell script calling our tcl file several times?
   
Just call your tcl script several times. But do not forget to initialize 
the random seeds to different values. Otherwise you will get the same 
results every time.
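
For example, a minimal sketch that takes the seed from the command line
(script and variable names are placeholders):

  # run.tcl
  set seed [lindex $argv 0]
  $defaultRNG seed $seed
  # ... rest of the simulation ...

and then call it as: ns run.tcl 1, ns run.tcl 2, and so on.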

Daniel.



Re: [ns] tuning up wireless phy layer

2006-06-28 Thread Daniel Mahrenholz

Balazs Kovacs schrieb:
 I am simulating some communication performance with 802.11. My simulations 
 always fail to show the results I expect because the wireless-phy seems to 
 be overloaded. Is there any possibility to tune the throughput? I set 
 the data rate and basic rate to 54 Mbps, but this causes only a very slight improvement, 
 while setting these rates to higher values makes no sense and gives no better 
 results. I have 2.29.

   
You should describe your simulation setup in more detail, especially 
what propagation, antenna, and threshold values you are using.

 Increasing simulation area and number of nodes cause much worse results. 
 With AODV on a 1000x1000 area with 140 nodes and 30 parallel CBR flows 
 (64byte*10/sec) I get more than 10 percent packet loss.
   
10% packet loss sounds quite normal in such a setup. That many nodes 
grouped so closely together cause a lot of interference, collisions, etc.
Btw - you will be shocked to see how badly AODV performs in a real-world 
setup.


Daniel.



Re: [ns] Warning everyone is ignoring (please use -channel)

2006-06-20 Thread Daniel Mahrenholz

Amer Filipovic schrieb:
 Can anyone tell me how to get rid of that warning at the beginning of 
 the wireless simulations

 warning: Please use -channel as shown in tcl/ex/wireless-mitf.tcl

 It is getting on my nerves :P
   
As you can read - modify each test script to use the wireless channel as 
shown in the example file (the explanation is in the file; a three-liner, I 
suppose).

I agree that this warning is annoying - so if you can offer some time for 
the modification, please provide a patch and I will spend some time to 
integrate it into the CVS version (if the other developers don't have 
any objections).

Daniel.



Re: [ns] Mobile nodes: radius

2006-06-20 Thread Daniel Mahrenholz

Andrea M schrieb:
 Hello, I found this string in NS Manual, how can I use it?

 [quote]$mobilenode radius r

 The radius r denotes the node's range. All mobilenodes that fall within
 the circle of radius r with the node at its center
 are considered as neighbours.[/quote]

 I used it like:
 $node_(0) radius 50

 but it does not seem to work.
   

This must be a very old piece of code and does not make any sense at 
all. In practice there is nothing like a transmission radius. Whether a node 
receives a packet depends on the propagation model. If you need a 
specific transmission radius (which btw is not really realistic) you can 
calculate the corresponding propagation parameters using the 
propagation.cc helper tool. Better set up the propagation parameters to 
reflect the conditions in the simulated environment and see what happens.

Daniel.

 



Re: [ns] packet loss rate on ad hoc network

2006-06-19 Thread Daniel Mahrenholz

ns user schrieb:
  
I'm a beginner with ns2.29, and I'm trying to calculate the number of 
 collisions (or the packet loss rate) at a node in an ad hoc network.
  
You need to enable MAC tracing. Then go through the generated trace file 
and look for lines where packets are dropped because of a collision. 
This should look like:

d .   COL 

To get the packet loss rate you should count send and received packets 
on the MAC or AGT layer that are not broadcast. The lines look like:

s  .. MAC ...
r  .. MAC ...

or

s  .. AGT ...
r  .. AGT ...

For broadcast packets (destination -1) you first need to define what a 
lost packet is.

You will find the exact format in the ns-2 documentation (trace file format).
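
A minimal Tcl sketch of the counting step (the file name is a placeholder;
the layer field is matched loosely and broadcasts are not treated specially,
so refine it along the lines described above):

  set f [open out.tr r]
  set collisions 0; set macSent 0; set macRecv 0
  while {[gets $f line] >= 0} {
      set ev [string index $line 0]
      if {$ev == "d" && [string match "*COL*" $line]} { incr collisions }
      if {$ev == "s" && [string match "* MAC *" $line]} { incr macSent }
      if {$ev == "r" && [string match "* MAC *" $line]} { incr macRecv }
  }
  close $f
  puts "collisions: $collisions  MAC sent: $macSent  MAC received: $macRecv"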

Daniel.



Re: [ns] Simulating an embedded ad-hoc wireless protocol

2006-05-29 Thread Daniel Mahrenholz

Jeff,

you tried to compile GEA in the wrong order. You have to:
1. install GEA using
./configure
make
make install
in the GEA source directory.
2. compile ns

Step 1 will compile and install the native GEA version. This is not 
required for ns-2 at runtime. But you need the different include files 
that usually will be installed in /usr/local/include/gea. If you need 
them to be installed in a different directory you can call a 
./configure --prefix=<install dir>. This should be a directory that 
ns-2 uses during its compilation. Otherwise you have to modify the ns-2 
Makefile.

Daniel.

Jeff Schwentner schrieb:

 1) I applied ns_2.28-4.diff to ns_2.28.orig, and had an error when 
 trying to compile. When I attempted a make after configure, I got the 
 following error:

 $ make

 c++ -c -DTCP_DELAY_BIND_ALL -DNO_TK -DTCLCL_CLASSINSTVAR -DNDEBUG 
 -DUSE_SHM -D

 HAVE_LIBTCLCL -DHAVE_TCLCL_H -DHAVE_LIBOTCL1_9 -DHAVE_OTCL_H 
 -DHAVE_LIBTK8_4 -DH

 AVE_TK_H -DHAVE_LIBTCL8_4 -DHAVE_TCL_H -DHAVE_CONFIG_H -DNS_DIFFUSION 
 -DSMAC_NO

 _SYNC -DCPP_NAMESPACE=std -DUSE_SINGLE_ADDRESS_SPACE -Drng_test -I. 
 -I/usr/ns-al

 linone-2.29/tclcl-1.17 -I/usr/ns-allinone-2.29/otcl-1.11 
 -I/usr/ns-allinone-2.29

 /include -I/usr/ns-allinone-2.29/include -I/usr/include/pcap -I./tcp 
 -I./sctp -I

 ./common -I./link -I./queue -I./adc -I./apps -I./mac -I./mobile 
 -I./trace -I./ro

 uting -I./tools -I./classifier -I./mcast -I./diffusion3/lib/main 
 -I./diffusion3/

 lib -I./diffusion3/lib/nr -I./diffusion3/ns -I./diffusion3/filter_core 
 -I./asim/

 -I./qs -I./diffserv -I./wpan -o gea/gea.o gea/gea.cc

 gea/gea.cc:1:21: gea/API.h: No such file or directory

 gea/gea.cc:3: error: `gea' has not been declared

 gea/gea.cc:3: error: expected constructor, destructor, or type 
 conversion before

 GEA

 gea/gea.cc:3: error: expected `,' or `;' before GEA

 make: *** [gea/gea.o] Error 1

 2) It looks like the patched version is missing a file. I then tried 
 using gea source (libgea_1.1) instead of gea directory of the patch, 
 and got the following error when performing make:

 $ make

 make: *** No rule to make target `gea/gea.cc', needed by `gea/gea.o'. 
 Stop.

 I’m obviously doing something wrong here, but I don’t know what. What 
 steps are required to compile ns with gea?




Re: [ns] Simulating an embedded ad-hoc wireless protocol

2006-05-28 Thread Daniel Mahrenholz

Hi Jeff,

Jeff Schwentner schrieb:
 So if I understand correctly, I would embed our custom packets in UDP
 packets.  This would include the RTS/CTS/ACK MAC packets in addition to the
 actual data packets.  The MAC layer, as far as ns-2 is concerned, would just
 be a pass-through (not sending its own RTS/CTS).
   
Yes - at the moment this would result in all packets being encapsulated 
into UDP packets. In our case we have some kind of extended MAC layer. 
We do not replace the standard 802.11 MAC, but we do not use RTS/CTS and 
use only broadcast communication, and so no ACKs. Currently we do not have 
the pass-through MAC because our protocols are intended to work on 
COTS hardware. But for you this probably will not work if you want to 
replace/modify the whole MAC behavior and its timing.
 Do you have an example that I can compile and run with ns-2 (I'm using 2.29
 on Cygwin)?  In either case, I would greatly appreciate any documentation
 you can point me to get started!
We have some code examples in the source code archive in our repository. 
So far we have not tried to compile GEA on Cygwin, but there should be no 
problem. However, you need to use the modified ns-2 (available from our 
repository).

Daniel.




Re: [ns] Simulating an embedded ad-hoc wireless protocol

2006-05-19 Thread Daniel Mahrenholz

Hi,

Jeff Schwentner schrieb:
 Has anyone simulated a mobile node using only send/receive functions with
 ns-2?  

 I have an ad-hoc wireless protocol that has been implemented and fielded
 in an embedded system.  I would like to simulate it for performance
 benchmarking.  My problem is that the code is written in C and it has
 its own custom implementation for the MAC, routing protocol, queueing,
 etc.  I have just started looking into ns-2, and from what I understand,
 it requires the MAC, routing protocol, and message queue to be in
 separate classes (inheriting from Agent, Mac, etc.).

 Is it possible to use ns-2 to simulate a node at the transport layer,
 and treat the rest as a black box?  If not, do you have any
 suggestions/comments for simulating this firmware?
   

We are using our native protocol implementations inside ns-2. To do so
we implemented a small adaptation library called GEA. Maybe this could
be of help for you, but it would require some modifications to your code
(depending on the way the firmware is implemented). There is also a
project/extension that allows you to use the native TCP/IP stacks of
different operating systems in ns-2 (called network cradles). I do not
know more about this, but you will find a link on the Contributed Code
page in the ns-2 wiki. Maybe this could be a solution.

The last chance would be to use network emulation.

Daniel.



Re: [ns] Building Problem!

2006-05-08 Thread Daniel Mahrenholz

Enzo Memmolo schrieb:
 hello everybody,
 when I compile ns2 source code I find the follow error:

 dsr/dsragent.cc: In member function ‘void 
 DSRAgent::handleFlowForwarding(SRPacket, int)’:
 dsr/dsragent.cc:828: error: ‘XmitFlowFailureCallback’ was not declared in 
 this scope

   
It is unclear to me which version of the code and compiler you are 
using, but it looks very similar to an error that I encountered with the 
CVS version and the latest gcc version. You should look into the file to 
see whether you can find the missing function. In my case it was declared 
at the end of the file without a forward declaration. If so, you just need 
to move it to the beginning of the file or add a forward declaration for 
the function at the beginning.
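
Just to illustrate the pattern (only a sketch with stand-in names and a guessed
signature, not the actual dsragent.cc code):

/* Minimal, self-contained sketch of the forward-declaration fix. */
class Packet;   // stand-in for the ns-2 Packet class

// 1) forward declaration added near the top of the file
static void XmitFlowFailureCallback(Packet *pkt, void *data);

// 2) a call site in the middle of the file now compiles with newer g++
static void handleFailure(Packet *pkt) {
    XmitFlowFailureCallback(pkt, 0);
}

// 3) the definition can stay where it was, at the end of the file
static void XmitFlowFailureCallback(Packet * /*pkt*/, void * /*data*/) {
    // original error handling goes here
}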

Daniel.



Re: [ns] Modifying 802.11 wireless Channel Propagation Delay

2006-05-05 Thread Daniel Mahrenholz

Hi,

Bahman Kalantari Sabet schrieb:
 Does anyone know how I can change the wireless channel propagation delay.

 I have two mobile nodes, use FTP over TCP, with RTS-CTS mechanism.

 I want to increase the delay so that every packet (including the control
 packets) passing the channel goes through this delay.  Then I want to keep
 increasing this delay until there is no throughput anymore because the
 CTS_Timeout expires and the transmitter keeps retransmitting the RTS packet
 until it stops.

   

Increasing the propagation delay only makes sense if you also increase the 
transmission range. Otherwise it is unrealistic.

 Any ideas, please let me know.
   

I think what you really need is to introduce a processing delay on the 
nodes. If I remember right, someone explained how to do this 2-3 days ago.


Daniel.



Re: [ns] Wireless transmissions look like they take no time

2006-05-03 Thread Daniel Mahrenholz

Hi Paul,

Paul Vincent Craven schrieb:
 I'm trying a very simple wireless simulation while I'm learning to use
 ns-2. The output of it seems to show the wireless transmissions happening
 instantly:

 s 10.0 _0_ AGT  --- 4 cbr 1000 [0 0 0 0] --- [0:0 1:0 32 0] [0] 0 0
 r 10.0 _0_ RTR  --- 4 cbr 1000 [0 0 0 0] --- [0:0 1:0 32 0] [0] 0 0
 s 10.0 _0_ AGT  --- 5 cbr 500 [0 0 0 0] --- [0:0 1:0 32 0] [1] 0 0
 r 10.0 _0_ RTR  --- 5 cbr 500 [0 0 0 0] --- [0:0 1:0 32 0] [1] 0 0
 s 11.0 _0_ AGT  --- 8 cbr 1000 [0 0 0 0] --- [0:0 1:0 32 0] [2] 0 0
 r 11.0 _0_ RTR  --- 8 cbr 1000 [0 0 0 0] --- [0:0 1:0 32 0] [2] 0 0
 s 11.0 _0_ AGT  --- 9 cbr 500 [0 0 0 0] --- [0:0 1:0 32 0] [3] 0
   
Your trace file only shows packets going from the agent to the routing 
layer on node 0 - this takes zero time. There are no packets transmitted 
or received by another node. You should set -macTrace ON to see if you 
are transmitting packets on the MAC layer.


Daniel.



Re: [ns] the more realist model taking in consideration the obstacl

2006-05-03 Thread Daniel Mahrenholz

Hi,

www triste schrieb:
 Hello everybody
 I want to use a realistic propagation model, because I think that TwoRayGround,
 usually used in ns2, is not realistic since it does not take obstacles into
 consideration. In the ns tutorial I found the shadowing model - is it realistic?
 Thanks for any information.
   

The shadowing model is a good start because it produces a lot of the effects 
experienced in the real world. But it is only useful for a generalized 
environment, that is, if you want to simulate randomly placed nodes. 
If you need to simulate nodes in an actual environment (e.g. a floor 
in an office building) it is not so good. You can try the shadowing 
visibility model, which gives limited support for obstacles. But you need 
to cross-check / calibrate your simulation results against a real setup 
because it is not as accurate as a ray-tracing-like propagation model 
(which to my knowledge is only available in ns-2 using a commercial 
propagation calculation tool).

If you need more information or want to share your experiences, you can ask me 
directly, because our group is currently working on this topic.

Daniel.



Re: [ns] why noone wants to help me? :(

2006-05-03 Thread Daniel Mahrenholz

Hi,

[EMAIL PROTECTED] schrieb:
 I'm sorry that I am writing in such a way, but I really need help. I
 am trying to install ns-2.1b7a because I want to patch the gprs code written by
 Richa Jain. I tried installing it and I got this error:

 tclcl-mappings.h: In static member function ‘static int
 TclObjectHelperT::dispatch_(void*, Tcl_Interp*, int, char**)’:
 tclcl-mappings.h:51: error: incomplete type ‘Tcl’ used in nested
 name specifier
 tclcl-mappings.h:52: error: invalid use of undefined type ‘struct Tcl’
 tclcl-mappings.h:41: error: forward declaration of ‘struct Tcl’
 tclcl-mappings.h:57: error: invalid use of undefined type ‘struct Tcl’
 tclcl-mappings.h:41: error: forward declaration of ‘struct Tcl’
 make: *** [Tcl.o] Error 1
 tclcl-1.0b10 make failed! Exiting ...
 See http://www.isi.edu/nsnam/ns/ns-problems.html for problems

 I don't know what to do. I have FC4. I cannot patch gprs into ns-2.29
 because it doesn't work.
   
You have two options. If you really need to use a very old version of 
ns-2, you should install a matching compiler/linker tool chain and 
libraries. Your second option is to adapt the patch to the current version 
of ns-2 and the default compiler of your distribution. The second option 
surely requires a lot more effort but would be valuable for many other people.

Daniel.
 




Re: [ns] Installing NS 2.29.2

2006-04-28 Thread Daniel Mahrenholz

Ethan Giordano schrieb:
 It would be nice if that info got posted to an ns-2 wiki.

 Does the ns-2 project have an official wiki? It may prove slightly 
 more efficient than the mailing lists if we all used it.
   

Yes, it has - http://nsnam.isi.edu/nsnam/index.php/Main_Page

Daniel.



Re: [ns] HELP URGENT

2006-04-25 Thread Daniel Mahrenholz

Hi,

anil reddy schrieb:
 hello,
 While executing the wireless program below, we have some configuration
 problems: the sent packets and the NAM file are not being displayed. The
 output is also given below. Please help, it's urgent.
 Any help will be appreciated.
   
Wireless emulation is almost impossible with the default ns-2 because of 
the internal timing of the simulator. You can find an improved version at:
http://wwwivs.cs.uni-magdeburg.de/EuK/forschung/projekte/nse/index.shtml

You can find an explanation of the problems in the default ns-2 in the 
paper Real-Time Network Emulation with ns-2.

If you have any problems with the improved version, don't hesitate to 
ask questions on the nse mailing list (see the web page for details).

Daniel.



Re: [ns] wireless-nodes-finding bandwidth in C++

2006-04-23 Thread Daniel Mahrenholz

Mahesh schrieb:
 Is there any way to find the bandwidth of a wireless node in the ns-2 (C++)
 DSR code? And is there any way to find the bandwidth of a wireless node in
 the TCL script?

 I also need to know how to find the bandwidth of the neighboring nodes in
 a wireless network in the NS-2 (C++) DSR code.

 I am trying to implement an admission control protocol in DSR which requires
 me to find the bandwidth of a node as well as the bandwidth of the
 neighboring nodes. Since I am new to ns-2, I couldn't find it.
   
Before you start to implement an admission control protocol you should 
become more familiar with the nature of wireless communication. It is a 
common misunderstanding that bandwidth is assigned to individual nodes. 
In the wireless setup you only define the transmission speed of the 
node. Wireless communication uses a shared medium and so a node has to 
share the channel access with all nodes that are within interference 
range. Additionally you should keep in mind that a message from one node 
to a target multiple hops away probably consumes bandwidth multiple 
times - just as an example, a message consumes bandwidth at the node 
when the node transmits the message, and when the neighbor and most 
likely the 2-hop neighbor forward it towards the destination.

btw - this question has been asked multiple times before on this list.

Regards,
Daniel.



Re: [ns] Bandwidth of wireless channel

2006-04-13 Thread Daniel Mahrenholz

Vinod schrieb:
 Actually, my problem involves calculating the effective bandwidth of a
 wireless node (more specifically in a MANET). I guess the bandwidth_
 variable of the MAC or Phy layers doesn't give this value.

 I am using the default parameters given in the AODV implementation.

 To put it more clearly, suppose node 'S' wants to send a packet to 'R'.
 This packet can be forwarded either through 'A' or 'B' (A and B are one-hop
 neighbors of S), depending on the bandwidth. So, I am planning to
 calculate the effective bandwidths of A and B and choose the node
 based on that.

 Can you (or anyone) suggest a way to do this? I would be thankful.

If nodes A and B are direct neighbors of S, it is most likely that A and 
B have to share the medium, either because they are in direct 
communication range or through physical carrier sense. If I 
understand you right, you are trying to compute the bandwidth that one node 
(e.g. S) can use to transmit a packet. A node has to share the medium 
with all nodes within interference range. So if S wants to transmit a 
packet via nodes A, D, and F to R, and A and D are within interference range 
of S, then the available bandwidth of S will be reduced three times by the 
bandwidth required to transmit the packet. This is because the packet 
utilizes the medium (from S's point of view) when S transmits it and 
when A and D forward it. Beyond this point S will not be affected by 
the further forwarding of the packet because it is out of interference 
range.

In your case you have to take all possible routes and calculate the 
changes in channel utilization for every node along the route, or at 
least within your own interference range, to know what effect the route 
decision will have on your available end-to-end bandwidth.
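
As a rough sketch of that bookkeeping (plain C++, not ns-2 code - the helper
names, the simple distance check, and the numbers are my own assumptions):

#include <cmath>
#include <cstdio>
#include <vector>

struct Pos { double x, y; };

static double dist(const Pos &a, const Pos &b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// route: positions of the transmitting nodes along the route, starting with S
// flowRate: bandwidth one transmission of the flow needs (e.g. bit/s)
static double channelLoadAtS(const std::vector<Pos> &route,
                             double interferenceRange, double flowRate) {
    const Pos &s = route.front();
    int affectingHops = 0;
    for (const Pos &hop : route)             // S itself plus every forwarder
        if (dist(s, hop) <= interferenceRange)
            ++affectingHops;                 // this transmission occupies S's medium
    return affectingHops * flowRate;
}

int main() {
    // S -> A -> D -> F -> R; only S, A, D, F transmit, R just receives
    std::vector<Pos> transmitters = {{0, 0}, {200, 0}, {400, 0}, {600, 0}};
    // with a 550 m interference range only S, A, and D affect S -> 3 x 1 Mbit/s
    std::printf("load at S: %.0f bit/s\n",
                channelLoadAtS(transmitters, 550.0, 1e6));
}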

Hope this clarifies more than it confuses,
Daniel.






Re: [ns] Values to be set in the TCL file to make the transmission range 100m

2006-04-12 Thread Daniel Mahrenholz

Sasan Sahraei schrieb:
 transmission range defined internally by distCST_

   
This is only the maximum assumed interference range for a propagation 
model and will be used to speed up the calculation of packet receivers.

 the transmission range is supposed to be calculated using the parameters
 below (as you mentioned) and the propagation model; however, by looking at the
 Wireless Physical code, the propagation model is not implemented yet and
 thus the result is -1, which is wrong.

   
You have various implemented propagation models (e.g. FreeSpace, 
TwoRayGround, Shadowing). Maybe you just look at the base class for all 
propagation models.

The transmission range is defined by the antennas of the transmitter AND 
the receiver, the transmission power, signal attenuation, the modulation 
scheme (which results in different receiver sensitivities / RX 
thresholds), and so on. It is not something you can compute solely by looking 
at the transmitter.

Just as a little example: two laptops equipped with usual WLAN interface 
adapters may have a transmission range of 300m in an open area. But with 
good external antennas you can boost this, without amplification, up to 
150km (http://www.wifi-shootout.com/).

As a personal advice for every WLAN simulation - use the Shadowing (or 
better) propagation model if possible. I can provide ready-to-use 
parameter sets for different environments if necessary.

Daniel.



Re: [ns] Bandwidth of wireless channel

2006-04-12 Thread Daniel Mahrenholz

Vinod schrieb:
 I'm working on QoS routing in mobile ad hoc networks.

 Can anyone tell me why the default value of bandwidth is 0 in the link layer
 of type LL (wireless routing simulation)? Does that mean it's not used, or
 something else? I can see that the delay value is set to 25us by
 default.

 Also, can anyone tell me whether I can have a cost parameter attached to
 wireless links as we can do in wired?
   
The short answer - there is no such thing as a wireless link. Wireless 
communication uses a broadcast medium. Links are only logically defined 
by your MAC layer or routing protocol. Additionally, you do not have 
something like a universal bandwidth of the wireless channel. It 
depends on your technology, modulation schemes, etc. So you have to set 
this value depending on the technology you model (e.g. 2 Mbit/s for a WLAN 
using BPSK modulation).

Daniel.



Re: [ns] Real time Play with NS-2 emulator

2006-04-11 Thread Daniel Mahrenholz

[EMAIL PROTECTED] schrieb:
 I am just putting this question out there to see if it is possible to
 implement.  Right now I only have access to one laptop.  I am running
 ns-2.27 with pclinuxos.  So far I was able to 'make nse' and did not
 return any error messages however I still can't run the pingdemo.tcl.
 But the question is this:

 I want the emulator to be able to read a tracefile (a large one,
 perhaps that of a half-hour show) in real time and at the same time
 output the packets, perhaps to a receiver tracefile that is constantly
 updated as packets arrive. Next the tracefile is read by a real-time
 player such as VLC (I think it has real-time capabilities), which outputs
 the video, so I can readily view the real-time video as it is being
 emulated. I have to do all this on one machine, the laptop.

 Is it possible, or has anyone done anything like this before who can
 offer me some direction?  Thank you.
   

I assume that you actually mean a packet dump / capture file (e.g. pcap 
format) when you say tracefile.
An ns-2 tracefile does not contain any application data and therefore 
is not suitable as input for an application like VLC.

What you need is a setup with nse and two virtual machines - 
one that replays the packet dump and a second that runs VLC. To replay 
a PCAP file you need to implement a tool similar to tcpreplay or 
simple-replay. To get a receiver tracefile, as you call it, you can 
capture the packets on the second virtual machine using tcpdump. But 
this will not be necessary as you can feed them directly into VLC.

Hope, this helps.
Daniel.





Re: [ns] compile ns-allinone-2.29.2 error on fedora core 5

2006-04-04 Thread Daniel Mahrenholz

landrew126 schrieb:
 Hi everybody, I think I must send this letter again, for I didn't post the
 error message to you. The following is the error message I got when
 compiling ns-2.29:

 g++ -c -Wall -DTCP_DELAY_BIND_ALL -DNO_TK -DTCLCL_CLASSINSTVAR -DNDEBUG
 -DLINUX_TCP_HEADER -DUSE_SHM -DHAVE_LIBTCLCL -DHAVE_TCLCL_H -DHAVE_LIBOTCL1_11
 -DHAVE_OTCL_H -DHAVE_LIBTK8_4 -DHAVE_TK_H -DHAVE_LIBTCL8_4 -DHAVE_TCL_H
 -DHAVE_CONFIG_H -DNS_DIFFUSION -DSMAC_NO_SYNC -DCPP_NAMESPACE=std
 -DUSE_SINGLE_ADDRESS_SPACE -Drng_test -I.
 -I/home/landrew/WsnSimulator/ns-allinone-2.29/tclcl-1.17
 -I/home/landrew/WsnSimulator/ns-allinone-2.29/otcl-1.11
 -I/home/landrew/WsnSimulator/ns-allinone-2.29/include
 -I/home/landrew/WsnSimulator/ns-allinone-2.29/include
 -I/usr/include/pcap -I./tcp -I./sctp -I./common -I./link -I./queue -I./adc
 -I./apps -I./mac -I./mobile -I./trace -I./routing -I./tools -I./classifier
 -I./mcast -I./diffusion3/lib/main -I./diffusion3/lib -I./diffusion3/lib/nr
 -I./diffusion3/ns -I./diffusion3/filter_core -I./asim/ -I./qs -I./diffserv
 -I./satellite -I./wpan -o dsr/dsragent.o dsr/dsragent.cc

 dsr/dsragent.cc: In member function ‘void DSRAgent::handleFlowForwarding(SRPacket, int)’:
 dsr/dsragent.cc:828: error: ‘XmitFlowFailureCallback’ was not declared in this scope
 dsr/dsragent.cc: In member function ‘void DSRAgent::sendOutPacketWithRoute(SRPacket, bool, Time)’:
 dsr/dsragent.cc:1385: error: ‘XmitFailureCallback’ was not declared in this scope
 dsr/dsragent.cc:1386: error: ‘XmitFlowFailureCallback’ was not declared in this scope
 dsr/dsragent.cc:1403: error: ‘XmitFailureCallback’ was not declared in this scope
 make: *** [dsr/dsragent.o] Error 1
   

This is a problem with the latest gcc version. To fix it, you can modify 
the files by moving the Xmit*Callback functions to the beginning of the 
file. Otherwise you could get updated versions of the files from the CVS 
- I already fixed this 3 weeks ago.

You can also downgrade your gcc to a 3.3 version.

Daniel.



Re: [ns] a comparison between the wireless routing protocols

2006-04-03 Thread Daniel Mahrenholz

Hello Alexandra,

Alexandra Cioroianu schrieb:
 Hello ns users!
 I'm working with ns-2.29 on Mandriva 2006 and I need to make a comparison
 between the existing wireless routing protocols (DSDV, AODV, DSR...). I
 thought about creating Xgraph files that would show the traffic, delay,
 speed..., but I don't know what topologies to use to be relevant. Shall I
 start with the examples in the tutorial?
 Has anyone done this before, or does anyone know some steps to be followed? Any
 ideas or suggestions would help me.
An important advice from me: test the performance of the protocols using 
different propagation models - at least compare them using the 
TwoRayGround and the Shadowing model. Additionally, it can't hurt to 
have some bottlenecks in your topology. Either place the nodes by hand 
or use a random topology e.g. on a 100x5000 m grid.

Daniel.



Re: [ns] Does ns2 support simultaneous processing ?

2006-04-01 Thread Daniel Mahrenholz

On Friday 31 March 2006 17:33, Saurabh Sinha wrote:
 I just wanted to know whether ns2 supports threads or not. Like, can
 more than two stations transmit at the same time?
 It will be of great help if someone can post a suggestion.

Yes and no - ns2 itself does not use threads. But the event-based processing 
model does not need threads to allow for concurrent threads of control. That 
means two stations can perform actions at the same moment in time. The 
simulator will serialize them and compute them sequentially. An event-based 
simulation does not need to care about real time (wall-clock time); it has 
its own time base and so can compute as many events (happenings) at the same 
time as you need. The simulation time stands still during the processing 
of an event and jumps to the execution time of the next event that has to 
be processed. 
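
If it helps to picture the mechanism, here is a generic discrete-event loop
(plain C++, not ns-2 internals; all names are invented for illustration):

#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;
    std::function<void()> action;
    bool operator>(const Event &o) const { return time > o.time; }
};

int main() {
    // min-heap ordered by timestamp, like a simulator's event queue
    std::priority_queue<Event, std::vector<Event>, std::greater<Event> > queue;

    // two stations "transmit" at exactly t = 1.0 s
    queue.push({1.0, [] { std::puts("station A transmits"); }});
    queue.push({1.0, [] { std::puts("station B transmits"); }});
    queue.push({2.5, [] { std::puts("some later event"); }});

    double now = 0.0;
    while (!queue.empty()) {
        Event e = queue.top();
        queue.pop();
        now = e.time;   // the clock jumps to the event's timestamp
        e.action();     // and stands still while the event is processed
        std::printf("  simulated time is now %.1f s\n", now);
    }
}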

Hope this answers your question.

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] statement has no effect with CURRENT_TIME

2006-03-21 Thread Daniel Mahrenholz

On Tuesday 21 March 2006 13:46, elise hu wrote:
 My code compiles and is built into ns - however I have a warning:
 warning: statement has no effect
 for pretty much every line looking like:
 expire_time = CURRENT_TIME + nb_freshness_*t_br_;
 (I have a few) - so that's a bit worrying. CURRENT_TIME is defined as:
 #define CURRENT_TIME Scheduler::instance().clock();
Remove the ; at the end of the #define. With the trailing semicolon the 
macro expansion ends the statement right after clock(), so the rest of the 
line becomes a separate expression statement - exactly the "statement has 
no effect" the compiler warns about.
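
As a self-contained illustration (the Scheduler here is a stand-in type, not the
user's actual code):

// broken:  #define CURRENT_TIME Scheduler::instance().clock();
// expands: expire_time = Scheduler::instance().clock(); + nb_freshness_*t_br_;
//          everything after the first ';' is a separate, effect-free statement.

// fixed definition, without the semicolon (the parentheses are just a good habit):
#define CURRENT_TIME (Scheduler::instance().clock())

struct Scheduler {                        // minimal stand-in for ns-2's Scheduler
    static Scheduler &instance() { static Scheduler s; return s; }
    double clock() const { return 42.0; }
};

int main() {
    double nb_freshness_ = 2.0, t_br_ = 0.5;
    double expire_time = CURRENT_TIME + nb_freshness_ * t_br_;  // no warning now
    return expire_time > 0 ? 0 : 1;
}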

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] Integrate program

2006-03-21 Thread Daniel Mahrenholz

On Tuesday 21 March 2006 13:35, . . wrote:
 I am going to do a simulation of DNS in NS-2. Is there any way I can
 integrate BIND or a similar application into NS-2, or is it better to
 implement it from scratch? I want to use BIND or similar because it is
 relevant for the testing.
 I don't want to use another simulator; it has to be NS-2!

Network emulation will be the right choice for you. If you don't need the 
actual DNS functionality, you should try to implement small traffic generators 
that mimic a DNS server and client.

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] CSThresh_ and RXThresh_

2006-03-20 Thread Daniel Mahrenholz


On Monday 20 March 2006 08:27, Ashraf Bourawy wrote:
 In addition, in ns-2 the transmission range is set to approximately 250
 m, and distCST_ is 550 m.

 My question is: why is there this huge difference, as this will affect the
 performance? And is it required by the standard? I just couldn't find
 anything supporting this issue. I would appreciate it if anyone could
 tell me where I can find an explanation about this issue.

It is a common misunderstanding that the transmission / carrier sense 
(interference) range is set to 250 / 550m. These are just the results of the 
chosen propagation model, transmission power, antenna gains, and RX/CS 
thresholds. I have to emphasize it again: you CANNOT set the transmission 
range of a WLAN device (you can only set all the named parameters to result in a 
desired range). The default values in ns2 are not very realistic. If you are 
developing new WLAN protocols you should at least use the Shadowing 
propagation model. 

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] Memory leak in TFRC?!

2006-03-16 Thread Daniel Mahrenholz

On Thursday 16 March 2006 15:01, Arne Lie wrote:
 In my work of enabling a VBR video application on top of TFRC in ns-2.28,
 I observe that I have gotten a memory leak, which is difficult to trace!
 Two questions:

 1. Anybody having suggestions in how to track this down? (gdb, profiling?)

For memory leaks I recommend using valgrind - it helped a lot in our 
projects. And yes - it is possible to valgrind ns-2, although it is not much 
fun. You will need a lot of memory ... good luck.

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] Shadowing model

2006-03-10 Thread Daniel Mahrenholz

Hi, 

On Friday 10 March 2006 09:34, cp kothari wrote:
 Hi, has anyone tried the Shadowing propagation model in ns2? I am getting lots of
 routing overhead with the shadowing model. Please reply with details.

I use the Shadowing model for all my WLAN simulations because FreeSpace and 
TwoRayGround are very unrealistic. Compared with these models you will notice a 
higher routing overhead because you get links with loss probabilities between 0 
and 1. This is not a bug, it's a feature. Many protocols suffer from the fact 
that they do not consider link quality. 

You will find some results in 
Stepanov et al., On the Impact of Radio Propagation Models on MANET Simulation 
Results, MWCN 2005.

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho


Re: [ns] How to use Error Models with Wireless channels ?

2006-03-06 Thread Daniel Mahrenholz

On Monday 06 March 2006 09:26, [EMAIL PROTECTED] wrote:
 Could anyone please tell me how to configure error models with a wireless
 channel in ns2.28. Actually I tried it as mentioned in the manual like
 this:

I use the following code and it works:

# uniform per-packet error model, attached to all incoming packets
proc UniformErr {} {
    set err [new ErrorModel]
    $err unit packet                           ;# drop whole packets, not single bits
    $err set rate_ 0.005                       ;# 0.5% packet error rate
    $err ranvar [new RandomVariable/Uniform]
    $err drop-target [new Agent/Null]          ;# dropped packets go to a sink
    return $err
}

$ns node-config -IncomingErrProc UniformErr

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] How to speed up ns-2 simulations

2006-03-01 Thread Daniel Mahrenholz

Hi, 

On Wednesday 01 March 2006 00:19, Soo-Hyun Choi wrote:
 2. As far as I understand, several hundred nodes with ns-2 might
 not be a good choice in terms of CPU time, memory consumption,
 debugging, etc. If your simulation needs to be run with that many
 nodes, people usually write their own simulator.

The sheer number of nodes is not your primary problem - we have successfully run 
wireless simulations with 10,000 and more nodes. This required about 1.5GB of 
memory. The memory consumption grows rapidly if you have a large number of 
packets, e.g. on high-bandwidth (wired) links with long delays, or large 
input queues on each node. It will be even higher if you forget to remove all 
unnecessary headers from your packets.
For wireless simulations you have a special problem - the CPU consumption 
grows with the density of the network because propagation has to be computed 
for every pair of nodes that could affect each other. The third problem is 
the number of pending events. Depending on the scheduler used, it takes 
significantly different times to insert new events. This is especially 
important if you have a lot of events that are scheduled far in the future. 
Here you have to test the different schedulers (e.g. List, Heap, Calendar) to 
see which one performs best for you.


-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] (yet another) doubt on CSThresh_ and RxThresh_

2006-02-08 Thread Daniel Mahrenholz

On Wednesday 08 February 2006 16:29, Sita S. Krishnakumar wrote:
 I am not sure I understand what you mean by:
  If you increase the range at which you define the signal to
  be receivable you have to lower RXThresh because it needs to be lower
  than the signal at this point.

 I thought RXThresh is the threshold up to which a packet can be received
 by a node, i.e. it is the reach of the node. If you want it to receive
 packets from a farther distance, should the threshold be greater?

Usually you define Pt_ and RXThresh_ and then compute the maximum possible 
transmission range. But if you need a different range, you fix that range and 
compute either Pt_ or RXThresh_ while keeping the other fixed. 

For a longer distance you need either a higher Pt_ (stronger sender) or a lower 
RXThresh_ (more sensitive receiver).

For the two-ray-ground model it is very easy to compute these values. Take a 
look into the code and you will find the equation.
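
To sketch the calculation (this is not the ns-2 source; I am assuming the usual
two-ray-ground far-field equation Pr(d) = Pt*Gt*Gr*ht^2*hr^2 / (d^4 * L) with
the common ns-2 default parameters, and ignoring the Friis / crossover-distance
case for short distances):

#include <cmath>
#include <cstdio>

// received power at distance d under the two-ray-ground model
static double twoRayGroundRx(double Pt, double Gt, double Gr,
                             double ht, double hr, double L, double d) {
    return Pt * Gt * Gr * (ht * ht) * (hr * hr) / (std::pow(d, 4.0) * L);
}

int main() {
    // assumed ns-2 defaults: Pt_ = 0.28183815 W, unity gains, 1.5 m antennas
    double Pt = 0.28183815, Gt = 1.0, Gr = 1.0, ht = 1.5, hr = 1.5, L = 1.0;
    double range = 250.0;   // desired maximum transmission range in metres

    // setting RXThresh_ to the received power at that range makes it the
    // largest distance at which packets are still received
    std::printf("RXThresh_ = %.4g W\n",
                twoRayGroundRx(Pt, Gt, Gr, ht, hr, L, range));
    // prints roughly 3.652e-10, which matches the usual ns-2 default for 250 m
}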

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] simulation time in ns-2

2006-01-09 Thread Daniel Mahrenholz

Hi, 

On Monday 09 January 2006 14:42, Michael Sidiropoulos wrote:
 Hello everyone!! I have a question regarding simulation time in ns-2.
 I am using the ns-2.27 version and I have noticed that the calculated
 throughput in my wireless scenario changes as the simulation time
 increases. What simulation time should I set in my tcl script in order to
 get accurate throughput results? Any point or hint is extremely
 welcome. Thanks for your attention!!!

If you want to know the values in a steady state you should run the simulation 
for at least several minutes. Especially in the first few seconds you can 
experience several strange effects. Something I often see is a storm of ARP 
packets at the beginning of a simulation because all stations start to 
communicate at about the same time. So it is sometimes better to discard the 
results from the first seconds.

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] Pleas help:How to access mobile node data out of the agent

2006-01-09 Thread Daniel Mahrenholz

Hi Christoph, 

On Monday 09 January 2006 18:36, christoph schroth wrote:
 I have a rather critical question, and I would be very happy if someone
 could write me a brief answer:

 How can I access mobile node data such as location (X_, Y_, etc.) from
 the agent (e.g., a MessagePassing agent)?


 In other words: how do I get a reference to the mobile node to which I
 attached the agent?

Look into tcl/lib/ns-node.tcl (instproc attach) to see how an agent is attached 
to a node. I haven't tried it, but you should be able to get the Node reference 
from the node_ field in the agent.  
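
From the C++ side, one way that should work (a sketch from memory of the ns-2.2x
sources - please check agent.h, node.h, and mobilenode.h in your tree, the names
may differ):

#include <cstdio>
#include "agent.h"
#include "node.h"
#include "mobilenode.h"

class PositionAwareAgent : public Agent {
public:
    PositionAwareAgent() : Agent(PT_MESSAGE) {}

    void printMyPosition() {
        // resolve the node this agent is attached to via its own address
        Node *n = Node::get_node_by_address(addr());
        MobileNode *mn = (MobileNode *)n;     // wireless scenarios use MobileNode
        if (mn != 0) {
            double x, y, z;
            mn->getLoc(&x, &y, &z);           // current coordinates of the node
            printf("node %d is at (%.1f, %.1f)\n", addr(), x, y);
        }
    }
};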

Daniel.
-- 
Dipl.-Inf. Daniel Mahrenholz, University of Magdeburg, Germany
Homepage: http://ivs.cs.uni-magdeburg.de/~mahrenho



Re: [ns] How to calculate bandwidth from transmission power?

2006-01-08 Thread Daniel Mahrenholz

Hi, 

On Sunday 08 January 2006 14:19, you wrote:
 As you said, I have just found that in NS2 the bandwidth can be set by
 Phy/WirelessPhy set bandwidth_ 2e6 and it will be constant even though
 the transmission power is changed. I am not sure why the bandwidth is not
 related to the transmission power. According to my understanding, when the
 transmission power is decreased, the communication range will be decreased.
 That is logically true. However, decreasing the transmission power should
 increase the SNR. Once the SNR increases, the bandwidth or data rate should be
 decreased somehow. Do you think so? Could you give me an explanation if I
 am wrong?

You have to keep in mind that the received signal strength is not only defined 
by the sender. You can increase the transmission range just by attaching an 
external antenna with positive gain to the receiver. So the signal strength 
is defined by the transmission power, the antenna gains, and the distance. 
On top of this you have the modulation (with a maximum possible bandwidth) that 
requires a certain SNR. This does not mean that the bandwidth will 
automatically increase with an increasing SNR, because the sender does not 
know about the SNR at the receiver side. The current 802.11 implementation 
does not have multi-rate support. There is a new implementation (see the wiki for 
more information) that does, but I cannot tell you exactly how it works. If 
you use the current implementation you have to decide on a bandwidth, set it 
in the PHY layer, and set the RXThresh accordingly. 

Just as an example of what you can do with antennas (and an unamplified 802.11b 
card), look here: 
http://www.unwiredadventures.com/unwire/2005/12/defcon_wifi_sho.html
They reached more than 125 miles!

Daniel.
-- 
+-[Dipl.-Inf. Daniel Mahrenholz / Otto-von-Guericke Universität Magdeburg]-+
| http://ivs.cs.uni-magdeburg.de/~mahrenho  Geb. 29 Raum 407   |
| mailto:[EMAIL PROTECTED]  Tel. +49-391-67-12788  |
+--+