For context, here is my first post on this topic from a few weeks ago (Jan 2015):
http://pf.itd.nrl.navy.mil/pipermail/emane-users/2015-January/000656.html

I am trying to get very basic, barebones communication going between two EMANE 
nodes on separate networks.
As a follow-up, the topics I believe pertain to this question are:
- Demo 0 from the tutorial, which I'm using as a base to modify for my scenario
- OLSRD, specifically routing1.conf and routing2.conf
- Platform XML files: platform1.xml and platform2.xml
- Bash scripts: democtl-host (the test flow, notably the LXC container creation
  and configuration)
- OTA manager channel: 224.1.2.8:45702
- Event service: 224.1.2.8:45703, plus eventdaemon1.xml and eventdaemon2.xml
- The Linux iptables utility
- (Optional) VMware Workstation
- (Highly preferable) Linux containers (LXCs)

Just for context, I'm pretty new to all of these concepts.

My planned approach is that I'd have two LXCs, one on host A, network A, and 
another on host B, network B.
Each host would have a public IP and a designated port; using iptables, the 
host would forward all packets received on that port to its LXC.
For example, if LXC A were communicating with LXC B, I see the packet flow 
working like this: LXC A -> host A (network A) -> host B (network B), port B 
-> LXC B.
Return traffic would take the exact reverse path. I'm having a bit of trouble 
seeing where the OTA manager channel fits into this specific scenario.
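To make the forwarding concrete, here is a rough sketch of the iptables rules I imagine on host B (the public IP, port, LXC address, and interface name are all placeholders I made up for illustration):

```shell
# On host B (placeholder public IP 203.0.113.2): forward UDP arriving on
# the designated port to LXC B's internal address (placeholder 10.0.3.2).
iptables -t nat -A PREROUTING -d 203.0.113.2 -p udp --dport 45702 \
         -j DNAT --to-destination 10.0.3.2:45702

# Let the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -d 10.0.3.2 -p udp --dport 45702 -j ACCEPT

# Rewrite LXC B's outbound traffic so replies leave with host B's address.
iptables -t nat -A POSTROUTING -s 10.0.3.2 -o eth0 -j MASQUERADE
```

Host A would mirror this. Is that roughly the right shape, or does the OTA channel's use of multicast make this NAT approach a non-starter?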

Regarding the OTA manager channel, I consistently see 224.1.2.8:45702, and 
224.1.2.8:45703 for the event service.
If I were to have a common manager channel for both network A and network B, 
would I just leave these as they are? I'm a little confused about how this is 
handled in a distributed setup. Since these look like multicast addresses, can 
I just set them to one of the hosts' public IPs?
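For reference, the relevant parameters in my platform XML (derived from demo 0) look something like this; the device names below are just what I think my setup would use, so treat them as placeholders:

```xml
<!-- OTA manager and event service settings from the demo-0-style
     platform XML (paraphrased; device values are placeholders). -->
<param name="otamanagergroup" value="224.1.2.8:45702"/>
<param name="otamanagerdevice" value="lo"/>
<param name="eventservicegroup" value="224.1.2.8:45703"/>
<param name="eventservicedevice" value="lo"/>
```

If these multicast groups have to span both physical networks, do I need actual multicast routing between the sites, or is there a supported way to point them at a unicast address?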

Considering the LXCs, I have seen that the democtl-host bridges the NEM LXCs 
using iptables.
To start, I don't want more than one NEM on a host, so is this bridging 
necessary?
Would I just bridge LXC A to host A's NIC so it can reach the remote host B?
For the purposes of my scenario, I don't think I strictly need the LXC 
containers at all; if it's simpler, I'd like to just use the hosts' NICs 
themselves.
If I changed democtl-host to not use LXCs, would the rest of the scenario 
break unless I make corresponding changes elsewhere?
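If I go the no-LXC route (or bridge a single LXC straight out), I'm picturing something like this on host A; the interface and bridge names are placeholders, and the LXC config lines are the LXC 1.x style config I've seen on Ubuntu 14.04:

```shell
# Attach host A's physical NIC to a bridge (names are placeholders).
brctl addbr br0
brctl addif br0 eth0
ip link set dev br0 up

# If I keep the LXC, its config would point its veth at that bridge:
#   lxc.network.type = veth
#   lxc.network.link = br0
```

Would that make the LXC (or the host itself) directly reachable from host B, or am I missing something democtl-host was doing for me?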

I want to just specify a remote IP instead of an LXC container IP, but I know 
it's not that easy.
I'm having trouble finding out which files need to change. Also, in 
routing1.conf and routing2.conf, I see references to emane0 and emane1.
Since each host would only have one NEM, do I need to modify this so one host 
uses just emane0 and the other just emane1? Or am I understanding this 
incorrectly?
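Concretely, the kind of stanza I mean in routing1.conf looks roughly like this (paraphrased from the demo; the exact olsrd options may differ in my copy):

```
# olsrd interface stanza from the demo routing conf (paraphrased).
Interface "emane0"
{
    HelloInterval       2.0
    HelloValidityTime   20.0
    TcInterval          5.0
    TcValidityTime      30.0
}
```

My guess is that each host would keep only the stanza for its own NEM's virtual interface, but I'd appreciate confirmation.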

I am fairly confident that for this scenario, I'll need EMANE running on each 
host/LXC (depending on whether I use LXCs).
I'm having trouble figuring out how all the configuration files from demo 0 
would need to change to reflect this.

As for my current setup: host A is running Windows with Ubuntu 14.04 in VMware 
Workstation, while host B runs Ubuntu as its main OS (no VM involved).
The VM is bridged to the NIC with its own public static IP.
Host A and host B both have static IPs as well.
If I can't get this going with a VM, though, I could install Ubuntu as the 
main OS on host A.
I would like to use VMs but only if they don't overcomplicate the situation.

I've searched the mailing list archives quite a bit for a similar scenario; 
the closest I've found is this:
http://emane-users.pf.itd.nrl.navy.narkive.com/B6OsXtHa/emane-prf-understanding-raw-transports
It's not my exact scenario, but I think it's in the general ballpark, though 
it's pretty old (July 2010).

Very respectfully,
Derek Lake

_______________________________________________
emane-users mailing list
[email protected]
http://pf.itd.nrl.navy.mil/mailman/listinfo/emane-users
