Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-29 Thread Jeff Newmiller

On Sat, 29 Dec 2001, Scott C. Best wrote:

 Paul:
 
   Heya. Not so left field, really. I've used ipfwd to
 forward IPSec packets (protocols 50 and 51) to my NAT'd LAN's
 broadcast address...and IPSec clients on that LAN can handle
 it.
   Of course...IPSec has some sense of the state of a
 connection beyond just the packets. Webservers don't. I wonder
 what would happen with some game servers, or VNC. Hmmm...

Webservers have a very clear sense of 'state' in their connections
beyond just the packets.  They use TCP, which maintains a pipe long
enough to support a request and reply between the client and the server.
The statelessness attributed to webservers has to do with the fact that
these connections are generally very short-lived... under HTTP 1.0, only a
single request/reply is supported before the connection is dropped.

Multicasting is useful for "now" data... radio broadcasts, for example,
where if you don't get the sound from a few seconds ago, oh well, because
you won't be able to catch up to the sound corresponding to "now" if you
try to listen to re-transmissions of packets sent then.  Large file
transfers cannot afford to omit retransmissions, and some amount of delay
is worth it to make sure the file gets through intact.

One level of difficulty is variable reliability of transmissions. Keeping
track of the fact that after transmitting 300 packets, destination A needs
a retransmission of packet 293, but destination B needs packet 297, and
destination C got it all so far, is outside the scope of normal TCP
connections.  They only work on a one-to-one basis.

Another level of difficulty is the fact that the application level
protocols are typically designed for point-to-point communication... the
communication initiator does not expect to hear several voices answer when
it says Hello.

Therefore I think Charles' suggestion to have an application-level
repeater would be the only way to construct this data flow.
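
For what it's worth, a minimal sketch of such a repeater as a shell
script, assuming the black-box transfer is a plain one-way byte stream
(the port number, host names, and spool path are hypothetical, and the
-q 0 flag assumes a Debian-style netcat):

  #!/bin/sh
  # One-shot application-level "tee": accept a single inbound
  # transfer, spool it to disk, then replay it to each internal
  # station in turn.
  PORT=9000                      # hypothetical inbound port
  HOSTS="host_1 host_2 host_3"   # hypothetical internal stations

  # Receive one transfer from the remote sender.
  nc -l -p "$PORT" > /tmp/image.dat

  # Replay the received file to every internal host.
  for h in $HOSTS; do
      nc -q 0 "$h" "$PORT" < /tmp/image.dat
  done

Any protocol that involves handshaking or acknowledgements would defeat
this simple replay and require a genuine protocol-aware proxy.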

 
 -Scott
 
 
  Maybe this is way out in left field, but could this be a potential
  application for multicasting?
 
  paul
 
 
 

---------------------------------------------------------------------------
Jeff Newmiller                        DCN: [EMAIL PROTECTED]
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---------------------------------------------------------------------------





Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-28 Thread Charles Steinkuehler

  ???
  Please explain a bit more about exactly what you're trying to
accomplish...

 Large medical images -- some approaching gigabyte sizes.

 The internal network connects multiple facilities.  The images may need
 to be shared across multiple facilities.

 Our preferred solution is to put one (1) copy of each image on a large
 and robust fileserver inside their network.  The catch is, they are
 using proprietary systems for viewing and analyzing the images and we
 may not be granted access or information adequate to implement our
 preferred solution.  Currently, the remote sources are using their
 proprietary systems (black boxes) to auto-magically transfer the files
 directly to one (1) proprietary system inside our customer's network.
 Yes, this looks in every way like ftp -- except the proprietary system
 vendor says, no, it is not that simple ;-)

 When one of these images is needed on another proprietary system inside
 this network, somebody needs to push the required file to another
 proprietary system.  Our customer wants ``pull'' access from any given
 system.

 In brainstorming alternatives, this occurred to me:

 send images
      |
      V
  internet
      |
      V
  firewall
      |
      +----------+----------+
      |          |          |
      V          V          V
   host_1     host_2     host_n ...

 Regardless of whether or not this is the best solution for this
 application, how can this be done?

Well, it sounds like you're in black-box hell :-)

My current understanding of your problem (please correct me if I'm wrong):

- A remote, black-box system pushes images to one black-box system in your
customer's network (let's call it Master)

- Your customer can push images from the Master black box system to several
other black-box systems (Slaves)

- Your customer wants to be able to view images from any slave system w/o
having to push the file from the master system (ie pull, not push)

Without an understanding of the black-boxes involved, there's not a lot you
can do here.  If I am now understanding your initial question properly, you
are asking if it's possible to somehow get the remote system to push the
image content to multiple systems on your internal network.  The simple
answer is no...there is no straightforward network sleight-of-hand that will
allow you to trick one system into talking to multiple remote systems at the
same time.  The more complex answer is maybe, and depends a lot on unknown
details of your black-box systems...

I can think of a few things that could potentially help you:

1) Get the remote system to push the data to all clients.  This will chew up
lots of internet bandwidth, and will either require multiple external IP's,
or control over which ports are used to connect (assuming you're running a
masqueraded internal network).  See the sketch following this list.

2) Get the Master internal system to push the data to all slave systems.
This may be possible to automate, or it may require a console
jockey...depends on unknown (to me) details of your black-box systems.

3) Whip up a Black-Box emulator, that can talk the file-transfer protocol
used by your systems.  Have your remote system transfer files to your fake
system, then push the data from there to all internal systems, as required.
This will require network programming (not too tough in modern languages
like Java, Perl, Python, etc.) and an understanding (or reverse engineering)
of the file transfer protocols used by your equipment.

4) Read through the docs for your black-boxes, and see if they support any
kind of image-server access.  I think this is really what you want...a
central image server.  If you're lucky, you can do this with a *nix box.  If
you're unlucky, you'll need a proprietary box from your vendor for the other
systems.  If you're really-really unlucky, there's no support for any kind
of 'pull' technology on the individual Slave stations.

NOTE that everything but the last option requires lots of storage capacity
on each image station, since each station is storing copies of all the image
files.
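
As a rough sketch of option 1 using two external IP's on a 2.2-kernel
LEAF box (all addresses and the port number are hypothetical, and each
remote sender would have to be pointed at a different external address):

  # Alias a second external address onto the outside interface.
  ifconfig eth0:0 1.2.3.5 netmask 255.255.255.0 up

  # Forward the transfer port on each external address to a
  # different internal station.
  ipmasqadm portfw -a -P tcp -L 1.2.3.4 10000 -R 192.168.1.10 10000
  ipmasqadm portfw -a -P tcp -L 1.2.3.5 10000 -R 192.168.1.11 10000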

Charles Steinkuehler
http://lrp.steinkuehler.net
http://c0wz.steinkuehler.net (lrp.c0wz.com mirror)






Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-28 Thread Michael D. Schleif


Charles Steinkuehler wrote:
 
   ???
   Please explain a bit more about exactly what you're trying to
 accomplish...
 
  Large medical images -- some approaching gigabyte sizes.
 
  The internal network connects multiple facilities.  The images may need
  to be shared across multiple facilities.
 
  Our preferred solution is to put one (1) copy of each image on a large
  and robust fileserver inside their network.  The catch is, they are
  using proprietary systems for viewing and analyzing the images and we
  may not be granted access or information adequate to implement our
  preferred solution.  Currently, the remote sources are using their
  proprietary systems (black boxes) to auto-magically transfer the files
  directly to one (1) proprietary system inside our customer's network.
  Yes, this looks in every way like ftp -- except the proprietary system
  vendor says, no, it is not that simple ;-)
 
  When one of these images is needed on another proprietary system inside
  this network, somebody needs to push the required file to another
  proprietary system.  Our customer wants ``pull'' access from any given
  system.
 
  In brainstorming alternatives, this occurred to me:
 
  send images
       |
       V
    internet
       |
       V
    firewall
       |
       +----------+----------+
       |          |          |
       V          V          V
    host_1     host_2     host_n ...
 
  Regardless of whether or not this is the best solution for this
  application, how can this be done?
 
 Well, it sounds like you're in black-box hell :-)

Do I detect a bit of empathy from somebody who's been here before?

 My current understanding of your problem (please correct me if I'm wrong):
 
 - A remote, black-box system pushes images to one black-box system in your
 customer's network (let's call it Master)
 
 - Your customer can push images from the Master black box system to several
 other black-box systems (Slaves)
 
 - Your customer wants to be able to view images from any slave system w/o
 having to push the file from the master system (ie pull, not push)

Yes -- this pretty much sums it up ;-)

 Without an understanding of the black-boxes involved, there's not a lot you
 can do here.  If I am now understanding your initial question properly, you
 are asking if it's possible to somehow get the remote system to push the
 image content to multiple systems on your internal network.  The simple
 answer is no...there is no straightforward network sleight-of-hand that will
 allow you to trick one system into talking to multiple remote systems at the
 same time.  The more complex answer is maybe, and depends a lot on unknown
 details of your black-box systems...

Yes, that is core to my question: can the port-forwarding process do a
``tee'' to multiple addresses?

 I can think of a few things that could potentially help you:
 
 1) Get the remote system to push the data to all clients.  This will chew up
 lots of internet bandwidth, and will either require multiple external IP's,
 or control over which ports are used to connect (assuming you're running a
 masqueraded internal network).
 
 2) Get the Master internal system to push the data to all slave systems.
 This may be possible to automate, or it may require a console
 jockey...depends on unknown (to me) details of your black-box systems.
 
 3) Whip up a Black-Box emulator, that can talk the file-transfer protocol
 used by your systems.  Have your remote system transfer files to your fake
 system, then push the data from there to all internal systems, as required.
 This will require network programming (not too tough in modern languages
 like Java, Perl, Python, etc.) and an understanding (or reverse engineering)
 of the file transfer protocols used by your equipment.
 
 4) Read through the docs for your black-boxes, and see if they support any
 kind of image-server access.  I think this is really what you want...a
 central image server.  If you're lucky, you can do this with a *nix box.  If
 you're unlucky, you'll need a proprietary box from your vendor for the other
 systems.  If you're really-really unlucky, there's no support for any kind
 of 'pull' technology on the individual Slave stations.
 
 NOTE that everything but the last option requires lots of storage capacity
 on each image station, since each station is storing copies of all the image
 files.

Yes, these are the alternatives we are considering.  Yes, all of these
depend -- more or less -- on access to proprietary information to which
we may or may not be privy ;-)

Of course, the image/fileserver approach makes most sense from many
perspectives.

Nevertheless, the port forwarding ``tee'' feasibility is an interesting
question, regardless of our current customer's predicament!

man ipmasqadm, on my potato box, contains an interesting example, which
may or may not shed light on this; but, which I also do *not* fully
understand:

``Redirect all web traffic to internals hostA and hostB, where hostB
will serve 2 times hostA connections.  Forward rules already masq
internal hosts to outside (typical).
ipchains -I input -p tcp -y -d yours.com/32 80 -m 1
ipmasqadm mfw -I -m 1 -r hostA 80 -p 10
ipmasqadm mfw -I -m 1 -r hostB 80 -p 20''

What is this really doing?

RE: [Leaf-user] portfw to *multiple* hosts ???

2001-12-28 Thread Paul M. Wright, Jr.

Maybe this is way out in left field, but could this be a potential
application for multicasting?



paul


Paul M. Wright, Jr.
McKay Technologies
making technology play nice









Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-28 Thread Charles Steinkuehler

 Yes, these are the alternatives we are considering.  Yes, all of these
 depend -- more or less -- on access to proprietary information to which
 we may or may not be privy ;-)

 Of course, the image/fileserver approach makes most sense from many
 perspectives.

 Nevertheless, the port forwarding ``tee'' feasibility is an interesting
 question, regardless of our current customer's predicament!

But sadly, this is not generally possible.  Certainly not without some
custom code.  Perhaps not at all, depending on the details of the
communication protocol (or at least not without a full application-level
proxy).

 man ipmasqadm, on my potato box, contains an interesting example, which
 may or may not shed light on this; but, which I also do *not* fully
 understand:

 ``Redirect all web traffic to internals hostA and hostB, where hostB
 will serve 2 times hostA connections.  Forward rules already masq
 internal hosts to outside (typical).
 ipchains -I input -p tcp -y -d yours.com/32 80 -m 1
 ipmasqadm mfw -I -m 1 -r hostA 80 -p 10
 ipmasqadm mfw -I -m 1 -r hostB 80 -p 20''

 What is this really doing?

If I'm understanding this properly, it's doing load-balancing.  The
ipchains rule marks inbound SYN packets for port 80 with firewall mark 1,
and the two mfw rules then hand marked connections to hostA or hostB in
proportion to their preference values (10 and 20), so hostB gets roughly
twice as many connections.  This divides individual inbound web
connections between two internal web servers.  You want something that
takes a single inbound connection (from the remote system to your
internal Master system) and tees the connection to multiple systems,
which is just not how TCP communications work...

Charles Steinkuehler
http://lrp.steinkuehler.net
http://c0wz.steinkuehler.net (lrp.c0wz.com mirror)






Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-28 Thread Scott C. Best

Paul:

Heya. Not so left field, really. I've used ipfwd to
forward IPSec packets (protocols 50 and 51) to my NAT'd LAN's
broadcast address...and IPSec clients on that LAN can handle
it.
Of course...IPSec has some sense of the state of a
connection beyond just the packets. Webservers don't. I wonder
what would happen with some game servers, or VNC. Hmmm...

-Scott


 Maybe this is way out in left field, but could this be a potential
 application for multicasting?

 paul





[Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread Michael D. Schleif


Quite simply, what is the simplest, secure way to forward to two (2)
hosts?  There are probably better ways to accomplish the end goal; but,
we have an application whereby we may need to push very large files from
the internet to two (or, more) locations behind a Dachstein firewall.

What do you think?

-- 

Best Regards,

mds
mds resource
888.250.3987

Dare to fix things before they break . . .

Our capacity for understanding is inversely proportional to how much we
think we know.  The more I know, the more I know I don't know . . .




Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread Jeff Newmiller

On Thu, 27 Dec 2001, Michael D. Schleif wrote:

 
 Quite simply, what is the simplest, secure way to forward to two (2)
 hosts?  There are probably better ways to accomplish the end goal; but,
 we have an application whereby we may need to push very large files from
 the internet to two (or, more) locations behind a Dachstein firewall.
 
 What do you think?

scp or https/PUT to separate ports (22 and 2022, or 443 and 4443, for
example), one port for each host.  The hosts could each see input on the
nominal port (22 or 443).
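
On an ipmasqadm-based firewall like Dachstein, that mapping might look
roughly like the following (the internal addresses are hypothetical):

  # External port 22 goes to one host, external 2022 to the other;
  # each internal host still runs sshd on its nominal port 22.
  ipmasqadm portfw -a -P tcp -L $EXTERN_IP 22   -R 192.168.1.10 22
  ipmasqadm portfw -a -P tcp -L $EXTERN_IP 2022 -R 192.168.1.11 22

The sender then runs one scp per host, differing only in the port it
connects to.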

---------------------------------------------------------------------------
Jeff Newmiller                        DCN: [EMAIL PROTECTED]
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---------------------------------------------------------------------------





Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread Charles Steinkuehler

 Quite simply, what is the simplest, secure way to forward to two (2)
 hosts?  There are probably better ways to accomplish the end goal; but,
 we have an application whereby we may need to push very large files from
 the internet to two (or, more) locations behind a Dachstein firewall.

Simplest is to create multiple INTERN_SERVERS entries.  You'll need
to port-forward from different ports (or have more than one external IP
address).
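
From memory, the network.conf entries are underscore-separated
protocol/external-IP/external-port/internal-IP/internal-port tuples, so
forwarding two external ports to sshd on two internal hosts (addresses
hypothetical) would look something like:

  # /etc/network.conf
  INTERN_SERVERS="tcp_${EXTERN_IP}_22_192.168.1.10_22 \
                  tcp_${EXTERN_IP}_2022_192.168.1.11_22"

Double-check the exact syntax against the comments in your own
network.conf.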

Security of a port-forwarded service is only as secure as the service
itself.  If you really need tight security, you might consider an
application level proxy.  For instance, if you're port-forwarding to a MS
IIS web-server, you might want to run all requests through a *nix based
proxy that filters out (and logs) any *default.ida web-requests.  If you can
afford the overhead (and potential cost...there are many good application
level proxies that are sold as commercial products), this can be a good way
to shield yourself from various attacks, both known and (at least some)
unknown...stuff like buffer overflow attacks, broken protocol attacks, and
the like.

If you want to use a proxy, just port-forward the service to the proxy
instead of your 'real' server, and configure the proxy to talk to your real
server(s)...

If you're looking at pushing/syncing a large number of files to various
remote sites, you might also want to look into rsync and/or ssh.  If the
files aren't terribly sensitive (ie can traverse the internet unencrypted),
you can setup an rsync server at the 'master' site and sync all the clients
periodically.  This is more of a 'pull' architecture, but it can be made
into a 'push' system by having the master run the rsync download command on
the clients via ssh.  If you need to encrypt the transfers, you can tunnel
the entire session through ssh.  You can keep security as tight as you want
with proper ssh configuration...for something like this I usually disable
general logins, set up ssh authentication by RSA/DSA keys only, and have the
ssh session automatically invoke the proper behavior on the client end (ie
fire off an rsync session in your case).  This way, even if the master
server is compromised, you won't automatically get user-level access to the
clients...all you'll be able to do is force them to rsync to the master
server whenever you want...
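
A sketch of that lockdown on each client station, using standard OpenSSH
authorized_keys options (the key material, the master's host name, the
rsync module name, and the paths are all hypothetical):

  # ~/.ssh/authorized_keys on each client: the master's key may
  # connect, but the only thing it can trigger is an rsync pull of
  # the image tree from the master's rsync server.
  command="rsync -az master::images /var/images/",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAAB3...== trigger@master

Even if the master's private key were stolen, all an attacker could do
with it is make each client refresh its copy of the images.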

You can also get rsync/ssh for windows (see cygwin
http://www.redhat.com/download/cygwin.html ) if your network is of the M$
persuasion...

Charles Steinkuehler
http://lrp.steinkuehler.net
http://c0wz.steinkuehler.net (lrp.c0wz.com mirror)






Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread Charles Steinkuehler

   Quite simply, what is the simplest, secure way to forward to two (2)
   hosts?  There are probably better ways to accomplish the end goal;
but,
   we have an application whereby we may need to push very large files
from
   the internet to two (or, more) locations behind a Dachstein firewall.
  
   What do you think?
 
  scp or https/PUT to separate ports (22 and 2022, or 443 and 4443, for
  example), one port for each host.  The hosts could each see input on the
  nominal port (22 or 443).

 Yes, I see this; but, is there some way to accomplish this --
 simultaneously -- with one (1) remote operation?

???
Please explain a bit more about exactly what you're trying to accomplish...

Charles Steinkuehler
http://lrp.steinkuehler.net
http://c0wz.steinkuehler.net (lrp.c0wz.com mirror)






Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread Michael D. Schleif


Charles Steinkuehler wrote:
 
Quite simply, what is the simplest, secure way to forward to two (2)
hosts?  There are probably better ways to accomplish the end goal;
 but,
we have an application whereby we may need to push very large files
 from
the internet to two (or, more) locations behind a Dachstein firewall.
   
What do you think?
  
   scp or https/PUT to separate ports (22 and 2022, or 443 and 4443, for
   example), one port for each host.  The hosts could each see input on the
   nominal port (22 or 443).
 
  Yes, I see this; but, is there some way to accomplish this --
  simultaneously -- with one (1) remote operation?
 
 ???
 Please explain a bit more about exactly what you're trying to accomplish...

Large medical images -- some approaching gigabyte sizes.

The internal network connects multiple facilities.  The images may need
to be shared across multiple facilities.

Our preferred solution is to put one (1) copy of each image on a large
and robust fileserver inside their network.  The catch is, they are
using proprietary systems for viewing and analyzing the images and we
may not be granted access or information adequate to implement our
preferred solution.  Currently, the remote sources are using their
proprietary systems (black boxes) to auto-magically transfer the files
directly to one (1) proprietary system inside our customer's network. 
Yes, this looks in every way like ftp -- except the proprietary system
vendor says, no, it is not that simple ;-)

When one of these images is needed on another proprietary system inside
this network, somebody needs to push the required file to another
proprietary system.  Our customer wants ``pull'' access from any given
system.

In brainstorming alternatives, this occurred to me:

send images
     |
     V
 internet
     |
     V
 firewall
     |
     +----------+----------+
     |          |          |
     V          V          V
  host_1     host_2     host_n ...

Regardless of whether or not this is the best solution for this
application, how can this be done?

What do you think?

-- 

Best Regards,

mds
mds resource
888.250.3987

Dare to fix things before they break . . .

Our capacity for understanding is inversely proportional to how much we
think we know.  The more I know, the more I know I don't know . . .




Re: [Leaf-user] portfw to *multiple* hosts ???

2001-12-27 Thread David Douthitt

On 12/27/01 at 10:21 PM, Michael D. Schleif [EMAIL PROTECTED] wrote:

 Large medical images -- some approaching gigabyte sizes.
 
 The internal network connects multiple facilities.  The
 images may need to be shared across multiple facilities.
 
 Our preferred solution is to put one (1) copy of each
 image on a large and robust fileserver inside their
 network.  The catch is, they are using proprietary systems
 for viewing and analyzing the images and we may not be
 granted access or information adequate to implement
 our preferred solution.  Currently, the remote sources are
 using their proprietary systems (black boxes) to
 auto-magically transfer the files directly to one (1)
 proprietary system inside our customer's network. Yes,
 this looks in every way like ftp -- except the proprietary
 system vendor says, no, it is not that simple ;-)
 
 When one of these images is needed on another proprietary
 system inside this network, somebody needs to push the
 required file to another proprietary system.  Our customer
 wants ``pull'' access from any given system.
 
 In brainstorming alternatives, this occurred to me:
 
 send images
      |
      V
  internet
      |
      V
  firewall
      |
      +----------+----------+
      |          |          |
      V          V          V
   host_1     host_2     host_n ...
 
 Regardless of whether or not this is the best solution for
 this application, how can this be done?
 
 What do you think?

This sounds to me like a case for rsync + ssh.  There is, if you
need it, an rsync.lrp already - and of course, ssh.lrp.  You could set
up rsync either as a push or a pull alternative.  As a case study,
consider that there are many publicly accessible rsync servers (the
Linux kernel site kernel.org comes to mind...)

If you could set up host_1, host_2, etc. to be rsync recipients, why
not tunnel rsync via ssh through the firewall?
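
For instance (host and path names purely illustrative), each station
could pull new or changed images over an ssh transport with a single
command:

  # Sync the image tree from the fileserver, encrypted in transit.
  rsync -avz -e ssh fileserver:/var/images/ /var/images/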
--
David Douthitt
UNIX Systems Administrator
HP-UX, Unixware, Linux
[EMAIL PROTECTED]
