Network pipes

2003-07-24 Thread Diomidis Spinellis
I am currently testing a set of modifications to /bin/sh that allow a
user to create a pipeline over the network using a socket as its
endpoints.  Currently a command like

tar cvf - / | ssh remotehost dd of=/dev/st0 bs=32k

has tar sending each block through a pipe to a local ssh process, ssh
communicating through a socket with a remote ssh daemon and dd
communicating with sshd through a pipe again.  The changed shell allows
you to write

tar cvf - / |@ ssh remotehost -- dd of=/dev/st0 bs=32k | :

The effect of the above command is that a socket is created between the
local and the remote host with the standard output of tar and the
standard input of dd redirected to that socket.  Authentication is still
performed using ssh (or any other remote login mechanism you specify
before the -- argument), but the flow between the two processes is from
then on not protected in terms of integrity and privacy.  Thus the
method will mostly be useful within the context of a LAN or a VPN.

The authentication design requires the users to have a special command
in their path on the remote host, but does not require an additional
privileged server or the reservation of special ports.
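
In rough C, the local side of the mechanism looks something like the
sketch below.  This is only an illustration, not the actual patch: the
helper name "netpipe", the function name, and the argument order are
placeholders, and all error handling is reduced to returning -1.

/*
 * Sketch of the local half of a network pipe.  The shell listens on an
 * ephemeral port, uses ssh/rsh only to authenticate and start the
 * connect-back helper on the remote host, and the accepted socket then
 * takes the place of the local pipe end.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
netpipe_local(const char *rsh, const char *rhost, const char *rcmd,
    const char *myaddr)
{
        struct sockaddr_in sin;
        socklen_t len = sizeof(sin);
        char port[16], cmd[1024];
        int lfd, sfd;

        /* Listen on an ephemeral port for the callback connection. */
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        lfd = socket(AF_INET, SOCK_STREAM, 0);
        if (lfd == -1 ||
            bind(lfd, (struct sockaddr *)&sin, sizeof(sin)) == -1 ||
            listen(lfd, 1) == -1 ||
            getsockname(lfd, (struct sockaddr *)&sin, &len) == -1)
                return (-1);
        snprintf(port, sizeof(port), "%d", ntohs(sin.sin_port));

        /* The remote login program only authenticates and starts the helper. */
        snprintf(cmd, sizeof(cmd), "%s %s netpipe %s %s '%s'",
            rsh, rhost, myaddr, port, rcmd);
        if (fork() == 0) {
                execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
                _exit(127);
        }

        /* The accepted socket replaces the local pipe end. */
        sfd = accept(lfd, NULL, NULL);
        close(lfd);
        return (sfd);           /* the shell dup2()s this onto tar's stdout */
}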

By eliminating two processes, the associated context switches, the data
copying, and (in the case of ssh) the encryption, performance is markedly
improved:

dd if=/dev/zero bs=4k count=8192 | ssh remotehost -- dd of=/dev/null
33554432 bytes transferred in 17.118648 secs (1960110 bytes/sec)
dd if=/dev/zero bs=4k count=8192 |@ ssh remotehost -- dd of=/dev/null | :
33554432 bytes transferred in 4.452980 secs (7535276 bytes/sec)

Even after eliminating the encryption overhead by using rsh, you can
still see a difference:

dd if=/dev/zero bs=4k count=8192 | rsh remotehost -- dd of=/dev/null
33554432 bytes transferred in 131.907130 secs (254379 bytes/sec)
dd if=/dev/zero bs=4k count=8192 |@ rsh remotehost -- dd of=/dev/null | :
33554432 bytes transferred in 86.545385 secs (387709 bytes/sec)

My questions are:

1. How do you feel about integrating these changes into the /bin/sh in
-CURRENT?  Note that network pipes are a different process plumbing
mechanism, so they really do belong in a shell; implementing them
through a separate command would be inelegant.

2. Do you see any problems with the new syntax introduced?

3. After the remote process starts running, its standard error output is
lost.  Do you find this a significant problem?

4. Both sides of the remote process are communication endpoints and have
to be connected to other local processes via pipes.  Would it be enough
to document this behaviour or should it be hidden from the user by means
of forked read/write processes?

Diomidis - http://www.spinellis.gr


Re: Network pipes

2003-07-24 Thread Luigi Rizzo
hi,
i have the following questions:

* strange benchmark results! Given the description, I would expect 
  the |@ rsh and |@ ssh cases to give the same throughput, and
  in any case | rsh to be faster than | ssh. How come, instead,
  that the times differ by an order of magnitude? Can you run the
  tests in similar conditions to appreciate the gains better?

* I do not understand how you can remove the pipe on the remote host
  without modifying both sshd and sh there.
  I think it would be very important to understand how much
  |@ depends on the behaviour of the remote daemon.

* the loss of encryption on the channel is certainly something that might
  escape the attention of the user. I also wonder in how many cases you
  really need the extra performance to justify the extra plumbing
  mechanism.

* there are subtle implications of your new plumbing in the way
  processes are started. With A | B | C the shell first creates the
  pipes, then it can start the processes in any order, and they can
  individually fail to start without any direct consequence other
  than an I/O failure. A |@ B |@ C requires that you start things
  from the end of the chain (because you cannot start a process 
  until you have a [socket] descriptor from the next stage in the
  chain), and if a process fails to start you cannot even start the
  next one in the sequence. Not that this is bad, just very different
  from regular pipes.

All the above leaves me a bit puzzled on whether or not this is a
useful addition... In fact, i am not convinced that network pipes
should be implemented in the shell...
 
cheers
luigi



Re: Network pipes

2003-07-24 Thread Diomidis Spinellis
Luigi Rizzo wrote:
 * strange benchmark results! Given the description, I would expect
   the |@ rsh and |@ ssh cases to give the same throughput, and
   in any case | rsh to be faster than | ssh. How come, instead,
   that the times differ by an order of magnitude? Can you run the
   tests in similar conditions to appreciate the gains better?

They were executed on different machines.  The ssh result was between
freefall.freebsd.org and ref5; the rsh result was between old low-end
Pentium machines on my home network.

 * I do not understand how you can remove the pipe on the remote host
   without modifying both sshd and sh there.
   I think it would be very important to understand how much
   |@ depends on the behaviour of the remote daemon.

The remote daemon is only used for authentication.  Thus any remote host
command execution method can be used without modifying the client or the
server.  What the modified shell does is start on the remote machine a
separate command netpipe.   Netpipe takes as arguments the originating
host, the socket port, the command to execute, and its arguments. 
Netpipe opens the socket back to the originating host, redirects its I/O
to the socket, and execs the specified command.
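
As a rough sketch (not the actual implementation, and with error
handling reduced to exiting), netpipe can be pictured like this:

/*
 * netpipe originating-host port command [args ...]
 *
 * Sketch of the helper described above: connect back to the
 * originating host, put the socket on standard input and output,
 * and exec the command.  (Which descriptors really get redirected
 * is a detail; both are shown here for simplicity.)
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        struct sockaddr_in sin;
        struct hostent *hp;
        int s;

        if (argc < 4) {
                fprintf(stderr, "usage: netpipe host port command [args]\n");
                exit(1);
        }

        /* Open the socket back to the originating host. */
        if ((hp = gethostbyname(argv[1])) == NULL)
                exit(1);
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(atoi(argv[2]));
        memcpy(&sin.sin_addr, hp->h_addr, hp->h_length);
        if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1 ||
            connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
                exit(1);

        /* Redirect standard I/O to the socket ... */
        dup2(s, STDIN_FILENO);
        dup2(s, STDOUT_FILENO);
        close(s);

        /* ... and exec the specified command. */
        execvp(argv[3], &argv[3]);
        perror(argv[3]);
        exit(127);
}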

 * the loss of encryption on the channel is certainly something that might
   escape the attention of the user. I also wonder in how many cases you
   really need the extra performance to justify the extra plumbing
   mechanism.

I felt the need for such functionality when moving gigabytes of data
between different machines for creating a disk copy and a backup to tape.
My requirements may be atypical, which is why I asked for input.

 * there are subtle implications of your new plumbing in the way
   processes are started. With A | B | C the shell first creates the
   pipes, then it can start the processes in any order, and they can
   individually fail to start without any direct consequence other
   than an I/O failure. A |@ B |@ C requires that you start things
   from the end of the chain (because you cannot start a process
   until you have a [socket] descriptor from the next stage in the
   chain), and if a process fails to start you cannot even start the
   next one in the sequence. Not that this is bad, just very different
   from regular pipes.

It is even worse.  You cannot write A |@ B |@ C because sockets are
created on the originating host.  For the above to work, you would need a
mechanism to create another socket between the B and C machines.  Maybe
the syntax should be changed to make such constructions
counterintuitive.


Diomidis


Re: Network pipes

2003-07-24 Thread John
- Diomidis Spinellis's Original Message -
  Luigi Rizzo wrote:
  
  * the loss of encryption on the channel is certainly something that might
escape the attention of the user. I also wonder in how many cases you
really need the extra performance to justify the extra plumbing
mechanism.

Once you pass a certain order of magnitude, it becomes an overriding
priority.  That is why many backup systems are hand-crafted.

 I felt the need for such functionality when moving gigabytes of data
 between different machines for creating a disk copy and a backup to tape.
 My requirements may be atypical, which is why I asked for input.

Your requirements are not atypical. There are folks out here dealing
with 100s and 1000s of TB.

  * there are subtle implications of your new plumbing in the way
processes are started. With A | B | C the shell first creates the
pipes, then it can start the processes in any order, and they can
individually fail to start without any direct consequence other
than an I/O failure. A |@ B |@ C requires that you start things
from the end of the chain (because you cannot start a process
until you have a [socket] descriptor from the next stage in the
chain), and if a process fails to start you cannot even start the
next one in the sequence. Not that this is bad, just very different
from regular pipes.
 
 It is even worse.  You cannot write A |@ B |@ C because sockets are
 created on the originating host.  For the above to work, you would need a
 mechanism to create another socket between the B and C machines.  Maybe
 the syntax should be changed to make such constructions
 counterintuitive.

Syntactic consistency should be a high priority.

-john


Probing for devices

2003-07-24 Thread Geoff Glasson
Folks,

I'm trying to port the Linux i810 Direct Rendering Interface ( DRI ) kernel 
module to FreeBSD.  I have reached the point where the thing compiles, and I 
can load it as a kernel module, but it can't find the graphics device.

Through a process of elimination I have come to the conclusion that once the 
AGP kernel module probes and attaches to the i810 graphics device, nothing 
else can attach to it.  When I read the section on PCI devices it implied ( 
to me at least ) that multiple kernel modules should be able to attach to the 
same device.  I have tried to get it to work without any success.

Has anyone else out there tried anything similar?  If so, did you manage to 
get it to work?  Does anyone have any suggestions for other things that I can 
try?

Thanks in advance...Geoff

BTW: I'm running FreeBSD 4.8-STABLE on a Pentium 3.

-- 
Geoff Glasson
[EMAIL PROTECTED]



Problem with program memory layout.

2003-07-24 Thread Fred Gilham


Hello,

I'm working on creating CMU Lisp executables that contain both the
startup executable and the lisp image (the actual lisp system code).
The way it currently works is that the startup executable reads the
lisp image and mmaps parts of it, called spaces, into memory.  What
I'm trying to do is add the spaces to the startup executable as ELF
segments so that when the program starts, the spaces will get mapped
into memory by the OS.  The idea is that the core and startup code
have to be synchronized and so the core won't run without the
particular startup program it was created with, and sometimes one
forgets to keep the startup program when one upgrades.

I've created a program to write out ELF compatible .o files that
contain the code in the core file.  I have also hacked a linker script
that will combine these .o files with the rest of the .o files that
make up the startup program.  The resulting program will actually run.

Unfortunately when it calls malloc, malloc isn't able to set itself up
in memory and returns 0.  What happens is that malloc calls sbrk and
sbrk returns an EINVAL error.

After some exploration I figured out why this was happening.  There's
a field called vm_daddr in the vmspace structure that gets set when
the segments are loaded.  The following code in
/usr/src/sys/kern/imgact_elf.c does this:



    /*
     * Is this .text or .data?  We can't use
     * VM_PROT_WRITE or VM_PROT_EXEC, it breaks the
     * alpha terribly and possibly does other bad
     * things so we stick to the old way of figuring
     * it out:  If the segment contains the program
     * entry point, it's a text segment, otherwise it
     * is a data segment.
     *
     * Note that obreak() assumes that data_addr +
     * data_size == end of data load area, and the ELF
     * file format expects segments to be sorted by
     * address.  If multiple data segments exist, the
     * last one will be used.
     */
    if (hdr->e_entry >= phdr[i].p_vaddr &&
        hdr->e_entry < (phdr[i].p_vaddr +
        phdr[i].p_memsz)) {
            text_size = seg_size;
            text_addr = seg_addr;
            entry = (u_long)hdr->e_entry;
    } else {
            data_size = seg_size;
            data_addr = seg_addr;
    }

    ...

    vmspace->vm_daddr = (caddr_t)(uintptr_t)data_addr;


Sbrk will return EINVAL when the new requested break value is smaller
than the current contents of vm_daddr (this is done in
/usr/src/sys/vm/vm_unix.c).  But vm_daddr gets set to the address of my
last loaded segment, rather than that of the actual data segment.
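
To make the shape of the problem concrete, the check boils down to
something like the following (my paraphrase for illustration, not the
verbatim vm_unix.c code, and with made-up example addresses):

#include <errno.h>

typedef unsigned long vm_offset_t;      /* stand-in for the kernel type */

/* A requested break below vm_daddr is simply rejected with EINVAL. */
static int
model_obreak(vm_offset_t vm_daddr, vm_offset_t new_break)
{
        if (new_break < vm_daddr)
                return (EINVAL);        /* malloc() sees sbrk() fail */
        return (0);                     /* otherwise grow or shrink */
}

int
main(void)
{
        /* vm_daddr ends up at my last loaded segment, far above the
           real data segment, so an ordinary sbrk() request fails. */
        return (model_obreak(0x48000000UL, 0x08060000UL));
}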

This seems to mean I lose completely.

Can anyone suggest a way around this?

Here's the program header for the executable I create:


Elf file type is EXEC (Executable file)
Entry point 0x8049398
There are 9 program headers, starting at offset 52

Program Headers:
  Type   Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
  PHDR   0x34 0x08048034 0x08048034 0x00120 0x00120 R   0x4
  INTERP 0x000154 0x08048154 0x08048154 0x00019 0x00019 R   0x1
  [Requesting program interpreter: /usr/libexec/ld-elf.so.1]
  LOAD   0x00 0x08048000 0x08048000 0x12940 0x12940 R E 0x1000
  LOAD   0x012940 0x0805b940 0x0805b940 0x0078c 0x09c50 RW  0x1000
  DYNAMIC0x012edc 0x0805bedc 0x0805bedc 0x000b8 0x000b8 RW  0x4
  NOTE   0x000170 0x08048170 0x08048170 0x00018 0x00018 R   0x4
  LOAD   0x014000 0x1000 0x1000 0xaa 0xaa R E 0x1000
  LOAD   0xab4000 0x28f0 0x28f0 0x27d000 0x27d000 R E 0x1000
  LOAD   0xd31000 0x4800 0x4800 0x01000 0x01000 R E 0x1000

 Section to Segment mapping:
  Segment Sections...
   00 
   01 .interp 
   02 .interp .note.ABI-tag .hash .dynsym .dynstr .rel.bss .rel.plt .init .plt 
.text .fini .rodata 
   03 .data .eh_frame .dynamic .ctors .dtors .got .bss 
   04 .dynamic 
   05 .note.ABI-tag 
   06 CORRO 
   07 CORSTA 
   08 CORDYN 


-- 
-Fred Gilham  [EMAIL PROTECTED]
In America, we have a two-party system.  There is the stupid
party. And there is the evil party. I am proud to be a member of the
stupid party.  Periodically, the two parties get together and do
something that is both stupid and evil. This is called --
bipartisanship.   --Republican congressional staffer

Re: Network pipes

2003-07-24 Thread Luigi Rizzo
i understand the motivations (speeding up massive remote backups and the
like), but i do not believe you need to introduce a new shell
construct (with very different semantics) just to accommodate this.

I believe it is a lot more intuitive to say something like

foo [flags] source-host tar cvzf - /usr dest-host dd of=/dev/bar

and have your 'foo' command do the authentication using ssh or whatever
you require with the flags, create both ends of the socket, call
dup() as appropriate and then exec the source and destination
pipelines.

cheers
luigi



Re: Network pipes

2003-07-24 Thread John-Mark Gurney
Diomidis Spinellis wrote this message on Thu, Jul 24, 2003 at 14:04 +0300:
 separate command netpipe.   Netpipe takes as arguments the originating
 host, the socket port, the command to execute, and its arguments. 
 Netpipe opens the socket back to the originating host, redirects its I/O
 to the socket, and execs the specified command.

This breaks nat firewalls.  It is a very common occurrence to only accept
incoming connections, and only on certain ports.  This means any system
of firewalls will probably be broken by this. :(

i.e. behind a nat to a public system, the return connection can't be
established.  From any system to a nat redirected ssh server, the
incoming connection won't make it to the destination machine.

I think this should just be a utility like Luigi suggested.  This will
help solve these problems.

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 All that I will do, has been done, All that I have, has not.


Re: Network pipes

2003-07-24 Thread Dirk-Willem van Gulik

 I think this should just be a utility like Luigi suggested.  This will
 help solve these problems.

And in large part the traditional netpipe/socket tools in combination with
the -L and -R flags of SSH solve these issues rather handily. And when
used with ssh-keyagent rather nicely.

Dw



Re: Network pipes

2003-07-24 Thread Luigi Rizzo
On Thu, Jul 24, 2003 at 10:36:41AM -0700, John-Mark Gurney wrote:
 Diomidis Spinellis wrote this message on Thu, Jul 24, 2003 at 14:04 +0300:
  separate command netpipe.   Netpipe takes as arguments the originating
  host, the socket port, the command to execute, and its arguments. 
  Netpipe opens the socket back to the originating host, redirects its I/O
  to the socket, and execs the specified command.
 
 This breaks nat firewalls.  It is a very common occurrence to only accept
 incoming connections, and only on certain ports.  This means any system
 of firewalls will probably be broken by this. :(

actually it is the other way around -- this solution simply won't
work on firewalled systems. But to tell the truth, i doubt you'd
do a multi-gb backup through a nat and be worried about the encryption
overhead.

cheers
luigi



Re: Network pipes

2003-07-24 Thread Tim Kientzle
Dirk-Willem van Gulik wrote:
  I think this should just be a utility like Luigi suggested.  This will
  help solve these problems.
 And in large part the traditional netpipe/socket tools in combination with
 the -L and -R flags of SSH solve these issues rather handily. And when
 used with ssh-keyagent rather nicely.

But piping GBs of data through an encrypted SSH connection
is still slow.  The performance issues the OP is
trying to address are real.
Another approach would be to add a new option to SSH
so that it could encrypt only the initial authentication,
then pass data unencrypted after that.  This would
go a long way to addressing the performance concerns.
Tim



Re: Network pipes

2003-07-24 Thread Leo Bicknell
In a message written on Thu, Jul 24, 2003 at 12:48:23PM -0700, Tim Kientzle wrote:
 Another approach would be to add a new option to SSH
 so that it could encrypt only the initial authentication,
 then pass data unencrypted after that.  This would
 go a long way to addressing the performance concerns.

ssh -c none?

Note, you don't want to use password authentication in this case, but
public key should still be ok.

You could also set up something like kerberos and use krsh or similar...

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org




Update on nVidia/MCP ethernet

2003-07-24 Thread Bill Paul

First off, response to my announcement has been amazing. I've received
150 e-mails so far, and they're still coming. Thanks to everyone who
has responded.

An extra special thanks to those people who agreed to talk to contacts
they have within nVidia. I'm still waiting for info from these
people. As soon as I learn something new, I'll pass it along.

I have also made some progress on another front. It occurred to me
that since nVidia is known for GPU expertise rather than networking
expertise that maybe their 'proprietary design' wasn't really anything
of the sort. Well, I was right: what nVidia calls MCP ethernet is
really a Conexant CX25870/1 jedi controller. I have contacted Conexant
and am in the process of trying to obtain a copy of the programming
manual for this device. There are some NDA issues to deal with, however
I've been told they will not prevent me from releasing driver source.

I hope to get this resolved soon. Stay tuned.

-Bill

--
=
-Bill Paul            (510) 749-2329 | Senior Engineer, Master of Unix-Fu
 [EMAIL PROTECTED] | Wind River Systems
=
  If stupidity were a handicap, you'd have the best parking spot.
=


Re: Network pipes

2003-07-24 Thread Kirk Strauser
At 2003-07-24T08:19:49Z, Diomidis Spinellis [EMAIL PROTECTED] writes:

 tar cvf - / |@ ssh remotehost -- dd of=/dev/st0 bs=32k | :

 The effect of the above command is that a socket is created between the
 local and the remote host with the standard output of tar and the standard
 input of dd redirected to that socket.  Authentication is still performed
 using ssh (or any other remote login mechanism you specify before the --
 argument), but the flow between the two processes is from then on not
 protected in terms of integrity and privacy.  Thus the method will mostly
 be useful within the context of a LAN or a VPN.

Isn't this almost the same as:

# ssh -f remotehost 'nc -l -p 54321 | dd of=/dev/st0 bs=32k'
# tar cvf - / | nc remotehost 54321

Netcat implements TCP/UDP transports and basically nothing else.  Isn't
that what you're trying to achieve?
-- 
Kirk Strauser




Fwd: [GE users] FreeBSD binaries for 5.3p4

2003-07-24 Thread Ron Chen
FYI, In case anyone on this list didn't know.

 -Ron

P.S. Gridengine is a batch system (similar to PBS or
LSF), but it is free and has lots of features.


 I've created a FreeBSD port of Sun Grid Engine which I have used to
 create a FreeBSD i386 package and tarball like those on the official
 download page.  The package is available at:

 http://people.freebsd.org/~brooks/sge/sge-5.3.4.tgz

 The tarball is at:

 http://people.freebsd.org/~brooks/sge/sge-5.3p4-bin-fbsd-i386.tar.gz

 If you use this tarball, you will need to do the following once you
 have installed it and the common files from gridengine.sunsource.net:

 cd $SGE_ROOT
 perl -p -i -e 's/freebsd-/fbsd-/g' util/arch util/arch_variables

 This is due to a change made to the branch just after release which
 was backported as a patch in the port so binaries will use the same
 architecture strings going forward.

 If you wish to build the port yourself or wish to use the SGEEE port,
 they are both in this tarball:

 http://people.freebsd.org/~brooks/sge/sge.tar.gz

 They will be added to the ports collection in due time.

 Finally, due to the fact that the FreeBSD port system can't deal with
 the source download URLs on the GE website, there is a bzip'd version
 of the source distribution on the FreeBSD ftp mirror network.  One copy
 is available at:

 ftp://ftp.freebsd.org/pub/FreeBSD/ports/local-distfiles/brooks/sge-V53p4_TAG-src.tar.bz2

 -- Brooks
 


