I was at the meeting where ssh was discussed.
Maybe I am clueless here, but this is my understanding of the SSH
authentication process after being away from active employment as a UNIX
SA for 20 months...
There are 3 components to SSH authentication: (1) user keys; (2) session
keys; and (3) host keys.
I set up session keys to be re-exchanged every ten minutes of an active
SSH connection.
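The post doesn't show the configuration used. In OpenSSH terms (not the SSH Communications product involved here), a ten-minute rekey interval would look something like:

```
# ssh_config / sshd_config (OpenSSH): renegotiate session keys every
# 10 minutes, regardless of how much data has passed
RekeyLimit default 10m
```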
Host keys were exchanged for each host I wanted in my host collection,
for each user who needed to log in to a remote ssh host. I had to
establish the connection manually the first time as that user. Once the
initial authentication and key exchange had occurred, I created a
.shosts entry for the connecting host on the remote host, and future
connections as that user were password/passphrase-less.
The .shosts file lived in the user's home directory, just as a .rhosts
file would.
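A sketch of that .shosts step, assuming host-based authentication is enabled on the remote sshd (in OpenSSH terms, "HostbasedAuthentication yes") and the connecting host's public key is already in the remote ssh_known_hosts. The hostname "local.example.com" and user "admin1" are placeholders, not values from my setup:

```shell
# One-time manual login first, so the initial key exchange happens:
#   ssh admin1@remote.example.com
# Then, on the remote host, allow the connecting host for this user.
HOMEDIR=$(mktemp -d)   # stand-in for the user's home directory on the remote host
printf '%s\n' "local.example.com admin1" >> "$HOMEDIR/.shosts"
chmod 600 "$HOMEDIR/.shosts"   # keep it private; sshd typically ignores lax files
cat "$HOMEDIR/.shosts"
```

After that entry exists, connections from that host as that user go through without a password or passphrase.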
I have probably missed something since it has been so long - it wasn't
that easy!
I used this technique to administer a remote server that was behind an
"outbound only" firewall. The remote server would establish a reverse
tunnel with my local host. I could then use ssh on the local host back
through the reverse tunnel to login as my unprivileged self and su to do
my administrative tasks. The tunnel would not be left up permanently,
since there was a requirement by the network administrators of the
outbound-only firewall to establish new connections periodically.
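The post doesn't record the exact commands; a minimal sketch of the arrangement, with illustrative hostnames and port numbers, would be:

```shell
# On the remote server (behind the outbound-only firewall): open an
# outbound connection that carries a reverse tunnel back to its own sshd.
# -N: no remote command, just forwarding.
ssh -N -R 2222:localhost:22 admin@workstation.example.com

# On the local workstation: ride the tunnel back into the remote server
# as an unprivileged user, then "su -" inside that session for admin work.
ssh -p 2222 unprivuser@localhost
```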
As previously established in the thread, this technique carries a
danger: once the user on the local host is compromised, the
corresponding userid on the remote host is compromised as well. But
once you've been compromised it's all
but over anyway - it's just a matter of time. The key is to always use
"best practices" in approaching security so that the chances of being
compromised are extremely minimal.
Having said that, about the only thing running on the local host I was
using for this purpose was the minimal kernel, the console terminal, the
one network connection and sshd (it had to listen to accept the
establishment of a tunnel). The local host was in a cipher-locked
computer room - no inetd, no inbound mail, no DNS lookup ("hosts" file
only for name resolution). The OS was loaded and modified according to
the SANS.ORG methodology "Securing Solaris Step-by-Step". The local
host and remote host always had up-to-date patches and kernel level
controls were placed upon the network card at boot time to reduce the
possibility of compromise via lower level protocols, SYN floods, etc. I
did load the package "screen" so I could work from multiple terminal
sessions if I needed to without using an X based display.
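The specific kernel-level controls aren't listed above; on Solaris 8 they would typically be ndd settings applied from a boot script, along these illustrative lines (the actual parameters and values used are not in the post):

```shell
# Illustrative Solaris 8 ndd tuning of the kind described:
ndd -set /dev/ip ip_forwarding 0                 # never route between interfaces
ndd -set /dev/ip ip_respond_to_echo_broadcast 0  # ignore broadcast pings (smurf)
ndd -set /dev/tcp tcp_conn_req_max_q0 4096       # larger half-open queue vs SYN floods
```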
The SSH being used was SSHv2 from SSH Communications Security because,
at the time, we couldn't use open source software. All hosts were Sun
boxes with Solaris 8. I believe this same technique can be worked out
with OpenSSH, but I never had the blessing to establish tunnels with a
box running OpenSSH. I did successfully test the normal interoperability
of OpenSSH & SSHv2.
One warning about a problem I was never able to resolve when using SFTP
as an unprivileged user: SFTP allocates new memory (without releasing
the old) on the remote host for every file transfer. So if you log in,
leave the connection up and transfer thousands of files, you could cause
the remote host to have an out-of-memory condition. The workaround was
to force a periodic re-login well before that condition occurred. Why
didn't we use SCP? The time taken up in re-authentication for each file
transfer was prohibitive and we couldn't get enough throughput to meet
client requirements.
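A sketch of that workaround using OpenSSH-style `sftp -b`, with a placeholder host and paths; each batch gets a fresh sftp connection, so the remote side's leaked memory is reclaimed when each session exits:

```shell
# Hypothetical host/paths; reconnect after every 500 files so the remote
# sftp-server's memory use stays bounded.
find outgoing -type f > files.txt
split -l 500 files.txt batch.                  # 500 transfers per session
for list in batch.*; do
    sed 's/^/put /' "$list" > cmds.txt         # one "put <file>" per line
    sftp -b cmds.txt user@remote.example.com   # fresh login per batch
done
```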
I understand that, after I left that company, the user process needing
the file transfers was redesigned and the issue disappeared. SFTP
wasn't fixed though.
_______________________________________________
RLUG mailing list
[email protected]
http://lists.rlug.org/mailman/listinfo/rlug