It is a problem, and frankly one of the biggest PITAs I know of.

True story - although this goes back a couple of years and was in an
environment with few switches, it's still a cautionary tale.

The situation: A large development environment, approx 250-300 users, say
about 90% of the usage of the network being developers & testers and maybe
10% being other usage.

The problem: Every day at about 8:30am +- 5m, a particular server stopped
functioning for most of its users.  No user could connect to it.  The console
said it was up and functional.  Late afternoons, say after 4 or 4:30pm, it came
back.  EVERY DAY.  Once in a blue moon, it would stay up all day.  If it
went down, nothing helped - reboots didn't faze it, nothing would make it
work.  Developers weren't able to develop, testers weren't able to test.

The NOC was run by another contractor to monitor the production side of the
network, but did light troubleshooting for the State users of the
development network.  The NOC claimed the machine was still pingable.  So
the NOC gave up.  This dumped it back into our laps, since "it's a
development problem".  And because it was now a crisis, TPTB authorized
desperate measures.

Enter me - "Mr. Measure".  At the time, I was the boss of the LAN god (who
was actually quite competent and had tried all the usual stuff).

Now the bad boy was on the 4th floor, as were most of its users, but the
developers were in the basement.

So the 1st thing I did was sever the link.  No email, no web access, nothing
but access to the server on 4.  Guess what?  It worked fine.  Never went
down the next day.  Success?  Ha - you haven't seen anything 'til users
can't get their email...  Connected the 4th floor back up and it died
again...   argh...

Day 2:  Warned people the network would be flaky - it's wonderful what you
can do when "extreme measures have been authorized"...  The network wiring
was Cabletron MMAC8s, so we were able to pull entire segments off the
network and poof... we were able to isolate it down to the segment causing
the failure.  Which was in the basement...

Turns out, one of the people in the NOC had a misconfigured IP address (but
you saw this coming because of the subject line and because you understand IP).
This was 1994 and the 1st large-scale TCP/IP deployment of connectivity to
an IBM mainframe, in a mostly Windows 3.1/Novell NetWare environment.

Every day at 8:30am, she came in, powered up her machine and went for a cup
of coffee.

As soon as it booted and loaded the network stack, the gun was cocked and
loaded.  As soon as somebody tried to reach that server, the ARP request went
out and her machine answered - and because she was much closer to the backbone
switch, it decided HER machine was the right one.  And that would be it until
she powered down at 4:00 or 4:30 and the backbone timed out its ARP cache...
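
With hindsight, the quick way to catch that sort of duplicate is to ARP for
the server's address yourself and see how many distinct MACs answer.  A
minimal sketch in Python with scapy (nothing we had in 1994 - and the address
below is made up for illustration):

#!/usr/bin/env python3
# Broadcast a "who-has" ARP request for one IP and list every MAC that
# claims it.  More than one distinct MAC answering means a duplicate IP.
# Requires root and the scapy package; the address below is hypothetical.
from scapy.all import ARP, Ether, srp

TARGET_IP = "10.1.4.10"   # the troubled server's IP (made-up example)

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=TARGET_IP),
    timeout=2,
    retry=2,
    verbose=False,
)

macs = {reply[ARP].hwsrc for _, reply in answered}
for mac in sorted(macs):
    print(f"{TARGET_IP} is claimed by {mac}")
if len(macs) > 1:
    print("CONFLICT: more than one machine is answering for this IP")

Run from a machine on the same segment, something like that would have
pointed the finger at her NIC's MAC in a couple of seconds.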

The very setup conspired against us.  Servers were on a separate, high speed
network segment with a small subnet address range.  Because the NOC was "so
important" they had their own connections directly onto the backbone...

Was this a DHCP environment?  Yes, except for the NOC, who "knew" they were
so important they had to do things their own way, and that meant static IPs.
And one individual couldn't type...

Would I do anything different, today?  Sure -

1) I would have changed the IP address of the server.
2) I would have a much smarter backbone and be able to view the ARP cache
and notice that the MAC address had changed.  The backbone switch would tell
me which port - narrowing it down to part of a floor.  Based on the first
three octets of the MAC address, I could narrow it down even further, to a
brand of network card that I hadn't used in my 200 machines, etc.  (A sketch
of that kind of check follows this list.)
3) The backbone switch would itself probably report the conflict.  Windows
users would see error messages, etc.
4) No users on the backbone.  Don't care who they know or how much they
whine.  If they "have to" have access, then they can deploy their own
infrastructure instead of piggy-backing off mine...
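
To make item 2 concrete: compare the MAC sitting in the ARP cache for the
server's IP against the MAC you expect, and use the first three octets (the
OUI) to guess the card vendor.  A rough sketch - the addresses and the little
OUI table are made-up examples, not data from the actual incident:

# Flag an ARP-cache entry whose MAC doesn't match the server's known NIC,
# and use the OUI (first three octets) to hint at the card vendor.
# The expected MAC, sample entry, and OUI table are all made-up examples.

EXPECTED_MAC = "00:00:65:1a:2b:3c"   # the server's real NIC (hypothetical)

# Tiny illustrative table; a real one would come from the IEEE OUI registry.
OUI_VENDORS = {
    "00:00:65": "vendor A (example)",
    "00:00:1d": "vendor B (example)",
}

def check_arp_entry(ip: str, cached_mac: str) -> None:
    cached_mac = cached_mac.lower()
    if cached_mac == EXPECTED_MAC:
        print(f"{ip}: ARP entry matches the server's NIC")
        return
    oui = cached_mac[:8]              # "aa:bb:cc" - the vendor prefix
    vendor = OUI_VENDORS.get(oui, "unknown vendor")
    print(f"{ip}: CONFLICT - cache says {cached_mac} ({vendor}), "
          f"expected {EXPECTED_MAC}")

# Fed from the backbone switch's ARP table, e.g.:
check_arp_entry("10.1.4.10", "00:00:1D:44:55:66")

Pair that with the switch telling you which port the offending MAC was learned
on and you'd be down to one wiring closet before the first cup of coffee.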


But, IP address conflicts would still be a brass plated pain...

-----Burton





-----Original Message-----
From: Chris [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 14, 2002 11:10 AM
To: [EMAIL PROTECTED]
Subject: DHCP Security Questions


I was curious to find out about some issues that I would like to prevent
if at all possible.  I am running a network with a DHCP server handing
out public IPs to clients.  It is also reserving addresses by MAC for clients
that have static public IPs.  My concern is someone who has legitimate
access to the network purposely or accidentally setting their IP to an
IP that is already taken, logging on to the network, and causing
problems.  Obviously this could really be a problem if a business
client is running some sort of server and someone logs on with that
IP.  Does anyone know of a way to prevent this?  If you need more
details please ask.

Thank You,

Chris Raynor
Network Security
Mendo Link, LLC

"An Ounce Of Prevention Is Worth  A Pound Of Cure."
