Re: preventing future situations like panix

2006-01-23 Thread Thor Lancelot Simon

On Mon, Jan 23, 2006 at 12:47:38PM -0700, Josh Karlin wrote:
 
 Suspicious routes are those that originate at an AS that has not
 originated the prefix in the last few days and those that introduce
 sub-prefixes.  Sub-prefixes are always considered suspicious (~1 day)
 and traffic will be routed to the super-prefix for the suspicious
 period.

So, if you consider the recent ConEd hijacking incident, it seems to
me that:

1) ConEd's announcement of _some_ of the prefixes would have been
   considered suspicious -- but not all, since some of the prefixes in
   question were for former customers or peers who had only recently
   terminated their business arrangements with ConEd.

2) Panix's first, obvious countermeasure aimed at restoring their
   connectivity -- announcing their own address space split in half --
   would *also* have been considered suspicious, since it introduced two
   sub-prefixes of what ConEd was hijacking.

Unless I misunderstand what you're proposing -- which is entirely possible,
in fact perhaps even likely -- it seems to me that it might well have done
at least as much harm as good.
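
To make the concern concrete, here's a toy model of the heuristic as I
read it, in Python.  The prefixes and AS numbers are documentation and
private-use values, and the data structures are my own illustration, not
anything from the actual proposal:

    from ipaddress import ip_network

    # Toy model only: prefix_history maps each prefix to the set of origin
    # ASes seen announcing it over the last few days.
    prefix_history = {
        ip_network("198.51.100.0/24"): {64501},   # victim's prefix and AS
    }

    def is_suspicious(prefix, origin_as, history):
        prefix = ip_network(prefix)
        # A new origin AS for a known prefix is suspicious.
        if prefix in history and origin_as not in history[prefix]:
            return True
        # Any sub-prefix of a known prefix is suspicious (~1 day),
        # no matter who originates it.
        return any(prefix != known and prefix.subnet_of(known)
                   for known in history)

    # The hijacker re-originating the /24 is flagged...
    print(is_suspicious("198.51.100.0/24", 64502, prefix_history))    # True
    # ...but so is the victim's own countermeasure of splitting its
    # space in half and announcing the two more-specifics:
    print(is_suspicious("198.51.100.0/25", 64501, prefix_history))    # True
    print(is_suspicious("198.51.100.128/25", 64501, prefix_history))  # True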

Thor


oof. panix sidelined by incompetence... again.

2006-01-22 Thread Thor Lancelot Simon


This is hardly as serious as the last incident -- but, well, some people
do seem to have all the luck, eh?

Of course, there are measures one can take against this sort of thing; but
it's hard to deploy some of them effectively when the party stealing your
routes was in fact once authorized to offer them, and its own peers may
be explicitly allowing them in filter lists (which, I think, is the case
here).  Sometimes budget network connectivity isn't -- even when you've
already realized that and turned off the tap!

The text below is what's currently in the MOTD on Panix's NetBSD hosts:

==

Con Ed 'stealing' Panix routes (alexis) Sun Jan 22 12:38:16 2006

   All Panix services are currently unreachable from large portions of the
   Internet (though not all of it). This is because Con Ed Communications,
   a competence-challenged ISP in New York, is announcing our routes to the
   Internet. In English, that means that they are claiming that all our
   traffic should be passing through them, when of course it should not.
   Those portions of the net that are closer (in network topology terms)
   to Con Ed will send them our traffic, which makes us unreachable.
   
   We are taking several steps to deal with this:
   1) We are announcing more specific routes to our peers. More specific
   routes are always preferred. However, we have to contact network admins
   at those peers to get them to change their route filters, before this
   workaround will be effective.
   2) We are attempting to reach Con Ed Communications. Unfortunately, so
   far we've been unable to do so. They don't seem to answer their phones
   on Sunday.
   3) We are attempting to reach Verio, which is upstream from Con Ed,
   because they could (and should!!) choose to ignore the rogue routes from
   Con Ed.
   
   Since all of these depend on humans outside of Panix, we can't give a
   specific time at which we expect this problem to be worked around (I
   don't expect a real resolution for a while, because Con Ed is hopeless,
   but the workaround will be perfect until then). But we do expect to
   be able to reach responsible parties at our peers within a few hours at
   most. We don't know how long it will take for them to change their
   filters, but that's not a challenging job technically, so we hope it won't
   take long.
   
   I'll post another MOTD as soon as we know anything more.
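
For anyone wondering why step 1 of the MOTD works at all: forwarding
follows the longest matching prefix, so a more-specific route wins over a
covering aggregate no matter who announces the aggregate.  A minimal
illustration in Python, with documentation prefixes and made-up next hops
rather than anyone's real announcements:

    from ipaddress import ip_address, ip_network

    # Made-up table: the hijacker originates the /24; the victim works
    # around it by announcing the two covering /25s.
    routes = [
        (ip_network("198.51.100.0/24"),   "via-hijacker"),
        (ip_network("198.51.100.0/25"),   "via-victim"),
        (ip_network("198.51.100.128/25"), "via-victim"),
    ]

    def best_route(dest):
        # Longest-prefix match: the most specific covering route wins.
        covering = [(net, hop) for net, hop in routes if ip_address(dest) in net]
        return max(covering, key=lambda r: r[0].prefixlen)

    print(best_route("198.51.100.7"))   # the /25 "via-victim" beats the /24

The catch, as the MOTD notes, is that many peers filter out more-specifics
until their admins adjust the filters by hand.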



Re: oof. panix sidelined by incompetence... again.

2006-01-22 Thread Thor Lancelot Simon

On Sun, Jan 22, 2006 at 10:33:04AM -0800, william(at)elan.net wrote:
 
 
 Can there be a confirmation of this? I see no such MOTD at
  http://www.panix.com/panix/help/Announcements/

Verio was just extremely helpful and, upon request from Panix's staff,
quite rapidly filtered out the bogus Panix routes ConED was sending
them.  AFAICT ConED is still sending the bogus routes, and since they
evidently don't believe in staffing their NOC on the weekend, or
responding to reports of their own misconduct, heaven only knows if
they'll ever stop.

Thanks to Verio's quick intervention, the problem, thank goodness,
seems to be solved.  The current Panix MOTD is below:



Connectivity restored (alexis) Sun Jan 22 13:31:28 2006

   As of around 1:10 PM, all of the Internet can reach Panix again.
   
   We accomplished this by getting our peers to accept more-specific routes
   from us. We also, nearly simultaneously, got Con Ed's rogue route
   announcements pulled by Verio, their upstream.
   
   I'm surprised and pleased that Verio, which we don't have a business
   relationship with, was so easy to contact and so ready to do what they
   should.
   
   No mail was lost during this outage. Some was delayed, of course, and
   everything should be caught up again in an hour or two. Please let us know
   if you have network problems *after* 1:10PM EST.



Re: US slaps fine on company blocking VoIP

2005-03-04 Thread Thor Lancelot Simon

On Fri, Mar 04, 2005 at 01:54:33PM -0800, David Schwartz wrote:
 
   I'm curious how you'd feel if your local telephone company started
 preventing you from calling its competitors. How about if you suddenly

Your local telephone company is a regulated entity.  It's required to
complete your calls regardless of which other carrier they terminate
on.

Vonage has fought tooth and nail to *not* be a regulated entity.  But
now it's turning around and complaining that other non-regulated
entities are employing the same freedom from regulation that Vonage
enjoys in a way that Vonage finds inconvenient.

Meanwhile, Vonage has been pretty much entirely out of service for
the entirety of this afternoon, for all subscribers.  Something very
similar happened yesterday.  If Vonage were a regulated telephone
carrier, it would be subject to millions of dollars of fines --
essentially, the regulatory regime would force it to give back to
its customers the money it will doubtless not give back to them of
its own good will (it would be suicidally stupid business practice
to give it back unless they ask, after all, and most won't ask).

But Vonage has used a complaisant FCC as a stick to beat another
non-regulated entity with in order to force it to behave the way
Vonage wants.

This is all very effective but it does stink to high heaven.  We
can argue about whether it is best to have telecom regulation or
not have telecom regulation, but exactly as much regulation as
Vonage happens to want, where and when Vonage happens to want it,
is certainly neither equitable nor good.

Thor


Re: More on Vonage service disruptions...

2005-03-03 Thread Thor Lancelot Simon

On Wed, Mar 02, 2005 at 09:46:05AM -0600, Church, Chuck wrote:

 Another thing for an ISP considering blocking VoIP is the fact that
 you're cutting off people's access to 911.  That alone has got to have
 some tough legal ramifications.  I can tell you that if my ISP started
 blocking my Vonage, my next cell phone call would be my attorney... 

Why?  Do you have a binding legal agreement with your ISP that requires
them to pass all traffic?  Do you really think you can make a
persuasive case that you have an implicit agreement to that effect?

(Note that I am not expressing an opinion about whether you _should_
 or _might like to_ have such an agreement, just my skepticism that
 you actually _do_ have such an agreement, and can enforce it)

The 911 issue is a tremendous red herring.  In fact, it's more of a
red halibut, or perhaps a red whale.  Vonage fought tooth-and-nail
to *not* be considered a local exchange carrier precisely *so that*
they could avoid the quality of service requirements associated with
911 service.  One of their major arguments in that dispute was that
they provided a service accessible by dialing 911 that was like
real 911 service but that was not actually 911 service.

As I and others noted at the time, that very much violates the
principle of least surprise, and is quite possibly more dangerous
than not providing any 911 service at all: in New York City, for
example, the number to which Vonage sends 911 calls is not equipped
to dispatch emergency services and often advises callers to hang
up and dial 911.  This _decreases_ public safety by causing people
to waste time instead of dealing with emergencies in some constructive
way.

But Vonage nonetheless persisted in insisting that they should not be
held to real 911 service standards, and they prevailed, basically by
convincing a compliant federal regulatory body with little or no
understanding of the underlying technical and human-factors issues
to force the state regulators to see it Vonage's way.  To turn around
now and use 911 reliability (of their service that is like 911 but
not 911 and thus should not _have_ any reliability standards enforced
upon it) as a reason why other carriers should be enjoined from
filtering Vonage's packets is not just wrong, it's absurd.

Of course, like much of Vonage's other rhetoric, it will probably
be effective.  Ultimately, Vonage will succeed in the marketplace
and, in the process of controlling its own costs, manage to wipe
away almost all of the traditional regime of regulation of service
quality, telco accountability, etc., even in realms like access to
emergency services, in which the public good is generally considered
to in fact be well served by those regulations.

We will have cheaper voice telephone service when all is said and
done.  But will we eventually be forced to turn around, after
Vonage uses the cost advantage of differential regulation to wipe
out all the old wireline carriers, and painfully reinstate a large
part of the old regulatory regime to ensure that telecom services
we believe essential to the public good are not (or do not remain)
wiped out as well?

Thor



Re: Vonage complains about VoIP-blocking

2005-02-15 Thread Thor Lancelot Simon

On Tue, Feb 15, 2005 at 01:45:05PM -0500, Eric Gauthier wrote:
 
   On Tue, Feb 15, 2005 at 11:53:59AM -0600, Adi Linden wrote:
   How is this any different than blocking port 25 or managing the bandwidth
   certain applications use?
 
 Something else to consider.  We block TFTP at our border for security reasons 
 and we've found that this prevents Vonage from working.  Would this mean that 
 LEC's can't block TFTP?

This is a significant issue.  Vonage is complaining about what are
purportedly deliberate actions to block their service, while at the
same time trying to sweep under the rug that *they have chosen to
provide their service using insecure protocols that some carriers
might quite reasonably choose to filter*.

If their centrally-provided service -- everything is forced through their
SIP proxy anyway, resulting in a voice network architecture that really
looks like a giant corporate VoIP PBX -- were actually properly
resistant to tampering and random-adversary eavesdropping, it would
*also* be opaque to intermediate networks: providers blocking SSL or
ESP to Vonage's proxies would _clearly_ have no motivation to do so
save interference with Vonage service.

It is my general impression of Vonage that they are very, very savvy
about gaming what they perceive as the regulatory trend at the Federal
level in an attempt to cut technical corners and thus grow their
service faster than they could if they consistently did things right.
The history of their many, many wiggles on 911 access shows this pretty
obviously, I think, and here I believe we have another case: they want
to try to get regulatory agencies or the courts to force intermediate
networks to let their packets through (by claiming all such filtering
_must_ be deliberate) rather than actually doing what, on technical
grounds, they ought to do anyway: provide real security to their
customers.

It is understandable, and probably a viable economic and political
strategy, but that doesn't really make it right.  It behooves those
of us who understand the actual underlying technical issues (e.g.
telco routing and human factors issues with Vonage's so-called 911
service; man-in-the-middle and eavesdropping issues with Vonage's
totally unsecured TFTP boot and SIP services from each ATA) to do
our best to point them out, so that, if possible, coercive regulatory
decisions are not made on the basis of smoke and mirrors.

Thor


Re: Why do so few mail providers support Port 587?

2005-02-15 Thread Thor Lancelot Simon

On Tue, Feb 15, 2005 at 09:00:11PM -0500, Sean Donelan wrote:
 
 Sendmail now includes Port 587, although some people disagree how
 it's done.  But Exchange and other mail servers still make it difficult
 for system administrators to configure Port 587 (if it doesn't say
 "click here for Port 587" during the Windows installer, it's too
 complicated).

This is utterly silly.  Running another full-access copy of the MTA
on a different port than 25 achieves precisely nothing -- and this
support has always been included in sendmail, with a 1-line change
either to the source code (long ago) or the default configuration or
simply by running sendmail from inetd.

What benefit, exactly, do you see to allowing unauthenticated mail
submission on a different port than the default SMTP port?

Similarly, what harm, exactly, do you see to allowing authenticated
mail submission on port 25?

What will actually give us some progress on spam and on usability
issues is requiring authentication for mail submission.  Which TCP
port is used for the service matters basically not at all.
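
To be concrete about what requiring authentication looks like from the
client side, here is a minimal submission sketch using Python's smtplib.
The host, credentials, and addresses are placeholders, and swapping 587
for 25 changes nothing that matters:

    import smtplib
    from email.message import EmailMessage

    HOST, PORT = "mail.example.com", 587   # placeholder host; 25 works identically
    USER, PASSWORD = "user", "secret"      # placeholder credentials

    msg = EmailMessage()
    msg["From"] = "user@example.com"
    msg["To"] = "someone@example.org"
    msg["Subject"] = "test"
    msg.set_content("The AUTH step, not the port number, is what matters.")

    with smtplib.SMTP(HOST, PORT) as s:
        s.starttls()              # protect the credentials in transit
        s.login(USER, PASSWORD)   # this is the step that actually buys you something
        s.send_message(msg)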

Thor


Re: Why do so few mail providers support Port 587?

2005-02-15 Thread Thor Lancelot Simon

On Wed, Feb 16, 2005 at 02:23:04AM +, Adrian Chadd wrote:
 
 Quite useful when it works (read: the other party has implemented
 AUTH-SMTP on port 587).

And if they've implemented unauthenticated SMTP on port 587, like,
say, Sendmail, you've achieved nothing, or possibly worse, since you
have encouraged people to simply run open relays on a different port
than 25.  How long do you think it's going to take for spammers to
take advantage of this?  (That's a rhetorical question: I already see
spam engines trying to open port 587 connections in traces).

Slavishly changing ports isn't the solution.  Actually using authentication
is the solution.  It is silly -- to say the least -- to confuse the benefits
of the two.

Thor


Re: Why do so few mail providers support Port 587?

2005-02-15 Thread Thor Lancelot Simon

On Tue, Feb 15, 2005 at 09:30:18PM -0500, Sean Donelan wrote:
 
 In theory true, you could run a TELNET listener on Port 25 or 135.  But
 the world works a bit better when most people follow the same practice.
 Port 587 is for authenticated mail message submission.

I'm sorry, your last message seemed to indicate that you felt that
Sendmail accepting unauthenticated mail on port 587 (if configured to
accept unauthenticated mail at all) was not a problem; that, somehow,
it was a *good* thing that it would happily apply the same policy to
all ports it listened on, so long as one of those ports was 587.

Is that not, in fact, your position?

It is really hard for me to see encouraging people to run additional
unauthenticated mail servers on some other port as a good idea, and it
is really hard for me to read the actual text in your first message
any other way than simply "mail accepted on port 587 good."

Thor


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Tue, Jan 25, 2005 at 07:23:17PM -0500, Deepak Jain wrote:
 
 
 I have a situation where I want to run Nx24 pairs of GE across a 
 datacenter to several different customers. Runs are about 200 meters max.
 
 When running say 24-pairs of multi-mode across a datacenter, I have 
 considered a few solutions, but am not sure what is common/best practice.

I assume multiplexing up to 10Gb (possibly two links thereof) and then
back down is cost-prohibitive?  That's probably the best practice.

Thor


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Wed, Jan 26, 2005 at 02:49:29PM -0500, Hannigan, Martin wrote:
   
   When running say 24-pairs of multi-mode across a datacenter, I have
   considered a few solutions, but am not sure what is common/best
   practice.
  
  I assume multiplexing up to 10Gb (possibly two links thereof) and then
  back down is cost-prohibitive?  That's probably the best practice.
 
 I think he's talking physical plant. 200m should be fine. Consult
 your equipment for power levels and support distance.

Sure -- but given the cost of the new physical plant installation he's
talking about, the fact that he seems to know the present maximum data
rate for each physical link, and so forth, I think it does make sense to
ask the question "is the right solution to simply be more economical
with physical plant by multiplexing to a higher data rate?"

I've never used fibre ribbon, as advocated by someone else in this thread,
and that does sound like a very clever space- and possibly cost-saving
solution to the puzzle.  But even so, spending tens of thousands of
dollars to carry 24 discrete physical links hundreds of meters across a
datacenter, each at what is, these days, not a particularly high data
rate, may not be the best choice.  There may well be some question about
at which layer it makes sense to aggregate the links -- but to me, the
question "is it really the best choice of design constraints to take
aggregation/multiplexing off the table?" is a very substantial one here
and not profitably avoided.

Thor


Re: High Density Multimode Runs BCP?

2005-01-26 Thread Thor Lancelot Simon

On Wed, Jan 26, 2005 at 09:17:44PM -0500, John Fraizer wrote:
 
 I assume multiplexing up to 10Gb (possibly two links thereof) and then
 back down is cost-prohibitive?  That's probably the best practice.
 
 It's best practice to put two new points of failure (mux + demux) in a 
 200m fiber run?

Well, that depends.  To begin with, it's not one run, it's 24 runs.
Deepak described the cost of those 24 runs as:

 I priced up one of these runs at 100m, and I was seeing a list price in
 the ballpark of $2500-$3000 plenum. So I figured it was worth asking if
 there is a better way when we're talking about N times that number. :)

So, to take his lower estimate 24 x $2500, we're talking about $60,000
worth of cable -- and all the bulk and management hassle of 48 strands
of fibre for what is in one sense logically a single run.

It still probably doesn't cover the cost of muxing it up and back down,
but particularly when you consider that space for 48 strands isn't free
either, it is certainly worth thinking about.

I was a little surprised by the $2500/pair figure but that's what he
said.
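
If anyone wants to play with the numbers, the comparison is trivial to
parameterize.  Only the cable figures below come from Deepak; the
mux/demux figure is a pure placeholder to plug a real quote into:

    # Back-of-the-envelope only.
    runs, cable_per_run = 24, 2500        # Deepak's lower estimate
    discrete = runs * cable_per_run
    print(discrete)                       # 60000 -- the figure above

    mux_hardware = 50000                  # placeholder for a mux/demux pair quote
    aggregated_runs = 2
    aggregated = mux_hardware + aggregated_runs * cable_per_run
    print(aggregated, aggregated < discrete)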

Thor


Re: Gtld transfer process

2005-01-18 Thread Thor Lancelot Simon

On Tue, Jan 18, 2005 at 06:36:16PM +1100, Bruce Tonkin wrote:
 
 (5) The registry will send a message to the losing registrar confirming
 that a transfer has been initiated.

Can you confirm or deny whether this actually happened in the case of
the panix.com transfer?

The other problem I see in this area is that the RRP specification (if
that is in fact the protocol that was used) seems to claim that this
message is out-of-band and thus beyond the scope of the protocol: so it
does not (can not) specify an ACK.  If an attacker found a way to prevent
this message from being received, even if generated...

A strictly enforced technical requirement for an ACK here might work
wonders (perhaps it would have to be enforced by duping both the
confirmation and the ACK to "the System", as RRP so quaintly calls it, and
denying future transfers initiated by parties with too many outstanding
ACKs).  Not an approval, just an ACK.
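
Here is a sketch of the sort of enforcement I have in mind -- entirely
hypothetical, not anything RRP or any registry actually implements, with
the threshold and names invented for illustration:

    from collections import defaultdict

    MAX_OUTSTANDING = 5          # invented threshold

    # gaining registrar -> transfer ids whose notifications to the losing
    # registrar have not yet been ACKed; the registry keeps a copy of both
    # the notification and the ACK.
    outstanding = defaultdict(set)

    def initiate_transfer(transfer_id, gaining, losing, notify):
        if len(outstanding[gaining]) >= MAX_OUTSTANDING:
            raise RuntimeError("%s has too many un-ACKed notifications" % gaining)
        outstanding[gaining].add(transfer_id)
        notify(losing, transfer_id)       # registry -> losing registrar

    def ack_notification(transfer_id, gaining):
        # Just "I received the notice" -- not an approval of the transfer.
        outstanding[gaining].discard(transfer_id)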

There seems to be a general lack of IETF design and review of protocols
in this crucial area.  Again not good.

Thor


[alexis@panix.com: Panix.com- Some brief comments on the hijacking of our domain]

2005-01-17 Thread Thor Lancelot Simon

- Forwarded message from Alexis Rosen [EMAIL PROTECTED] -

X-Original-To: [EMAIL PROTECTED]
Delivered-To: [EMAIL PROTECTED]
Resent-Message-Id: [EMAIL PROTECTED]
X-Original-To: [EMAIL PROTECTED]
Delivered-To: [EMAIL PROTECTED]
Date: Mon, 17 Jan 2005 01:42:04 -0500
From: Alexis Rosen [EMAIL PROTECTED]
To: nanog@merit.edu
Subject: Panix.com- Some brief comments on the hijacking of our domain
User-Agent: Mutt/1.4.2.1i

[Please note: I tried to post this five hours ago. It didn't make it, though
 I resubscribed to nanog-post (and acked the confirmation check) about half
 an hour previously. I'm resending (with light edits) and CCing this to a
 few friends; if any of you get this and see that it's not on nanog yet,
 please resend it for me. Thanks.]

We're still digging out from under here, so I can't say nearly as much as I'd
like. However, I have a few things that really need to be said sooner rather
than later. (A couple of the later points are operational. Skip to *** if
you don't care who I'm grateful to...)

First, I want to thank Martin Hannigan at Verisign. Whatever I may think of
the (in)action I got from other parties there, he made significant efforts
to get them to move, and the incomplete view of events that I have leads
me to believe that it's his efforts, and the efforts of others at Verisign
that he worked on, that got Melbourne IT to finally get off the dime. This
was a very serious effort on his part, for someone who wasn't his direct
customer, and I'm very appreciative of the concern and the effort.

(This isn't to say that the immense efforts of other parties weren't also
helpful in this respect.)

Secondly, I want to thank the MANY people here (and elsewhere), most of whom
I don't know and have never had contact with, who devoted time and energy to
this issue. Some I do know, and some of them were especially generous. You
know who you are, but a partial list includes Thor Simon, Perry Metzger,
Steve Bellovin, Bill Manning, and hm, I don't know if I can say those names.
Thank you.

Third (here's the ***), I want to make a plea for those with operational
control over large nameservers to reload their caches or expire out the
panix.com entries from their caches, if they haven't yet picked up the
correct data for our zone. (Note that having correct NS records isn't
sufficient if you're caching all types.) The correct zones can be pulled
from 198.7.0.1 or 198.7.0.2, for comparison's sake.

If any of you have hand-copied our data into your DNS, please delete it
so we're not afflicted by odd bits of stale data in the far future, when
this incident is long forgotten.

I noted something very odd earlier today. The A records for the hosts
purporting to be mail.panix.com and mail2.panix.com were changed, with the
last octets switched to .0, making them unreachable. At the time I was
grateful (because mail was being queued or bounced at the sender side,
rather than bounced -- and possibly copied -- at the recipient side) but I
didn't have time to try to figure out who had done what. I still don't know
who/what was responsible, but I thank those who are, and just so I have a
fuller understanding, I'd appreciate it if someone who knows what was done
would contact me and fill me in.

Someone here pointed out that we seem to have an SSH daemon running on
port 80. That's intentional. It's on our shell hosts, and it's actually a
clever bit of front-end code that switches web clients to a web server and
ssh clients to the ssh daemon. It's for the benefit of customers who want
to ssh in but are behind dumbass (or rightfully paranoid, take your pick)
firewalls that don't allow out anything but connections to port 80.

Thor and others have been commenting a bit on the fact that *something*
is broken or compromised, either at MelbourneIT, Dotster, or Verisign. I
hope that now that it's Monday morning in Australia, and will be in 12-15
hours here in the US, we can make some progress on figuring out what really
happened. This would start with Verisign, Dotster, and MelbourneIT producing
*all* relevant logs. I'll be discussing that with them tomorrow.

There's a lot more to be said here, but for now we're going to finish
cleaning up the mess, get the registry back to dotster, and try to catch
up on some sleep. Oh, and work with various law enforcement types to try
to catch the bastards responsible for this.

/a
---
Alexis Rosen
President
Public Access Networks Corp. - Panix.com  [EMAIL PROTECTED]
Grand Central Server LLC.[EMAIL PROTECTED]

- End forwarded message -
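
Incidentally, the port-80 front end Alexis mentions is a standard
protocol-demultiplexing trick (tools such as sslh do the same thing): web
clients speak first, while SSH clients generally wait for the server's
banner.  A sketch of the classification step only, in Python -- the
timeout and the fallthrough policy are my own guesses, and this is not
Panix's actual code:

    import socket

    HTTP_METHODS = (b"GET ", b"POST", b"HEAD", b"PUT ", b"OPTI", b"DELE")

    def classify(conn, timeout=2.0):
        """Guess whether an accepted port-80 connection is HTTP or SSH."""
        conn.settimeout(timeout)
        try:
            first = conn.recv(8, socket.MSG_PEEK)   # peek without consuming
        except socket.timeout:
            return "ssh"        # a silent client is almost certainly ssh
        if first.startswith(b"SSH-"):
            return "ssh"        # some ssh clients do send their banner first
        if first.startswith(HTTP_METHODS):
            return "http"
        return "http"           # hand anything unrecognized to the web server

A real front end would then splice the accepted connection through to the
ssh daemon or the web server accordingly; that plumbing is omitted here.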


Re: panix.com recovery in progress

2005-01-16 Thread Thor Lancelot Simon

On Sun, Jan 16, 2005 at 06:01:35PM -0500, Henry Yen wrote:
 
 The latest shell host motd's:
 
 . Hijack recovery underway (elr) Sun Jan 16 17:43:28 2005
 . 
 .Recovery is underway from the panix.com domain hijack.
 .
 .The root name servers now have the correct information, as does the
 .WHOIS registry.  Portions of the Internet will still not be able to
 .see panix.com until their name servers expire the false data.  More
 .info soon.

Yes, some folks with serious mojo got involved and things seem to be
on the way to being operationally fixed.  AFAIK there is still no progress as
to the question of how this kind of transfer can happen without notice
to the transferred-from registrar (it's possible that there's progress
I don't know about).

I have just spoken to the tremendously tired and overworked ops staff at
Panix again.  They would appreciate it very much if network operators
would reload their nameservers to help the good data for panix.com
propagate over the bad.  Some Panix customer email now appears to be
relayed to the actual Panix mail servers by the fake ones in the UK, which
is not such a good thing for obvious reasons.
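
One quick way for an operator to check whether a resolver is still handing
out the hijacker's data is to compare its answers with what Panix's own
nameservers (198.7.0.1 and 198.7.0.2, per Alexis's note above) return.  A
sketch using the dnspython library -- the library choice and structure are
mine, not something Panix distributes, and it assumes dnspython 2.x, where
the call is resolve() rather than query():

    import dns.resolver   # dnspython 2.x

    def answers(nameserver=None, name="panix.com", rdtype="A"):
        # With no nameserver given, ask whatever the system resolver points at.
        r = dns.resolver.Resolver(configure=(nameserver is None))
        if nameserver:
            r.nameservers = [nameserver]
        return sorted(rr.to_text() for rr in r.resolve(name, rdtype))

    good = answers("198.7.0.1")   # Panix's own server
    cached = answers()            # the local recursive resolver
    if cached != good:
        print("stale or bogus panix.com data still cached:", cached, "vs", good)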

Thor


Re: panix.com hijacked (VeriSign refuses to help)

2005-01-15 Thread Thor Lancelot Simon

Alexis Rosen tried to send this to NANOG earlier this evening but it
looks like it never made it.  Apologies if it's a duplicate; we're
both reduced to reading the list via the web interface since the
legitimate addresses for panix.com have now timed out of most folks'
nameservers and been replaced with the hijacker's records.

Note that we contacted VeriSign both directly and through intermediaries
well known to their ops staff, in both cases explaining that we suspect
a security compromise (technical or human) of the registration systems
either at MelbourneIT or at VeriSign itself (we have reasons to suspect
this that I won't go into here right now).  We noted that after calling
every publicly available number for MelbourneIT and leaving polite
messages, the only response we received was a rather rude brush-off from
MelbourneIT's corporate counsel, who was evidently directed to call us
by their CEO.

We are also told that law enforcement separately contacted VeriSign on
our behalf, to no avail.

Below please find VeriSign's response to our plea for help.  We're rather
at a loss as to what to do now; MelbourneIT clearly are beyond reach,
VeriSign won't help, and Dotster just claim they still own the domain and
that as far as they can tell nothing's wrong.  Panix may not survive this
if the formal complaint and appeal procedure are the only way forward.

 Date: Sun, 16 Jan 2005 00:21:33 -0500
 To: [EMAIL PROTECTED], NOC Supervisor [EMAIL PROTECTED]
 Subject: Re: FW: [EMAIL PROTECTED]: Brief summary of panix.com hijacking 
 incident]  (KMM2294267V49480L0KM)
 From: VeriSign Customer Service [EMAIL PROTECTED]
 X-Mailer: KANA Response 7.0.1.127
 
 Dear Alexis,
 
 Thank you for contacting VeriSign Customer Service.
 
 Unfortunately there is little that VeriSign, Inc. can do to rectify this
 situation.  If necessary, Dotster (or Melbourne) is more than welcome to
 contact us to obtain the specific details as to when the notices were
 sent and other historical information about the transfer itself.
 
 Dotster can file a Request for Enforcement if Melbourne IT contends that
 the request was legitimate and we will review the dispute and respond
 accordingly.  Dotster can also contact Melbourne directly and if they
 come to an agreement that the transfer was fraudulent they can file a
 Request for Reinstatement and the domain would be reinstated to its
 original Registrar.  Dotster could submit a normal transfer request to 
 Melbourne IT for the domain name and hope that Melbourne IT agrees to
 transfer the name back to them outside of a dispute having been filed. 
 In order to expedite processing the transfer or submitting a Request for
 Reinstatement, however, Dotster will need to contact Melbourne IT
 directly.  If Dotster is unable to get in touch with anyone at Melbourne
 IT we can assist them directly if necessary.
 
 Best Regards,
 
 Melissa Blythe
 Customer Service
 VeriSign, Inc.
 www.verisign.com
 [EMAIL PROTECTED]



Re: panix.com hijacked

2005-01-15 Thread Thor Lancelot Simon

Apologies for what may be another duplicate message, probably with broken
threading.  This is Alexis Rosen's original posting to this thread; we
think the mail chaos caused by the hijacking of panix.com kept it from
ever reaching the list (but, flying mostly-blind, we aren't sure).


 On Sat, Jan 15, 2005 at 10:27:31PM -0500, Steven M. Bellovin said:
  panix.com has apparently been hijacked.  It's now associated with a 
  different registrar -- melbourneit instead of dotster -- and a 
  different owner.  Can anyone suggest appropriate people to contact to 
  try to get this straightened out?
 
 Hi, all.
 
 I hate to pop my head up after years of lurking, only when things are
 going bad, but probably better that than remaining silent.
 
 First of all, I'm going to be bounced from this list once its cache of
 my DNS times out, which will probably be in about 2-3 hours, so if you have
 anything to say that you'd like me to see, please copy me. We're temporarily
 accepting mail at panix.net in addition to panix.com, so use alexis (at)
 panix.net.
 
 A few points to respond to:
 First, Eric, thanks for contacting Bruce and Eric on my behalf. While
 nothing has happened so far, I hope that it will soon, and in any case
 I appreciate your efforts to help a total stranger.
 
 Someone asked if we had registrar-lock set. It's not clear to me what
 happened. Our understanding is that we had locks on all of our domains.
 However, when we looked, locks were off on panix.net and panix.org, which
 we own but don't normally use. It's not clear how that happened; dotster
 has yet to contact us with any information about, well, anything at all.
 They did answer a call this morning; they're apparently in the middle of
 an ice storm. All I was able to learn from them is that according to the
 person I talked to, they had no records of any transfer requests on our
 domain from today back through last October.
 
 Someone suggested invoking a dispute procedure. We'll do that, as soon as
 we can get someone to actually accept the dispute, but if it goes through
 that process to completion, many people will suffer, and Panix itself will
 be tremendously damaged. How long do you think even our customers will
 stay loyal? (Forever, for many of them, but that doesn't mean they won't be
 forced to start using a different service.)
 
 While it's true that MelbourneIT won't do anything before (their) Monday
 morning, I don't want to paint them as bad guys in this drama. I don't
 know how they're organized and I don't know how difficult it is for them
 logistically. Of course I want them to move faster. Much faster. But I'll
 take what I can get.
 
 And speaking of MIT,  I don't intend to send them nastygrams - nor NSI
 either. Neither of them owes me anything (at least directly) and being
 heavyhanded would not be a good way to get what I want (restoral of the
 panix.com domain to dotster) even if I thought they deserved it. I expect
 that there will be criminal prosecutions arising out of this, but the time
 for that sort of thing is later, when things are back to normal, and we've
 fixed any systemic vulnerabilities that can be fixed before they're used
 to wreak mass havoc. And it's anyone's guess who the target of those
 prosecutions will be, but I doubt MIT or NSI will be among them.
 
 Lastly, someone expressed surprise that I'd call MIT's lawyer directly.
 I didn't. I spent *hours* trying to find working contact info for MIT and
 Dotster. I didn't find useful 24-hour NOC-type info anywhere. (Someone
 obviously has this info; I expect it's restricted to a list of registrars.)
 I reached Dotster's customer support when they opened for business Saturday
 morning; the guy was polite, and did what he could, but I saw no evidence
 whatsoever of the promised attempt to assist me after he got off the phone.
 MIT apparently has no weekend support at all; I finally located their CEO's
 cellphone on an investor-relations web page. I called him, and he had his
 lawyer call me back. That was his choice. FWIW, she's not just a lawyer;
 she's apparently the person who has to make decisions about reverting
 control of the domain. So she at least needs to be aware of our position.
 My impression is that she didn't fully grasp the gravity of the situation,
 and so treated us like she'd treat any other annoying customer who managed
 to track her down on her day off. This is somewhat understandable (though
 infuriating) which is why I'd hoped to talk to someone on their tech side
 first. No luck there, but if any of this reaches them, maybe that will
 start things going.
 
 Thanks again to everyone who has tried to help us today.
 
 /a



Re: panix.com hijacked (VeriSign refuses to help)

2005-01-15 Thread Thor Lancelot Simon

On Sun, Jan 16, 2005 at 02:22:59AM -0500, Paul G wrote:
 
 
 - Original Message - 
 From: Thor Lancelot Simon [EMAIL PROTECTED]
 To: nanog@merit.edu
 Sent: Sunday, January 16, 2005 2:04 AM
 Subject: Re: panix.com hijacked (VeriSign refuses to help)
 
 
 
  Alexis Rosen tried to send this to NANOG earlier this evening but it
  looks like it never made it.  Apologies if it's a duplicate; we're
 
 --- snip ---
 
 how about trying to get in touch with the folks hosting the dns (on the off
 chance that they are honest and willing to help) and asking them to put up
 the correct panix.com zone?

The purported current admin contact appears to be a couple in Las Vegas
who are probably the victims of a joe job.  A little searching will
reveal that people by that name really *do* live at the address given,
and that one of the phone numbers given is a slightly obfuscated form
of a Las Vegas number that either now or in the recent past belonged to
one of them.

Suffice to say it doesn't seem to be possible to get them to change the
DNS.

Chasing down the records for the tech contact, and for the party to which
the IP addresses now returned for various panix.com hosts (e.g.
142.46.200.72 for panix.com itself) are allocated, and doing a little
gumshoe work, seems to show that they're all in some way associated with
a UK holding company that, when contacted by phone, claims no knowledge
of today's mishap involving Panix.com.  It's possible that this set of
entities was chosen specifically *because* its convoluted ownership
structure would make it as difficult as possible to get it to let go of
a domain it may or may not even know it is now the tech contact for.

Beyond the above, it's basically a matter for law enforcement.  Who is
really behind the malfeasance here is not clear, but what is clear
enough to me at this point is that there is, in fact, some deliberate
wrongdoing going on.  Whether the point is just to harm Panix or
to actually somehow profit by it I don't know, but I do note that
an earlier message in this thread pointed out a very similar earlier
incident involving MelbourneIT as the registrar, the same bogus new
domain contacts, and another hapless U.S. corporate victim.

I don't know if these are merely isolated attempts at harassment and
mischief or the precursors to a more widespread attack.  What I do know
is that I'm very concerned, Panix is quite literally fighting for its
life, everyone we've shown details of the problem to is concerned --
including CERT, AUSCERT, and knowledgeable law enforcement personnel --
with the notable exception of MelbourneIT, whose sole corporate response
has been one of decided unconcern, and VeriSign, who seem entirely
determined to pass the buck instead of investigating, fixing, or helping.

And so it goes.

Thor