Re: Interesting new dns failures

2007-05-25 Thread Per Heldal

On Thu, 2007-05-24 at 17:46 +, Chris L. Morrow wrote:
 which brings us back to my original comment: we need a policy, most likely
 from ICANN, that requires some action based on proper documentation and
 evidence of wrong-doing/malfeasance. That policy needs to dictate some
 monetary penalties for non-compliance.

It's too late to put the genie back in the bottle. The only way to
change the policy before the contract term ends is either to move ICANN
out of US jurisdiction (to break the contract terms) or to organise a
grass-roots uprising to replace ICANN's root with something else.


//per



Re: Interesting new dns failures

2007-05-25 Thread Simon Waters

On Friday 25 May 2007 15:40, you wrote:
 
 It's too late to put the genie back in the bottle. The only way to
 change the policy before the contract term ends is either to move ICANN
 out of US jurisdiction (to break the contract terms) or to organise a
 grass-roots uprising to replace ICANN's root with something else.

Since ICANN doesn't contract with all TLD registries, nor do the root server 
operators control the ccTLDs, there is no way to fix this from the top down. 
One can at best displace it from those top-level domains ICANN does have 
contracts for to those that they don't.

Packets and digs can slow my networks, but other people's names can't hurt me.


Re: Interesting new dns failures

2007-05-25 Thread Scott Weeks




[EMAIL PROTECTED] wrote:
 the bits of governments that deal with online crime, spam, etc.,
 I can report that pretty much all of the countries that matter  
 realize there's a problem, and a lot of them have passed or will  
 pass laws whether we like it or not.  So it behooves us to engage  
 them and help them pass better rather than worse laws.
--


Which countries are "pretty much all of the countries that matter"?  Do you 
have a list or is this just 'something you're sure of'?

scott



Re: Interesting new dns failures

2007-05-25 Thread Valdis . Kletnieks
On Fri, 25 May 2007 12:08:44 PDT, Scott Weeks said:
 [EMAIL PROTECTED] wrote:
  the bits of governments that deal with online crime, spam, etc.,
  I can report that pretty much all of the countries that matter  
  realize there's a problem, and a lot of them have passed or will  
  pass laws whether we like it or not.  So it behooves us to engage  
  them and help them pass better rather than worse laws.
 --

 Which countries are pretty much all of the countries that matter?  Do you
 have a list or is this just 'something you're sure of'?

A lot of the more nefarious uses of the DNS are there precisely because the
actual country *doesn't* matter, and as a result the TLD is run by somebody
who is asleep at the wheel or worse.  For instance, there appears to be a
'*.cm' wildcard in place, and several flag of convenience TLDs with a high
ratio of users that aren't actually associated with the country...
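A blanket wildcard like the one described for .cm is easy to probe for: resolve a few random labels that cannot plausibly be registered and see whether they all come back with the same answer. A minimal sketch in Python; the `resolve` callable, function name, and sample count are illustrative (in practice you would wrap `socket.getaddrinfo` or a DNS library):

```python
import random
import string

def looks_wildcarded(resolve, tld, samples=3):
    """Heuristic wildcard check: resolve several random, almost certainly
    unregistered labels under the TLD.  If every one resolves to the same
    address, the zone is probably wildcarded (as *.cm appears to be).
    `resolve` is any callable mapping a name to an IP string, or None on
    NXDOMAIN."""
    answers = set()
    for _ in range(samples):
        label = "".join(random.choice(string.ascii_lowercase) for _ in range(20))
        ip = resolve(f"{label}.{tld}")
        if ip is None:
            return False  # an NXDOMAIN means there is no blanket wildcard
        answers.add(ip)
    return len(answers) == 1
```

With a real resolver plugged in, this flags zones that synthesize answers for every query; distinct answers per label (e.g. a CDN-backed wildcard) would need a looser comparison.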




Re: Interesting new dns failures

2007-05-25 Thread Chris L. Morrow



On Fri, 25 May 2007 [EMAIL PROTECTED] wrote:

 On Fri, 25 May 2007 12:08:44 PDT, Scott Weeks said:
  [EMAIL PROTECTED] wrote:
   the bits of governments that deal with online crime, spam, etc.,
   I can report that pretty much all of the countries that matter
   realize there's a problem, and a lot of them have passed or will
   pass laws whether we like it or not.  So it behooves us to engage
   them and help them pass better rather than worse laws.
  --

  Which countries are pretty much all of the countries that matter?  Do you
  have a list or is this just 'something you're sure of'?

 A lot of the more nefarious uses of the DNS are there precisely because the
 actual country *doesn't* matter, and as a result the TLD is run by somebody
 who is asleep at the wheel or worse.  For instance, there appears to be a
 '*.cm' wildcard in place, and several flag of convenience TLDs with a high

Cameroon outsourced their DNS infrastructure management to someone, and that
contract includes a 'we can answer X for all queries that would return
NXDOMAIN' clause... that's not 'asleep at the wheel' so much as 'not a good
idea' (except for click revenue, I suppose). How is this different from .cx or
.tv?

 ratio of users that aren't actually associated with the country...


Re: Interesting new dns failures

2007-05-25 Thread Valdis . Kletnieks
On Fri, 25 May 2007 20:31:59 -, Chris L. Morrow said:

 cameroon outsourced their dns infrastructure management to someone, that
 contract includes a we can answer X for all queries that would return
 NXDOMAIN' ... that's not 'asleep at the wheel' 

As I said, asleep at the wheel or worse...




Re: Interesting new dns failures

2007-05-25 Thread Will Hargrave

Joe Provo wrote:

 An obvious catalyst was commercialization of domains.  Which 
 interestingly enough leads us back to the lack of categories and 
 naming morass in which we live. I find it quite humorous that 
 new 'restrictive membership' branches of the tree are now being 
 proposed as a solution to the problem of identity (eg, .bank to 
 solve phishing).  Unless there will be some level of enforcement 
 teeth, we will see the same situation that played out in 94/95:

On a national level it's probably fairly easy to work this sort of thing
out. Lists of banks exist, as do lists of schools (.sch.uk is
prepopulated). The .ltd.uk and .plc.uk are only available to people with
the appropriate company form but aren't really that popular.

There's a larger issue beyond the practicalities: is this in fact an
appropriate use for DNS? DNS isn't a security mechanism.


Will


Re: Interesting new dns failures

2007-05-25 Thread Chris L. Morrow



On Fri, 25 May 2007 [EMAIL PROTECTED] wrote:

 On Fri, 25 May 2007 20:31:59 -, Chris L. Morrow said:

  cameroon outsourced their dns infrastructure management to someone, that
  contract includes a we can answer X for all queries that would return
  NXDOMAIN' ... that's not 'asleep at the wheel'

 As I said, asleep at the wheel or worse...

ha :) as always, perfectly cynical :) I think my point was that someone in
Cameroon (or acting on their official behalf, perhaps pocketing some cash
along the way, who knows?) made a decision that this is 'ok' with them.

-Chris


Re: Interesting new dns failures

2007-05-25 Thread Chris L. Morrow



On Sat, 26 May 2007, Will Hargrave wrote:


 Joe Provo wrote:

  An obvious catalyst was commercialization of domains.  Which
  interestingly enough leads us back to the lack of categories and
  naming morass in which we live. I find it quite humorous that
  new 'restrictive membership' branches of the tree are now being
  proposed as a solution to the problem of identity (eg, .bank to
  solve phishing).  Unless there will be some level of enforcement
  teeth, we will see the same situation that played out in 94/95:

 On a national level it's probably fairly easy to work this sort of thing
 out. Lists of banks exist, as do lists of schools (.sch.uk is
 prepopulated). The .ltd.uk and .plc.uk are only available to people with
 the appropriate company form but aren't really that popular.

 There's a larger issue beyond the practicalities: is this in fact an
  appropriate use for DNS? DNS isn't a security mechanism.

and studies have already shown that 98% of the populace doesn't know:

www.bankovamerica.com
from
<a href="http://www.bankovamerica.com">www.bankofamerica.com</a>

where the thing is pointed (.bank .secure .hereliesgoodness) isn't
relevant so much as making the bad thing go away as quickly as possible...
unless there's a way to discourage it from being made in the first place,
which brings us back to the monetary incentives and policy to provide
such.
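The bankovamerica trick above is mechanically detectable on the mail-client or browser side: compare the hostname the link displays with the hostname it actually points at. A hedged sketch; the function name and the "looks like a hostname" heuristic are deliberately crude placeholders:

```python
from urllib.parse import urlparse

def link_text_mismatch(href, anchor_text):
    """Flag links whose visible text looks like a hostname but does not
    match the host the link actually points at -- the
    www.bankofamerica.com vs. www.bankovamerica.com trick quoted above."""
    target = urlparse(href).hostname or ""
    shown = anchor_text.strip().lower().rstrip("/")
    if "." not in shown or " " in shown:
        return False  # visible text isn't hostname-like; nothing to compare
    return shown != target.lower()
```

This catches only the simplest display/target mismatch; homograph attacks (lookalike Unicode labels) need an additional confusable-character check.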


Re: Interesting new dns failures

2007-05-25 Thread Suresh Ramasubramanian


On 5/26/07, Scott Weeks [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
 the bits of governments that deal with online crime, spam, etc.,
 I can report that pretty much all of the countries that matter
 realize there's a problem, and a lot of them have passed or will
 pass laws whether we like it or not.  So it behooves us to engage
 them and help them pass better rather than worse laws.

Which countries are "pretty much all of the countries that matter"?  Do you have
a list or is this just 'something you're sure of'?


Quite a long list. http://www.londonactionplan.net/?q=node/5

--
Suresh Ramasubramanian ([EMAIL PROTECTED])


Re: Interesting new dns failures

2007-05-25 Thread Scott Weeks



--- [EMAIL PROTECTED] wrote:
From: Suresh Ramasubramanian [EMAIL PROTECTED]

On 5/26/07, Scott Weeks [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
  the bits of governments that deal with online crime, spam, etc.,
  I can report that pretty much all of the countries that matter
  realize there's a problem, and a lot of them have passed or will
  pass laws whether we like it or not.  So it behooves us to engage
  them and help them pass better rather than worse laws.

 Which countries are pretty much all of the countries that matter?  Do you 
 have
 a list or is this just 'something you're sure of'?

Quite a long list. http://www.londonactionplan.net/?q=node/5



I see 24 countries.  Did I miss something?

scott


Re: Interesting new dns failures

2007-05-24 Thread David Ulevitch


Douglas Otis wrote:


On May 22, 2007, at 2:16 PM, Gadi Evron wrote:

On Tue, 22 May 2007, David Ulevitch wrote:

These questions, and more (but I'm biased to DNS), can be solved at 
the edge for those who want them.  It's decentralized there.  It's 
done the right way there.  It's also doable in a safe and fail-open 
kind of way.


This is what I'm talking about.


Agreed.


Gadi,

What is the downside of a preview of zones being published by a 
TLD?  Previews could be on a 12 or 24 hour cycle.  This would enable 
defenses at the edge by disabling fast-flux outright.  There could be 
exceptions, of course.  When millions of domains are in rapid flux 
daily, few protective schemes are able to sustain or afford the 
dispersion of raw threat information.  In addition, these raw updates 
arrive too late at that.  A preview would not change how the core 
works, only how fast changes occur, while also dramatically reducing 
the amount of data required for comprehensive protections at the edge.


This would be a policy change at the core that enables defenses at the 
edge.
Lots of people already track newly added domains.  Rick Wesson runs just 
such a feed, called 'Day old bread'.
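A feed of day-old domains is conceptually just a diff of consecutive zone snapshots. A toy sketch, assuming a simplified one-record-per-line zone format (real gTLD zone files need a proper parser, and the function names are illustrative):

```python
def delegations(zone_lines):
    """Owner names of NS records, given simplified zone-file lines like
    'example.tld. 172800 IN NS ns1.example.net.'"""
    names = set()
    for line in zone_lines:
        fields = line.split()
        if len(fields) >= 5 and fields[2] == "IN" and fields[3] == "NS":
            names.add(fields[0].rstrip(".").lower())
    return names

def new_domains(old_lines, new_lines):
    """Domains delegated in the new snapshot but absent from the old one --
    the raw material of a newly-registered-domain feed."""
    return sorted(delegations(new_lines) - delegations(old_lines))
```

Run against yesterday's and today's snapshots, the output is the set a mail server could treat with suspicion for its first few days of existence.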


Again, good idea, but doesn't belong in the core.  If I register a 
domain, it should be live immediately, not after some 5 day waiting 
period.  By the same token, if you want to track new domains and not 
accept any email from me until my domain is 5 days old, go for it.  Your 
prerogative.


-david




-Doug





Re: Interesting new dns failures

2007-05-24 Thread Suresh Ramasubramanian


On 5/24/07, David Ulevitch [EMAIL PROTECTED] wrote:


Again, good idea, but doesn't belong in the core.  If I register a
domain, it should be live immediately, not after some 5 day waiting
period.  On the same token, if you want to track new domains and not
accept any email from me until my domain is 5 days old, go for it.  Your
prerogative.


Well then - all you need is to have some way to convince registrars
take down scammer domains fast.

Some of them do.   Others don't know (several in Asia) or are aware and
don't care - there's some in Russia, some stateside that mostly kite
domains but don't mind registering a ton of blog and email spammer
domains.

-srs


Re: Interesting new dns failures

2007-05-24 Thread Kradorex Xeron

On Thursday 24 May 2007 03:13, Suresh Ramasubramanian wrote:
 On 5/24/07, David Ulevitch [EMAIL PROTECTED] wrote:
  Again, good idea, but doesn't belong in the core.  If I register a
  domain, it should be live immediately, not after some 5 day waiting
  period.  On the same token, if you want to track new domains and not
  accept any email from me until my domain is 5 days old, go for it.  Your
  prerogative.

 Well then - all you need is to have some way to convince registrars
 take down scammer domains fast.

 Some of them do.   Others don't know (several in Asia) or are aware and
 don't care - there's some in Russia, some stateside that mostly kite
 domains but don't mind registering a ton of blog and email spammer
 domains.

 -srs

Very true - if this is going to work, it's going to have to be on a global 
scale. Not just one country of registrars can be made to correct the problem, 
as people who maliciously register domains will just do what the spyware 
companies do: go to a country that doesn't care and do business there.


Re: Interesting new dns failures

2007-05-24 Thread Per Heldal

On Thu, 2007-05-24 at 12:43 +0530, Suresh Ramasubramanian wrote:
 Well then - all you need is to have some way to convince registrars
 take down scammer domains fast.

It should be the registries' responsibility to keep their registrars in
line. If they fail to do so, their delegation should be transferred
elsewhere.

Of course, to impose decent rules you'd need a root-operator whose
primary goal is to act in the community's best interest with contractual
terms subject to public scrutiny.

You'd have to make some real draconian rules not to have anybody wanting
to operate popular TLDs.

Was DNS designed as a tool for internet-protocol users, or was it
intended as a mechanism to distribute access to revenue-sources?


//per



Re: Interesting new dns failures

2007-05-24 Thread Suresh Ramasubramanian


On 5/24/07, Per Heldal [EMAIL PROTECTED] wrote:


It should be the registries responsibility to keep their registrars in
line. If they fail to do so their delegation should be transferred
elsewhere.

Of course, to impose decent rules you'd need a root-operator whose


Moving right back to where we started off .. the core, or at least
operation of the core :)

This is something that can't be solved at the edge. Unfortunately.

srs


Re: Interesting new dns failures

2007-05-24 Thread Chris L. Morrow



On Thu, 24 May 2007, Kradorex Xeron wrote:

 Very true - if this is going to work, it's going to have to be on a global
 scale. Not just one country of registrars can be made to correct the problem,
 as people who maliciously register domains will just do what the spyware
 companies do: go to a country that doesn't care and do business there.

isn't that why we have ICANN? Shouldn't we ask for policy at the ICANN
level that penalizes registries, who can then penalize registrars for bad
behaviour? From the beginning of this discussion there's been the point
made that without financial incentives this is all moot. That supposed
policy should include financial penalties, it would seem.

-Chris


Re: Interesting new dns failures

2007-05-24 Thread Steve Atkins



On May 24, 2007, at 6:14 AM, Chris L. Morrow wrote:





On Thu, 24 May 2007, Kradorex Xeron wrote:


Very true - if this is going to work, it's going to have to be on a global
scale. Not just one country of registrars can be made to correct the problem,
as people who maliciously register domains will just do what the spyware
companies do: go to a country that doesn't care and do business there.


isn't that why we have ICANN? Shouldn't we ask for policy at the ICANN
level that penalizes registries, who can then penalize registrars for bad
behaviour? From the beginning of this discussion there's been the point
made that without financial incentives this is all moot. That supposed
policy should include financial penalties, it would seem.


How much more, per-domain registration or renewal, would you be
prepared to pay to cover the due-diligence requirements, the
additional skilled staff, the legal and PR costs when domains are
cancelled due to false accusations (or true ones) and so on?

(I'd be prepared to pay quite a bit more, if it were to actually work,
but I know it wouldn't).

Cheers,
  Steve



Re: Interesting new dns failures

2007-05-24 Thread Fergie

-- Kradorex Xeron [EMAIL PROTECTED] wrote:

On Thursday 24 May 2007 03:13, Suresh Ramasubramanian wrote:

 Some of them do.   Others don't know (several in Asia) or are aware and
 don't care - there's some in Russia, some stateside that mostly kite
 domains but don't mind registering a ton of blog and email spammer
 domains.

Very true - if this is going to work, it's going to have to be on a global
scale. Not just one country of registrars can be made to correct the problem,
as people who maliciously register domains will just do what the spyware
companies do: go to a country that doesn't care and do business there.


Well, registrars have to be accredited by ICANN, right?

This is a policy enforcement issue, methinks.

- ferg



--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Interesting new dns failures

2007-05-24 Thread Chris L. Morrow


On Thu, 24 May 2007, Fergie wrote:


 Well, registrars have to be accredited by ICANN, right?
 This is a policy enforcement issue, methinks.

which brings us back to my original comment: we need a policy, most likely
from ICANN, that requires some action based on proper documentation and
evidence of wrong-doing/malfeasance. That policy needs to dictate some
monetary penalties for non-compliance.


Re: Interesting new dns failures

2007-05-24 Thread Roger Marquis


On Thu, 24 May 2007, Chris L. Morrow wrote:

which brings us back to my original comment: we need a policy, most likely
from ICANN, that requires some action based on proper documentation and
evidence of wrong-doing/malfeasance.


Agreed, and I'd love to help define the draft rfc/policy, but is there
a contact at ICANN for this type of thing?  We used to be able to email
Carl Auerbach but that was a while back.

--
Roger Marquis
Roble Systems Consulting
http://www.roble.com/


Re: Interesting new dns failures

2007-05-24 Thread John Levine

which brings us back to my original comment: we need a policy, most likely
from ICANN, that requires some action based on proper documentation and
evidence of wrong-doing/malfeasance. That policy needs to dictate some
monetary penalties for non-compliance.

Ha ha ha ha ha ha ha ha ha.

Anyone been following the Registerfly fiasco?  Since 2000, the ICANN
registrar agreement has required registrars to escrow their registrant
data according to ICANN's specs.  It's been seven years, ICANN is just
now sending out an RFP to set up escrow providers, only because
they've been shamed into it when people discovered that there were no
backups of Registerfly's registrant data.

Even if ICANN should try to do this, registrars will push back like
crazy, since most of them have a minimum-price, minimum-service business
model.  In retrospect, it was a huge mistake to drop the price and let
Verisign and their friends mass merchandise domains as a fashion
accessory, but it's much too late to put that genie back in the
bottle.

Regards,
John Levine, [EMAIL PROTECTED], Primary Perpetrator of The Internet for 
Dummies,
Information Superhighwayman wanna-be, http://www.johnlevine.com, ex-Mayor
More Wiener schnitzel, please, said Tom, revealingly.





Re: Interesting new dns failures

2007-05-24 Thread Suresh Ramasubramanian


On 5/25/07, John LaCour [EMAIL PROTECTED] wrote:

If you're an network operator and you'd consider null routing IPs
associated with nameservers used only by phishers, please let me know
and we'll be happy to provide the appropriate evidence.


Half of them are on fastflux so nullroutes wouldn't help.  Some
mailservers (recent postfix) allow you to block by NS, or there's
always the good old expedient of bogusing these out in your BIND
resolver config, or serving up a fake zone for them.
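The "fake zone" expedient mentioned here is the classic pre-RPZ move: on your own recursive server, declare yourself authoritative for the offending name, so your users never see the real (fast-fluxed) answers. A minimal illustrative named.conf fragment; the zone name and file path are placeholders:

```text
// On the recursive resolver: hijack resolution of a known-bad domain.
zone "phisher.example" {
    type master;
    file "/etc/bind/db.empty";   // a stub zone holding only an SOA and
                                 // NS record, so every lookup under the
                                 // domain fails locally
};
```

It scales poorly past a few hundred zones and only protects your own resolvers, which is part of why this thread keeps circling back to registry/registrar policy.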

--
Suresh Ramasubramanian ([EMAIL PROTECTED])


Re: Interesting new dns failures

2007-05-23 Thread Hank Nussbacher


On Tue, 22 May 2007, David Ulevitch wrote:


Putting that aside, what do you think nobody should try at
the edge?


People should try putting the intelligence that we have into software and 
hardware.  Why can't we put Gadi into an edge device?


Um, where you gonna find a 48U chassis? :-)

-Hank


Re: Interesting new dns failures

2007-05-23 Thread Douglas Otis



On May 22, 2007, at 2:16 PM, Gadi Evron wrote:

On Tue, 22 May 2007, David Ulevitch wrote:

These questions, and more (but I'm biased to DNS), can be solved  
at the edge for those who want them.  It's decentralized there.   
It's done the right way there.  It's also doable in a safe and  
fail-open kind of way.


This is what I'm talking about.


Agreed.


Gadi,

What is the downside of a preview of zones being published by a  
TLD?  Previews could be on a 12 or 24 hour cycle.  This would enable  
defenses at the edge by disabling fast-flux outright.  There could be  
exceptions, of course.  When millions of domains are in rapid flux  
daily, few protective schemes are able to sustain or afford the  
dispersion of raw threat information.  In addition, these raw updates  
arrive too late at that.  A preview would not change how the core  
works, only how fast changes occur, while also dramatically reducing  
 the amount of data required for comprehensive protections at the edge.


This would be a policy change at the core that enables defenses at  
the edge.


-Doug



Re: Interesting new dns failures

2007-05-22 Thread Tim Franklin

On Mon, May 21, 2007 11:02 pm, Steve Gibbard wrote:

 Is the above situation any different from the decision of whether to use
 locally-expected ccTLDs for local content, or to use the international
 .com for everything?

Ah, assuming local content, no.  I was coming more from the 'must protect
the use of our name!' angle, which is a conversation I've had more often.

I wasn't aware of +800 at all though - thanks, that's interesting...

Regards,
Tim.




Re: Interesting new dns failures

2007-05-22 Thread Crist Clark

 On 5/21/2007 at 2:09 PM, Edward Lewis [EMAIL PROTECTED] wrote:

 At 3:50 PM -0500 5/21/07, Gadi Evron wrote:
 
 As to NS fastflux, I think you are right. But it may also be an issue of
 policy. Is there a reason today to allow any domain to change NSs
 constantly?
 
 Although I rarely find analogies useful when trying to explain 
 something, I want to use one now to see if I understand this.
 
 Let's say you rob convenience stores as a career choice.  Once your 
 deed is done, you need to get away fast.  So moving fast is a real 
 help to criminals.  Since moving fast is rarely helpful for decent 
 folk, maybe we should just slow every one down - this certainly would

 make it easier for law enforcement to catch the criminals.

There are these things called speed limits on all[0] public
streets (in the USA, at least). Also things like stop signs
and traffic lights. People exceeding the limit and driving
recklessly can and regularly are stopped by police. When
such drivers attempt to evade police, they are chased, even
though it is dangerous to the police, bystanders, and the
people being pursued, because there is a good chance that
they are running because they've done something else, something
worse.

So, yeah. We do have speed limits. And suspicion of nefarious
activity is put on anyone who grossly exceeds them.

 If the above is not an accurate analogy to the NS fastflux issue, I'd

 like to know what the deviations are.  I don't doubt there are any, 
 but from what little I've gathered, the problem isn't the NS fastflux

 but the activity that it hides - if it is indeed hiding activity.  As

 in, not every one speeding around town is running from the law.

No, but it's still prohibited.

But yeah, it's just an analogy. And like many, you can bend
it to support either side.

[0] Last I knew, the experiments with speed-limitless
roads after the drop of the federal 55 mph limit had all
gone back to some arbitrary limits. Even Montana.



Re: Interesting new dns failures

2007-05-22 Thread Paul Vixie

apropos of this...

 As to NS fastflux, I think you are right. But it may also be an issue of
 policy. Is there a reason today to allow any domain to change NSs
 constantly?

...i just now saw the following on comp.protocols.dns.bind (bind-users@):

+---
| From: Wiley Sanders [EMAIL PROTECTED]
| Newsgroups: comp.protocols.dns.bind
| Subject: Hooray, glue updates are instantaneous!
| Date: Tue, 22 May 2007 12:08:13 -0700
| X-Original-Message-ID: [EMAIL PROTECTED]
| X-Google-Sender-Auth: fbac9c128e6c36c7
| 
| Well, maybe I've been out of the loop for a while but I just changed the IP
| address of one of our authoritative name servers on Network Solutions' web
| site and it propagated to all the gtld servers within 5 minutes.
| 
| I don't know how this got fixed but for all readers out there who may
| contributed to making this magic happen, my hat is off to you, and I will
| quaff a brew (or more) in your honor as I consider this a significant
| contribution to the march of civilization.
| 
| -W Sanders
|   http://wsanders.net
+---

in general, we ought to be willing to implement almost anything if free beer
is going to be offered by non-criminal beneficiaries.
-- 
Paul Vixie


Re: Interesting new dns failures

2007-05-22 Thread David Ulevitch


Gadi Evron wrote:

On Mon, 21 May 2007, Chris L. Morrow wrote:

ok, so 'today' you can't think of a reason (nor can I really easily) but
it's not clear that this may remain the case tomorrow. It's possible that
as a way to 'better loadshare' traffic akamai (just to make an example)
could start doing this as well.

So, I think that what we (security folks) want is probably not to
auto-squish domains in the TLD because of NS's moving about at some rate
other than 'normal' but to be able to ask for a quick takedown of said
domain, yes? I don't think we'll be able to reduce false positive rates
low enough to be acceptable with an 'auto-squish' method :(


Auto-squish on a registrar level is actually starting to work, but there
is a long way to go yet.

As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?


Why are people trying to solve these problems in the core?

These issues need to and must be solved at the edge.  In this case the 
edge can be on customer networks, customer resolvers, or at the 
registrar.  It's dangerous to fix problems at the core where visibility 
is limited and data is moving quickly.


These issues should not be solved by the registry operators or root 
server operators, that's very dangerous.


There are, of course, exceptions where it's helpful when a registry 
operator steps in to help mitigate a serious Internet disturbance, but 
that's the exception and should not be the rule.


People are suggesting it become the rule because nobody is trying 
anything else.


-David Ulevitch




Re: Interesting new dns failures

2007-05-22 Thread Gadi Evron

On 22 May 2007, Paul Vixie wrote:
 
 apropos of this...
 
  As to NS fastflux, I think you are right. But it may also be an issue of
  policy. Is there a reason today to allow any domain to change NSs
  constantly?
 
 ...i just now saw the following on comp.protocols.dns.bind (bind-users@):
 
 +---
 | From: Wiley Sanders [EMAIL PROTECTED]
 | Newsgroups: comp.protocols.dns.bind
 | Subject: Hooray, glue updates are instantaneous!
 | Date: Tue, 22 May 2007 12:08:13 -0700
 | X-Original-Message-ID: [EMAIL PROTECTED]
 | X-Google-Sender-Auth: fbac9c128e6c36c7
 | 
 | Well, maybe I've been out of the loop for a while but I just changed the IP
 | address of one of our authoritative name servers on Network Solutions' web
 | site and it propagated to all the gtld servers within 5 minutes.
 | 
 | I don't know how this got fixed but for all readers out there who may
 | contributed to making this magic happen, my hat is off to you, and I will
 | quaff a brew (or more) in your honor as I consider this a significant
 | contribution to the march of civilization.
 | 
 | -W Sanders
 |   http://wsanders.net
 +---
 
 in general, we ought to be willing to implement almost anything if free beer
 is going to be offered by non-criminal beneficiaries.

If it's once in a long while like with this guy... may not be worth it. :P

If it's every 10 minutes like fast fluxers... I want in on that action.

 -- 
 Paul Vixie
 

Gadi.



Re: Interesting new dns failures

2007-05-22 Thread Gadi Evron

On Tue, 22 May 2007, David Ulevitch wrote:
 Gadi Evron wrote:
  On Mon, 21 May 2007, Chris L. Morrow wrote:
  ok, so 'today' you can't think of a reason (nor can I really easily) but
  it's not clear that this may remain the case tomorrow. It's possible that
  as a way to 'better loadshare' traffic akamai (just to make an example)
  could start doing this as well.
 
  So, I think that what we (security folks) want is probably not to
  auto-squish domains in the TLD because of NS's moving about at some rate
  other than 'normal' but to be able to ask for a quick takedown of said
  domain, yes? I don't think we'll be able to reduce false positive rates
  low enough to be acceptable with an 'auto-squish' method :(
  
  Auto-squish on a registrar level is actually starting to work, but there
  is a long way yet..
  
  As to NS fastflux, I think you are right. But it may also be an issue of
  policy. Is there a reason today to allow any domain to change NSs
  constantly?
 
 Why are people trying to solve these problems in the core?
 
 These issues need to and must be solved at the edge.  In this case the 
 edge can be on customer networks, customer resolvers, or at the 
 registrar.  It's dangerous to fix problems at the core where visibility 
 is limited and data is moving quickly.
 
 These issues should not be solved by the registry operators or root 
 server operators, that's very dangerous.
 
 There are, of course, exceptions where it's helpful when a registry 
 operator steps in to help mitigate a serious Internet disturbance, but 
 that's the exception and should not be the rule.
 

Amen.

 People are suggesting it become the rule because nobody is trying 
 anything else.

I was with you up to this sentence. Obviously avoiding the core is key,
but should we not have the capability of preventing abuse in the core
rather than mitigating it there? Allowing NS changes with no other
verification or limitation is silly imo, but I am unsure if it is
relevant as a solution?
And who is nobody and why doesn't he try something else? That is a bit
insulting to nobody. :)

Putting that aside, what do you think nobody should try at
the edge?

After all, nobody's security being affected by the edge of some end-user
machine on the other side of the world is irrelevant to my edge
security. FUSSP.

DNS abuse is mostly not an edge issue.

Gadi.

 
 -David Ulevitch
 
 



Re: Interesting new dns failures

2007-05-22 Thread Roger Marquis



Why are people trying to solve these problems in the core?


Because that's the only place it can be done.


These issues need to and must be solved at the edge.


Been there, done that, with smtp/spam, netbios, and any number of
other protocols that would also be ideally addressed at the source or
edge but, in reality, cannot.


These issues should not be solved by the registry operators or
root server operators, that's very dangerous.


Do you know that it is dangerous to fix problems at the core, or are
you speculating?  If you can cite specific examples, please do.  Simply
saying it is dangerous is indistinguishable from any other Verisign
astroturfing.


People are suggesting it become the rule because nobody is
trying anything else.


Can you say what that 'anything else' might consist of?

--
Roger Marquis
Roble Systems Consulting
http://www.roble.com/


Re: Interesting new dns failures

2007-05-22 Thread David Ulevitch


Gadi Evron wrote:

People are suggesting it become the rule because nobody is trying 
anything else.


I was with you up to this sentence. Obviously avoiding the core is key,
but should we not have the capability of preventing abuse in the core
rather than mitigating it there? Allowing NS changes with no other
verification or limitation is silly imo, but I am unsure whether it is
relevant as a solution.
And who is nobody and why doesn't he try something else? That is a bit
insulting to nobody. :)

Putting that aside, what do you think nobody should try at
the edge?


People should try putting the intelligence that we have into software 
and hardware.  Why can't we put Gadi into an edge device?


I say this tongue-in-cheek, but am a bit serious.  You (Gadi) are very 
good at looking at interesting trends and more than saying it's a 
problem, you are able to come up with a report like the botnet rat-out 
reports.  We know who the C&amp;Cs are.  We know who the compromised drones 
are.  We know all of this.  Today.


But very few people (okay, not nobody) are saying, "Hey, why should I 
allow that compromised Windows box that has never sent me an MX request 
before all of a sudden be able to request 10,000 MX records across my 
resolvers?  Why am I resolving a domain name that was just added into 
the DNS an hour ago but has already changed NS servers 50 times?"


These questions, and more (but I'm biased to DNS), can be solved at the 
edge for those who want them.  It's decentralized there.  It's done the 
right way there.  It's also doable in a safe and fail-open kind of way.


This is what I'm talking about.
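The edge-resolver heuristics described above can be sketched as a small policy layer. The sketch below is illustrative only: the class name, thresholds, and sliding-window bookkeeping are my own assumptions, not anything a listed product actually implements.

```python
import time
from collections import defaultdict, deque

class EdgeResolverHeuristics:
    """Toy policy layer for a recursive resolver at the network edge.

    Flags the two behaviours from the thread: a client suddenly issuing
    a burst of MX lookups, and a domain whose NS set keeps changing.
    All thresholds here are illustrative, not tuned values.
    """

    def __init__(self, mx_burst=100, mx_window=60.0, ns_change_limit=5):
        self.mx_burst = mx_burst              # MX queries allowed per window
        self.mx_window = mx_window            # sliding window, in seconds
        self.ns_change_limit = ns_change_limit
        self._mx_times = defaultdict(deque)   # client -> MX query timestamps
        self._ns_hist = defaultdict(list)     # domain -> observed NS sets

    def suspicious_mx(self, client, now=None):
        """Record one MX query from `client`; True once the burst limit is hit."""
        now = time.time() if now is None else now
        q = self._mx_times[client]
        q.append(now)
        while q and now - q[0] > self.mx_window:
            q.popleft()                       # drop queries outside the window
        return len(q) > self.mx_burst

    def suspicious_ns_churn(self, domain, ns_set):
        """Record the NS set seen for `domain`; True after too many changes."""
        ns = frozenset(ns_set)
        hist = self._ns_hist[domain]
        if not hist or hist[-1] != ns:
            hist.append(ns)                   # only count actual changes
        return len(hist) - 1 > self.ns_change_limit
```

Because the checks live per-resolver, a false positive only affects that resolver's own clients and can fail open, which is the point being argued here.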



After all, nobody's security being affected by the edge of some end-user
machine on the other side of the world is irrelevant to my edge
security. FUSSP.

DNS abuse is mostly not an edge issue.


I disagree. DNS is the enabler for many, many issues which are edge 
issues.  (Botnets, spam, etc.)


-David Ulevitch




Gadi.


-David Ulevitch








Re: Interesting new dns failures

2007-05-22 Thread Gadi Evron

On Tue, 22 May 2007, David Ulevitch wrote:
 

snip

 These questions, and more (but I'm biased to DNS), can be solved at the 
 edge for those who want them.  It's decentralized there.  It's done the 
 right way there.  It's also doable in a safe and fail-open kind of way.
 
 This is what I'm talking about.

Agreed.

  After all, nobody's security being affected by the edge of some end-user
  machine on the other side of the world is irrelevant to my edge
  security. FUSSP.
  
  DNS abuse is mostly not an edge issue.
 
 I disagree. DNS is the enabler for many many issues which are edge 
 issues.  (Botnets, spam, etc)

There you did it, you said the B word. Now all the off-topic screamers
will flame. :)

Botnets, spam, etc. are symptoms, and DNS is abused to help them
along. DNS abuse, i.e. abuse of DNS, is a DNS issue.

David, we agree - just talking of similar issues which are.. different.

Gadi.



Re: Interesting new dns failures

2007-05-22 Thread Fergie


- -- David Ulevitch [EMAIL PROTECTED] wrote:

But very few people (okay, not nobody) are saying, "Hey, why should I 
allow that compromised Windows box that has never sent me an MX request 
before all of a sudden be able to request 10,000 MX records across my 
resolvers?  Why am I resolving a domain name that was just added into 
the DNS an hour ago but has already changed NS servers 50 times?"

These questions, and more (but I'm biased to DNS), can be solved at the 
edge for those who want them.  It's decentralized there.  It's done the 
right way there.  It's also doable in a safe and fail-open kind of way.


David,

As you (and some others) may be aware, that's an approach that we
(Trend Micro) took a while back, but we got a lot (that's an
understatement) of push-back from service providers, specifically,
because they're not very inclined to change out their infrastructure
(in this case, their recursive DNS) for something that could identify
these types of behaviors.

And actually, in the case you mentioned above -- to identify
this exact specific behavior.

- - ferg



--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Interesting new dns failures

2007-05-22 Thread David Ulevitch


Fergie wrote:


David,

As you (and some others) may be aware, that's an approach that we
(Trend Micro) took a while back, but we got a lot (that's an
understatement) of push-back from service providers, specifically,
because they're not very inclined to change out their infrastructure
(in this case, their recursive DNS) for something that could identify
these types of behaviors.


Was that the real reason?

Here's a crazy question... Did it by chance cost money? :-)

I'm not saying it should have been free, just that the hesitation to roll 
it out *might* have been for factors besides the fact that it mitigated 
DNS-based botnets.


How do operators decide the expense is worth it to mitigate spew coming 
out of their network?  When their outbound DoS traffic exceeds their 
inbound transit ratios? :-)


-David




And actually, in the case you mentioned above -- to identify
this exact specific behavior.





Re: Interesting new dns failures

2007-05-22 Thread David Ulevitch


Roger Marquis wrote:


Simply
saying it is dangerous is indistinguishable from any other Verisign
astroturfing.


It's not every day that you get accused of astroturfing for Verisign.

I'm printing this, framing it, putting it on my wall, and leaving this 
thread.


Thanks!

-David


Re: Interesting new dns failures

2007-05-22 Thread Joe Provo

On Mon, May 21, 2007 at 03:08:06PM +, Chris L. Morrow wrote:
[snip]
 This is sort of the point of the NRIC document/book... 'we need to
 find/make/use a directory system for the internet' then much talk of how
 dns was supposed to be that but for a number of reasons it's not,
 google/insert favorite search engine is instead

Um, no. DNS became the de facto 'directory' prior to the rise of 
decent search engines.  The directory that was contracted and 
'supposed to' exist as part of the NNSC-to-InterNIC dance was 
to be built by the old AT&T Labs. As far as I can recall, it was 
only ever an ftp repository and not much of a 'directory and database 
service' (corrections welcome).  The problem was a classic case 
of top-down thinking (we will dictate this glacially slow entity 
will cook The Directory and The Database and decide what gets 
published and when) crashing into a very dynamic market with a 
clever and impatient population (we won't wait - DS and IS aren't 
fast enough ... this RS thing is Good Enough).  

An obvious catalyst was commercialization of domains.  Which 
interestingly enough leads us back to the lack of categories and 
naming morass in which we live. I find it quite humorous that 
new 'restrictive membership' branches of the tree are now being 
proposed as a solution to the problem of identity (eg, .bank to 
solve phishing).  Unless there is some level of enforcement 
teeth, we will see the same situation that played out in 94/95:

tech: no sir, you can't have .net as you're not a network provider
customer: the guy down the street will do it!
tech's boss: (weighs non-existent penalties versus $s, doesn't
 care what 'good of the Internet' or 'sullied reputation' means)
 competitive disadvantage! must!

Pushing an issue around to different points on the tree doesn't 
eliminate it.

Cheers,

Joe
-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


RE: Interesting new dns failures

2007-05-22 Thread michael.dillon

 The directory that was contracted 
 and 'supposed to' exist as part of the NNSC-to-InterNIC dance 
 was to be built by old-ATT Labs. As far as I can recall, it 
 was ever only an ftp repository and not much of a 'directory 
 and database service' (corrections welcome). 

Anyone remember the Internet Scout? Even back then labors of love like
John December's list were more useful than the Internic services. And of
course, there was USENET with its categorized discussion groups, many of
which had regular FAQ postings. That too was more of a real Internet
directory (yellow pages) than the DNS (white pages) has ever been.

Does everybody on this list even know what I'm talking about when I say
yellow pages? I'll bet there are a few that are scratching their
heads. I know that I haven't used them for about 10 years.

--Michael Dillon


Re: Interesting new dns failures

2007-05-22 Thread Valdis . Kletnieks
On Wed, 23 May 2007 01:32:41 BST, [EMAIL PROTECTED] said:
 Anyone remember the Internet Scout? Even back then labors of love like
 John December's list were more useful than the Internic services.

That worked well for 14,000 .coms.  It doesn't work for 140,000,000 .coms.

 Does everybody on this list even know what I'm talking about when I say
 yellow pages? I'll bet there are a few that are scratching their
 heads. I know that I haven't used them for about 10 years.

google is your friend. Google pagerank is your webmaster's friend.

The problem with yellow pages is that although an electronic version can
theoretically scale well to zillions of categories, it doesn't scale well
to the case of zillions of providers listed in a single category.





Re: Interesting new dns failures

2007-05-22 Thread Chris L. Morrow



On Tue, 22 May 2007, David Ulevitch wrote:


 Fergie wrote:

  David,
 
  As you (and some others) may be aware, that's an approach that we
  (Trend Micro) took a while back, but we got a lot (that's an
  understatement) of push-back from service providers, specifically,
  because they're not very inclined to change out their infrastructure
  (in this case, their recursive DNS) for something that could identify
  these types of behaviors.

also sometimes assumptions were made about userbase/usages/deployments...
but the larger issue I think is below.


 Was that the real reason?

 Here's a crazy question... Did it by chance cost money? :-)

I think the reason I gave was that at certain places in the recursive dns
world this sort of thing is 'hard' mostly because you can't easily scope
the userbase and their requirements.

To use an example: 198.6.1.1 is in all manner of documentation (from
vendors, wee!) and other places (Rodney says the same happened with 4.2.2.1
& 4.2.2.2). All manner of oddball people (and customers, wee!) use this
'service'. Their intents and usages are mostly unknown (aside from 'where
is www.google.com today?').

On the other hand, look at the recursive resolver that lives inside YOUR
enterprise network (or your managed customer's network, perhaps managed by
you). This has a closed community with clear policy and procedures, and
clear reporting chain for figuring out 'problems'.

In the first case applying some magical DNS solution is bound to cause
many and varied problems without any real hope of finding a fix (aside
from hey, go use Rodney's 4.2.2.1 box). In the second set of cases, if your
mail-admins have problems they can be told 'like it or lump it, policy
says blah' or 'hey, maybe you should use your own recursive resolvers?'

Fixing this problem (is it a problem? that's still tbd...) at the 'core'
is much more difficult than at the enterprise/edge. Services may/will
arise that offer 'edge' folks a way to implement 'security policy' ('no
one can view gadi.com or *.cn' or whatever your policy is) in a sane and
reliable fashion, not just in 'firewall' or 'access-list' places. Offering
more than one option for security policy enforcement (layered options)
seems like a very reasonable thing, to me at least.


 How do operators decide the expense is worth it to mitigate spew coming
 out of their network?

there are a myriad of reasons, some related to how much sleep people want
to lose, some related to 'who pays', some related to 'would my management
yell at me about this?' I think in the case being discussed there's a
right place and a wrong place to do the function; some of the tools for
implementing this at the 'right' place don't quite exist in a digestible
fashion. (yet)

-Chris



Re: Interesting new dns failures

2007-05-22 Thread Chris L. Morrow



On Tue, 22 May 2007, Roger Marquis wrote:


  Why are people trying to solve these problems in the core?

 Because that's the only place it can be done.

it is A PLACE, not necessarily THE PLACE. Every decision as to where
involves tradeoffs; be prepared to accept/defend them.


  These issues need to and must be solved at the edge.

 Been there, done that, with smtp/spam, netbios, and any number of
 other protocols that would also be ideally addressed at the source or
 edge but, in reality, cannot.


maybe this is also a definition problem? what is the core and what is
the edge in this discussion?

  These issues should not be solved by the registry operators or
  root server operators, that's very dangerous.

 Do you know that it is dangerous to fix problems at the core or are
 you speculating?  If you can cite specific examples please do.  Simply

it is dangerous: making assumptions about how people use a basic plumbing
service is what gets people into trouble. Ask Verisign about Site Finder.

much of this discussion of mitigating this issue revolves around the
'belief' that 'no one should/would ever want to rotate NS records around
every five minutes'. Making statements that include absolutes is bound to
be problematic.

What if, for some reason unknown today, people thought that pushing around
NS records regularly was helpful to their application?  What if it were
automated into a product like bittorrent or other widely deployed thing?
What if the usage wasn't for 'where is www.sun.com' but as a signalling
method or metric/best-path decision process that was never revealed to the
end users?

you simply can't know what options folks might use in future applications
when it comes to basic plumbing things. people expect basic plumbing to
'just work' and 'just work according to the standards'. giving back
falsified information is bound to generate problems (see sitefinder for a
quick/simple example).


 Can you say what that 'anything else' might consist of?

Sure: work on an expedited removal process inside a real procedure from
ICANN down to the registry. Work on a metric and monetary system used to
punish/disincentivize registries for allowing their systems to be abused. Work
on a service/solution for the end-user/enterprise that allows them to take
action based on solid intelligence in a timely fashion, with tracking on
the bits of that intelligence.

three options, go play :)

-Chris


Re: Interesting new dns failures

2007-05-22 Thread Fergie


- -- Chris L. Morrow [EMAIL PROTECTED] wrote:

Sure: work on an expedited removal process inside a real procedure from
ICANN down to the registry. Work on a metric and monetary system used to
punish/disincentivize registries for allowing their systems to be abused. Work
on a service/solution for the end-user/enterprise that allows them to take
action based on solid intelligence in a timely fashion, with tracking on
the bits of that intelligence.

three options, go play :)


Good dialogue.

For what it's worth, I never advocated pushing mechanisms into
the DNS core to deal with this issue -- in fact, I agree with you:
it's an issue that can be dealt with locally in recursive DNS, and it
also needs to be dealt with in the policies that exist.

One technical, one non-technical. Even up. :-)

- - ferg




--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Interesting new dns failures

2007-05-21 Thread Valdis . Kletnieks
On Sun, 20 May 2007 22:19:30 PDT, Roger Marquis said:
 Nobody's saying that the root servers are responsible, only that they
 are the point at which these domains would have to be squelched. In
 theory registrars could do this, but some would have a financial
 incentive not to.

Some have a financial incentive not to do it.
Some others have no financial incentive to do it.
Almost none have a financial incentive to do it.

Nobody should be surprised at the outcome




Re: Interesting new dns failures

2007-05-21 Thread Stephane Bortzmeyer

On Sun, May 20, 2007 at 09:25:37PM -0700,
 Roger Marquis [EMAIL PROTECTED] wrote 
 a message of 15 lines which said:

 If not, have any root nameservers been hacked?
 
 To partly answer my own question, no.

I cannot find the original message in my mailbox. (Not on NANOG
mailing list archives.) What was the issue?

 The data returned by root (gtld) nameservers is not changing
 rapidly.

Now, I understand nothing. Is there a problem with the root
nameservers or with some gTLD nameservers???



Re: Interesting new dns failures

2007-05-21 Thread Mark Andrews

In article [EMAIL PROTECTED] you write:

On Sun, May 20, 2007 at 09:25:37PM -0700,
 Roger Marquis [EMAIL PROTECTED] wrote 
 a message of 15 lines which said:

 If not, have any root nameservers been hacked?
 
 To partly answer my own question, no.

I cannot find the original message in my mailbox. (Not on NANOG
mailing list archives.) What was the issue?

 The data returned by root (gtld) nameservers is not changing
 rapidly.

Now, I understand nothing. Is there a problem with the root
nameservers or with some gTLD nameservers???


There isn't a problem with the root or TLD servers.

There is a problem with the servers for these zones.
They don't speak RFC 1034, hence the error messages
about garbage responses.

Note that the answer doesn't match the question.

;  DiG 9.5.0a2  @76.183.141.203 ns6.loptran.com +norec
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; -HEADER- opcode: QUERY, status: NOERROR, id: 36800
;; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;ns6.loptran.com.   IN  A

;; ANSWER SECTION:
loptran.com.0   IN  A   24.218.122.218

;; Query time: 212 msec
;; SERVER: 76.183.141.203#53(76.183.141.203)
;; WHEN: Mon May 21 19:05:58 2007
;; MSG SIZE  rcvd: 60

There is a problem with the whole delegation process, in
that no one involved in the delegation seems to care that
absolute garbage is being injected into the DNS.  A few
simple checks, like the one above, would have shown that the
servers were not RFC 1034 compliant, and that the glue was not
a copy of the records in the child zone.  The parent *is*
required by RFC 1034 to check this.

RFC 1034, 4.2.2. Administrative considerations, paragraph 3.

As the last installation step, the delegation NS RRs and glue RRs
necessary to make the delegation effective should be added to the parent
zone.  The administrators of both zones should insure that the NS and
glue RRs which mark both sides of the cut are consistent and remain so.
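A minimal version of the consistency check described here, comparing the parent's delegation data with what the child zone actually serves, might look like the sketch below. The data model is a deliberate simplification (flat "owner TYPE" keys rather than real RRsets), and the parent-side glue address in the usage example is a hypothetical mismatch in the spirit of the dig output above:

```python
def delegation_inconsistencies(parent_glue, child_zone):
    """Compare the NS and glue RRsets a parent zone publishes for a
    delegation with the records the child zone serves.  RFC 1034
    (section 4.2.2) requires both sides of the zone cut to stay
    consistent; mismatched glue is exactly the garbage described above.

    Both arguments map "owner TYPE" keys to sets of RDATA strings.
    Returns a human-readable description of every mismatch found.
    """
    problems = []
    for rrset in sorted(set(parent_glue) | set(child_zone)):
        parent_data = parent_glue.get(rrset, set())
        child_data = child_zone.get(rrset, set())
        if parent_data != child_data:
            problems.append("%s: parent has %s, child serves %s"
                            % (rrset, sorted(parent_data), sorted(child_data)))
    return problems
```

Run against data like the loptran.com example, the stale glue address shows up as a single mismatch that a registry could have caught before publishing the delegation.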

These zones should be pulled.

Mark


Re: Interesting new dns failures

2007-05-21 Thread bmanning

On Sun, May 20, 2007 at 10:19:30PM -0700, Roger Marquis wrote:
 
 All the same, it would seem to be an easy and cheap abuse to address,
 at the gtlds.  Why are these obvious trojans being propagated by
 the root servers anyhow?
 
 the root servers are responsible how exactly for the fast-flux issues?
 Also, there might be some legitimate business that uses something like
 the FF techniques... but, uhm... how are the root servers involved again?
 
 Nobody's saying that the root servers are responsible, only that they
 are the point at which these domains would have to be squelched. In
 theory registrars could do this, but some would have a financial
 incentive not to. Also I don't believe registrars can update the roots
 quickly enough to be effective (correct me if I'm wrong).

ok... so you suggest that the roots squelch these domains?
i check the contents of the root zone and find that the closest
the roots come to being able to squelch these zones is to 
remove .com from the zone (since these other entries are not in 
the root but in the com zone).  

if you can get consensus to remove .com, i'm sure the roots would
be willing to help out.

--bill

 
 Given the obvious differences between legitimate fast flux and the
 pattern/domains in question it would seem to be a no-brainer,
 technically at least.
 
 -- 
 Roger Marquis
 Roble Systems Consulting
 http://www.roble.com/


Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Sun, 20 May 2007, Roger Marquis wrote:
 
 An odd pattern of DNS failures began appearing in the logs yesterday:

Fastflux.

Gadi.



Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Mon, 21 May 2007, Chris L. Morrow wrote:
 
 
 
 On Sun, 20 May 2007, Roger Marquis wrote:
 
   If not, have any root nameservers been hacked?
 
  To partly answer my own question, no.  The data returned by root
  (gtld) nameservers is not changing rapidly.  Thanks for the pointers
  to fast flux too.  Wasn't familiar with this attack or terminology.
 
  All the same, it would seem to be an easy and cheap abuse to address,
  at the gtlds.  Why are these obvious trojans being propagated by
  the root servers anyhow?
 
 the root servers are responsible how exactly for the fast-flux issues?
  Also, there might be some legitimate business that uses something like
 the FF techniques... but, uhm... how are the root servers involved again?
 

Small note: For regular fastflux, yes. for NS fastflux, not so much.



Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Mon, 21 May 2007, Stephane Bortzmeyer wrote:
 
 On Sun, May 20, 2007 at 09:25:37PM -0700,
  Roger Marquis [EMAIL PROTECTED] wrote 
  a message of 15 lines which said:
 
  If not, have any root nameservers been hacked?
  
  To partly answer my own question, no.
 
 I cannot find the original message in my mailbox. (Not on NANOG
 mailing list archives.) What was the issue?
 
  The data returned by root (gtld) nameservers is not changing
  rapidly.
 
 Now, I understand nothing. Is there a problem with the root
 nameservers or with some gTLD nameservers???
 

There is the issue of fastflux, and the possible solution of blacklisting
at the TLDs.

Both completely separate issues for discussion-sake, as flames can be
avoided.

Gadi.



Re: Interesting new dns failures

2007-05-21 Thread John Curran

At 5:30 AM + 5/21/07, Fergie wrote:
Why not? The registrars seem to be doing a great job of
expediting the activation of new domains -- why can't they de-activate
them just as quickly when they find out they are being used for
malicious purposes?

The business interests of the registrars, that's why.

Not to defend those doing malicious things, or service providers
that consciously hide such for money, but there is another
reason why removal/blockage/filtering/etc doesn't always
happen in a timely manner, and that's the legal liability.  In
larger organizations, the potential for liability can result in a
real administrative burden of paperwork before getting the
green light to terminate.  I don't know if that's the case here,
but would recommend against jumping to greed as the only
possible reason for hesitation in moving against such folks.

/John



Re: Interesting new dns failures

2007-05-21 Thread Valdis . Kletnieks
On Mon, 21 May 2007 10:38:56 -, [EMAIL PROTECTED] said:
   if you can get consensus to remove .com, i'm sure the roots would
   be willing to help out.

Whose bright idea *was* it to design a tree-hierarchical structure, and then
dump essentially all 140 million entries under the same node, anyhow? :)

I'll bet a large pizza that 90% or more could be relocated to a more
appropriate location in the DNS tree, and nobody except the domain holder
and less than a dozen other people will notice/care in the slightest. Now
if anybody has a good idea on what to do with those companies that register
www.thissummersblockbustermoviecomingsoonnow.com ;)





Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007 [EMAIL PROTECTED] wrote:

 On Mon, 21 May 2007 10:38:56 -, [EMAIL PROTECTED] said:
  if you can get consensus to remove .com, i'm sure the roots would
  be willing to help out.

 Whose bright idea *was* it to design a tree-hierarchical structure, and then
 dump essentially all 140 million entries under the same node, anyhow? :)

 I'll bet a large pizza that 90% or more could be relocated to a more
 appropriate location in the DNS tree, and nobody except the domain holder
 and less than a dozen other people will notice/care in the slightest. Now

There's an interesting read from NRIC about this problem: Signposts on
the Information Superhighway, I think it's called. Essentially no one
aside from propeller-head folks understands that there is something aside
from 'com' :( Take, for example, discussions inside the company formerly
known as UUNET about email addresses: "Yes, you can email me at
[EMAIL PROTECTED]." "uunet.com?" "No, uu.net." "uu.net.com?" "Nope, just
uu.net." Admittedly it was with sales/marketing folks, but still :(

I wonder how the .de or .uk folks see things? Is the same true elsewhere?

-Chris


RE: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007 [EMAIL PROTECTED] wrote:


  There's an interesting read from NRIC about this problem:
  Signposts on the information superhighway I think it's
  called. Essentially no one aside from propeller-head folks
  understand that there is something aside from 'com'

 Seems to me they are missing something here. Essentially no-on except
 from propeller-head folks uses the DNS for anything at all. Websites
 come from Google or bookmarks. Email addresses come from a directory or
 an incoming email or a business card.

This is sort of the point of the NRIC document/book... 'we need to
find/make/use a directory system for the internet' then much talk of how
dns was supposed to be that but for a number of reasons it's not,
google/insert favorite search engine is instead


 P.S., the .xx domains make the world look like a collection of countries
 all connected to the same Internet. But the reality is that the world is
 divided into a bunch of language zones, most of which cross several
 borders, and which don't tend to communicate much with the Internet that
 Americans see. For instance, what use does a Hungarian speaking native
 of Ukraine have for cnn.com? Or a SerboCroatian speaking native of
 Hungary?


oh, cnn doesn't publish their content in these tongues? :) they are
missing a marketing opportunity! :)

-Chris


Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007, Gadi Evron wrote:

 On Mon, 21 May 2007, Chris L. Morrow wrote:
  the root servers are responsible how exactly for the fast-flux issues?
  Also, there might be some legitimate business that uses something like
  the FF techniques... but, uhm... how are the root servers involved again?
 

 Small note: For regular fastflux, yes. for NS fastflux, not so much.

For regular FF 'yes' but for ns FF not much? Hrm, not much legit purpose?
or not much the root/tld folks can do?

I ask because essentially akamai's edgesuite (and I might have their
product names confused some) seems to do FF ... or the same thing FF does.
Doesn't it?

-Chris


Re: Interesting new dns failures

2007-05-21 Thread Tim Franklin

On Mon, May 21, 2007 3:26 pm, Chris L. Morrow wrote:

 There's an interesting read from NRIC about this problem: Signposts on
 the Information Superhighway, I think it's called. Essentially no one
 aside from propeller-head folks understands that there is something aside
 from 'com' :( Take, for example, discussions inside the company formerly
 known as UUNET about email addresses: "Yes, you can email me at
 [EMAIL PROTECTED]." "uunet.com?" "No, uu.net." "uu.net.com?" "Nope, just
 uu.net." Admittedly it was with sales/marketing folks, but still :(

To a great degree, there effectively stopped being anything outside .com
when there stopped being any distinction between who was eligible for
.com, .net or .org, and it just became a "credit card, please"
free-for-all.

I can't imagine anyone now registering a new .com and *not* registering
the corresponding .org and .net, making them pretty much pointless for new
registrations.  It's only legacy domains, and occasional gap-finding in
legacy registrations, where the registrant isn't the same for all three.

 I wonder how the .de or .uk folks see things? Is the same true elsewhere?

.co.uk generally seems to be understood by UK folks.  .org.uk tends to
cause a double-take.  (The 'special' UK SLDs, like nhs.uk, are a maze of
twisty turny third-levels, all on different logic).

My email confuses people by being both a .org and too short - the general
public seems to expect either [EMAIL PROTECTED] or
[EMAIL PROTECTED],gmail}.com.




Re: Interesting new dns failures

2007-05-21 Thread Joe Abley



On 21-May-2007, at 10:26, Chris L. Morrow wrote:

I wonder how the .de or .uk folks see things? Is the same true  
elsewhere?


I think the phenomenon of "that doesn't look right because it doesn't  
end in .com" is peculiar to the US.


Elsewhere, you don't need a particularly large TLD zone to get  
mindshare -- NZ, CA and NP are three random examples of ccTLDs which  
are well-recognised locally and which are far smaller than UK or DE;  
there are many more.



Joe




Re: Interesting new dns failures

2007-05-21 Thread Simon Waters

On Monday 21 May 2007 16:19, Tim Franklin wrote:
 
  I wonder how the .de or .uk folks see things? Is the same true elsewhere?

 .co.uk generally seems to be understood by UK folks.  .org.uk tends to
 cause a double-take.  (The 'special' UK SLDs, like nhs.uk, are a maze of
 twisty turny third-levels, all on different logic).

The odd thing is customers mostly fall into one of:

"I don't understand anything beyond .com and .co.uk";

"I'm a gov.uk, nhs.uk or other speciality", who often know more about the 
procedures or technicalities of registering their desired domain name than we 
do; 

And those who just want every possible TLD, and variant, for a name, in some 
misguided belief this will protect it in some magical way, and won't just 
make a load of money for the registries.

We obviously prefer the last group, as they spend more money, are less hassle, 
and are usually content with registering all the TLD domains we can do for 
the standard price. 

I'm sure there is a business in doing services to the second group, especially 
if you chuck in certificates and a few related things.


Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Mon, 21 May 2007, Chris L. Morrow wrote:
 
 
 
 On Mon, 21 May 2007, Gadi Evron wrote:
 
  On Mon, 21 May 2007, Chris L. Morrow wrote:
   the root servers are responsible how exactly for the fast-flux issues?
   Also, there might be some legitimate business that uses something like
   the FF techniques... but, uhm... how are the root servers involved again?
  
 
  Small note: For regular fastflux, yes. for NS fastflux, not so much.
 
 For regular FF 'yes' but for ns FF not much? Hrm, not much legit purpose?
 or not much the root/tld folks can do?
 
 I ask because essentially akamai's edgesuite (and I might have their
 product names confused some) seems to do FF ... or the same thing FF does.
 Doesn't it?

Sorry, I didn't write in a clear fashion.

There is a difference between fastfluxing the A record and the NS record.

I don't know of many if any who change the NS record quite so frequently
without being bad guys.

 
 -Chris
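Gadi's distinction between fluxing the A record and fluxing the NS record can be made concrete with a toy classifier over two successive lookups of the same domain. The dict shape and labels below are this sketch's own invention, not any resolver library's output format:

```python
def classify_flux(prev, curr):
    """Classify which part of a delegation is 'fluxing' between two
    successive lookups of the same domain.  prev/curr are dicts like
    {"A": {...addresses...}, "NS": {...nameserver names...}}.
    """
    a_changed = prev["A"] != curr["A"]
    ns_changed = prev["NS"] != curr["NS"]
    if ns_changed:
        # NS flux: the delegation itself moves -- rarely seen outside abuse
        return "NS-flux" + ("+A-flux" if a_changed else "")
    if a_changed:
        # A flux alone has well-known legitimate uses (CDNs, load sharing)
        return "A-flux"
    return "stable"
```

The asymmetry Gadi points at is in the comments: A-record churn is everyday CDN behaviour, while NS churn almost always signals a bad actor.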
 



Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007, Gadi Evron wrote:
 On Mon, 21 May 2007, Chris L. Morrow wrote:
  On Mon, 21 May 2007, Gadi Evron wrote:
   Small note: For regular fastflux, yes. for NS fastflux, not so much.
 
  For regular FF 'yes' but for ns FF not much? Hrm, not much legit purpose?
  or not much the root/tld folks can do?
 
  I ask because essentially akamai's edgesuite (and I might have their
  product names confused some) seems to do FF ... or the same thing FF does.
  Doesn't it?

 I don't know of many if any who change the NS record quite so frequently
 without being bad guys.

ok, so 'today' you can't think of a reason (nor can I really easily) but
it's not clear that this may remain the case tomorrow. It's possible that
as a way to 'better loadshare' traffic akamai (just to make an example)
could start doing this as well.

So, I think that what we (security folks) want is probably not to
auto-squish domains in the TLD because of NS's moving about at some rate
other than 'normal' but to be able to ask for a quick takedown of said
domain, yes? I don't think we'll be able to reduce false positive rates
low enough to be acceptable with an 'auto-squish' method :(

-Chris


Re: Interesting new dns failures

2007-05-21 Thread Jason Frisvold


On 5/20/07, Roger Marquis [EMAIL PROTECTED] wrote:

Most of the individual nameservers do not answer queries, the ones
that do are open to recursion, and all are hosted in cable/dsl/dial-up
address space with correspondingly rfc-illegal reverse zones.  Running
'host -t ns' a few times shows the list of nameservers is rotated
every few seconds, and occasionally returns server localhost.


They're likely not name servers, or at least not all name servers..
I'd venture a guess as to these being part of a Snowshoe spammer
network...  I've been getting hit by similar domains for a few weeks
now..  Blocking seems to be the best way to handle them..

Looks like some of these are running nginx (http://nginx.net/) as a
web server...  I've seen others with centos installs..  My guess is
that the web servers are for management of the spamming software..




--
Jason 'XenoPhage' Frisvold
[EMAIL PROTECTED]
http://blog.godshell.com
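The tell-tale signs Roger describes above (nameservers parked in dynamic/private address space, answers pointing at localhost) can be screened with a few lines of stdlib Python. This is a hypothetical illustration of the pattern, not a recommendation for what resolvers should actually block:

```python
import ipaddress

def suspicious_ns(ns_records):
    """Return the subset of NS answers that look bogus per the thread:
    'localhost' answers, or glue pointing at private/loopback space.
    ns_records: iterable of (nameserver_name, ip_string) tuples.
    """
    bad = []
    for name, ip in ns_records:
        addr = ipaddress.ip_address(ip)
        if (name.rstrip(".").lower() == "localhost"
                or addr.is_private or addr.is_loopback):
            bad.append((name, ip))
    return bad
```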


Re: Interesting new dns failures

2007-05-21 Thread Simon Waters

On Monday 21 May 2007 14:43, you wrote:

 I'll bet a large pizza that 90% or more could be relocated to a more
 appropriate location in the DNS tree, and nobody except the domain holder
 and less than a dozen other people will notice/care in the slightest.

More like 99% I suspect, but we've no idea which 99%.

The decision to make the name servers part of the hierarchy, without insisting 
they be within the zones they master (in bailiwick as some call it) and 
thus glued in, means we have no definite idea which bits of the DNS break on 
any specific deletion.

In general it is impossible when deleting a zone to know the full consequences 
of that action unless you are that zone's DNS administrator, and even then you 
need to ask any administrators of delegated domains. 

So those who think deleting zones is a way to fix things, or penalise people, 
should tread VERY carefully, lest they end up liable for something bigger 
than they expected (or could possibly imagine).

Doing it all again, this is clearly something that folks would work to 
minimize in the design of the DNS. Such that deleting .uk could be 
guaranteed to only affect domains ending in .uk. But at the moment, you 
can't know exactly which bits of the DNS would break if you deleted the .uk 
zone from the root servers. 

For example deleting our corporate .com zones from the GTLD servers could 
potentially* disable key bits of another second level UK domain, and no third 
party can tell for sure the full impact of that change in advance. Who knows 
they may be hosting other DNS servers for other zones in their turn (I doubt 
it but I don't know for certain).

Of course even if the DNS were designed so you can recognise which bits might 
break with a given change, you'd then be left not knowing which services are 
linked into a particular domain. But that is beyond the scope of a name 
service design I think.

Sure most of the time if you delete a recently registered domain name, with a 
lot of changes and abuse in its history, you normally just hurt a spammer. I 
dare say collateral damage probably follows some simple mathematical law like 
1/f ? Hopefully before you delete something really important you most likely 
delete something merely expensive, and learn to be more careful.

 Simon

PS: Those who make sarcastic comments about people not knowing the difference 
between root servers, and authoritative servers, may need to be a tad more 
explicit for the help of the Internet challenged.

* I'm hoping the name servers in co.uk will help if anything ever does go pear 
shaped with that domain name, but I wouldn't bet money on it.


Re: Interesting new dns failures

2007-05-21 Thread Fergie


-- Chris L. Morrow [EMAIL PROTECTED] wrote:

So, I think that what we (security folks) want is probably not to
auto-squish domains in the TLD because of NS's moving about at some rate
other than 'normal' but to be able to ask for a quick takedown of said
domain, yes? I don't think we'll be able to reduce false positive rates
low enough to be acceptable with an 'auto-squish' method :(

Hi Chris,

While I agree with you, there are many of us who know that these
fast-flux hosts are malicious due to malware  malicious traffic
analysis...

I completely agree with you, however, on the issue of making
assumptions that it will always be malicious -- of course, that
will not always be the case. :-)

- ferg




--
Fergie, a.k.a. Paul Ferguson
 Engineering Architecture for the Internet
 fergdawg(at)netzero.net
 ferg's tech blog: http://fergdawg.blogspot.com/



Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007, Fergie wrote:


 -- Chris L. Morrow [EMAIL PROTECTED] wrote:

 So, I think that what we (security folks) want is probably not to
 auto-squish domains in the TLD because of NS's moving about at some rate
 other than 'normal' but to be able to ask for a quick takedown of said
 domain, yes? I don't think we'll be able to reduce false positive rates
 low enough to be acceptable with an 'auto-squish' method :(

 Hi Chris,

 While I agree with you, there are many of us who know that these
 fast-flux hosts are malicious due to malware  malicious traffic
 analysis...

Oh, so we switched from 'the domain is bad because..' to 'the hosts using
the domain are bad because...' I wasn't assuming some piece of intel at
the TLD that told the TLD that 'hostX that was just named NS for domain
foo.bar is also compromised'. I was assuming a s'simple' system of
'changing NS's X times in Y period == bad'. I admit that's a mite naive,
but given the number, breadth, content, operators of lists of 'bad things'
on the internet today I'm not sure I'd rely on them for a global decision
making process, especially if I were a TLD operator potentially liable for
removal of a domain used to process real business :(


 I completely agree with you, however, on the issue of making
 assumptions that it will always be malicious -- of course, that
 will not always be the case. :-)


agreed.
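Chris's 's''simple' heuristic -- changing NSs X times in Y period == bad -- is easy to prototype, which makes the false-positive worry concrete. A minimal sketch with illustrative thresholds, not anything a registry actually runs:

```python
from collections import deque

def flag_ns_flux(observations, max_changes=5, window_secs=86400):
    """Flag a domain whose delegation (NS set) changes more than
    max_changes times within any window_secs-wide window.

    observations: iterable of (unix_timestamp, frozenset_of_ns_names),
    assumed sorted by timestamp.
    """
    changes = deque()          # timestamps at which the NS set changed
    prev = None
    for ts, ns_set in observations:
        if prev is not None and ns_set != prev:
            changes.append(ts)
            # drop changes that fell out of the sliding window
            while changes and ts - changes[0] > window_secs:
                changes.popleft()
            if len(changes) > max_changes:
                return True    # "changing NSs X times in Y period == bad"
        prev = ns_set
    return False
```

Note that an Akamai-style load-sharing scheme rotating its delegation every few minutes would trip this immediately, which is exactly why auto-squish on this signal alone is dangerous.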


Re: Interesting new dns failures

2007-05-21 Thread Fergie


-- Chris L. Morrow [EMAIL PROTECTED] wrote:


 While I agree with you, there are many of us who know that these
 fast-flux hosts are malicious due to malware  malicious traffic
 analysis...

Oh, so we switched from 'the domain is bad because..' to 'the hosts using
the domain are bad because...' I wasn't assuming some piece of intel at
the TLD that told the TLD that 'hostX that was just named NS for domain
foo.bar is also compromised'. I was assuming a s'simple' system of
'changing NS's X times in Y period == bad'. I admit that's a mite naive,
but given the number, breadth, content, operators of lists of 'bad things'
on the internet today I'm not sure I'd rely on them for a global decision
making process, especially if I were a TLD operator potentially liable for
removal of a domain used to process real business :(

Well, I don't think I ever implied that, but let's say that there
are certainly some fast-flux behaviors (fluxing across multiple
administratively managed prefix blocks, NS fast-flux) which should
immediately raise a red flag.

Decisions based on those flags are policy issues -- whether or not
someone decides to take action upon only on that information or do
further research, is something that has to be determined by the
person(s) who detect the behavior, etc.

Having said that, most people don't even realize that fast-flux
exists...

- ferg






Re: Interesting new dns failures

2007-05-21 Thread Roger Marquis


On Mon, 21 May 2007, Chris L. Morrow wrote:

ok, so 'today' you can't think of a reason (nor can I really easily) but
it's not clear that this may remain the case tomorrow.


Not a good justification for doing nothing while this sort of trojan
propagates.  As analogy, it is also true we cannot see how email-based
trojans may be desirable tomorrow, but that doesn't stop us from
protecting ourselves against their detrimental effects today.


It's possible that as a way to 'better loadshare' traffic akamai
(just to make an example) could start doing this as well.


Actually not.  There is no legitimate purpose for this DNS hack.


So, I think that what we (security folks) want is probably not
to auto-squish domains in the TLD because of NS's moving about
at some rate other than 'normal'


Except that there's a lot more to this pattern than simply changing NS
at a rate other than normal, enough that it can be easily identified
for what it is.

--
Roger Marquis
Roble Systems Consulting
http://www.roble.com/


Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007, Roger Marquis wrote:

 Except that there's a lot more to this pattern than simply changing NS
 at a rate other than normal, enough that it can be easily identified
 for what it is.

I'm not in the mood to argue, but 'do tell'. Perhaps someone from ICANN
will implement this in policy for the TLD folks to use.


Re: Interesting new dns failures

2007-05-21 Thread Roger Marquis


On Mon, 21 May 2007, Jason Frisvold wrote:

They're likely not name servers, or at least not all name
servers.. I'd venture a guess as to these being part of a
Snowshoe spammer network... I've been getting hit by similar
domains for a few weeks now.. Blocking seems to be the best way
to handle them..


Fastflux does seem to be a tool in some spammers' kits but these
particular domains are probably not being used for that, at least not
effectively, since they have no MX records.

Are there sites that accept mail from domains without a valid MX/A
record?



RE: Interesting new dns failures

2007-05-21 Thread michael.dillon

 In general it is impossible when deleting a zone to know the 
 full consequences of that action unless you are that zones 
 DNS administrator, and even then you need to ask any 
 administrators of delegated domains. 

Not just deleting.

 So those who think deleting zones is a way to fix things, or 
 penalise people, should tread VERY carefully, less they end 
 up liable for something bigger than they expected (or could 
 possibly imagine).

There was a case not long ago where someone decided that it was a good
idea to change the NS records in lame domains. This caused a major
service outage for a company who needed this specific domain to be lame
in order for a certain service to function. Fortunately, we were able to
find the domain technical contact who was able to log into the registrar
and put the lame delegation back. Now, the problem has been solved by
moving the domain to another registrar whose goal is to keep things the
way they are, not clean up lame domains or other perceived errors.

--Michael Dillon


Re: Interesting new dns failures

2007-05-21 Thread Stephane Bortzmeyer

On Mon, May 21, 2007 at 06:57:06PM +0100,
 Simon Waters [EMAIL PROTECTED] wrote 
 a message of 53 lines which said:

 PS: Those who make sarcastic comments about people not knowing the
 difference between root servers, and authoritative servers, may need
 to be a tad more explicit for the help of the Internet challenged.

Warning, the rest of this message is only for the
Internet-challenged. They are probably uncommon in NANOG. For
instance, I cannot believe that people in NANOG may confuse the .com
name servers with the root name servers.

An authoritative name server is an official source of DNS data for a
given domain. For instance, ns2.nic.ve. is authoritative for
.ve. There are typically two to ten or sometimes more authoritative
name servers for a domain. You can display them with "dig NS
the-domain-you-want.".

A root name server is a server which is authoritative for the root of
the DNS. For instance, f.root-servers.net is authoritative for "."
(the root). You can display them with "dig NS ." (for the benefit of
the Internet-challenged, I did not discuss the alternative roots).



Re: Interesting new dns failures

2007-05-21 Thread Roger Marquis


On Mon, 21 May 2007, Stephane Bortzmeyer wrote:

I cannot believe that people in NANOG may confuse the .com
name servers with the root name servers.


Not to confuse the issue but among some managerial circles the root
nameservers comprise both root and tld.

Point taken though, root and tld should not be confounded in a forum
like nanog.



Re: Interesting new dns failures

2007-05-21 Thread Valdis . Kletnieks
On Mon, 21 May 2007 11:54:36 PDT, Roger Marquis said:

 Are there sites that accept mail from domains without a valid MX/A
 record?

Depends what you call "valid".  A lot of sites get *real* confused when they
find out that the MX for foo.com is where foo.com's *inbound* mail servers
live, and that their *outward* facing mail servers are someplace totally
different (yes, there's *still* places that get this wrong - obviously,
not being able to talk to any of the 800-pound gorillas or even the 200-pound
dachshunds out there doesn't cause the sites to acquire kloo).

Then there's all the "valid" issues caused by "domain on MAIL FROM doesn't
match the EHLO and/or PTR lookups" that SPF and similar schemes haven't
succeeded in curing...

But in general, if a non-null MAIL FROM: arrives, and the purported domain
comes up NXDOMAIN or similar *totally* unreachable (as opposed to just hinky),
you're totally justified in either 4xx or 5xx'ing the sucker, because if you
250 it and then have to generate a bounce, you're left holding the bag.  But
again, just because it's a bad idea doesn't mean there aren't lots of places
that still do it...

Or as a co-worker who lurks here said the other day:

212.150.245.56 resolves to 212.150.245.56.245.150.212.in-addr.arpa
And they want to know why we block it.
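Valdis's rule of thumb -- hard-fail when the MAIL FROM domain is *totally* unreachable, go easy when it's merely hinky -- reduces to a small decision table. The status labels and reply strings below are this sketch's own, not any MTA's actual API:

```python
def smtp_response_for_mail_from(domain_status):
    """Map the DNS status of a non-null MAIL FROM domain to an SMTP
    reply.  NXDOMAIN justifies rejection outright: accept it and you
    are left holding the bag when the bounce can't be delivered.
    """
    if domain_status == "NXDOMAIN":
        return "550 5.1.8 sender domain does not exist"
    if domain_status == "SERVFAIL":
        # merely "hinky": a temporary 4xx is safer than a hard reject
        return "451 4.4.3 sender domain currently unresolvable"
    return "250 OK"
```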




Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Mon, 21 May 2007, Chris L. Morrow wrote:
 ok, so 'today' you can't think of a reason (nor can I really easily) but
 it's not clear that this may remain the case tomorrow. It's possible that
 as a way to 'better loadshare' traffic akamai (just to make an example)
 could start doing this as well.
 
 So, I think that what we (security folks) want is probably not to
 auto-squish domains in the TLD because of NS's moving about at some rate
 other than 'normal' but to be able to ask for a quick takedown of said
 domain, yes? I don't think we'll be able to reduce false positive rates
 low enough to be acceptable with an 'auto-squish' method :(

Auto-squish on a registrar level is actually starting to work, but there
is a long way yet..

As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?

 
 -Chris
 



Re: Interesting new dns failures

2007-05-21 Thread Tim Franklin


Jay R. Ashworth wrote:


Such is not my experience, and I strongly advise people against such
stupidity.


Oh, I'd absolutely advise against it - but the branding people and the 
lawyers typically think otherwise.


The case that gets a bit murky for me is genuinely multi-national 
entities.  In *theory* that ought to be what .com is for, but 
registering yourcompany.cc for every country where you have an operating 
entity looks sort of legit.


(Yes, I've been asked to do this before.)


But if we play to their ignorance, they'll *never* learn, will they?

I don't have overmuch trouble getting people to understand microsys.us


Oh, if there's the slightest hint of interest in learning, I'll explain 
- apologies if I implied otherwise.


Regards,
Tim.


Re: Interesting new dns failures

2007-05-21 Thread Edward Lewis


At 3:50 PM -0500 5/21/07, Gadi Evron wrote:


As to NS fastflux, I think you are right. But it may also be an issue of
policy. Is there a reason today to allow any domain to change NSs
constantly?


Although I rarely find analogies useful when trying to explain 
something, I want to use one now to see if I understand this.


Let's say you rob convenience stores as a career choice.  Once your 
deed is done, you need to get away fast.  So moving fast is a real 
help to criminals.  Since moving fast is rarely helpful for decent 
folk, maybe we should just slow every one down - this certainly would 
make it easier for law enforcement to catch the criminals.


If the above is not an accurate analogy to the NS fastflux issue, I'd 
like to know what the deviations are.  I don't doubt there are some, 
but from what little I've gathered, the problem isn't the NS fastflux 
but the activity that it hides - if it is indeed hiding activity.  As 
in, not every one speeding around town is running from the law.

--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis+1-571-434-5468
NeuStar

Sarcasm doesn't scale.


Re: Interesting new dns failures

2007-05-21 Thread Tim Franklin


Stewart Honsberger wrote:

Which is why new TLDs like .xxx et al. are redundant. I can see that 
becoming a haven for vanity domains.


It gets worse.

In a previous life, I had the job of de-bunking^Wevaluating whichever 
bunch of alt-root snake-oil salesmen had managed to get an audience with 
our CTO with the pitch that it would be absolutely brilliant if everyone 
could just invent their own TLD.


Just imagine - Fooboz could own .fooboz, then they could have 
sales.fooboz, support.fooboz, widgets.fooboz...


You know sales.fooboz.com, www.sales.fooboz.com, and 
www.fooboz.com/sales all work today, right?


Added to that their own panel of Internet experts to decide if a given 
Fooboz out of 47 (last time I checked) potential trade-mark holders on 
'fooboz' and countless possible companies of the same name has the 
*real* claim on the .fooboz TLD.


Feh.  That way lies madness.

Regards,
Tim.


Re: Interesting new dns failures

2007-05-21 Thread Chris L. Morrow



On Mon, 21 May 2007, Gadi Evron wrote:

 As to NS fastflux, I think you are right. But it may also be an issue of
 policy. Is there a reason today to allow any domain to change NSs
 constantly?

well, so it's not explicitly denied in the current operations policy
things, so people may depend on it for some reason(s). They might have
turned on a service that depends on it, something not related to email or
web or other things. DNS is basic internet plumbing, messing with it
without LOTS of study is bound to bring out weird uses. Especially where
there is no prohibition on this today, making an arbitrary limit tomorrow
is going to cause problems.

-Chris


Re: Interesting new dns failures

2007-05-21 Thread Gadi Evron

On Mon, 21 May 2007, Chris L. Morrow wrote:
 On Mon, 21 May 2007, Gadi Evron wrote:
 
  As to NS fastflux, I think you are right. But it may also be an issue of
  policy. Is there a reason today to allow any domain to change NSs
  constantly?
 
 well, so it's not explicitly denied in the current operations policy
 things, so people may depend on it for some reason(s). They might have
 turned on a service that depends on it, something not related to email or
 web or other things. DNS is basic internet plumbing, messing with it
 without LOTS of study is bound to bring out weird uses. Especially where
 there is no prohibition on this today, making an arbitrary limit tomorrow
 is going to cause problems.

Quite. And yet watching for such changes at the registrar level may be
interesting. A couple of years ago some DNS experts disagreed. I'll try
and raise this idea again and if it holds water, see if some of the
registrars are game (which in itself hints at another problem).

As an old boss of mine used to say: In Hebrew we say, 'one cow, one
cow'. (One cow at a time ... )

 -Chris
 



Re: Interesting new dns failures

2007-05-21 Thread Steve Gibbard


On Mon, 21 May 2007, Tim Franklin wrote:

The case that gets a bit murky for me is genuinely multi-national entities. 
In *theory* that ought to be what .com is for, but registering yourcompany.cc 
for every country where you have an operating entity looks sort of legit.


Why only sort of?

To analogize this to the phone network, there's a country code, +800, for 
international toll free calls.  There are also various national or regional 
toll free dialing codes, such as 1-800 numbers in the NANPA region (US, 
Canada, Caribbean), or 0800 numbers in the UK.


Looking at ads targeting the US market, I see lots of 1-800 numbers. 
Looking at ads targeting the UK market, I see lots of 0800 numbers.  In 
other countries, I see their own conventions.  I'm guessing if I dialed 
any of those numbers, the phone would be answered in the language the ad 
was written in, and prices would be quoted in the currency of the place 
where the ad was published.  I'm not sure I've ever noticed a +800 number 
being advertised, despite its status as a global standard.


Is this wrong?  Would those trying to sell things to Americans get more 
business by dropping their familiar 1-800 numbers in favor of what their 
customers would see as 011-800 numbers?  Would those trying to sell things 
to the British do better if they made people dial 00-800 rather than 0800? 
Or, for that matter, would those trying to sell things in France do better 
if their phones were answered in English?


Is the above situation any different from the decision of whether to use 
locally-expected ccTLDs for local content, or to use the international 
.com for everything?


-Steve


Re: Interesting new dns failures

2007-05-20 Thread Roger Marquis



If not, have any root nameservers been hacked?


To partly answer my own question, no.  The data returned by root
(gtld) nameservers is not changing rapidly.  Thanks for the pointers
to fast flux too.  Wasn't familiar with this attack or terminology.

All the same, it would seem to be an easy and cheap abuse to address,
at the gtlds.  Why are these obvious trojans being propagated by
the root servers anyhow?



Re: Interesting new dns failures

2007-05-20 Thread Chris L. Morrow



On Sun, 20 May 2007, Roger Marquis wrote:

  If not, have any root nameservers been hacked?

 To partly answer my own question, no.  The data returned by root
 (gtld) nameservers is not changing rapidly.  Thanks for the pointers
 to fast flux too.  Wasn't familiar with this attack or terminology.

 All the same, it would seem to be an easy and cheap abuse to address,
 at the gtlds.  Why are these obvious trojans being propagated by
 the root servers anyhow?

the root servers are responsible how exactly for the fast-flux issues?
Also, there might be some legitimate business that uses something like
the FF techniques... but, uhm... how are the root servers involved again?


Re: Interesting new dns failures

2007-05-20 Thread Fergie


-- Roger Marquis [EMAIL PROTECTED] wrote:

An odd pattern of DNS failures began appearing in the logs yesterday:

May 20 15:05:19 PDT named[345]: wrong ans. name (uzmores.com !=
ns5.uzmores.com)  


Perhaps some fast-flux sticky cruft leftover from abuse?

I just looked at the first one on the list [above], and it's
certainly tell-tale:


http://cert.uni-stuttgart.de/stats/dns-replication.php?query=ns5.uzmores.com&submit=Query

- ferg






Re: Interesting new dns failures

2007-05-20 Thread Roger Marquis



All the same, it would seem to be an easy and cheap abuse to address,
at the gtlds.  Why are these obvious trojans being propagated by
the root servers anyhow?


the root servers are responsible how exactly for the fast-flux issues?
Also, there might be some legitimate business that uses something like
the FF techniques... but, uhm... how are the root servers involved again?


Nobody's saying that the root servers are responsible, only that they
are the point at which these domains would have to be squelched. In
theory registrars could do this, but some would have a financial
incentive not to. Also I don't believe registrars can update the roots
quickly enough to be effective (correct me if I'm wrong).

Given the obvious differences between legitimate fast flux and the
pattern/domains in question it would seem to be a no-brainer,
technically at least.



Re: Interesting new dns failures

2007-05-20 Thread Fergie


-- Roger Marquis [EMAIL PROTECTED] wrote:

Nobody's saying that the root servers are responsible, only that they
are the point at which these domains would have to be squelched. In
theory registrars could do this, but some would have a financial
incentive not to. Also I don't believe registrars can update the roots
quickly enough to be effective (correct me if I'm wrong).

Why not? The registrars seem to be doing a great job of
expediting the activation of new domains -- why can't they de-activate
them just as quickly when they find out they are being used for
malicious purposes?

The business interests of the registrars, that's why.

This is one of the many ways that ICANN, and the registrars
in general, are falling down on the job.

But I digress... I'll slink back under my rock now.

- ferg






Re: Interesting new dns failures

2007-05-20 Thread Chris L. Morrow



On Sun, 20 May 2007, Roger Marquis wrote:

  All the same, it would seem to be an easy and cheap abuse to address,
  at the gtlds.  Why are these obvious trojans being propagated by
  the root servers anyhow?
 
  the root servers are responsible how exactly for the fast-flux issues?
   Also, there might be some legitimate business that uses something like
  the FF techniques... but, uhm... how are the root servers involved again?

 Nobody's saying that the root servers are responsible, only that they

but you said it:

at the gtlds.  Why are these obvious trojans being propagated by
 the root servers anyhow?

 are the point at which these domains would have to be squelched. In
 theory registrars could do this, but some would have a financial
 incentive not to. Also I don't believe registrars can update the roots
 quickly enough to be effective (correct me if I'm wrong).


I think you really mean 'TLD' not 'root'... I think, from playing this
game once or twice myself, the flow starts with the registrar to the
registry (in your example estdomains is the registrar and Verisign is the
registry). I think it pretty much stops there. I suppose you COULD get
ICANN to spank someone, but that's going to take a LONG time to
accomplish. (I think, at least.)

 Given the obvious differences between legitimate fast flux and the
 pattern/domains in question it would seem to be a no-brainer,
 technically at least.

hrm... I don't think it's a technical stumbling block, though trying to
pre-know who's bad and who's not might get you in trouble (say I register
the domain lakjdauejalkasu91er.com and fast-flux it for my own 'good' use,
how's that different from 'uzmores.com' ?).

Anyway... I don't disagree that there ought to be a hammer here and it
ought to be applied. I'm just not sure it's as simple as it appears at
first blush.