Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-16 Thread Greg Skinner

On Wed, Apr 16, 2008 at 03:39:05PM -0400, Joe Abley wrote:

 On 16 Apr 2008, at 13:33 , Simon Waters wrote:
 
  Ask anyone in the business "if I want a free email account, who do
  I use?" and you'll get the almost universal answer "Gmail".
 
 I think amongst those not in the business there are regional trends,  
 however. Around this neck of the woods (for some reason) the answer  
  amongst your average, common-or-garden man in the street is "Yahoo!".
 
 I don't know why this is. But that's my observation.

In my experience, Gmail tends to be the preferred freemail account
among geeks and techies.  Y! mail and Hotmail are preferred by the
(non-techie) man and woman on the street.  I think this is largely due
to branding.

 So, with respect to your other comments, correlation between technical/ 
 operational competence and customer choice seems weak, from my  
 perspective. If there's competition, it may not be driven by service  
 quality, and the conclusion that well-staffed abuse desks promote  
 subscriber growth is, I think, faulty.

Also, IME, the business community tends to perceive marketing as a
profit center (whether or not it actually is), because they understand
it and can measure the ROI they get from it.  This may not be the
case in companies whose executives came from the tech side, but it's
still more common for executives to have a business background than a
technical one.

--gregbo


Re: Problems sending mail to yahoo?

2008-04-13 Thread Greg Skinner

On Sun, Apr 13, 2008 at 11:48:31PM -0400, Rich Kulawiec wrote:
 On Sun, Apr 13, 2008 at 08:04:12PM -0400, Barry Shein wrote a number
 of things that are true, including:
 
  I say the core problem in spam is the botnets capable of delivering
  on the order of 100 billion msgs/day.
 
 But I say the core problem is deeper.  Spam is merely a symptom of an
 underlying problem.  (I'll admit that I often use the phrase "spam
 problem" but that's somewhat misleading.)
 
 The problem is pervasive poor security.  Those botnets would not exist
 were it not for nearly-ubiquitous deployment of an operating system that
 cannot be secured -- and we know this because we've seen its own vendor
 repeatedly try and repeatedly fail.  But a miserable excuse for an OS is
 just one of the causes; others have been covered by essays like Marcus
 Ranum's "The Six Dumbest Ideas in Computer Security," so I won't
 attempt to enumerate them all.

Is there a (nontrivial) OS that can be secured inexpensively, i.e.,
for the price shoppers pay at your local big-box outlet?  To me,
that's as much the problem as anything else that's been written so
far.  The Internet is what it is largely because that is what the
users (collectively) will pay for.  Furthermore, it's not so much the
OS as it is the applications, which arguably might be more securable
if Joe and Jane User took the time to enable the security features
available for the OSes they buy.  But that doesn't happen.  I don't
blame Joe and Jane User; most nontechnical people view their home or
work systems as nothing more than an appliance for getting work done
or for personal entertainment.

 A secondary point that actually might be more important:
 
 We (and I really do mean 'we' because I've had a hand in this too)
 have compounded our problems by our collective response -- summed up
 beautifully on this very mailing list a while back thusly:
 
   If you give people the means to hurt you, and they do it, and
   you take no action except to continue giving them the means to
   hurt you, and they take no action except to keep hurting you,
   then one of the ways you can describe the situation is "it isn't
   scaling well."
   --- Paul Vixie on NANOG
 
 We need to hold ourselves accountable for the security problems in
 our own operations, and then we need to hold each other accountable.
 This is very different from our strategy to date -- which, I submit,
 has thoroughly proven itself to be a colossal failure.

One of the things I like about this list is that it consists of people
and organizations who DO hold themselves accountable.  But as long as
it's not the collective will of the Internet to operate securely, not
much will change.

--gregbo



Re: Does TCP Need an Overhaul? (internetevolution, via slashdot)

2008-04-08 Thread Greg Skinner

On Wed, Apr 09, 2008 at 01:10:53AM +0200, Marcin Cieslak wrote:
 The problem is that fairness was probably never a design goal of TCP, 
 even with Van Jacobson's congestion avoidance patch.
 
 Bob Briscoe is a member of the IETF Transport Working Group (TSVWG).

 This subject got some publicity and politics involved, but please see 
 some real discussion on the TSVWG list, with my favorite answer highlighted:

This issue also got some publicity and politics on the IRTF
end2end-interest list.

For example, start at 
http://www.postel.org/pipermail/end2end-interest/2007-August/006925.html .
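
To see the flow-rate fairness argument in actual numbers, here's a
back-of-the-envelope sketch (Python; the parameters are made up, and
the rate model is the well-known Mathis et al. steady-state
approximation rather than a simulation).  It shows that equal
per-flow treatment neither equalizes flows with different RTTs nor
equalizes users who open different numbers of parallel flows:

# Rough illustration of per-flow vs. per-user "fairness" in TCP.
# Rate model: Mathis et al. steady-state approximation,
#   rate ~= (MSS / RTT) * sqrt(3/2) / sqrt(p)
# All parameters below are made up for illustration.

from math import sqrt

def tcp_rate_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput in bits/second."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_prob)

MSS = 1460    # bytes
LOSS = 0.01   # 1% loss at a shared bottleneck

# Same loss, different RTTs -> unequal per-flow rates.
for rtt in (0.010, 0.100):   # 10 ms vs. 100 ms
    mbps = tcp_rate_bps(MSS, rtt, LOSS) / 1e6
    print(f"RTT {rtt * 1000:5.0f} ms -> {mbps:6.2f} Mbit/s per flow")

# Equal per-flow rates, unequal users: a user opening 20 parallel
# flows takes roughly 20x the bottleneck share of a single-flow user.
per_flow = tcp_rate_bps(MSS, 0.100, LOSS)
print(f"user with  1 flow:  {per_flow / 1e6:6.2f} Mbit/s")
print(f"user with 20 flows: {20 * per_flow / 1e6:6.2f} Mbit/s")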

--gregbo



Re: Using RIR info to determine geographic location...

2007-12-20 Thread Greg Skinner

Personally, I have trouble accepting some of the claims the
geotargeting companies have made, such as Quova's 99.9% accuracy to
the country level and 95% to the US state level (more info at
http://www.quova.com/page.php?id=132 ).  Perhaps I'm just an outlier;
using the three top search engines, I rarely see them get the city
correct (i.e., where *I* am physically located, as opposed to where
the registration data says the block is located), and I have seen
some glaring errors for the country in some cases.

Geotargeting has turned into quite a business, and I'm concerned that
people who rely on these services do not fully understand the risks.
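
To make the mismatch concrete, here's a minimal sketch (Python, plain
RFC 3912 whois over TCP/43; the queried address is just a placeholder
from TEST-NET-1, and the ARIN field names may vary) that pulls the
registry record for an IP.  The fields it prints describe where the
block is *registered* -- which is what a geotargeting service keying
off RIR data sees -- not where the host actually sits:

import socket

def rir_whois(ip, server="whois.arin.net"):
    """Fetch the raw RIR whois record for an IP (RFC 3912, TCP/43)."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((ip + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

record = rir_whois("192.0.2.1")   # placeholder address
for line in record.splitlines():
    # Registrant contact fields, not the host's physical location.
    if line.split(":")[0].strip() in ("OrgName", "Address", "City",
                                      "Country"):
        print(line.strip())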

--gregbo

On Thu, Dec 20, 2007 at 08:48:44AM +0200, Hank Nussbacher wrote:
 
 At 08:44 PM 19-12-07 -0500, Drew Weaver wrote:
 
  I too would be interested to know how others feel about the various
  geo-location services available to speed things along.  Three that come
  to mind are Akamai, Neustar/UltraDNS, and the roll-your-own Cisco GSS
  4492R.  How do they stack up?  How good are the various MaxMind files?
 
 Thanks,
 Hank
 
 
   Is this becoming a more common or less common practice as we
   slide ourselves into the last week of 2007?  The reason I am wondering
   is we have noticed some 'issues' recently where correct info in the RIR
   causes very inefficient and sometimes annoying interaction with some of
   the world's largest online applications (such as Google).  Let's say,
   for example, that a customer in India purchases dedicated server or
   co-location hosting at an HSP in the United States [very common].  The
   RIR data shows that the customer is in India, so when the customer
   interacts with any Google applications, Google automatically directs
   this traffic to google.in (or the India version of whichever app).
 
   More unfortunate is the fact that services and application
   providers such as Google appear to be caching RIR data for an unknown
   amount of time.  This means that if a service provider SWIPs an
   allocation (say a /24) to a customer (let's use the same example...
   again in India), and that customer subsequently returns the allocation
   and the service provider re-allocates it in smaller blocks (/29s,
   /28s, et cetera) to different customers, the problems related to this
   issue are compounded by the caching (30 customers affected instead of
   one...).
 
   Obviously, providing RIR information is the responsibility of
   service providers (it is even ARIN policy).  Has anyone else in the
   community run into issues such as this and found solutions or
   workarounds?
 
 Happy holidays to all on NANOG :D
 
 Thanks,
 -Drew


Re: Using RIR info to determine geographic location...

2007-12-20 Thread Greg Skinner

On Thu, Dec 20, 2007 at 10:17:36PM -0500, Steven M. Bellovin wrote:
 On Fri, 21 Dec 2007 02:13:17 +, Greg Skinner [EMAIL PROTECTED] wrote:
  Personally, I have trouble accepting some of the claims the
  geotargeting companies have made, such as Quova's 99.9% to the country
  level, and 95% to the US state level.  ( More info at
  http://www.quova.com/page.php?id=132 ) Perhaps I'm just part of the
  outlying data; using the three top search engines I rarely see them
  get the city correct (ie. where *I* am physically located, as opposed
  to where the registration data says the block is located), and have
  seen some glaring errors for the country in some cases.
  
  Geotargeting has turned into quite a business, and I'm concerned that
  people who rely on these services do not fully understand the risks.
  
 Some folks are relying on it for serious purposes.  Many Internet
 gambling sites use it to avoid serving US customers, for example.
 Their risk is criminal liability for the executives -- they have a
 strong incentive to get reliable data...  Some sports media sites use it
 to enforce local area blackouts; though that doesn't need to be
 perfect, if it's too imperfect they risk breach of contract and
 expensive lawsuits.
 
 For the advertisers, best effort is probably good enough...
 
   --Steve Bellovin, http://www.cs.columbia.edu/~smb

Funny you should mention sports media sites.  Not too long ago,
someone asked on Usenet how to foil geotargeting in order to watch a
sportscast that was being blocked.  The answer was posted not long
after the question.  It doesn't surprise me that the word is out on
how to foil geotargeting, but it disturbs me that this aspect of
geotargeting is not discussed more.  I would prefer it if there were more
openness and transparency about such things (without necessarily
divulging the exact means by which geotargeting can be foiled).

The Carleton paper ( http://www.scs.carleton.ca/~jamuir/papers/TR-06-05.pdf )
goes into some detail on the practical limits of geotargeting, but it
has been difficult to raise this type of awareness among consumers of
geotargeting services.

WRT advertisers, opinions are mixed on whether best effort is good
enough, fraud aside.  Some feel any discrepancies are just a cost of
doing business on the Internet; hopefully they have factored
discrepancies into their ad spend.  Others are more skeptical.  Some
of you may find
http://blog.merjis.com/2007/10/19/adwords-geotargeting-myths/
interesting.

--gregbo


Re: Hey, SiteFinder is back, again...

2007-11-06 Thread Greg Skinner

Bill Stewart wrote:

 When Verisign hijacked the wildcard DNS space for .com/.net, they
 encoded the Evil Bit in the response by putting Sitefinder's IP
 address in the answer.  In theory you could interpret that as
 damage and route around it, or at least build ACLs to block any
 traffic to that IP address except for TCP/80 and TCP/UDP/53.  But if
 random ISPs are going to do that at random locations in their IP
 address space, and possibly serve their advertising from servers that
 also have useful information, it's really difficult to block.

 Does anybody know _which_ protocols Verizon's web-hijacker servers are
 supporting?  Do they at least reject ports 443, 22, 23, etc.?

 In contrast, Microsoft's IE browser responds to DNS no-domain
 responses by pointing to a search engine, and I think the last time I
 used IE it let you pick your own search engine or turn it off if you
 didn't like MS's default.  That's reasonable behaviour for an
 application, though it's a bit obsequious for my taste.

Hmmm.  When I use IE 7 on Windows Vista out of the box and give it a
non-existent domain, it prompts me to connect to a network (even if
I'm already connected to one).  It also puts the browser in "work
offline" mode.  (Very annoying.)  I've never been pointed to a search
engine or prompted to select one.  Perhaps this is controlled by the
machine's initial setup.
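
Incidentally, if you want to check whether anything in your own
resolution path is synthesizing answers, here's a quick sketch
(Python standard library only; the random label makes a collision
with a real domain unlikely, though not impossible, so treat it as a
spot check rather than a rigorous test).  A nonexistent name that
nonetheless resolves is the Sitefinder-style behavior Bill describes:

import random
import socket
import string

def synthesized_answer(tld="com"):
    """Try to resolve a random, almost certainly nonexistent name."""
    label = "".join(random.choices(string.ascii_lowercase, k=24))
    try:
        return socket.gethostbyname(f"{label}.{tld}")
    except socket.gaierror:
        return None   # proper NXDOMAIN behavior

addr = synthesized_answer()
if addr:
    print(f"nonexistent name resolved to {addr}: "
          "something is rewriting NXDOMAIN")
else:
    print("nonexistent name correctly failed to resolve")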

--gregbo