Re: Flow Based Routing/Switching (Was: "Does TCP Need an Overhaul?" (internetevolution, via slashdot))

2008-04-05 Thread Roland Dobbins



On Apr 5, 2008, at 5:16 PM, Jeroen Massar wrote:

The flows are in those boxes, but only for stats purposes exported  
with NetFlow/IPFIX/sFlow/etc. Apparently it was not as fast as they  
liked it to be



This is essentially correct.  NetFlow was originally intended as a switching mechanism, but it then became apparent that the information in the cache, and the ability to export it as telemetry, were of more value, as there were other, more efficient methods of moving the packets around.


---
Roland Dobbins <[EMAIL PROTECTED]> // +66.83.266.6344 mobile

 History is a great teacher, but it also lies with impunity.

   -- John Robb



Re: Mitigating HTTP DDoS attacks?

2008-03-24 Thread Roland Dobbins



On Mar 25, 2008, at 8:10 AM, Frank Bulk - iNAME wrote:


In any case, it's reactive.



Several SPs (quite a few, actually) are offering DDoS mitigation  
services based upon a variety of tools and techniques, and with  
various pricing models.  Some provide the service for their own  
transit/hosting/colo customers, and some provide it as an OTT/overlay  
service.


---
Roland Dobbins <[EMAIL PROTECTED]> // +66.83.266.6344 mobile

   It doesn't pay to dispute what you know to be true.

-- Fred Reed



Re: Mitigating HTTP DDoS attacks?

2008-03-24 Thread Roland Dobbins



On Mar 25, 2008, at 6:18 AM, Tim Yocum wrote:


If you're running Apache, you may also investigate mod_evasive, and in
the case of exploits, mod_security.



mod_evasive and mod_security are definitely recommended, good point.

And a good relationship with your peers/upstreams/customers/vendors is  
also key, so that you can get assistance when you need it.


---
Roland Dobbins <[EMAIL PROTECTED]> // +66.83.266.6344 mobile

   It doesn't pay to dispute what you know to be true.

-- Fred Reed



Re: Mitigating HTTP DDoS attacks?

2008-03-24 Thread Roland Dobbins



On Mar 25, 2008, at 5:02 AM, Mike Lyon wrote:


Any input would be greatly appreciated.



There are devices available today from different vendors (including  
Cisco, full disclosure) which are intelligent DDoS-'scrubbers' and  
which can deal with more sophisticated types of attacks at layer-7,  
including HTTP and DNS.  S/RTBH is also an option, keeping in mind  
some of the caveats you mentioned (staying mindful of attacking hosts  
behind proxies, botted hosts of legit customers, et al.).


-------
Roland Dobbins <[EMAIL PROTECTED]> // +66.83.266.6344 mobile

   It doesn't pay to dispute what you know to be true.

-- Fred Reed



OT: One Wilshire photos.

2008-03-03 Thread Roland Dobbins



<http://www.wired.com/techbiz/it/multimedia/2008/03/gallery_one_wilshire>


---
Roland Dobbins <[EMAIL PROTECTED]> // +66.83.266.6344 mobile

 If you don't know what to do, it's harder to do it.

   -- Malcolm Forbes






Re: Blackholes and IXs and Completing the Attack.

2008-02-02 Thread Roland Dobbins



On Feb 3, 2008, at 4:50 AM, Paul Ferguson wrote:


We (Trend Micro) do something similar to this -- a black-hole BGP
feed of known botnet C&Cs, such that the C&C channel is effectively
black-holed.


What's the trigger (pardon the pun, heh) and process for removing IPs  
from the blackhole list post-cleanup, in Trend's case?


Is there a notification mechanism so that folks who may not subscribe  
to Trend's service but who are unwittingly hosting a botnet C&C are  
made aware of same?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: Sicily to Egypt undersea cable disruption

2008-02-01 Thread Roland Dobbins



On Feb 2, 2008, at 8:56 AM, George William Herbert wrote:


However, despite the "attractive target" angle of what got busted,
and the proximity of the breaks to Islamic Terrorist problem spots,
I don't see a statistical or evidentiary case made that these were
anything but the usual occasional strings of normal random problems
spiking up at the same time


My instinctive reaction was to recall the Auric Goldfinger quote as  
smb did - after reflection, however, it's highly unlikely that these  
issues are the result of a terrorist group action simply because, just  
like the economically-driven miscreants, the ideologically-driven  
miscreants have a vested interest in the communications infrastructure  
remaining intact, as they're so heavily dependent upon it.


There are always corner-cases like the Tamil Tiger incident, and  
people don't always act rationally even in the context of their own  
perceived (as opposed to actual) self-interest, but I just don't see  
any terrorist groups or any governments involved in some kind of
cable-cutting plot, as it's diametrically opposed to their commonality  
of interests (i.e., the terrorist groups want the comms to stay up so  
that they can make use of them, and the governments want the comms to  
stay up so that they can monitor the terrorist group comms).


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-20 Thread Roland Dobbins



On Jan 21, 2008, at 12:22 AM, Ben Butler wrote:

Or maybe... we will run out of corporates first!  Which would have to be the best of outcomes: everyone multihomed who wants/needs it, plus a manageable route table, without having run out of IPs or AS numbers.


As Internet connectivity becomes more and more vital to the mechanics of everyday life, it probably won't just be corporations wanting to multihome, but individuals (and their associated spime-clouds) who want to multihome, too - probably via a combination of wireline and wireless methods.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: request for help w/ ATT and terminology

2008-01-18 Thread Roland Dobbins



On Jan 19, 2008, at 12:12 PM, William Herrin wrote:


 For renumbering purposes, you could reasonably expect the firewall
to perform the translations once when rebooted or reset, after which
it would use the discovered IP addresses.


You can do that now with most firewalls and ACLs on most routers - there's generally a configuration setting which allows/disallows live lookups of hostnames when config files containing them are loaded.  I don't like it due to the load it puts on the resolving box, plus the auditing issue, but some folks do it.


This would only fail where the firewall was being operated by someone in a different administrative domain than the engineer who has to renumber... And those scenarios are already indicative of a security problem.


'Renumbering' happens all the time due to multiple A records for a  
single FQDN, DNS-based load-balancing setups, etc.  And remember, in  
many cases, there are hosts in firewall rules/ACLs which are not part  
of the operator's own administrative domain, but which are external to  
it.



Unfortunately, we're all ignoring the big white elephant in the
room: spam filters. When a large flow of email suddenly starts
emitting from an address that didn't previously send significant
amounts of mail, a number of filters squash it for a while based
solely on the changed message rate. This can be very traumatic for the
engineer trying to renumber and it is 100% outside of his realm of
control. And of course, you lose all of the private whitelists that
you talked your way on to over the years where you no longer have a
valid point of contact.


With regard to antispam systems which are configured to behave in such a manner, this is (or ought to be) a BCP issue, obviously.



 Renumbering is a bad bad thing.


Renumbering in a world in which EIDs and locators are conflated and in  
which the EID is in any case vastly overloaded from a policy  
perspective is indeed very painful, and not just for the renumbering  
party, but for many others, as well.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: request for help w/ ATT and terminology

2008-01-18 Thread Roland Dobbins



On Jan 18, 2008, at 7:50 AM, Brandon Galbraith wrote:

Agreed. I'd see a huge security hole in letting someone put  
host.somewhere.net in a firewall rule in a PIX/ASA/etc. as opposed  
to an IP, especially since it's rare to see DNSSEC in production.


It's not only a security issue, but a performance issue (both resolver  
and server) and one of practicality, as well (multiple A records for a  
single FQDN, CNAMEs, A records without matching PTRs, et al.).  The
performance problem would likely be even more apparent under DNSSEC,  
and the practicality issue would remain unchanged.


As smb indicated, many folks put DNS names for hosts in the config files and then perform a lookup to convert them to IP addresses prior to deployment (hopefully with some kind of auditing beforehand, heh).
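
To make that concrete, here's a minimal Python sketch of that pre-deployment conversion step (the hostnames, ports, and emitted rule format are hypothetical illustrations, not anyone's production tooling):

#!/usr/bin/env python3
"""Resolve hostnames in a firewall rule template to IP addresses
before deployment, so the rendered rules can be audited first.
The template entries and rule format below are hypothetical."""

import socket
import sys

def resolve_all(fqdn):
    # Return every A record for the name; a single FQDN may map to
    # multiple addresses, so one rule is emitted per address.
    try:
        return sorted({info[4][0] for info in
                       socket.getaddrinfo(fqdn, None, socket.AF_INET)})
    except socket.gaierror as err:
        print(f"! could not resolve {fqdn}: {err}", file=sys.stderr)
        return []

# Hypothetical rule template: (action, source FQDN, destination port)
template = [
    ("permit", "partner-app.example.net", 443),
    ("permit", "monitoring.example.org", 161),
]

for action, fqdn, port in template:
    for addr in resolve_all(fqdn):
        # Emit an address-based rule; keep the FQDN as a comment for auditing.
        print(f"{action} host {addr} port {port}  ! {fqdn}")

Re-running the same conversion later and diffing the output is also a cheap way to spot names whose A records have changed underneath the deployed rules.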



-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: periodic patterns in juniper netflow exports

2008-01-03 Thread Roland Dobbins



On Jan 3, 2008, at 7:53 PM, Fernando Silveira wrote:


 The patterns I'm
talking about would imply an absolute clock (independent of any flow)
ticking every minute, and flushing the entire flow cache. The result
of this would be the binning effect I mentioned.


Yes, what you're describing is in fact different from the Cisco active flow timer.  The Cisco active flow timer is set relative to the beginning of each flow, as you indicate; it is not a system-wide purge of the entire cache on some sort of fixed-time basis (I didn't parse that properly in your initial query, apologies).


There are folks involved in various NetFlow collection/analysis efforts on this list; I'm sure one of them or someone from Juniper will respond.  juniper-nsp might also be a good place to ask.



-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: periodic patterns in juniper netflow exports

2008-01-03 Thread Roland Dobbins



On Jan 3, 2008, at 5:57 PM, Fernando Silveira wrote:


 Can anyone tell me if there is such a
timer in JunOS, i.e., flushing the flow cache every minute (or an
interval defined as a parameter)?


I don't know about Juniper routers, but there's such a setting in Cisco routers; it's called the active flow timer.  If you don't use it, or don't tell your collection/analysis system what setting you've used (most folks use somewhere between five minutes for traffic analysis and one minute for security-related analysis), you end up with backlogged stats which aren't chronologically representative of the actual traffic, and your graphs are all jagged and useless.


My guess would be that Juniper have a similar construct for a similar  
purpose.  Most collection/analysis systems of which I'm aware take  
this setting into account, as long as you tell them what interval  
you're using.  It's generally considered highly desirable to make use  
of this functionality, for the aforementioned reasons.
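
To illustrate the collector-side half of this, here is a minimal Python sketch (not any particular product's behavior; the flow duration, byte count, and bin size are made-up values) of spreading a long-lived flow's bytes across graphing bins rather than booking them all at export time:

#!/usr/bin/env python3
"""Distribute a flow record's byte count across fixed time bins for
graphing, instead of booking all bytes at export time.  Timestamps
and byte counts below are made-up example values."""

from collections import defaultdict

BIN_SECONDS = 60  # graphing resolution; align with the exporter's active timeout

def bin_flow(start, end, total_bytes, bins):
    """Spread total_bytes evenly over the seconds between start and end,
    accumulating into BIN_SECONDS-wide buckets keyed by bucket start time."""
    duration = max(end - start, 1)
    bytes_per_second = total_bytes / duration
    for second in range(start, end):
        bucket = second - (second % BIN_SECONDS)
        bins[bucket] += bytes_per_second

bins = defaultdict(float)
# A 5-minute flow exported as a single record: without proportional
# binning, all 30 MB would land in the final bucket and the graph spikes.
bin_flow(start=0, end=300, total_bytes=30_000_000, bins=bins)

for bucket in sorted(bins):
    print(f"t={bucket:>4}s  {bins[bucket] * 8 / BIN_SECONDS / 1e6:.2f} Mb/s")

If the collector doesn't do something like this, or doesn't know the exporter's active timeout, a five-minute flow shows up as one big spike in the last interval - which is exactly the jagged-graph effect described above.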


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: European ISP enables IPv6 for all?

2007-12-17 Thread Roland Dobbins



On Dec 17, 2007, at 9:58 PM, Danny McPherson wrote:


when client-side attacks seem to be more than sufficient.


A self-selected group of victims really helps lower the  
reconnaissance opex, heh.


;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: Book on Network Architecture and Design

2007-12-03 Thread Roland Dobbins



On Dec 3, 2007, at 9:43 AM, John Kristoff wrote:


  TCP/IP Illustrated: Volume I
  W. Richard Stevens


Kozierok is pretty handy, too:

<http://nostarch.com/tcpip.htm>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: Creating a crystal clear and pure Internet

2007-11-27 Thread Roland Dobbins



On Nov 27, 2007, at 7:03 AM, Jared Mauch wrote:

Other operating systems may follow. (This was a WAG, based on gut  
feeling).



Nokia by default require apps installed on the phones to be signed, though one can disable this functionality (and in fact must, in order to run many of the desirable applications).  It's been stated in the press that Apple are doing this with the iPhone SDK, too.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Roland Dobbins



On Oct 22, 2007, at 7:50 AM, Sean Donelan wrote:

 Will P2P applications really never learn to play nicely on the  
network?


Here are some more specific questions:

Is some of the difficulty perhaps related to the seemingly  
unconstrained number of potential distribution points in systems of  
this type, along with 'fairness' issues in terms of bandwidth  
consumption of each individual node for upload purposes, and are  
there programmatic ways of altering this behavior in order to reduce  
the number, severity, and duration of 'hot-spots' in the physical  
network topology?


Is there some mechanism by which these applications could potentially  
leverage some of the CDNs out there today?  Have SPs who've deployed  
P2P-aware content-caching solutions on their own networks observed  
any benefits for this class of application?


Would it make sense for SPs to determine how many P2P 'heavy-hitters'  
they could afford to service in a given region of the topology and  
make a limited number of higher-cost accounts available to those  
willing to pay for the privilege of participating in these systems?   
Would moving heavy P2P users over to metered accounts help resolve  
some of the problems, assuming that even those metered accounts would  
have some QoS-type constraints in order to ensure they don't consume  
all available bandwidth?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: DDoS Question

2007-09-27 Thread Roland Dobbins



On Sep 28, 2007, at 6:49 AM, Ken Simpson wrote:


You might want to look at some kind of edge email
traffic shaping layer.


So that 'Curtis Blackman' is the only one getting SMTP through to  
Martin and his customers?


;>

Assuming nothing in the header which could be blocked by S/RTBH or  
ACLs (or a QoS policy), some of the various DDoS scrubbers available  
from different vendors may be able to deal with this via the  
anomalous TCP rates associated with these streams of spam, and/or  
regexp.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: Using Mobile Phone email addys for monitoring

2007-09-06 Thread Roland Dobbins



On Sep 6, 2007, at 1:46 PM, Rick Kunkel wrote:


Is SMTP to a mobile phone a fundamentally flawed way to do this?


Yes, IMHO - too many things that can fail, including potentially your own DCN, the SMTP gateway service from the mobile operator, et al.


I'd strongly recommend a direct NMS-to-SMS gateway, etc., OOB.  And of course, multiple methods in the event of failure of one of them.

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: How to get help from your ISP for security problems

2007-08-30 Thread Roland Dobbins



On Aug 30, 2007, at 11:27 AM, Sean Donelan wrote:

I was amazed when I met a lot of security researchers which didn't  
seem to know about all the different
things ISPs are doing to help customers avoid having their  
computers compromised by intrusions and repairing their computers  
afterwards


I've found that, with some notable exceptions, many academics who've  
evinced an interest in the networking security space seem to be quite  
incapable of making use of the various excellent search engines  
available to a) determine the current state of the art and b) check  
for prior art when proposing 'solutions' to various problems.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   I don't sound like nobody.

   -- Elvis Presley



Re: inter-domain link recovery

2007-08-15 Thread Roland Dobbins



On Aug 15, 2007, at 12:11 AM, Chengchen Hu wrote:

But in these cases, how is it recovered? Do the network operators just wait for the link to be physically repaired, or do they manually configure an alternative path by paying another network for transit service or finding a peering network?


Or they already have sufficient diversity in terms of peering/transit relationships and physical interconnectivity to handle the situation in question - depending upon the situation, of course.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: inter-domain link recovery

2007-08-15 Thread Roland Dobbins



On Aug 15, 2007, at 12:07 AM, Chengchen Hu wrote:

is it always possible for BGP to automatically find an alternative path when a failure occurs, if one exists? If not, what may be the causes?


Barring implementation bugs or network misconfigurations, I've never experienced an operational problem with BGP4 (or OSPF or EIGRP or IS-IS or RIPv2, for that matter) converging correctly due to a flaw in the routing protocol, if that's the gist of the first question.  There are many other factors external to the workings of the protocol itself which may affect routing convergence, of course; it really isn't practical to provide a meaningful answer to the second question in a reasonable amount of time, so please see the previous reply.


The questions that you're asking essentially boil down to 'How does  
the Internet work?', or, even more fundamentally, 'How does routing  
work?'.  I would strongly suggest familiarizing oneself with the  
reference materials cited in the previous reply, as they provide a  
good introduction to the fundamentals of this topic.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: inter-domain link recovery

2007-08-14 Thread Roland Dobbins



On Aug 14, 2007, at 9:06 PM, Chengchen Hu wrote:

1. Why does a BGP-like protocol sometimes fail to recover the path? Is it mainly because of the policies set by the ISPs and network operators?


There is an infinitude of possible answers to these questions which have nothing to do with BGP, per se; those answers are very subjective in nature.  Can you provide some specific examples (citing, say, publicly-available historical BGP tables available from route-views, RIPE, et al.) of an instance in which you believe that the BGP protocol itself is the culprit, along with the supporting data which indicate that the prefixes in question should've remained globally (for some value of 'globally') reachable?


Or are these questions more to do with the general provisioning of  
interconnection relationships, and not specific to the routing  
protocol(s) in question?


Physical connectivity to a specific point in a geographical region  
does not equate to logical connectivity to all the various networks  
in that larger region; SP networks (and customer networks, for that  
matter) are interconnected and exchange routing information (and, by  
implication, traffic) based upon various economic/contractual,  
technical/operational, and policy considerations which vary greatly  
from one instance to the next.  So, the assertion that there were  
multiple unaffected physical data links to/from Taiwan in the cited  
instance - leaving aside for the moment whether this was actually the  
case, or whether sufficient capacity existed in those links to  
service traffic to/from the prefixes in question - in and of itself  
has no bearing on whether or not the appropriate physical and logical  
connectivity was in place in the form of peering or transit  
relationships to allow continued global reachability of the prefixes  
in question.


2. What are the actions a network operator will take when such failures occur? Is it the case that they 1) find (an) alternative path(s); 2) negotiate with other ISPs if needed; 3) modify the policy and reroute the traffic? Which actions may be time-consuming?


All of the above, and all of the above.  Again, it's very  
situationally dependent.


3. There may be more than one alternative path; what are the criteria for the network operator to finally select one or some of them?


Proximate physical connectivity; capacity; economic/contractual,  
technical/operational, and policy considerations.


4. What information is required for a network operator to find the new route?


By 'find the new route', do you mean a new physical and logical  
interconnection to another SP?


The following references should help shed some light on the general  
principles involved:


<http://en.wikipedia.org/wiki/Peering>

<http://www.nanog.org/subjects.html#peering>

<http://www.aw-bc.com/catalog/academic/product/0,1144,0321127005,00.html>


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: [policy] When Tech Meets Policy...

2007-08-13 Thread Roland Dobbins



On Aug 13, 2007, at 2:06 PM, Chris L. Morrow wrote:

why don't the equivalent 'domain tasters' on the phone side exploit the ability to sign up 1-8XX numbers like mad and send the calls to their ad-music call centers?


1.  Maybe they do.

;>

2. People tend to be much more careful about punching numbers into a telephone than typing words on a keyboard, I think.  There's also no conceptual conflation of common typos with common telephone-number transpositions, I don't think (i.e., I'm unsure there's any such thing as a common number transposition, while there certainly is with linguistic constructs such as letters).


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: [policy] When Tech Meets Policy...

2007-08-13 Thread Roland Dobbins



On Aug 13, 2007, at 1:32 PM, Justin Scott wrote:


Usually it revolves around the marketing department not being in touch with the rest of the company, and the wrong/misspelled domain name ends up in a print/radio/tv ad that is about to go to thousands of people and cannot be changed.


There's a case to be made that a policy which results in organizations registering and owning domain names which are close to the intended domain name but represent a common typographical transposition is desirable from a security standpoint . . .


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: large organization nameservers sending icmp packets to dns servers.

2007-08-10 Thread Roland Dobbins



On Aug 10, 2007, at 4:41 PM, Paul Vixie wrote:

On the other hand, potentially larger messages may offer the necessary motivation for adding ACLs on recursive DNS, and deploying BCP 38.


i surely do hope so.  we need those ACLs and we need that  
deployment, and if
message size and TCP fallback is a motivator, then let's turn UP  
the volume.


There are so many larger and more immediate reasons for doing these things that I seriously doubt message size and TCP fallback on the DNS will have any impact at all in terms of motivating the non-motivated.


But, one can always hope.

;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Roland Dobbins



On Jul 24, 2007, at 11:01 AM, Joe Greco wrote:

Since what I'm talking about is mainly IDS-style inspection of packets, combined with optional redirection of candidate hosts to a local "cleanser" IRCD, the only real problem is dumping outbound packets somewhere where the /32 routing loop would be avoided.


On a large network, particularly a network which hasn't already  
deployed a sinkhole/re-injection system which covers all the relevant  
portions of this topology, that's a lot of work.  Transit networks  
are probably more likely than broadband access networks to've done  
so, IMHO (I've no knowledge of whether any of the relevant SPs have  
or haven't done so, that's just a general observation).



  Presumably it
isn't a substantial challenge for a network engineer to implement a
policy route for packets from that box to the closest transit (even
if it isn't an optimal path).  It's only IRC, after all.  ;-)


But we're talking about multiple destination points, and the static  
nature of [Cisco, at least] PBR doesn't always lend itself well to  
that kind of model.  Multipoint GRE potentially does, but again,  
that's more infrastructure to plan and deploy.




Similar in complexity, just without the networking angle.


In much the same way that flying an airplane is similar to driving a  
car, just without the 35,000-feet-in-the-air angle.


;>


I don't see how what I suggest could be anything other than a benefit
to the Internet community, when considering this situation.


I think sinkholing and re-injection is a very useful technique, and I constantly exhort operators who haven't done so to implement it.  My point was that if one hasn't implemented it, there's a substantial effort involved, one that's more complex than implementing DNS poisoning.




  If your
network is generating a gigabit of traffic towards an IRC server, and
is forced to run it through an IDS that only has 100Mbps ports, then
you've decreased the attack by 90%.


And one may've backhauled a gig of traffic across a portion of one's topology which can ill afford it, causing collateral damage.  That's always a concern with any kind of redirection technology; it's important to monitor.



  Your local clients break, because
they're suddenly seeing 90% packet loss to the IRC server, and you now
have a better incentive to fix the attack sources.


There are other incentives which are less traumatic, one hopes.

;>



Am I missing some angle there?  I haven't spent a lot of time considering it.


See above.

;>


Yes, there is some truth there, especially in networks made up of independent autonomous systems.  DNS redirection to a host would favor port redirection, so an undesirable side effect would be that all Cox customers connecting to irc.vel.net would have appeared to be coming from the same host.  It is less effort, but more invasive.


Yes - sometimes more invasive methods are necessary, depending upon  
one's goal.


The point is that there are other ways to conduct such an exercise.  In particular, I firmly believe that any time there is a decision to break legitimate services on the net, we have an obligation to seriously consider the alternatives and the consequences.


Yes, but it's also very easy to second-guess what others are doing  
when not in full possession of the facts.  None of us are, so it's  
probably a bit premature to speculate about someone else's chain of  
reasoning and then attack his logic, in the absence of any concrete  
information regarding same.


;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Culture eats strategy for breakfast.

   -- Ford Motor Company




Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Roland Dobbins



On Jul 24, 2007, at 8:59 AM, Joe Greco wrote:

But, hey, it can be done, and with an amount of effort that isn't substantially different from the amount of work Cox would have had to do to accomplish what they did.


Actually, it requires a bit more planning and effort, especially if one gets into sinkholing and then reinjecting, which necessitates breaking out of the /32 routing loop post-analysis/-proxy.  It can be and is done, but performing DNS poisoning with an irchoneyd setup is quite a bit easier.  And in terms of the amount of traffic headed towards the IRC servers in question - the miscreants DDoS one another's C&C servers all the time, so it pays to be careful what one sinkholes, backhauls, and re-injects, not only in terms of current traffic, but likely traffic.


In large networks, scale is also a barrier to deployment.  Leveraging  
DNS can provide a pretty large footprint over the entire topology for  
less effort, IMHO.


Also, it appears (I've no firsthand knowledge of this, only the same  
public discussions everyone else has seen) that the goal wasn't just  
to classify possibly-botted hosts, but to issue self-destruct  
commands for several bot variations which support this functionality.


[Note:  This is not intended as commentary as to whether or not the  
DNS poisoning in question was a Good or Bad Idea, just on the delta  
of effort and other operational considerations of DNS poisoning vs.  
sinkholing/re-injection.]


Public reports indicate that both Cox and Time-Warner performed this activity nearly simultaneously; was it a coordinated effort?  Was this a one-time, short-term measure to try to de-bot some hosts?  Does anyone have any insight as to whether this exercise has resulted in less undesirable activity on the networks in question?


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   Culture eats strategy for breakfast.

   -- Ford Motor Company





Re: FBI tells the public to call their ISP for help

2007-06-14 Thread Roland Dobbins



On Jun 14, 2007, at 12:21 PM, Sean Donelan wrote:

Read the Microsoft license agreement for WSUS, the information is out there.  It works for institutional license holders, but not for public ISPs.


Maybe I'm totally off-base, but I could've sworn I read something somewhere in the last year or so about Microsoft working with some SPs, or genning up a program to work with SPs, in order to offer this functionality to their customers, if they so choose?


Can anyone from Microsoft comment?

------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   Equo ne credite, Teucri.

  -- Laocoön





Re: FBI tells the public to call their ISP for help

2007-06-13 Thread Roland Dobbins



On Jun 13, 2007, at 11:49 AM, Sean Donelan wrote:


BTW, 1 million compromised computers is probably a low estimate.


Besides the 'call your ISP for technical help' blunder, there's  
actually more useful info, believe it or not, in the press release  
linked in the article:


<http://www.fbi.gov/pressrel/pressrel07/botnet061307.htm>

The FBI aren't claiming only 1 million infected machines; they're saying that this particular sweep involves up to a million botted hosts.


It seems to me that the larger inference is that law enforcement are  
taking the botnet problem more seriously, which is what a lot of  
folks in the operational community have been advocating for a long  
time.  While one aspect of the messaging is questionable, it seems to  
me that active national-level LEO involvement in this problem-space  
would be welcomed by many.


It's just a first step, and those are always the hardest to take.

----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

   Equo ne credite, Teucri.

  -- Laocoön





Re: Number of BGP routes a large ISP sees in total

2007-04-17 Thread Roland Dobbins



On Apr 17, 2007, at 4:45 PM, Yi Wang wrote:

I couldn't find information about the number of different routes  
for the same prefix

a (large) AS typically receives/learns.  Hints?


I'd suggest taking a look at the RIBs from routeviews.org or RIPE and  
performing an analysis on same.  Some SPs also offer public  
routeservers.


A few minutes with a search engine should prove fruitful in this regard.
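
As a rough sketch of the sort of analysis involved (assuming you've already flattened an MRT RIB dump into 'prefix|AS_PATH' lines with whatever parsing tool you prefer; the input file name here is hypothetical), counting distinct paths per prefix is only a few lines of Python:

#!/usr/bin/env python3
"""Count distinct AS paths per prefix from a pre-flattened RIB dump.
Assumes an input file of 'prefix|AS_PATH' lines produced by a separate
MRT/looking-glass parsing step; 'rib.txt' is a hypothetical file name."""

from collections import defaultdict

paths_per_prefix = defaultdict(set)

with open("rib.txt") as rib:
    for line in rib:
        try:
            prefix, as_path = line.strip().split("|", 1)
        except ValueError:
            continue  # skip malformed lines
        paths_per_prefix[prefix].add(as_path)

# Histogram: how many prefixes are seen with 1, 2, 3, ... distinct paths.
histogram = defaultdict(int)
for paths in paths_per_prefix.values():
    histogram[len(paths)] += 1

for count in sorted(histogram):
    print(f"{histogram[count]:>8} prefixes with {count} distinct path(s)")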

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: Question on 7.0.0.0/8

2007-04-15 Thread Roland Dobbins



On Apr 15, 2007, at 2:58 PM, <[EMAIL PROTECTED]> wrote:


And why don't they do all this with some 21st century technology?


Do they have the requisite staff and funding?

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-04-02 Thread Roland Dobbins



On Apr 2, 2007, at 4:56 PM, Douglas Otis wrote:

The recommendation was for registries to provide a preview of the  
next day's zone.  A preview can reduce the amount of protective  
data required, and increase the timeframe alloted to push  
correlated threat information to the edge.  This correlated threat  
information can act in a preemptive fashion to provide a  
significant improvement in security.  This added level of  
protection can help defeat expected and even unexpected threats  
that are becoming far too common as well.


OK, I understand this, but the previously-expressed comments about  
unintentional/undesirable consequences and not addressing the actual  
cause of the problem (inadequate and/or inefficient credit card  
processing and inefficient business processes), as well as the  
comments regarding practicalities and so forth, haven't really been  
addressed (pardon the pun), IMHO.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-04-02 Thread Roland Dobbins



On Apr 1, 2007, at 6:16 PM, Douglas Otis wrote:

Until Internet commerce requires some physical proof of identity,  
fraud

will continue.


As has already been stated, this is hardly a guarantee.

It seems to me that we're in danger of straying into déformation  
professionnelle.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Roland Dobbins



On Apr 1, 2007, at 6:16 PM, Douglas Otis wrote:


 Reacting to new domains after the fact is often too late.


What happens when they're wrong?

And who's 'they', btw?  What qualifications must 'they' have?  And what happens if a registrar disagrees with 'them'?  Or when 'they' are instructed by their governments to object to a domain because of its perceived lack of redeeming social value, or somesuch?


It seems to me as if we've just talked through the  
institutionalization of the Department of Domain Pre-Crime, with all  
that entails.  It could be argued that the proposed solution might be  
worse than the problem it's purporting to solve.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Roland Dobbins



On Apr 1, 2007, at 3:36 PM, Douglas Otis wrote:


By ensuring data published by registries can be previewed, all registrars would be affected equally.


But what is the probative value of the 'preview'?  By what criteria  
is the reputational quality of the domain assessed, and by whom?


It almost seems as if the base problem has to do with credit-card  
transaction validation and fraud reporting, rather than anything to  
do with the actual domain registration process?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Roland Dobbins



On Apr 1, 2007, at 11:51 AM, Douglas Otis wrote:


Instituting
notification of domain name additions before publishing would enable
several preemptive defenses not otherwise possible.


How does this help?  Are you saying that new domains are somehow to be judged based upon someone's interpretation as to whether or not the domain 'reads' well, or some other factor?  Who makes that determination, and by what criteria?


Or are you saying that notification of someone whose credit card has been stolen would somehow help?  How would the registrar know whether or not an email address given at the time of registration is valid for the purported registrant?  If there's some kind of 'click-to-validate' system put into place, the miscreants will simply automate the acceptance process (there's been a lot of work done on defeating CAPTCHAs, for example; even if they do it by hand, that would work).  And services like Mailinator can make it even easier for the miscreants due to their FIFO nature - no forensics possible.


Several registrars offer private domain registration as an option, as  
well.  How does this affect the notification model?


I generally agree with you that, when possible, time for analysis can be useful (though I'm unsure how that helps in this scenario, see above).  But one of the ways registrars compete is on timeliness; last night, for example, I registered a few domains on a whim.  If the registrar I chose to use had told me there was some delay in the process for vetting, I would've cancelled the order and gone somewhere else, because I wanted those domains -right then-, before someone else registered them.


This is all probably way off-topic for NANOG, anyways.

-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names (kill this thread)

2007-03-31 Thread Roland Dobbins



On Mar 31, 2007, at 11:16 PM, william(at)elan.net wrote:


 But DNS here is just a tool, bad guys could
easily build quite complex system of control by using active HTTP
such as XML-RPC, they are just not that sophisticated (yet) or
maybe they don't need anything but simple list of pointers.


Actually, the discussion isn't about the use of the DNS protocol itself as a botnet C&C channel (as you indicate, that's certainly doable), but rather about domains used as pointers to malware which is then distributed via various methods (the same goes for phishing), as well as the use of DNS to provide server agility for botnet controllers irrespective of the actual protocol used for C&C.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Roland Dobbins



On Mar 31, 2007, at 5:16 PM, Eric Brunner-Williams in Portland Maine  
wrote:


The temporal value of a domain that sinks phish click stream has some decay property, that is, today's phish name of (check your inbox) probably isn't very useful to the authors of the present phish (etc) decades from now, or even days from now.


Certainly, in a case where everything works according to plan.

What about the inputs to the system, however, and the potential for abuse?  Who decides the legitimacy/reputational value of a particular domain?  What about mistakes and collateral damage?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Roland Dobbins



On Apr 1, 2007, at 12:24 AM, Fergie wrote:


Care to expand?


Well, one reads about a) overly broad DMCA claims and b) overly broad  
DMCA takedowns (oftentimes with no direct causation between the two),  
and then a counterclaim process which seems to be somewhat ad hoc in  
nature, often inefficient, and sometimes ineffective.


I'm wondering if there are any lessons, positive or negative, to be  
drawn from the DMCA experience which may be relevant when discussing  
the desirability/efficacy/workability/potential for abuse/possible  
collateral damage/legal liabilities of a domain takedown regime?


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Roland Dobbins



On Mar 31, 2007, at 11:36 PM, Fergie wrote:


Would love to hear arguments to the contrary.


Are there any similarities between the current system involving DMCA  
takedown notices/counterclaims and what's being posited?


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Roland Dobbins



On Mar 31, 2007, at 9:20 AM, Paul Vixie wrote:

fundamentally, this isn't a dns technical problem, and using dns technology to solve it will either not work or set a dangerous precedent.  and since the data is authentic, some day, dnssec will make this kind of poison impossible.


Some SPs are doing DNS manipulation/poisoning now for various reasons, with varying degrees of utility/annoyance.  If those SPs choose to manipulate their own DNS in a way which affects their own users, that's fine; if the users don't like it, they can go elsewhere.  Some enterprises are doing the same kinds of things, with the same options available to the user population (though it's not always quite as easy to 'go elsewhere', heh).


What SPs or enterprises choose to do for/to their own user bases is  
between them and their users.  When we start talking about involving  
registries, etc., that's when we've clearly jumped the shark.


There is no 'emergency', any more than there was an 'emergency' last  
week or the week before or the month before that - after a while, a  
state of 'emergency' becomes the norm, and thus the bar is raised.   
It's merely business as usual, and no extraordinary measures are  
required.  Yes, there are ongoing, long-term problems, but they need  
rationally-thought-out, long-term solutions.


'Think globally, act locally' seems a good principle to keep in mind, along with 'Be liberal in what you accept, and conservative in what you send'.  Much unnecessary grief and gnashing of teeth would be avoided if folks worried about what was going on in their own networks vs. grandiose, 'fix-the-Internet'-type 'solutions' (the appeal of the latter is that it requires no actual useful effort or sacrifice on one's own part, merely heated rhetoric and a pointed finger, which appeals to some of the least attractive aspects of human nature).


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: ICMP unreachables, code 9,10,13

2007-03-28 Thread Roland Dobbins



On Mar 28, 2007, at 3:57 PM, Christos Papadopoulos wrote:


Responses with these codes seem to imply the presence of a firewall.
Is this assumption correct or are these codes meaningless?


Not just firewalls - ACLs on routers, too.

A common practice is to either turn off sending of unreachables or to  
at least rate-limit them to preserve CPU on the router.
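
If you want to check what a given path is actually returning, a quick Python sketch along these lines (the capture file name is hypothetical, and scapy is a third-party dependency) tallies the destination-unreachable codes seen in a trace; a preponderance of codes 9, 10, or 13 points at an ACL or firewall rather than a plain routing failure:

#!/usr/bin/env python3
"""Tally ICMP destination-unreachable codes seen in a packet capture.
Codes 9/10/13 (administratively prohibited/filtered) usually indicate
an ACL or firewall rather than an ordinary routing failure.
'trace.pcap' is a hypothetical capture file; requires the third-party
'scapy' package."""

from collections import Counter
from scapy.all import rdpcap, ICMP  # pip install scapy

CODE_NAMES = {
    9: "network administratively prohibited",
    10: "host administratively prohibited",
    13: "communication administratively filtered",
}

counts = Counter()
for pkt in rdpcap("trace.pcap"):
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 3:  # destination unreachable
        counts[pkt[ICMP].code] += 1

for code in sorted(counts):
    label = CODE_NAMES.get(code, "other unreachable code")
    print(f"code {code:>2} ({label}): {counts[code]} packets")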


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: Netops list

2007-03-28 Thread Roland Dobbins



On Mar 28, 2007, at 1:20 PM, Jared Mauch wrote:


I'm not aware of anyone yelling at folks for technical
discussions on that list.


Oh, I thought he meant the NOC list you maintain, not an email list,  
my mistake.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: Netops list

2007-03-28 Thread Roland Dobbins



On Mar 28, 2007, at 12:54 PM, Steve Sobol wrote:

If I am seeing a routing problem, is Jared's list an appropriate place to check for contacts at the ISP with the problem?


One hopes so.

;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: TCP and WAN issue

2007-03-27 Thread Roland Dobbins



On Mar 27, 2007, at 3:24 PM, <[EMAIL PROTECTED]> wrote:

Personally, I would prefer to see more people fixing the infrastructure rather than accepting it as a limit.


Concur - what I meant is, 'can support when fully optimized'.

;>


Tweaking apps generally turns out to be heavy-duty stuff with lots of release control and testing. Also, the applications programmers generally have a poor understanding of network issues. If you can separate the applications stuff from the data transfer stuff, and tackle the network issues first, then you will have an easier time of it.


Concur - app-tweaking should be the penultimate approach (and then,  
maybe, look at boxes, if there's an issue which can't be resolved the  
other ways; but my guess is that doing the BCPs should yield good  
results).


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: TCP and WAN issue

2007-03-27 Thread Roland Dobbins



On Mar 27, 2007, at 2:41 PM, Philip Lavine wrote:

This is the exact issue. I can only get between 5-7 Mbps. So the  
question is really what incremental performance gain do WAN  
accelerators/optimizers offer?


I don't know if you'd get much of a performance benefit from this approach.  Bandwidth savings, possibly, depending upon your application.  We have a box called WAAS which is a WAN optimizer, as do several other vendors (search online for 'wan optimizer' or 'wan optimization'; you should get a lot of hits), but I have no experience with these types of boxes.


Can registry/OS tweaks really make a significant difference? Because so far, with all the "speed enhancements" I have deployed to the registry based on some of the aforementioned sites, I have seen no improvement.


I'm not a Windows person, so I don't know the answer to this; I know  
you can do a fair amount of optimization with other OSes, depending  
upon the OS and your NICs.  The MTU, MSS, window-size stuff mentioned  
previously all applies, as do jumbo frames, if your end-stations and  
network infrastructure support them.


What you want to see is large packets, as large as your end-to-end  
infrastructure can support.


I  guess I am looking for a product that as a wrapper can multiplex  
a single socket connection.


Your application should be able to do that, potentially, and as other  
folks mentioned, your app can potentially be tweaked to open up  
multiple connections.  I think there are also NICs which do something  
of this sort, but it's not something I've personally used (maybe  
others have experiences they can relate?).


My general advice would be to look at all the things mentioned  
previously you can potentially do with your existing OSes, network  
infrastructure, and apps, and do them prior to looking at specialized  
boxes.
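
For a rough sense of why the window-size tuning matters here (the RTT and window values below are illustrative assumptions, not measurements of this particular link), a quick back-of-the-envelope Python calculation shows how the TCP receive window caps single-connection throughput regardless of how fat the pipe is:

#!/usr/bin/env python3
"""Back-of-the-envelope TCP throughput vs. window size for a long-haul
DS3.  RTT and window sizes are illustrative values, not measurements."""

DS3_BPS = 45_000_000          # nominal DS3 rate, roughly
RTT_SECONDS = 0.070           # assumed coast-to-coast RTT of ~70 ms

def max_throughput_bps(window_bytes, rtt):
    # A single TCP connection can have at most one window in flight per RTT.
    return window_bytes * 8 / rtt

def required_window_bytes(link_bps, rtt):
    # Bandwidth-delay product: the window needed to keep the pipe full.
    return link_bps * rtt / 8

for window in (17_520, 65_535, 262_144, 1_048_576):
    mbps = max_throughput_bps(window, RTT_SECONDS) / 1e6
    print(f"window {window:>9} bytes -> at most {mbps:6.1f} Mb/s")

print(f"window needed to fill a DS3 at {RTT_SECONDS*1000:.0f} ms RTT: "
      f"{required_window_bytes(DS3_BPS, RTT_SECONDS):,.0f} bytes")

With a default 64 KB window and that kind of RTT you land right around 7 Mb/s, which is consistent with the 5-7 Mbps figure mentioned earlier in the thread, no matter how big the pipe is.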


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: TCP and WAN issue

2007-03-27 Thread Roland Dobbins



On Mar 27, 2007, at 1:26 PM, Philip Lavine wrote:


inherent in Windows (registry tweaks seem to not function too well)?


You should certainly look at your MTU and MSS values to ensure there  
are no difficulties of that sort.  Is there any other factor such as  
perhaps slow DNS server (or other lookup-type services) responses  
which can be contributing to the perceived slowness?  How about  
tuning the I/O buffers on the relevant routers?  Can you tune the I/O  
buffers on the servers?


And what about your link utilization?  Is the DS3 sufficient?  Take a  
look at pps and bps, and take a look at your average packet sizes  
(NetFlow can help with this).  Are your apps sending lots of smaller  
packets, or are you getting nice, large packet-sizes?


Finally, if none of the above help, you could look at something like  
SCTP, if your servers and apps support it.  But I'd go through the  
above exercise, first.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-14 Thread Roland Dobbins



On Mar 14, 2007, at 8:23 PM, Frank Bulk wrote:


USF has made it possible for us to
serve DSL to almost every customer in our exchanges.


I'm glad to hear it - the reports of how that fund is (un)used are almost overwhelmingly negative; I'm glad some folks, somewhere, are benefiting from it.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: NOC Personel Question (Possibly OT)

2007-03-14 Thread Roland Dobbins



On Mar 14, 2007, at 7:07 PM, Justin M. Streiner wrote:


NOC (insert generic group name here)?


NOC NOC?

[Who's there?]

;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-14 Thread Roland Dobbins



On Mar 14, 2007, at 11:22 AM, Bora Akyol wrote:


Unfortunately, neither the telcos nor the cable companies quite get
this. They are stuck to their "channels" and everything is priced in
terms of channels.


To be fair, part of this onus is on the content developers themselves  
- after all, it's easier to produce something and then have a channel  
take care of distribution for you, rather than having to figure it  
out for yourself.  And of course, the channels don't want their  
business going away, either.  To top it all off, many SPs want to  
become the 'channel' for their customers.


Just another example of how network effects tend to lead to  
disintermediation, which is of course extremely disruptive to  
traditional distribution models.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 11:19 AM, Daniel Senie wrote:

A universal service charge could be applied to all bills, with the  
funds going to subsidize rural areas.


This is already done in the U.S., to no discernible effect.

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 10:11 AM, Todd Vierling wrote:


There are other technologies better
suited to rural deployment, such as satellite, powerline, some cable,
or even re-use of the previous generation's ADSL gear once metro areas
are upgraded.


Or something like WiMAX?

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 10:10 AM, Daniel Senie wrote:

As with the deployment of telephone service a century ago, the  
ubiquitious availability of broadband service will require  
government involvement in the form of fees on some and subsidies  
for others (might be a good use for the funds Massachusetts is  
trying to extract from Verizon for property tax on telephone poles,  
I suppose). Otherwise, we'll see the broadband providers continue  
to cherry pick the communities to service, and leave others in the  
digital dustbowl.


Various rural phone companies aside, the majority of this was  
accomplished in the U.S. via a regulated monopoly, and in many other  
countries via a government-owned regulated monopoly.  Do you believe  
that's necessary and/or desirable in order to make broadband  
ubiquitous?  How do longer-range wireless technologies like WiMAX  
potentially impact the equation?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 10:08 AM, Matthew F. Ringel wrote:


DSL[1] and DOCSIS require active cooperation from the carrier.  Ergo,
tech advancement in the carrier-assisted data transport arena is
dependent on the carrier cooperating.


Are infrastructure build-out costs any less of an issue for consumer  
broadband SPs who offer metered service?  Is their revenue model more  
amenable to doing capacity-expansion buildouts, vs. all-you-can-eat  
(except when you eat too much, heh) revenue models?


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 9:11 AM, Joe Abley wrote:

So long as most torrent clients are used to share content  
illicitly, that doesn't sound like much of a business driver for  
the DSL/CATV ISP. And so long as the average user doesn't have an  
alternative provider which gives better torrent sharing  
capabilities, there doesn't seem to be much of a risk of churn  
because of being torrent-unfriendly


er, that's why I put a smiley below it.  Like this:

;>

In all seriousness, DVR-on-demand type services offered by the SPs themselves would be one driver.  Right now, they're all overlay networks which the SPs don't view as being directly monetizable.  If/when they offer such services themselves, however, I predict this will change.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Roland Dobbins



On Mar 13, 2007, at 8:17 AM, Chris L. Morrow wrote:


what business drivers are there to put more bits on the wire to
the end user?


BitTorrent.

;>

And on-demand DVR-type things which I believe will grow in  
popularity.  Of course, most of those are overlays which the SPs  
themselves don't offer; when they wish to do so, it'll become an  
issue, IMHO.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Words that come from a machine have no soul.

  -- Duong Van Ngo



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-02 Thread Roland Dobbins



On Mar 2, 2007, at 1:18 PM, Sean Donelan wrote:

How much of a problem is traffic from unallocated addresses?  Backbone operators probably have NetFlow data which they could mine to find out.
On the other hand, how much of a problem are obsolete bogon filters causing every time IANA delegates another block to an RIR?

Or by the way, how much spoofed traffic uses allocated addresses?


No one has done the digging required to answer any of these  
questions, unfortunately.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-02 Thread Roland Dobbins



On Mar 2, 2007, at 7:31 AM, <[EMAIL PROTECTED]> wrote:


Sometimes, network operators have to take the bull
by the horns and develop their own systems to do a job that vendors
simply don't understand.


Concur - but it seems that many are looking for someone else to do this for them (or, perhaps, are using the lack of someone to do it for them as an excuse to do nothing at all).


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-02 Thread Roland Dobbins



On Mar 2, 2007, at 4:12 AM, Robert E. Seastrom wrote:

uRPF isn't always adequate for all antispoofing cases, as you know.   
What about iACLs?




bogon
filtering by end sites is the sort of thing that is recommended by
"experts" for whom "security" is an end in and of itself, rather than
a component of the arsenal you bring forth (backups, DR, spares,
multihoming, etc) to improve uptime and business availability and
decrease potential risk.


I don't claim to be an 'expert' at anything, but I most certainly -do-
recommend bogon filtering, along with multihoming, infrastructure
self-protection measures (iACLs, rACLs, CoPP, core-hiding, et. al.),  
various antispoofing techniques (all the way down to layer-2 where  
possible), instrumentation and telemetry, anomaly-detection, reaction  
tools, layer-7 things like backup and DR, layer-8 things like  
sparing, and so forth.  And my goal isn't 'security', it's a  
resilient Internet infrastructure which keeps the packets flowing so  
that the users can access the applications which are the point of the  
whole exercise.


I'm not the only one who thinks like that, either.  So, painting us  
all with the same broad brush hardly seems fair, does it?


Rejecting bogon filtering out of hand because it isn't effortless to  
maintain doesn't make much sense to me.  After all, if one's being a  
good Internet neighbor, one's doing ingress filtering (routes and  
packets) from one's customers and egress filtering (routes and  
packets) to one's peers/transits/customers, anyway; one will see
more churn there than in the bogon lists.


It's also part of the very basic protections which ought to be  
provided to one's customers.  No, the SP can't be the 'Internet  
firewall' for customers, and, no, the SP can't be expected to keep  
the customer magically protected from all the Bad Things which can  
happen to him (and for free, naturally), but protecting one's  
customers from being spammed/packeted from purported source addresses  
which are by definition nonsensical (as well as protecting everyone  
else's customers from same) doesn't seem much to ask.


What's needed here are better/easier/less brittle mechanisms for  
same.  Until such time as they're invented and deployed, let's not  
make the perfect the enemy of the merely good, yes?
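
As a strawman illustration of the sort of mechanism I mean - a sketch
only, in which the input filename, the prefix-list name, and the
generated syntax are all just examples - a few lines of scripting
against a locally saved copy of a current bogon list (e.g., the Team
Cymru lists mentioned elsewhere in this thread) can regenerate the
corresponding filter entries, so that the filters track new
allocations instead of ossifying:

import ipaddress
import sys

def main(path="bogons.txt"):    # hypothetical local copy of a bogon list,
    name = "BOGONS"             # one prefix per line; hypothetical list name
    seq = 5
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        net = ipaddress.ip_network(line, strict=False)  # validate the entry
        print("ip prefix-list %s seq %d deny %s le 32" % (name, seq, net))
        seq += 5
    print("ip prefix-list %s seq %d permit 0.0.0.0/0 le 32" % (name, seq))

if __name__ == "__main__":
    main(*sys.argv[1:])

Diffing the script's output against the deployed configuration before
pushing it is the 'less brittle' part - the point is simply that the
maintenance burden is scriptable, not that this particular fragment is
the way to do it.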


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-02 Thread Roland Dobbins



On Mar 2, 2007, at 12:55 AM, <[EMAIL PROTECTED]> wrote:


One might argue that if a company is not capable of
setting a policy and managing that policy, then you
should not implement the policy at all.


I think this really goes to the heart of the matter - the inability/ 
unwillingness to prioritize and allocate resources to properly  
implement 'good neighbor' policies which are not perceived as having  
any financial benefit to the organization.


So, can this sort of activity somehow be monetized by the SPs,  
remedied by the vendors, or is it a matter for the standards bodies  
(or some combination thereof)?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-01 Thread Roland Dobbins



On Mar 1, 2007, at 1:10 PM, Chris L. Morrow wrote:


So... again, are bogon filters 'in the core' useful? (call 'core' some
network not yours)


Antispoofing is 'static' and therefore brittle in nature - people
change jobs, etc. - so, we shouldn't do antispoofing, either?


Enterprises typically don't do this stuff.  They should, and we work  
to educate them, but it's even more difficult in that space than in  
the SP space.


A question I have is whether or not this class of problems is more of  
a 'need the vendors to come up with better/easier functionality' type  
of problem, a 'need the SPs to do a better job with this' kind of  
problem, or is it more in the realm of a 'TCP/IP in its current  
incarnation(s) lends itself to these kinds of issues' type of problem?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-01 Thread Roland Dobbins



On Mar 1, 2007, at 11:33 AM, Chris L. Morrow wrote:

I absolutely agree, but without some tool or process to follow... we
get stuck acls/filters and no idea that there is a problem until it's
far into the problem :(


There are canonical email lists on which these changes are announced,  
and Team Cymru maintain examples which are updated regularly.


But, of course, you know this, so I suspect somehow you're trying to  
make a different kind of point.


;>

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-01 Thread Roland Dobbins



On Mar 1, 2007, at 6:22 AM, Chris L. Morrow wrote:


So, where are static bogon filters appropriate?


#define static

Obviously, one's bogon filters (both for iACLs and for prefix-lists  
or whatever other mechanism one uses to filter the route  
announcements one accepts) must be dynamic enough in nature to  
accommodate updates when new blocks are cracked open.  'Static'  
shouldn't be read as 'eternal', although that's often what ends up  
happening.


;>

-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: botnets: web servers, end-systems and Vint Cerf [LONG, sorry]

2007-02-19 Thread Roland Dobbins





I look forward to your paper on "the end to end concept, and
why it doesn't
apply to email" ;)


I think the problem here is that people invoke something they think
of as 'the end-to-end principle', but which actually isn't.


from <http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf>:


-

. . . functions placed at low levels of a system may be redundant or
of little value when compared with the cost of providing them at that
low level.

-

*That* is the actual 'end-to-end principle'.  The imposition of  
hierarchy in application-layer email routing (or DNS infrastructure,  
etc.) has nothing to do with the actual end-to-end principle, except  
as a good example of honoring it.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: botnets: web servers, end-systems and Vint Cerf

2007-02-19 Thread Roland Dobbins



On Feb 19, 2007, at 8:06 AM, <[EMAIL PROTECTED]> wrote:



And if the system designer is creative enough, then
this firewall thingy which is reputed to protect you from bad stuff,
would also download and install the latest patches to protect against
browser exploits. If this is all run on a separate CPU it can also do
some pretty in-depth inspection and do things like block .exe
attachments in email.


If we had some cheese, we could make a ham-and-cheese sandwich, if we  
had some ham.


;>

This discussion started out with an assertion that the security
problem for general-purpose OS endpoints had been 'solved'.  It in
fact has not been solved by any reasonable definition of 'solved' -
there are basic layer-7 problems with the fundamentals such as HTTP
(which to most users is 'the Internet'), and while there are various
efforts to attempt to mitigate these problems via the insertion of
inspection/removal by network devices, these efforts are in their
infancy and also introduce other complexities which are corollaries
of the canonical end-to-end principle (vs. the common misperception
of what the end-to-end principle actually encompasses).


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: botnets: web servers, end-systems and Vint Cerf [LONG, sorry]

2007-02-19 Thread Roland Dobbins



On Feb 19, 2007, at 6:04 AM, Simon Waters wrote:

I look forward to your paper on "the end to end concept, and why it
doesn't apply to email"


The end-to-end principle has no bearing upon this discussion at all,  
unless you're referring to firewalls/NATs.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: botnets: web servers, end-systems and Vint Cerf

2007-02-19 Thread Roland Dobbins



On Feb 19, 2007, at 1:24 AM, <[EMAIL PROTECTED]> wrote:


You need, at minimum, weeks of training in order to safely operate an
automobile. But to safely operate on the Internet, you simply open the
box, plug the DSL cable into the DSL port of the
NAT/firewall/switch/gateway box, plug the brand new unsecured computer
into the Ethernet port, and you can now safely operate on the  
Internet.


That's right, you've made my point for me.  Weeks and weeks of training.

People don't need weeks and weeks of training to operate a  
television, or a blender, or even a videogame console.



The technical problem has been solved for a long, long time. The same
factors which drive down the cost of computers, have also driven down
the cost of NAT/firewall devices to the point where they could
actually be integrated right into the PC's hardware.


NATting firewalls don't help at all with email-delivered malware,  
browser exploits, etc.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: botnets: web servers, end-systems and Vint Cerf

2007-02-16 Thread Roland Dobbins



On Feb 16, 2007, at 9:12 AM, <[EMAIL PROTECTED]> wrote:


It is regularly done with servers connected to the Internet.
There is no *COMPUTING* problem or technical problem.


I beg to differ.  Yes, it is possible for tech-savvy users to secure  
their machines pretty effectively.  But the level of technical  
knowledge required to do so is completely out of line with, say, the  
level of automotive knowledge required to safely operate an automobile.


The problem of the 100 million machines is a social or business  
problem.

We know how they can be secured, but the solution is not being
implemented.


We know how -people with specialized knowledge- can secure them, not  
ordinary people - and I submit that we in fact do not know how to  
clean and validate compromised systems running modern general-purpose  
operating systems, and that the only sane option is re-installation of OS
and applications from scratch.


There have been very real strides in increasing the default security  
posture of general-purpose operating systems and applications in  
recent years, but there is still a large gap between what a consumer
ought reasonably to be able to expect in terms of security and
resiliency from his operating systems/applications and what he
actually gets.  This gap has been narrowed, but is still quite wide,
and will be for the foreseeable future (witness the current  
renaissance in the area of browser/HTML/XSS/Javascript  
vulnerabilities as an example of how the miscreants can change their  
focus as needs must).


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: what the heck do i do now?

2007-02-01 Thread Roland Dobbins



On Jan 31, 2007, at 7:04 PM, Matthew Kaufman wrote:

(As an example, consider what happens *to you* if a hospital stops  
getting emailed results back from their outside laboratory service  
because their "email firewall" is checking your server, and someone  
dies as a result of the delay)


Moral issues aside, I'd love to see this litigated.

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

  The telephone demands complete participation.

  -- Marshall McLuhan



Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 5:48 AM, <[EMAIL PROTECTED]> wrote:


The whole address conservation mantra has turned out to be a lot
of smoke and mirrors anyway.


At the time, yes, this particular issue was overhyped, just as the  
routing-table-expansion issue was underhyped.  As we move to an  
'Internet of Things', however, it will become manifest.


With regards to the perceived advantages and disadvantages of IPv6 as  
it is currently defined, there is a wide range of opinion on the
subject.  For many, the 'still-need-NAT-under-IPv6 vs. IPv6- 
eliminates-the-need-for-NAT' debate is of minor importance compared  
to more fundamental questions.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 4:58 AM, Mark Smith wrote:

The problem is that you can't be sure that if you use RFC1918 today
you won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having world
wide unique, but not necessarily publically visible addressing.



That's what I meant by the 'as long as one is sure one isn't buying  
trouble down the road' part.  Having encountered problems with  
overlapping address space many times in the past, I'm quite aware of  
the pain, thanks.


;>

RFC1918 was created for a reason, and it is used (and misused, we all  
understand that) today by many network operators for a reason.  It is  
up to the architects and operators of networks to determine whether  
or not they should make use of globally-unique addresses or RFC1918  
addresses on a case-by-case basis; making use of RFC1918 addressing  
is not an inherently stupid course of action; its appropriateness in
any given situation is entirely subjective.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 12:33 AM, <[EMAIL PROTECTED]> wrote:


Just remember, IP addresses are *NOT* Internet addresses.
They are Internet Protocol addresses. Connection to the
Internet and public announcement of prefixes are totally
irrelevant.


Of course I understand this, but I also understand that if one can  
get away with RFC1918 addresses on a non-Internet-connected network,  
it's not a bad idea to do so in and of itself; quite the opposite, in  
fact, as long as one is sure one isn't buying trouble down the road.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-23 Thread Roland Dobbins



On Jan 23, 2007, at 3:38 PM, Adrian Chadd wrote:


The majority of them seem to be government organisations too. :)


We also see this with extranet/supply-chain-type connectivity between  
large companies who have overlapping address space, and I'm afraid  
it's only going to become more common as more of these types of  
relationships are established.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-23 Thread Roland Dobbins



On Jan 23, 2007, at 11:51 AM, Jeroen Massar wrote:


  a) use global addresses for everything,


Everything which needs to be accessed globally, sure.  But I don't  
see this as a hard and fast requirement, it's up to the user based  
upon his projected use.



  b) use proper acl's),


Of course.


  c) toys exist that some people clearly don't know about yet ;)


Indeed.

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-22 Thread Roland Dobbins



On Jan 22, 2007, at 10:49 AM, Jeroen Massar wrote:


But which address space do you put in the network behind the VPN?

RFC1918!? Oh, already using that on the DSL link to where you are
VPN'ing in from. oopsy ;)


Actually, NBD, because you can handle that with a VPN client which  
does a virtual adaptor-type of deal and overlapping address space  
doesn't matter, because once you're in the tunnel, you're not sending/ 
receiving outside of the tunnel.  Port-forwarding and NAT (ugly, but  
people do it) can apply, too.




That is the case for globally unique addresses and the reason why
banks that use RFC1918 don't like it when they need to merge etc etc etc...


Sure, and then you get into double-NATting and who redistributes what  
routes into whose IGP and all that kind of jazz (it's a big problem
on extranet-type connections, too).  To be clear, all I was saying is  
that the subsidiary point that there are things which don't belong on  
the global Internet is a valid one, and entirely separate from any  
discussions of universal uniqueness in terms of address-space, as  
there are (ugly, non-scalable, brittle, but available) ways to work  
around such problems, in many cases.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-22 Thread Roland Dobbins



On Jan 22, 2007, at 9:38 AM, Jeroen Massar wrote:


But I guess it is nonsense.


This is what ssh tunnels and/or VPN are for, IMHO.  It's perfectly  
legitimate to construct private networks (DCN/OOB nets, anyone?  How  
about that IV flow-control monitor which determines how much  
antibiotics you're getting per hour after your open-heart surgery?)  
for purposes which aren't suited to direct connectivity to/from  
anyone on the global Internet.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 8:10 PM, Mark Smith wrote:

I think you're more or less describing what already Akamai do -  
they're
just not doing it for authorised P2P protocol distributed content  
(yet?).


Yes, and P2P might make sense for them to explore - but a) it doesn't  
help SPs smooth out bandwidth 'hotspots' in and around their  access  
networks due to P2P activity, b) doesn't bring the content out to the  
very edges of the access network, where the users are, and c) isn't  
something which can be woven together out of more or less off-the- 
shelf technology with the users themselves supplying the  
infrastructure and paying for (and being compensated for, a la FON or  
SpeakEasy's WiFi sharing program) the access bandwidth.


It seems to me that a FON-/Speakeasy-type bandwidth-charge  
compensation model for end-user P2P caching and distribution might be  
an interesting approach for SPs to consider, as it would reduce the  
CAPEX and OPEX for caching services and encourage the users  
themselves to subsidize the bandwidth costs to one degree or another.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 7:38 PM, Mark Smith wrote:


Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.


I understand it's not what you meant; my point is that if the SPs  
don't figure out how to do this, the customers will, by whatever  
means they have at their disposal, with always-on devices which do  
the distribution and seeding and caching automagically, and with a  
revenue model attached.  I foresee consumer-level devices like this  
little Asus router which not only act as torrent clients/servers, but  
which also are woven together into caches with something like PNRP as  
the location service (and perhaps an innovative content producer/ 
distributor acting as a billing overlay provider a la FON in order to
monetize same, leaving the SP with nothing).


The advantage of providing caching services is that they both help  
preserve scarce resources and result in a more pleasing user
experience.  As already pointed out, CAPEX/OPEX along with insertion  
into the network are the current barriers, along with potential legal  
liabilities; cooperation between content providers and SPs could help  
alleviate some of these problems and make it a more attractive model,  
and help fund this kind of infrastructure in order to make more  
efficient use of bandwidth at various points in the topology.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:


It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic localising solution
which is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I'd think you'd get quite a bit of
interest.


I think there's interest from the consumer level, already:

http://torrentfreak.com/review-the-wireless-BitTorrent-router/

It's early days, but if this becomes the norm, then the end-users  
themselves will end up doing the caching.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 1:02 PM, Marshall Eubanks wrote:


as long as humans are the primary consumers of
bandwidth.


This is an interesting phrase.  Did you mean it T-I-C, or are you  
speculating that M2M (machine-to-machine) communications will at some  
point rival/overtake bandwidth consumption which is interactively  
triggered by human actions?  Right now TiVo will record television  
programs it thinks you might like; what effect will this type of  
technology have on IPTV, more mature P2P systems, etc.?


It would be very interesting to try and determine how much automated  
bandwidth consumption is taking place now and try to extrapolate some
trends; a good topic for a PhD dissertation, IMHO.


;>

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 1:00 PM, David Ulevitch wrote:

maybe we'll see "eyeball" networks start to peer with each other as  
they start sourcing more and more of the bits. Maybe that's already  
happening.


At some point, I think MANET/mesh/roofnets/Zigbee/etc. are going to  
start fulfilling this role, at least in part.


Which should give NSPs something to think about in terms of how they  
can embrace this model and make money with it.  Getting your  
customers to build and maintain your infrastructure for you is a  
pretty powerful incentive, IMHO.


http://en.fon.com/ (not MANET/mesh, but may be going there, at some  
point)


http://www.speakeasy.net/netshare/terms/ (an NSP who are embracing a  
sharing model)


http://www.netequality.org/ (nonprofit mesh)

http://www.cuwin.net/about (mesh community)

http://www.wi-fiplanet.com/columns/article.php/3634931 (roofnet SP/ 
facilitator)


http://www.meraki.net/

http://www.microsoft.com/technet/network/p2p/pnrp.mspx (built into  
Vista, enabled by default, I think)



-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 11:55 AM, Randy Bush wrote:


the question to me is whether isps and end user borders (universities,
large enterprises, ...) will learn to embrace this as opposed to
fighting it; i.e. find a business model that embraces delivering what
the customer wants as opposed to winging and warring against it.


I believe that it will end up becoming the norm, as it's a form of  
cost-shifting from content providers to NSPs and end-users - but for  
it to really take off, the tension between content-providers and  
their customers (i.e., crippling DRM) needs to be resolved.


There have been some experiments in U.S. universities over the last  
couple of years in which private music-sharing services have been run  
by the universities themselves, and the students pay a fee for access  
to said music.  I haven't seen any studies which provide a clue as to  
whether or not these experiments have been successful (for some value  
of 'successful'); my suspicion is that crippling DRM combined with a  
lack of variety may have been 'features' of these systems, which is  
not a good test.


OTOH, emusic.com seem to be going great guns with non-DRMed .mp3s and  
a subscription model; perhaps (an official) P2P distribution might be  
a logical next step for a service of this type.  I think it would be  
a very interesting experiment.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Prefix list formats: advice needed

2007-01-14 Thread Roland Dobbins



On Jan 14, 2007, at 2:12 PM, Bill Woodcock wrote:

Howdy.  For a tool we're writing, we need to be able to accept  
lists of prefixes people might want to BGP advertise.


Bill, what is the purpose of your tool?  Does it parse router configs?

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Roland Dobbins



On Jan 13, 2007, at 3:01 PM, Stephen Sprunk wrote:


Consumers, OTOH, want to buy _programs_, not _channels_.


This is a very important point - perceived disintermediation,  
perceived unbundling, ad reduction/elimination, and timeshifting are  
the main reasons that DVRs are so popular (and now, placeshifting  
with things like Slingbox and Tivo2Go, though it's very early days in  
that regard).  So, at least on the face of it, there appears to be a  
high degree of congruence between the things which make DVRs  
attractive and things which make P2P attractive.


As to an earlier comment about video editing in order to remove ads,  
this is apparently the norm in the world of people who are heavy  
uploaders/crossloaders of video content via P2P systems.  It seems  
there are different 'crews' who compete to produce a 'quality  
product' in terms of the quality of the encoding, compression,  
bundling/remixing, etc.; it's very reminiscent of the 'warez' scene  
in that regard.


I believe that many of the people engaged in the above process do so  
because it's become a point of pride with them in the social circles  
they inhabit, again a la the warez community.  It's an interesting  
question as to whether or not the energy and 'professional pride' of  
this group of people could somehow be harnessed in order to provide  
and distribute content legally (as almost all of what people really  
want seems to be infringing content under the current standard  
model), and monetized so that they receive compensation and  
essentially act as the packaging and distribution arm for content  
providers willing to try such a model.  A related question is just  
how important the perceived social cachet of editing/rebundling/ 
redistributing -infringing- content is to them, and whether  
normalizing this behavior from a legal standpoint would increase or  
decrease the motivation of the 'crews' to continue providing these  
services in a legitimized commercial environment.


As a side note, it seems there's a growing phenomenon of 'upload  
cheating' taking place in the BitTorrent space, with clients such as  
BitTyrant and BitThief becoming more and more popular while at the  
same time disrupting the distribution economies of P2P networks.   
This has caused a great deal of consternation in the infringing- 
oriented P2P community of interest, with the developers/operators of  
various BitTorrent-type systems such as BitComet working at  
developing methods of detecting and blocking downloading from users  
who 'cheat' in this fashion; it is instructive (and more than a  
little ironic) to watch as various elements within the infringing- 
oriented P2P community attempt to outwit and police one another's  
behavior, especially when compared/contrasted with the same classes  
of ongoing conflict between the infringing-oriented P2P community,  
content producers, and SPs.


---

Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 1:15 PM, Colm MacCarthaigh wrote:


I'll try to answer the questions which are relevant to Network
Operations, and I have not already answered, anyway


And thank you very much for popping up and answering the questions  
you *can* answer - it's useful info, and much appreciated!



-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 1:15 PM, Colm MacCarthaigh wrote:

Now comes the "please forgive me" part, but most of your questions
aren't relevant to the NANOG charter


I believe that's open to interpretation - for example, the question  
about mobile devices is relevant to mobile SPs, the question about  
offline viewing has an impact on perceived network usage patterns,  
the 'supernode' questions same, the TV vs. HDTV question, same (size/ 
length), the DRM question same (help desk/supportability), platforms  
same (help desk/supportability).  The ad question is actually
out-of-charter, though I suspect it's of great interest to many of the
list subscribers.


Now, if you don't *want* to answer the above questions, that's  
perfectly fine; but they're certainly within the list charter, and  
entirely relevant to network operations, heh.


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 7, 2007, at 12:28 PM, Roland Dobbins wrote:


Colm, a few random questions as they came to mind:[;>]


Two more questions:

Do you plan to offer the Venice Project for mobile devices?  If so,  
which ones?


Will you support offline storage/playback?

Thanks again!

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-07 Thread Roland Dobbins



On Jan 6, 2007, at 6:07 AM, Colm MacCarthaigh wrote:

I'll try and answer any questions I can, I may be a little restricted
in revealing details of forthcoming developments and so on, so please
forgive me if there's later something I can't answer, but for now I'll
try and answer any of the technicalities. Our philosophy is to be
pretty open about how we work and what we do.


Colm, a few random questions as they came to mind:[;>]

Will your downloads be encrypted/obfuscated?  Will your application  
be port-agile?  Is it HTTP, or Something Else?


If it's not encrypted, will you be cache-friendly?

Will you be supporting/enforcing some form of DRM?

Will you be multi-platform?  If so, which ones?

When you say 'TV', do you mean HDTV?  If so, 1080i/1080p?

Will you have Skype-like 'supernode' functionality?  If so, will it  
be user-configurable?


Will you insert ads into the content?  If so, will you offer a  
revenue-sharing model for SPs who wish to participate?


Many thanks!

-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-26 Thread Roland Dobbins



On Dec 26, 2006, at 12:12 PM, John Kristoff wrote:


I'm not very excited about things like jumbo frames, in part because
of the good work you did there to show how hard they are to actually get
end-to-end, but all it takes these days is for one middle box in the
path to cripple, in any myriad of ways, an end host stack  
optimization.


Jumbo frames can certainly be helpful within the IDC, for example  
between front-end systems and back-end database and/or storage  
systems; the IDC is also a more controlled and predictable  
environment (or at least it should be, heh) than the aggregate of  
multiple transit/access networks, and therefore in most cases one  
ought to be able to ensure that jumbo frames are supported end-to-end  
between the relevant IDC-hosted systems (or even between multiple  
IDCs within the same SP network).  This isn't the same as a true end- 
to-end capability between any discrete set of nodes on the Internet,  
but they can still indirectly increase performance for a
topologically diverse user population by virtue of more optimal  
throughput 'behind the curtains', as it were.
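
As a rough back-of-the-envelope sketch of that 'behind the curtains'
gain (assuming standard Ethernet framing overhead and 40 bytes of
TCP/IP headers with no options - textbook numbers, not a measurement):

# Wire-efficiency comparison for 1500-byte vs. 9000-byte MTU, assuming
# 38 bytes of Ethernet overhead per frame (preamble 8 + header 14 +
# FCS 4 + inter-frame gap 12) and 40 bytes of TCP/IP headers.
FRAME_OVERHEAD = 38
TCPIP_HEADERS = 40

for mtu in (1500, 9000):
    payload = mtu - TCPIP_HEADERS
    efficiency = 100.0 * payload / (mtu + FRAME_OVERHEAD)
    print("MTU %5d: %4d payload bytes/frame, ~%.1f%% wire efficiency"
          % (mtu, payload, efficiency))

That works out to roughly 94.9% vs. 99.1% wire efficiency, and about
one-sixth as many packets (and interrupts) per gigabyte moved between
the systems in question.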


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-26 Thread Roland Dobbins



On Dec 24, 2006, at 11:29 PM, Mikael Abrahamsson wrote:

So to sum up, the upstream problem you're talking about is already  
here, it's just that instead of using your own PVR box and then  
sharing that, someone did this somewhere in the world, encoded it  
into Xvid and then it is shared between end users (illegally). I  
believe the problem is the same.


Understood - part of what I'm trying to ask (not very well,  
apparently, heh) is whether presumably non-infringing mechanisms/ 
services such as the Slingbox are viewed and/or would be treated any  
differently than P2P filesharing apps.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-25 Thread Roland Dobbins



On Dec 25, 2006, at 3:05 PM, Randy Bush wrote:


Kenjiro Cho, Kensuke Fukuda, Hiroshi Esaki, & Akira Kato.
"The Impact and Implications of the Growth in Residential
User-to-User Traffic."
SIGCOMM2006, pp207-218. Pisa, Italy. September 2006.
<http://www.iijlab.net/~kjc/papers/rbb-sigcomm2006.pdf>


I saw this paper when it came out, Randy, thanks - I had several
interrelated questions about TOS/AUP, and whether or not the presumed  
legality/illegality of a potentially popular non-infringing home  
media server vs. standard P2P applications (and the jaundiced view of  
them, rightly or wrongly) would affect what folks are doing or  
considering doing.  The questions were also somewhat specific to  
North America, which is a substantially different market than the one  
described in this paper, and which may well evolve differently.


This is a very interesting and thought-provoking paper, but it  
doesn't answer the questions I was asking; I'm sorry if that wasn't
clear.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-24 Thread Roland Dobbins



On Dec 24, 2006, at 4:44 PM, Jeroen Massar wrote:


That said ISP's should simply have a package saying "50GiB/month costs
XX euros, 100GiB/month costs double" etc.


In the U.S. and Canada, the expectation has been set to an assumption  
of 'unlimited' bandwidth consumption for a fixed price in the
consumer market.  AT&T WorldNet helped popularize that model early-on  
(you can thank or curse Tom Evslin for that, according to your  
inclinations, heh), and it has become de rigueur for most U.S./
Canadian broadband SPs to follow suit.


Which raises a related question of whether North American operators  
believe that offering value-added services such as placeshifting  
(which is a familiar enough concept that a significant population of  
the userbase seem to grasp the idea without a lot of explanation)  
might prove amenable to metered billing?


-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-24 Thread Roland Dobbins



I recently purchased a Slingbox Pro, and have set it up so that I can
remotely access/control my home HDTV DVR and stream video from it.
My broadband access SP specifically allows home users to run servers,
as long as said servers don't cause a problem for the SP
infrastructure or for other users and aren't doing anything illegal;
as long as I'm not breaking the law or making problems for others,
they don't care.


The Slingbox is pretty cool; when I access it, both the video and  
audio quality are more than acceptable.  It even works well when I  
access it via EVDO; on average I'm pulling down about 450kb/sec, up
to about 580kb/sec, over TCP (my home upstream link is a theoretical
768kb/sec, minus overhead; I generally get something pretty close to  
that).
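
For a rough sense of scale - treating 'kb/sec' as kilobits per second
and taking ~500kb/sec as a representative rate, which is an assumption
rather than a measurement - a single hour-long remote-viewing session
works out to something like:

# Back-of-the-envelope upstream load for one remote-viewing session.
rate_bps = 500 * 1000      # roughly the midpoint of the 450-580kb/sec above
duration_s = 60 * 60       # one hour of remote viewing
megabytes = rate_bps * duration_s / 8 / 1e6
print("~%.0f MB upstream per hour-long session" % megabytes)

That's on the order of 225 MB of upstream traffic per hour, which is
part of why the asymmetry question below seems worth asking.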


What I'm wondering is, do broadband SPs believe that this kind of  
system will become common enough to make a significant difference in
traffic patterns, and if so, how do they believe it will affect their
access infrastructures in terms of capacity, given the typical  
asymmetries seen in upstream vs. downstream capacity in many  
broadband access networks?  If a user isn't doing something like  
breaking the law by illegally redistributing copyrighted content, is  
this sort of activity permitted by your AUPs?  If so, would you  
change your AUPs if you saw a significant shift towards non- 
infringing upstream content streaming by your broadband access  
customers?  If not, would you consider changing your AUPs in order to  
allow this sort of upstream content streaming of non-infringing  
content, with the caveat that users can't cause problems for your
infrastructure or for other users, and perhaps with a bandwidth cap?


Would you police down this traffic if you could readily classify it,  
as many SPs do with P2P applications?  Would the fact that this type  
of traffic doesn't appear to be illegal or infringing in any way lead  
you to treat it differently than P2P traffic (even though there are  
many legitimate uses for P2P file-sharing systems, the presumption  
always seems to be that the majority of P2P traffic is in illegally- 
redistributed copyrighted content, and thus P2P technologies seem  
to've acquired a taint of distaste from many quarters, rightly or  
wrongly)?


Also, have you considered running a service like this yourselves, a  
la VoIP/IPTV?


Videoconferencing is somewhat analogous, but in most cases,
videoconference calls (things like iChat, Skype videoconferencing,
etc.) generally seem to use less bandwidth than the Slingbox, and it
seems to me that they will in most cases be of shorter duration than,  
say, a business traveler who wants to keep up with Lost or 24 and so  
sits down to stream video from his home A/V system for 45 minutes to  
an hour at a stretch.


Sorry to ramble - this neat little toy just sparked a few questions,
and I figured that some of you are dealing with these kinds of issues  
already, or are anticipating doing so in the not-so-distant future.   
Any insight or informed speculation greatly appreciated!



-----------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-14 Thread Roland Dobbins



On Dec 14, 2006, at 4:50 PM, David Conrad wrote:

IANA has a project along these lines at the earliest stage of  
development (that is, we're trying to figure out if this is a good  
idea and if so, the best way to implement it).  I'd be interested  
in hearing opinions (either publicly or privately) as to what IANA  
should do here.


Are IANA considering operating a BGP routeserver infrastructure?   
What about LDAP and other mechanisms?


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: The IESG Approved the Expansion of the AS Number Registry

2006-12-01 Thread Roland Dobbins



On Dec 1, 2006, at 4:50 AM, Andy Davidson wrote:

RIPE will be accepting requests for 32-bit ASNs from 1/1/07,  
according to an email to ncc-services two weeks ago.  It does not  
feel too early to start to understand what we must do as a
community to guarantee ubiquity of reachable networks.


Is there any possibility we can now get a block of ASNs set aside for  
documentation purposes, akin to example.com and/or the TEST network?   
A block of ASNs for this purpose would be very helpful for folks  
writing docs, would reduce the possibility of 'cut-and-paste  
hijacking', and would also allow more accurate documentation (many  
products and tools have special handling for the designated private  
ASNs, which makes documentation difficult).


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: analyse tcpdump output

2006-11-22 Thread Roland Dobbins



On Nov 22, 2006, at 12:37 PM, Netfortius wrote:


I wonder if someone knows a tool to use tcpdump output for anomaly
detection. It is sometimes really time consuming when looking for
identical patterns in the tcpdump output.


This sort of thing can be done far more scalably with
NetFlow.  There are several good commercial NetFlow-based anomaly- 
detection systems (Arbor, Lancope, Narus, Q1, etc.) and even an open- 
source project (currently fallow) called Panoptis.
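
That said, if raw tcpdump output is all that's available, even a
trivial tally of repeated source/destination pairs will surface the
'identical patterns' in question; a minimal sketch (assuming
'tcpdump -n'-style text on stdin, with no attempt at robust parsing -
a stopgap, not a substitute for the NetFlow-based tools above):

import sys
from collections import Counter

# Tally src -> dst pairs from 'tcpdump -n' text read on stdin; typical
# lines look like "HH:MM:SS.usec IP <src> > <dst>: ...".
counts = Counter()
for line in sys.stdin:
    fields = line.split()
    if "IP" not in fields:
        continue
    i = fields.index("IP")
    if len(fields) < i + 4 or fields[i + 2] != ">":
        continue
    src, dst = fields[i + 1], fields[i + 3].rstrip(":")
    counts[(src, dst)] += 1

# Print the twenty most frequently seen pairs.
for (src, dst), n in counts.most_common(20):
    print("%8d  %s -> %s" % (n, src, dst))

Feeding it with something like 'tcpdump -n -r capture.pcap | python
tally.py' (the script name is hypothetical) is the obvious way to use
it.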


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: BGP analyzing tool

2006-11-21 Thread Roland Dobbins



On Nov 21, 2006, at 10:08 PM, Doug Marschke wrote:


I need this info to understand why a certain upstream is being more
preferred than the others. Thanks.


I believe that PacketDesign's Route Explorer may be helpful in  
answering questions of this type:


http://www.packetdesign.com

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

All battles are perpetual.

   -- Milton Friedman





Re: advise on network security report

2006-10-31 Thread Roland Dobbins



On Oct 31, 2006, at 3:45 PM, Rick Wesson wrote:


the point of the posting is to generate discussion;


I believe there are those who would argue that there's already a  
surfeit of discussion on NANOG, quite a bit of it irrelevant and of  
little interest to many subscribers.


Posting stats and reports to a list which contains people who may not  
be interested in same often results in those stats and reports being  
filtered out and ignored.  Posting a pointer to said stats and lists  
so that interested parties can subscribe if they so choose guarantees  
a community of common interests to whom discussion of the topic(s) at  
hand will come naturally, without the need for artificial stimulus.


---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Any information security mechanism, process, or procedure which can
be consistently defeated by the successful application of a single
class of attacks must be considered fatally flawed.

-- The Lucy Van Pelt Principle of Secure Systems Design



Re: advise on network security report

2006-10-30 Thread Roland Dobbins



On Oct 30, 2006, at 8:53 AM, Rick Wesson wrote:

I'm expecting to post a weekly report once a month to nanog, would  
this be disruptive?


Far better to simply post a pointer to your new list, IMHO, and let  
folks subscribe if they so choose.  As it is, many of these various
automated postings to NANOG are mildly annoying to those who aren't  
interested or who already receive the information in another form.


Whatever service you end up offering, a full-text RSS or Atom feed
would probably be useful, as well.


-------
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Any information security mechanism, process, or procedure which can
be consistently defeated by the successful application of a single
class of attacks must be considered fatally flawed.

-- The Lucy Van Pelt Principle of Secure Systems Design



Re: Boeing's Connexion announcement

2006-10-15 Thread Roland Dobbins



On Oct 15, 2006, at 9:09 PM, Christian Kuhtz wrote:

 Credit card transaction records are sufficient for some expenses  
(except hotels), far above $25.


Many portals for hotspot services provide an HTML splash page with  
the amount paid - one can save that to one's hard drive and print it  
out later.


And, of course, this thread is now irretrievably off-topic, heh.

---
Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice

Any information security mechanism, process, or procedure which can
be consistently defeated by the successful application of a single
class of attacks must be considered fatally flawed.

-- The Lucy Van Pelt Principle of Secure Systems Design


