Re: Speedtest site accuracy [was: Bandwidth issues in the Sprint network]

2008-04-08 Thread Mikael Abrahamsson


On Tue, 8 Apr 2008, Scott Weeks wrote:

To other medium-sized eyeball network providers (I'm defining medium 
size as 50-150K DSL/Cable connections and 50-1500 leased line 
customers): are you seeing this and what do you tell your customers?


We're having this big push here in Sweden with something that basically 
translates into "broadband checkup". It's also web based, and it ended up 
in the national papers last week, where the newspapers misinterpreted 9.53 
megabit/s of TCP throughput on a 10 meg ethernet connection as "barely 
acceptable" or something of that nature.


We're seeing differences in results on the same computer depending on what 
browser is being used, and other strange results. Yes, it's a basic test 
and it should be treated that way; unfortunately quite a lot of users 
expect to get the same number they have purchased, so when they have 
purchased 8 megabit ADSL, they expect this test to say 8 megabit/s. 
The industry standard here is that 8/1 is the ATM speed, so the best result 
one can expect is approx 6.7 megabit/s of TCP throughput.
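
For those who wonder where a number like that comes from, here is a rough 
back-of-the-envelope sketch in Python. The framing overheads are assumptions 
for illustration (the exact figure depends on PPPoA/PPPoE, LLC vs VC-mux and 
so on), and it ignores ACKs and TCP options, so it lands a little above 6.7:

# Rough estimate of TCP goodput on an "8 megabit" ADSL line where the
# advertised figure is the ATM line rate. Overheads below are illustrative
# assumptions, not exact values for any particular encapsulation.

atm_line_rate = 8_000_000          # bit/s, the rate as sold

ip_packet = 1500                   # bytes per downstream IP packet (assumed)
tcp_ip_headers = 40                # plain TCP/IP headers, no options
l2_overhead = 32                   # PPPoE + LLC/SNAP + AAL5 trailer, approx.

payload = ip_packet - tcp_ip_headers
wire_bytes = ip_packet + l2_overhead
cells = -(-wire_bytes // 48)       # AAL5 pads to whole 48-byte cell payloads
atm_bytes = cells * 53             # each 53-byte cell carries 48 payload bytes

goodput = atm_line_rate * payload / atm_bytes
print(f"approx. TCP goodput: {goodput / 1e6:.1f} Mbit/s")  # ~6.9, well below 8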


So yes, this is seen, and it's a problem I guess we as an industry have to 
learn how to handle. Swedish ISPs are adding fine print in their ads about 
the speed to expect in this tool, as it seems the users are very 
keen on using it.


What worries me is that people will get dissatisfied with their connection 
even though there is nothing wrong with it and that they won't get better 
service elsewhere if they switch ISPs. It's good that there is a test, but 
since we're a market where 100/100 ethernet connections are fairly 
prevalent, this test doesn't work properly (75 megabit/s result on a 
100/100 was listed in the paper as "not acceptable" which we all 
understand is unfair).


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: latency (was: RE: cooling door)

2008-03-30 Thread Mikael Abrahamsson


On Sun, 30 Mar 2008, Fred Reimer wrote:


application to take advantage of the networks' capabilities.   Mikael (seems
to) complain that developers have to put latency inducing applications into
the development environment.  I'd say that those developers are some of the
few who actually have a clue, and are doing the right thing.


I was definitely not complaining; I brought it up as an example where 
developers have clue and where they're doing the right thing.


I've too often been involved in customer complaints which ended up being 
the fault of Microsoft SMB, with the customers having the firm idea that it 
must be a network problem since MS is a world standard and that can't be 
changed. Even proposing to change TCP window settings to get FTP transfers 
quicker is met with the same scepticism.


Even after describing the propagation delay of light in fiber and the 
physical limitations to them, they're still very suspicious about it all.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: latency (was: RE: cooling door)

2008-03-30 Thread Mikael Abrahamsson


On Sat, 29 Mar 2008, Frank Coluccio wrote:


Understandably, some applications fall into a class that requires very-short
distances for the reasons you cite, although I'm still not comfortable with the
setup you've outlined. Why, for example, are you showing two Ethernet switches
for the fiber option (which would naturally double the switch-induced latency),
but only a single switch for the UTP option?


Yes, I am showing a case where you have switches in each rack so each rack 
is uplinked with a fiber to a central aggregation switch, as opposed to 
having a lot of UTP from the rack directly into the aggregation switch.



Now, I'm comfortable in ceding this point. I should have made allowances for this
type of exception in my introductory post, but didn't, as I also omitted mention
of other considerations for the sake of brevity. For what it's worth, propagation
over copper is faster than propagation over fiber, as copper has a higher nominal
velocity of propagation (NVP) rating than does fiber, but not so much
greater as to cause the difference you've cited.


The 2/3 speed of light in fiber, as opposed to the propagation speed in 
copper, was not what I had in mind.



As an aside, the manner in which o-e-o and e-o-e conversions take place when
transitioning from electronic to optical states, and back, affects latency
differently across differing link assembly approaches used. In cases where 
10Gbps


My opinion is that the major factors of added end-to-end latency in my 
example are that the packet has to be serialised three times as opposed to 
once, and that there are three lookups instead of one. Lookups take time, 
and putting the packet on the wire takes time.


Back in the 10 megabit/s days, there were switches that did cut-through, 
i.e. if the output port was not being used the instant the packet came in, 
the switch would take the forwarding decision as soon as the header was 
received and start sending the packet out on the outgoing port before it 
was completely received from the input port.



By chance, is the "deserialization" you cited earlier, perhaps related to this
inverse muxing process? If so, then that would explain the disconnect, and if it
is so, then one shouldn't despair, because there is a direct path to avoiding 
this.


No, it's the store-and-forward architecture used in all modern equipment 
(that I know of). A packet has to be completely taken in over the wire 
into a buffer, a lookup has to be done as to where this packet should be 
put out, it needs to be sent over a bus or fabric, and then it has to be 
clocked out on the outgoing port from another buffer. This adds latency at 
each switch hop along the way.
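
To put rough numbers on that (a sketch only; the lookup/fabric figure is an 
assumed placeholder, not a vendor number):

# Per-hop latency added by store-and-forward switching: the whole packet
# must be clocked in before it can be forwarded out again.

def serialization_delay(packet_bytes, link_bps):
    """Seconds to clock one packet onto (or off) the wire."""
    return packet_bytes * 8 / link_bps

packet = 1500                      # bytes
lookup_and_fabric = 5e-6           # assumed 5 us per hop, illustration only

for link_bps, name in [(100e6, "100M"), (1e9, "GE"), (10e9, "10GE")]:
    per_hop = serialization_delay(packet, link_bps) + lookup_and_fabric
    print(f"{name:>4}: ~{per_hop * 1e6:.0f} us per store-and-forward hop")

With three switch hops instead of one you pay roughly two extra per-hop 
delays like these, in each direction.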


As Adrian Chadd mentioned in the email sent after yours, this can of 
course be handled by modifying or creating new protocols that handle this 
fact. It's just that with what is available today, this is a problem. Each 
directory listing or file access takes a bit longer over NFS with added 
latency, and this reduces performance in current protocols.


Programmers who do client/server applications are starting to notice this, 
and I know of companies that put latency-inducing applications on the 
development servers so that the programmer is exposed to the same 
conditions in the development environment as in the real world. For some 
this means that they have to write more advanced SQL queries to get 
everything done in a single query, instead of asking multiple queries and 
changing them depending on what the first query result was.


Also, protocols such as SMB and NFS that use message blocks over TCP have 
to be abandoned and replaced with real streaming protocols and large 
window sizes. Xmodem wasn't a good idea back then, and it's not a good idea 
now (even though the blocks now are larger than the 128 bytes of 20-30 
years ago).


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: latency (was: RE: cooling door)

2008-03-29 Thread Mikael Abrahamsson


On Sat, 29 Mar 2008, Frank Coluccio wrote:


Please clarify. To which network element are you referring in connection with
extended lookup times? Is it the collapsed optical backbone switch, or the
upstream L3 element, or perhaps both?


I am talking about the matter that the following topology:

server - 5 meter UTP - switch - 20 meter fiber - switch - 20 meter 
fiber - switch - 5 meter UTP - server


has worse NFS performance than:

server - 25 meter UTP - switch - 25 meter UTP - server

Imagine bringing this into metro with 1-2ms delay instead of 0.1-0.5ms.

This is one of the issues that the server/storage people have to deal 
with.
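
A small sketch of why that matters for something like NFS, where many 
operations are one synchronous request/response at a time (the server-side 
processing time below is an assumed placeholder):

# How many synchronous, one-at-a-time NFS-style operations a client can do
# per second when each one waits a full round trip. Numbers are illustrative.

server_time = 0.0002               # seconds of server work per op, assumption

def ops_per_second(rtt):
    return 1.0 / (rtt + server_time)

for label, rtt in [("same rack, UTP",        0.0001),
                   ("two fiber switch hops",  0.0005),
                   ("metro, 1-2 ms",          0.0015)]:
    print(f"{label:22s} rtt {rtt * 1000:.1f} ms -> {ops_per_second(rtt):5.0f} ops/s")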


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


latency (was: RE: cooling door)

2008-03-29 Thread Mikael Abrahamsson


On Sat, 29 Mar 2008, Frank Coluccio wrote:


We often discuss the empowerment afforded by optical technology, but we've 
barely
scratched the surface of its ability to effect meaningful architectural changes.


If you talk to the server people, they have an issue with this:

Latency.

I've talked to people who have collapsed layers in their LAN because they 
can see performance degradation for each additional switch the packets have 
to pass through on their NFS mount. Yes, higher speeds mean lower 
serialisation delay, but there is still a lookup time involved, and 10GE is 
substantially more expensive than GE.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: IPv6 on SOHO routers?

2008-03-12 Thread Mikael Abrahamsson


On Wed, 12 Mar 2008, John Lee wrote:

What I would like to see today is SOHO routers that do not interfere 
with 6 over 4 transport since my ISP does not offer home DSL termination 
of v6. Taking the silicon in a SOHO and adding 5 to 10 $ US in cost for 
v6 and multiple that by 5 to get a retail price of those features. Then 
offset that with the decrease in silicon size when you add both together 
with smaller size lines and transistors on the chips, I would project 
SOHO prices of 250 - 350 $ US to start with for v4 & v6 and dropping 
from there.


OpenWRT, which actually supports IPv6 (by virtue of being Linux based), can 
be run on very cheap devices (as most smaller home NAT gateways are 
CPU based, no biggie). I suspect IPv6 on most of these is only a matter of 
someone actually putting it in their RFQ and being willing to pay a few $ 
extra per unit when buying the normal large telco volumes.


Running code is out there, it's just a matter of getting it into the 
devices.


The smaller SOHO routers that cisco has (800 and 1800 series) are quite 
ready for this, 12.4T even has support for DHCPv6 prefix delegation on the 
878 for instance (it was the only one I checked in the software advisor).


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: ISP's who where affected by the misconfiguration: start using IRR and checking your BGP updates (Was: YouTube IP Hijacking)

2008-02-24 Thread Mikael Abrahamsson


On Sun, 24 Feb 2008, Jeroen Massar wrote:


* Routing Registry checking, as per the above two
 rr.arin.net & whois.ripe.net contains all the data you need
 Networks who are not in there are simply not important enough to
 exist on the internet as clearly those ops folks don't care about
 their network...


For those of us who actually have customers we care about, we probably find 
it better for business to try to make sure our own customers can't announce 
prefixes they don't own, but accept basically anything from the world that 
isn't ours.


Using pure RR based filtering just isn't cost efficient today, as the 
borks (mostly unintentional) we sometimes see are few and fairly far 
between, while problems due to wrong or missing information in the RRs are 
plentiful and constant.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: EU Official: IP Is Personal

2008-01-23 Thread Mikael Abrahamsson


On Wed, 23 Jan 2008, Lou Katz wrote:


They are both right. If you have a dynamic IP such as most college students
have, it is here-today-gone-tomorrow.


The local antipiracy organization in Sweden needed a permit to 
collect/handle IP+timestamp and save it in their database, as this 
information was regarded as personal information. Since ISPs regularly 
save who has an IP at what time, IP+timestamp can be used to discern at 
least what access port a certain IP was at, or in the case of PPPoE etc, what 
account was used to obtain that IP at that time.


I still think IP+timestamp doesn't identify which person did something. 
License plate tracking is also considered personal information 
even though it says nothing about who drove the car at that time, and I 
think IP+timestamp is approximately on the same level as a car license 
plate when it comes to the level of personal information.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-23 Thread Mikael Abrahamsson


On Wed, 23 Jan 2008, Andy Davidson wrote:

I think that charging for deaggregation of PA is hard to imagine.  I 
think charging for PI as a model may have been worthy of consideration 
several years ago, but since we're only months away from entire product 
lines of deployed edge kit no longer accepting a full table, the battle 
is over (and operators lost).


As far as I can see, the only way to solve de-aggregation and PI is to 
create some kind of cryptographic signing of aggregate routes sent out to 
DFZ.


RIPE/ARIN and other equivalent bodies need to sign the combination of AS 
and prefix, and this is then used (somehow) to authenticate the prefix 
when received. This would also have the added benefit of stopping people 
from sending more specifics out of other ISPs' IP space (or even their own, 
as only the actual aggregate prefix would be signed, not the more specifics 
that people use for "TE").
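
Very roughly, the kind of check I mean would look like this (just a sketch; 
the "signed records" dictionary stands in for whatever registry-published, 
cryptographically verified data would actually be used, and the AS numbers 
and prefixes are made up):

# Only accept an announcement that exactly matches a signed (origin AS,
# aggregate prefix) record. No actual signature verification in this sketch.

import ipaddress

signed_aggregates = {
    64500: {ipaddress.ip_network("192.0.2.0/24")},
    64501: {ipaddress.ip_network("203.0.113.0/24")},
}

def accept(prefix, origin_as):
    net = ipaddress.ip_network(prefix)
    # Exact match only: more-specifics are rejected even for the right origin.
    return net in signed_aggregates.get(origin_as, set())

print(accept("192.0.2.0/24", 64500))    # True  - the signed aggregate itself
print(accept("192.0.2.0/25", 64500))    # False - more-specific, not signed
print(accept("203.0.113.0/24", 64500))  # False - someone else's aggregate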


So this "certificate" or alike needs to be time limited and coupled to 
payment if we're going to charge for PI/PA yearly.


Yes, this increases complexity in the DFZ enormously, and I don't know if 
the benefit outweighs the complexity and added risks for failures.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Lessons from the AU model

2008-01-22 Thread Mikael Abrahamsson


On Tue, 22 Jan 2008, Sean Donelan wrote:

If there was one tenant that left the hot water running 24 hours, 7 days a 
week; so other tenants complained they didn't get enough hot water. One


That has never happened to me. We have good enough infrastructure that one 
tenant filling up their hot water bath doesn't deplete the infrastructure 
of "hot water production" in my building. I seriously doubt anyone would 
notice me running hot water 24/7, because the infrastructure is able to 
handle that. No, everybody can't do it, but if I need to for a couple of 
hours, it works.


tenant plugged in maximum wattage heaters on every circuit and left them on 
high 24 hours a day; left the television volume turned up to the maximum 24 
hours a day; and so forth.


I know people who run servers in their dorms due to this. It might go 
away, power is easy to meter.


If you were the neighbor of such a tenant in a building, would you be 
pleased that your monthly fees were being increased or that one tenant 
was using all the hot water and generating a lot of noise all day and 
all night?  Or might you complain to the landlord about those problems.


Your analogy doesn't quite hold, but that's to be expected. I certainly 
wouldn't want to pay more for the landlord to install metering everywhere. 
There is a lot of overhead in metering and billing for that.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Lessons from the AU model

2008-01-22 Thread Mikael Abrahamsson


On Tue, 22 Jan 2008, Mark Newton wrote:


Power is metered.  Water is metered.  Gas is metered.  Heating
oil is metered.  Even cable-TV is packaged so that you pay more
if you want to use more channels...


I know of places in my neck of the woods where all of those are flat-rate. 
When the usage difference is small enough, metering is not effective.


A typical dorm here includes power, water and heating (gas is usually not 
used, but most of the places that have gas charge ~USD 15 per month per 
apartment for it, flat rate). Basic cable is included in the rent. I also 
know of quite a few regular apartments that have this model. In my 
apartment I pay for power; water, heating and basic cable are included in 
the monthly fee.


Some claim that metering is 50% of the cost in the telco industry, and I 
have no reason to doubt that. Staying out of metering saves money on all 
levels: less complex equipment, fewer support calls, less hassle with 
billing.


I am also hesitant regarding billing when a person is being DDoSed. How 
is that handled in .AU? I can see billing being done on outgoing traffic 
from the customer, because they can control that, but what about incoming, 
which the customer has only partial control over?


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-18 Thread Mikael Abrahamsson


On Fri, 18 Jan 2008, Patrick W. Gilmore wrote:

Right.  And mobile phones, which you admit are more difficult to 
understand and manage, have clearly been a disastrous failure.  By your 
analogy, we should expect this to be a slightly less disastrous failure. 
(Would that Time Warner were so lucky. :)


No, it's easier to understand that by making a call while you're 
physically abroad you're charged more. Otoh, "unlimited wireless" plans are 
common here, and now people are discovering that if you're close to the 
border of another country you're all of a sudden roaming, and instead of 
free wireless broadband you're paying several dollars per megabyte 
transferred (without noticing it). This is not what people expect.


This is why some people really really want "tokens" or prepaid and then 
have their account severely limited or shut off when their account is 
"empty", instead of being charged per-usage without upper limit.


If the cheap flatrate broadband were to go away and be replaced by a 
metered one, we as an industry need to figure out how to do billing in a 
customer-friendly manner. We do not have much experience with this in many 
markets.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-18 Thread Mikael Abrahamsson


On Fri, 18 Jan 2008, Rod Beck wrote:


http://www.ecommercetimes.com/rsstory/61251.html


So, anyone but me think that this will end in disaster? What about the model 
where you get high speed for X amount of bytes and then you're limited to, 
let's say, 64 kilobit/s until you actually go to the web page and buy 
another "token" for Y more bytes at high speed? We already have this 
problem with metered mobile phones, which of course is even more 
complicated for users due to different rates depending on where you might 
be roaming.


Customers want control; that's why prepaid mobile phones, where you get 
an "account" you have to pay into in advance, are so popular in some markets. 
It also enables people who perhaps otherwise would not be eligible because 
of bad credit to get these kinds of services.


I'm also looking forward to the pricing; all the per-byte plans I have 
seen so far make the ISP look extremely greedy by overpricing, as 
opposed to the "we want to charge fairly for use" that they put in 
their press statements.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Network Operator Groups Outside the US

2008-01-17 Thread Mikael Abrahamsson


On Thu, 17 Jan 2008, Bill Woodcock wrote:

Patrik, Kurtis, et al organized a few NordNOGs; I think there were three 
of them, but it didn't seem to get much traction outside of Sweden, and 
I think they got tired of being the only ones pushing it forward.


SOF, the Swedish Operator Forum, meets 4-5 times a year, but it's usually 
just 2-3 hours and almost exclusively concerns national matters. It's more 
of a "Netnod-IX customer club" than anything else.


http://sof.isoc-se.a.se

(the web page isn't being updated much either, and is in Swedish only).

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Frank Bulk wrote:


Except that upstreams are not at 27 Mbps
(http://i.cmpnet.com/commsdesign/csd/2002/jun02/imedia-fig1.gif show that
you would be using 32 QAM at 6.4 MHz).  The majority of MSOs are at 16-QAM
at 3.2 MHz, which is about 10 Mbps.  We just took over two systems that were
at QPSK at 3.2 MHz, which is about 5 Mbps.


Ok, so the wikipedia article <http://en.wikipedia.org/wiki/Docsis> is 
heavily simplified? Any chance someone with good knowledge of this could 
update the page to be more accurate?



And upstreams are usually sized not to be more than 250 users per upstream
port.  So that would be a 10:1 oversubscription on upstream, not too bad, by
my reckoning.  The 1000 you are thinking of is probably 1000 users per
downstream power, and there is a usually a 1:4 to 1:6 ratio of downstream to
upstream ports.


250 users sharing 10 megabit/s would mean 40 kilobit/s average utilization, 
which to me seems very tight. Or is this "250 apartments", meaning perhaps 
40% subscribe to the service, indicating that those "250" really are 100 
and that the average utilization can then be 100 kilobit/s upstream?
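
The arithmetic behind that question, for clarity (the 40% take rate is 
exactly the assumption I'm asking about):

# Average upstream per subscriber on a shared HFC upstream port.

upstream_bps = 10_000_000          # ~10 Mbit/s usable (16-QAM at 3.2 MHz)

for homes_passed, take_rate in [(250, 1.0), (250, 0.4)]:
    subscribers = int(homes_passed * take_rate)
    avg_bps = upstream_bps / subscribers
    print(f"{homes_passed} homes, {take_rate:.0%} subscribed: "
          f"{subscribers} users -> {avg_bps / 1000:.0f} kbit/s average each")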


With these figures I can really see why companies using HFC/Coax have a 
problem with P2P, the technical implementation is not really suited for 
the application.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Frank Bulk wrote:


I'm not aware of MSOs configuring their upstreams to attain rates for 9 and
27 Mbps for version 1 and 2, respectively.  The numbers you quote are the
theoretical max, not the deployed values.


But with 1000 users on a segment, don't these share the 27 megabit/s for 
v2, even though they are configured to only be able to use 384kilobit/s 
peak individually?


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mikael Abrahamsson


On Tue, 15 Jan 2008, Brandon Galbraith wrote:


I think no matter what happens, it's going to be very interesting as Comcast
rolls out DOCSIS 3.0 (with speeds around 100-150Mbps possible), Verizon FIOS


Well, according to Wikipedia, DOCSIS 3.0 gives 108 megabit/s upstream as 
opposed to 27 and 9 megabit/s for v2 and v1 respectively. That's not what 
I would call a revolution, as I still guess hundreds if not thousands of 
subscribers share those 108 megabit/s, right? Yes, a fourfold increase, but 
... that's still only a factor of 4.



expands it's offering (currently, you can get 50Mb/s down and 30Mb/sec up),
etc. If things are really as fragile as some have been saying, then the
bottlenecks will slowly make themselves apparent.


Upstream capacity will still be scarce on shared media as far as I can 
see.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Mikael Abrahamsson


On Mon, 14 Jan 2008, Frank Bulk wrote:


Interesting, because we have a whole college attached of 10/100/1000 users,
and they still have a 3:1 ratio of downloading to uploading.  Of course,
that might be because the school is rate-limiting P2P traffic.  That further
confirms that P2P, generally illegal in content, is the source of what I
would call disproportionate ratios.


You're not delivering "Full Internet IP connectivity", you're delivering 
some degraded pseudo-Internet connectivity.


If you take away one of the major reasons for people to upload (ie P2P) 
then of course they'll use less upstream bw. And what you call 
disproportionate ratio is just an idea of "users should be consumers" and 
"we want to make money at both ends by selling download capacity to users 
and upload capacity to webhosting" instead of the Internet idea that 
you're fully part of the internet as soon as you're connected to it.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: ISPs slowing P2P traffic...

2008-01-14 Thread Mikael Abrahamsson


On Mon, 14 Jan 2008, Frank Bulk wrote:

In other words, you're denying the reality that people download 3 to 4 
times more than they upload, and penalizing everyone in trying to attain a 
1:1 ratio.


That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they 
upload, people with 10/10 upload twice as much as they download.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Mikael Abrahamsson


On Sun, 13 Jan 2008, David E. Smith wrote:

It's not the bandwidth, it's the number of packets being sent out. One 
customer, talking to twenty or fifty remote hosts at a time, can "kill" 
a wireless access point in some instances. All those little tiny packets 
tie up the AP's radio time, and the other nine customers call and 
complain.


If it's concurrent TCP connections per customer you're worried about, then 
I guess you should acquire something that can actually enforce the 
limitation you want to impose.


Or if you want to protect yourself from customers going encrypted on you, 
I guess you can start to limit the concurrent number of servers they can 
talk to.


I can think of numerous problems with this approach though, so like other 
people here have suggested, you really need to look into the technical 
platform you use to produce your service, as it most likely is not going 
to work very far into the future. P2P isn't going to go away.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Assigning IPv6 /48's to CPE's?

2008-01-03 Thread Mikael Abrahamsson


On Thu, 3 Jan 2008, Rick Astley wrote:


If Bob has a multihomed network, he can't just give one /48 to a customer in
NY and the next one to a customer in CA unless he wants to fill up Internet
routing tables with /48's, so he will have to assign large aggregate blocks
to each region.


Could you please elaborate on this? Unless Bob is actually breaking the 
"a single AS needs to have a common IGP and be connected internally" rule, 
I don't understand the relevance of your statement above. Just because he's 
multihomed doesn't mean he can't just announce the /32 and then handle the 
routing internally (of course he should do aggregation, but perhaps 
in smaller chunks).


It seems to me while being extra super sure we meet goal 1 of making 
sure NAT is gone for ever (and ever) we fail goal 2 of not allocating a 
bunch of prefixes to ISP's that are too small.


Well, if you need a /20 for your business needs, you should request it. 
Afaik as long as you justify it, it shouldn't be a problem?


But I do agree that /56 should be enough for residential users for quite a 
while, so let's start there.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: v6 subnet size for DSL & leased line customers

2007-12-22 Thread Mikael Abrahamsson


On Sat, 22 Dec 2007, Joel Jaeggli wrote:


Leave enough address space for pd to occur? We know that if I hand you
the end-user a /64 that the first device that you connect to the network


What about the "WAN" side of that connection? I want the customer to only 
source traffic from the /56 being assigned, and I don't want the ISP router 
to have any IPs in that space. Do IPv6 capable CPEs have the ability to 
source packets from the local network (received via PD) and only have an 
FE80:: link-local address for upstream routing, which is never used as a 
source address for communication with the outside world (apart from the 
upstream router)?


If not, is this something that we should ask the CPE vendors for? It would 
be extremely nice for CoPP etc for ISP routers to have no IP in customer 
space, and CPEs to have no IP in ISP link-network space. Would make for 
very effective infrastructure ACLs.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: /48 for each and every endsite (Was: European ISP enables IPv6 for all?)

2007-12-19 Thread Mikael Abrahamsson


On Wed, 19 Dec 2007, Owen DeLong wrote:


Do you mean the staff at the RIR?

Do you mean the RIR Boards, Advisory Councils, or other representative 
governing bodies?


Both of these. The few times I have ventured to start emailing on a policy 
WG mailing list, I have gotten the notion that the people who frequent these 
have no idea about the operational reality of running an ISP. They also 
expect suggestions in a form that is quite academic and one that most likely 
nobody actually working operationally at an ISP will be able to produce (I 
found the email reply to me from Jeroen Massar to be right on the money for 
what I expect in this context).


Yes, I understand that if your life is to run an RIR, it's frustrating to 
have to interact with people that don't even use the correct terms or 
distinguish between allocations, delegations and assignments.


IPv6 needs a much longer time horizon than IPv4 in my opinion.  If 
nothing else, I would say that you should be able to project your 
addressing needs for the next two years at least in the ball-park of 
continuing your previous growth trends.  If you added 100k customers 
last year and 80k customers the year before, then, I think it's 
reasonable, especially in IPv6, to plan for 125k customer adds next year 
and 150k customer adds the following year.


Yes, so why do the RIRs still ask for a 2 year planning horizon for 
IPv6? Why isn't this 5 or 10 years? If we have plenty of addresses and 
hand out a /28 for each AS number present on the internet right now, that 
would be equivalent to each AS supporting 270M /56 customers, but we would 
still only have used up a /15 of the IPv6 address space. We would, though, 
have fairly well made sure that more than 99% of ISPs will only ever 
need a single IPv6 PA block, hopefully reducing DFZ glut in 10-15 
years.
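
For reference, the prefix arithmetic behind those figures (and the /32 cases 
discussed elsewhere in this thread):

# How many customer blocks of a given size fit in one allocation.

def subnets(alloc_prefix, customer_prefix):
    return 2 ** (customer_prefix - alloc_prefix)

print(f"/32 split into /48s: {subnets(32, 48):>13,}")   # 65,536
print(f"/32 split into /56s: {subnets(32, 56):>13,}")   # ~16.8 million
print(f"/28 split into /56s: {subnets(28, 56):>13,}")   # ~268 million ("270M")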


If your figures turn out to be excessive, then, in two years when 
you'd normally have to apply for more space (I'd like to see this move 
to more like 5 for IPv6), you can skip that application until you catch 
up.  No real problem for anyone in that case.


I don't want anyone to apply for more space later as this would normally 
mean a second route. If everybody needs to do this, then we'll add 40k 
routes to DFZ without any good reason.


So split the difference and ask for a /28.  Personally, I think /56s are 
plenty for most residential users.  I'm a pretty serious residential 
end-user, and, I can't imagine I'd need more than a /56 in terms of 
address planning.  However, I have a /48 because that's the smallest 
direct assignment available for my multihomed end-site.


I agree, but with current policy, asking for a /28 means (afaik) that I 
have to claim to have 270M /56 customers in 2-5 years. That's a pretty 
bold statement. But I guess that we can just keep only telling lies to the 
RIRs to get our addresses, which has been the standard workaround.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: /48 for each and every endsite (Was: European ISP enables IPv6 for all?)

2007-12-19 Thread Mikael Abrahamsson


On Wed, 19 Dec 2007, Jeroen Massar wrote:


you got a /32 in 2000 and you had 10k customers then you should be fine.
If you already had 200k customers or so and then only requested a /32
though I think one can definitely state you made a big booboo.


From what I have been told by my colleagues, we actually received a /35 
back then and the requirement was to have 200 end users, otherwise you 
basically couldn't receive a "PA" at all. This was then converted into a 
/32 at a later date, I guess due to a change in policy.


So what I'm wondering is basically: if we say we have millions of end users 
right now and we want to give them a /56 each, and this is no problem, 
then the policy is correct. We might not have them all IPv6 activated in 2 
years, which is the RIR planning horizon. I do concur with other posters 
here that the planning horizon for IPv6 should be longer than three years 
so we get fewer prefixes in the DFZ as a whole. Then again, *RIR people 
don't care about routing, so I am still sceptical about that being taken 
into account.


you will be having. Unless you will suddenly in a year grow by 60k 
clients (might happen) or really insanely with other large amounts your 
initial planning should hold up for quite some while


We grow by much more than 60k a year, it's just hard to plan for it. If we 
project for the highest amount of growth then we're most likely wasteful 
(in the case of IPv4 space anyway); if we project for the lowest amount of 
growth then we get DFZ glut.


We would also like to do regional IPv6 address planning since we're too 
often in the habit of (without much notice for the operational people) 
selling off part of the business.


Then again, with a /32 we can support ~16 million residential end-users 
with /56 each, which I guess will be enough for a while.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: /48 for each and every endsite (Was: European ISP enables IPv6 for all?)

2007-12-19 Thread Mikael Abrahamsson


On Wed, 19 Dec 2007, Jeroen Massar wrote:


Can I read from this that you never actually read any of the $RIR policy
documentation about getting IPv6 address space even though you did
request a /32 before, clearly without thinking about it?


I never requested IPv6 space personally. I work with routing, not with LIR 
administration. I also know that RIR people don't work with routing, and 
it shows.


"new" as in "We already have one, but we actually didn't really know 
what we where requesting, now we need more"


We got our current block in 2000 (or earlier, I don't know for sure, but 
2000 at the latest). So yes, we didn't know what we were doing back then. 
Then again, I'd say nobody knew back then.


That is exactly what it is for. Then again, if you actually had 
*PLANNED* your address space like you are supposed to when you make a 
request you could have already calculated how much address space you 
really needed and then justify it to the $RIR. In case you have to go 
back to ask the $RIR for more you already made a mistake while doing the 
initial request...


The world tends to change in 7 years. You seem to like bashing people for 
not knowing future policy and changes 7 years ahead of time, which I think 
is quite sad.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: European ISP enables IPv6 for all?

2007-12-19 Thread Mikael Abrahamsson


On Wed, 19 Dec 2007, Iljitsch van Beijnum wrote:

customers something bigger, like a /64, a /56 or even a /48. (Yes, we have 
enough address space for a /48 per customer.)


The good part about using a /48 is that it gives customers an even colon 
boundary for their space. Apart from that, I think /56 is a better idea (or 
perhaps even a /60). Good point there about autoconfiguration; subnetting 
into something smaller than a /64 is probably a bad idea.


So, out of our /32, if we assign each customer a /48 we can only support 
65k customers. So in order to support millions of customers, we need a new 
allocation, and I would really like each new block allocated to be very 
much larger so that we don't need to get a newer one in the foreseeable 
future. So for larger ISPs with millions of customers, the next step after 
/32 should be a /20 (or in that neighborhood). Does RIPE/ARIN policy 
conform to this, so we don't end up with ISPs themselves having tens of 
aggregates (we don't need to grow the default-free FIB more than what's 
really needed)?


Other option is to have more restrictive assignments to end users and 
therefore save on the /32, but that might be bad as well (long term).


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: European ISP enables IPv6 for all?

2007-12-19 Thread Mikael Abrahamsson


On Tue, 18 Dec 2007, Kevin Oberman wrote:


If you see IPv6 as a solution to the exhaustion of IPv4 space, take a
look at http://www.civil-tongue.net/clusterf/. It may help at some
point, but many of us see no clear way to get from here to there without
massive growth in both the RIB and the FIB in the process.


I am actually more concerned with the CPE problem and how to make 
autoconfiguration work for end users.


For instance, should we assign /64 to end users and have them do whatever 
they need (subnet to /80 if they need more than one network)? We need to 
provision routes in whatever routers connect to customers, which I guess 
is the FIB/RIB-problem mentioned above?


Is there general agreement that IPv6 requires a router at the customer 
prem to scale (the ISP doesn't want to know what the end user devices are)?
Also, is it OK to statically assign a /64 to the end user, or should the 
end user be able to switch addresses (some like dynamic IPs because they're 
not persistent over time and like the "privacy" you get by changing IP all 
the time)?


I haven't been able to find a BCP regarding the end user equipment and how 
to configure it, does anyone have any pointers?


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Somewhat bizarre scenario... (Fiber distance)

2007-12-14 Thread Mikael Abrahamsson


On Fri, 14 Dec 2007, Deepak Jain wrote:




Given this 40G/100G topic, I figured I'd bring this up given the topics.

I've got a link that is testing out at 29.5db loss @ 1550. Its 107km.

I seem to remember a few good solutions for 1Gb/s or 2.5Gb/s that can handle 
a link like this, but its been a while and I can't seem to remember. I can 
put a regen in there if I have to, but that or an EDFA both seem like ugly 
solutions since I just need 1 wave.


Shoot me a few suggestions?


http://www.finisar.com/product-113-1_Gigabit_CWDM_GBIC_with_APD_Receiver_(FTR-1619-xx)

30dB. Will do more, we've done ~180km (~36dB) with one of those.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: IEEE 40GE & 100GE

2007-12-12 Thread Mikael Abrahamsson


On Wed, 12 Dec 2007, Chris Cole wrote:

uniform agreement that the ratio of <10km to 10km to 40km applications 
is 10x or more. So given this view, it would be a very hard sell to 
convince the IEEE to only support a single 40km reach. In effect this 
would double the optics cost for most users.


Ah, a single reach wasn't my intention. The point I tried to make was that 
if 10km and 40km are very near in price, and 3km is considerably cheaper, 
then it makes more sense to do 3km and 40km reaches to get two distinct 
prices and reaches.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: IEEE 40GE & 100GE

2007-12-12 Thread Mikael Abrahamsson


On Wed, 12 Dec 2007, Robert E. Seastrom wrote:


Link budget information on page 4, here:
http://www.ieee802.org/3/hssg/public/reach/Matsumoto_r1_1207.pdf
Relative cost estimates on page 5.


(totally disregarding the HSSG policy of talking cost and not price here)

If the cost estimate has any bearing on actual end-user purchase price, 
then I would say that the 3-4km reach alternative makes sense. Having a 
10km reach alternative costing 60% of the 40km reach optics just doesn't 
make sense. Today we live in a world where 10km reach optics are ~1/4 the 
price of 40-80km optics; what's being said in that table is that the 40km 
reach optics cost 2.1x the 3km one, and the 40km optics would cost 1.6x 
the 10km one.


Considering the cost of keeping spares and the operational expense of 
bringing up links with the aforementioned bad connectors etc, it might even 
be rational to just go with 40km optics at this cost difference level.


Different optics variants need to have a distinct price difference, 
otherwise they're just complicating things. Otoh if we need attenuators 
for 40km optics on 5km links then that's a complicating factor as well. 
That's not been needed before.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: SC vs other connectors, optical budgets decreasing (was Re: IEEE 40GE & 100GE)

2007-12-12 Thread Mikael Abrahamsson


On Wed, 12 Dec 2007, Alex Pilosov wrote:


wiring system is getting larger. Given the specified SC connector
insertion loss of .75dB, it is not uncommon to see loss within a facility


Where does this 0.75 dB figure come from? Googling around seems to yield 
0.15-0.5 dB, with a typical value around 0.2 dB. 0.75 dB sounds very high.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Mikael Abrahamsson


On Sun, 28 Oct 2007, Sean Donelan wrote:


If you performed a simple Google search, you would have discovered many
universities around the world having similar problems.

The university network engineers are saying adding capacity alone isn't 
solving their problems.


You're welcome to provide proper technical links. I'm looking for ones 
that say that 10GE didn't solve their problem, not the ones saying "we 
upgraded from T3 to OC3 from our campus of 30k student dorms connected 
with 100/100 and it's still overloaded", because that's just silly.


I had someone send me one that contradicts your opinion:

http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt


Since I know people that offer 100/100 to university dorms, and are having
problems with GE and even 10 GE depending on the size of the dorms, if you
did a Google search you would find the problem.


Please provide links. I tried googling, for instance, for "capacity problem 
p2p 10ge" and didn't find anything useful.



1. You are assuming traffic mixes don't change.
2. You are assuming traffic mixes on every network are the same.


I'm using real world data from Swedish ISPs, each with tens of thousands 
of residential users, including the university ones. I tend to think we 
have one of the highest per-capita Internet usages in the world, unless 
someone can give me data that says otherwise.



If you restrict demand, statistical multiplexing works.  The problem is
how do you restrict demand?


By giving people 10/10 instead, if your network can't handle 100/100. Or 
you create a management system that checks port usage and limits the heavy 
users to 10/10, or you use microflow policing to limit uploads to 10, 
especially at times of congestion.


There are numerous ways of doing it that don't involve sending RSTs to 
customer TCP sessions or other ways of spoofing traffic.


What happens when 10 x 100/100 users drive demand on your GigE ring to 99%? 
What happens when P2P becomes popular and 30% of your subscribers
use P2P?  What happens when 80% of your subscribers use P2P?  What happens
when 100% of your subscribers use P2P?


If 100% of the userbase use p2p, then traffic patterns will change and 
more content will be local.



TCP "friendly" flows voluntarily restrict demand by backing off when they
detect congestion.  The problem is TCP assumes single flows, not grouped 
flows used by some applications.


TCP assumes all flows are created equal, and doesn't take into account 
that a single user can use hundreds of flows, that's correct.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-28 Thread Mikael Abrahamsson


On Sat, 27 Oct 2007, Sean Donelan wrote:

Why artificially keep access link speeds low just to prevent upstream 
network congestion?  Why can't you have big access links?


You're the one that says that statistical overbooking doesn't work, not 
anyone else.


Since I know people that offer 100/100 to residential users, upstream this 
with GE/10GE in their networks and are happy with it, I don't 
agree with your description of the problem.


For statistical overbooking to work, a good rule of thumb is that the 
upstream can never be more than half full normally, and each customer 
cannot have more access speed than 1/10 of the speed of the upstream 
capacity.


So for example, you can have a large number of people with 100/100 
uplinked with gig as long as that gig ring doesn't carry more than approx 
500 meg peak 5 minute average and it'll work just fine.
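
Written out as code, the rule of thumb above looks like this (just a sketch, 
using the numbers from this example):

# Rule of thumb: uplink normally at most half full, and no access port
# faster than 1/10 of the uplink.

def overbooking_ok(access_bps, uplink_bps, peak_util):
    return access_bps <= uplink_bps / 10 and peak_util <= 0.5

print(overbooking_ok(100e6, 1e9, 0.5))  # True: 100/100 on a GE ring peaking ~500 Mbit/s
print(overbooking_ok(100e6, 1e9, 0.8))  # False: same ring peaking at 80%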


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Mikael Abrahamsson


On Fri, 26 Oct 2007, Sean Donelan wrote:

If Comcast had used Sandvine's other capabilities to inspect and drop 
particular packets, would that have been more acceptable?


Yes, definitely.


Dropping random packets (i.e. FIFO queue, RED, not good on multiple-flows)
Dropping particular packets (i.e. AQM, WRED, etc, difficult for multiple 
flows)
Dropping DSCP marked packets first (i.e. scavenger class requires voluntary 
marking)

Dropping particular protocols (i.e. ACLs, difficult for dynamic protocols)


Dropping a limited ratio of the packets is acceptable at least to me.

Sending a TCP RST (i.e. most application protocols respond, easy for 
out-of-band devices)


... but terminating the connection is not. Spoofing packets is not 
something an ISP should do. Ever. Dropping and/or delaying packets, yes, 
spoofing, no.


Changing IP headers (i.e. ECN bits, not implemented widely, requires inline 
device)

Changing TCP headers (i.e. decrease windowsize, requires inline device)
Changing access speed (i.e. dropping user down to 64Kbps, crushes every 
application)
Charging for overuse (i.e. more than X Gbps data transferred per time period, 
complaints about extra charges)
Terminate customers using too much capacity (i.e. move the problem to a 
different provider)


These are all acceptable, though I think adjusting the MSS is bordering on 
intrusion into customer traffic. An ISP should be in the business of 
forwarding packets, not changing them.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-25 Thread Mikael Abrahamsson


On Fri, 26 Oct 2007, Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of the 
users; how can network operators manage the network so every user 
receives a fair share of the network capacity?


By making sure that those 5% of users' upstream capacity doesn't cause the 
distribution and core to be full. If the 5% cause 90% of the traffic and 
at peak the core is 98% full, the 95% of the users that cause 10% of the 
traffic couldn't tell the difference from a core/distribution that was only 
10% used.


If your access media doesn't support what's needed (it might be a shared 
media like cable) then your original bad engineering decision of choosing 
a shared media without fairness implemented from the beginning is 
something you have to live with, and you have to keep making bad decisions 
and implementations to patch what's already broken to begin with.


You can't rely on end user applications to play fair when it comes to 
ISP network being full, and if they don't play fair and it's filling up 
the end user access, then it's that single end user that gets affected by 
it, not their neighbors.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


RE: BitTorrent swarms have a deadly bite on broadband nets

2007-10-25 Thread Mikael Abrahamsson


On Thu, 25 Oct 2007, Geo. wrote:

Seems to me a programmer setting a default schedule in an application is 
far simpler than many of the other suggestions I've seen for solving 
this problem.


End users do not have any interest in saving ISP upstream bandwidth, their 
interest is to get as much as they can, when they want/need it. So solving 
a bandwidth crunch by trying to make end user applications behave in an 
ISP friendly manner is a concept that doesn't play well with reality.


Congestion should be at the individual customer access, not in the 
distribution, not at the core.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: The next broadband killer: advanced operating systems?

2007-10-23 Thread Mikael Abrahamsson


On Tue, 23 Oct 2007, Sam Stickland wrote:

servers. From this little bit of evidence I can brazenly extrapolate to 
suggest that maximum bandwidth consumption is currently limited to some 
noticeable degree by the lack of widely deployed TCP window size tuning. Links 
that are currently uncongested might suddenly see a sizable amount of extra 
traffic.


So, do we think that traffic will have a higher peak due to this (more 
traffic at peak time compared to off-peak), or that people will actually 
transfer more data because they get higher throughput?


I don't see it as natural that people will transfer more data in total 
just because they get higher throughput.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Mikael Abrahamsson


On Tue, 23 Oct 2007, Sean Donelan wrote:

Ok, maybe the greedy commercial folks screwed up and deserve what they 
got; but why are the noble non-profit universities having the same 
problems?


Because if you look at a residential population with ADSL2+ and 10/10 or 
100/100 respectively, the upload/download ratios are reversed: from 1:2 
with ADSL2+ (double the amount of download compared to upload) to 2:1 
(double the amount of upload compared to download). In my experience, the 
amount of download is approximately the same in both cases, which means 
the upload volume changes by a factor of four with the symmetry of the 
access media.
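
A worked example of that ratio flip, with an arbitrary assumed monthly 
download volume (the absolute number doesn't matter, only the ratios):

download_gb = 20                    # assumed monthly download, same in both cases

adsl2_upload = download_gb / 2      # 1:2 upload:download on ADSL2+
symmetric_upload = download_gb * 2  # 2:1 upload:download on 10/10 or 100/100

print(f"ADSL2+ upload:    {adsl2_upload:.0f} GB")
print(f"symmetric upload: {symmetric_upload:.0f} GB")
print(f"upload grows by a factor of {symmetric_upload / adsl2_upload:.0f}")  # 4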


Otoh, long-term savings (over several years) on operational costs still 
make residential ethernet a better deal, since experience is that "it just 
works", as opposed to ADSL2+ where you have a very disturbed signal 
environment in which customers impact each other, which leads to a lot 
of customer calls regarding poor quality and varying speeds/bit errors 
over time.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Mikael Abrahamsson


On Mon, 22 Oct 2007, Sam Stickland wrote:

Does anyone know if there are any plans by Microsoft to push this out as 
a Windows XP update as well?


You can achieve the same thing by running a utility such as TCP Optimizer.

http://www.speedguide.net/downloads.php

Turn on window scaling and increase the TCP window size to 1 meg or so, 
and you should be good to go.
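
The reason the window size matters is simply that a single TCP connection 
can never move more than about one window per round trip; a rough sketch:

# Throughput ceiling of one TCP connection: ~ window / RTT.

def ceiling_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

for window_kb in (64, 1024):                 # old default-ish 64 KB vs ~1 MB
    for rtt_ms in (10, 50, 100):
        mbps = ceiling_mbps(window_kb * 1024, rtt_ms / 1000)
        print(f"window {window_kb:4d} KB, rtt {rtt_ms:3d} ms: ~{mbps:6.1f} Mbit/s max")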


The "only" thing this changes for ISPs is that all of a sudden increasing 
the latency by 30-50ms by buffering in a router that has a link that is 
full, won't help much, end user machines will be able to cope with that 
and still use the bw. So if you want to make the gamers happy you might 
want to look into that WRED drop profile one more time with this in mind 
if you're in the habit of congesting your core regularily.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Eric Spaeth wrote:

They have.   Enter DOCSIS 3.0.   The problem is that the benefits of DOCSIS 
3.0 will only come after they've allocated more frequency space, upgraded 
their CMTS hardware, upgraded their HFC node hardware where necessary, and 
replaced subscriber modems with DOCSIS 3.0 capable versions.   On an 
optimistic timeline that's at least 18-24 months before things are going to 
be better; the problem is things are broken _today_.


Could someone who knows DOCSIS 3.0 (perhaps these are general 
DOCSIS questions) enlighten me (and others?) by responding to a few things 
I have been thinking about.


Let's say a cable provider is worried about aggregate upstream capacity for 
each HFC node that might have a few hundred users. Do the modems support 
schemes such as "everybody is guaranteed 128 kilobit/s; if there is 
anything to spare, people can use it, but it's marked differently in IP 
precedence and treated accordingly towards the HFC node", and then carrying 
that marking into the IP aggregation layer, where packets could also be 
treated differently depending on IP precedence?


This is in my mind a much better scheme (guarantee subscribers a certain 
percentage of their total upstream capacity, mark their packets 
differently if they burst above this), as it is general and not protocol 
specific. It could of course also differentiate on packet sizes and a lot 
of other factors. The bad part is that it gives the user an incentive to 
"hack" their CPE to allow them to send high-priority traffic at higher 
speed, thus hurting their neighbors.
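
As a sketch of what I mean by that marking scheme (a plain token bucket per 
subscriber; the rate, burst size and the two precedence values are made-up 
examples, not anything DOCSIS-specific):

# Mark packets within the guaranteed rate with a higher IP precedence and
# remark everything above it so it can be dropped first under congestion.

GUARANTEED_BPS = 128_000
BURST_BYTES = 4_000

class Marker:
    def __init__(self):
        self.tokens = BURST_BYTES
        self.last = 0.0

    def mark(self, now, packet_bytes):
        # Refill at the guaranteed rate, capped at the burst size.
        self.tokens = min(BURST_BYTES,
                          self.tokens + (now - self.last) * GUARANTEED_BPS / 8)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return 4    # in-contract: higher precedence (example value)
        return 0        # out-of-contract: best effort, dropped first

m = Marker()
# 1500-byte packets every 10 ms is ~1.2 Mbit/s; after the initial burst only
# about one packet in ten stays in-contract at 128 kbit/s.
print([m.mark(t * 0.01, 1500) for t in range(15)])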


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Sean Donelan wrote:

So your recommendation is that universities, enterprises and ISPs simply 
stop offering all Internet service because a few particular application 
protocols are badly behaved?


They should stop offering flat-rate ones anyway. Or do general per-user 
rate limiting that is protocol/application agnostic.


There are many ways to solve the problem generally instead of per 
application, ways that will also work 10 years from now when the next 
couple of killer apps have come and gone again.


A better idea might be for the application protocol designers to improve 
those particular applications.


Good luck with that.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Mikael Abrahamsson


On Sun, 21 Oct 2007, Sean Donelan wrote:

Sandvine, packeteer, etc boxes aren't cheap either.  The problem is 
giving P2P more resources just means P2P consumes more resources, it 
doesn't solve the problem of sharing those resources with other users. 
Only if P2P shared network resources with other applications well does 
increasing network resources make more sense.


If your network cannot handle the traffic, don't offer the services.

It all boils down to the fact that the only things end users really 
have to give us as ISPs are their source address (which we usually assign 
to them) and the destination address of the packet they want transported; 
we can also implicitly look at the size of the packet and get that 
information. That's the ONLY thing they have to give us. Forget looking at 
L4 or the like; that will be encrypted as soon as ISPs start to discriminate 
on it. Users have enough computing power available to encrypt everything.


So any device that looks inside packets to decide what to do with them is 
going to fail in the long run and is thus a stop-gap measure before you 
can figure out anything better.


The next step for these devices is to start doing statistical analysis of 
traffic to find patterns, such as "you're sending traffic to hundreds of 
different IPs simultaneously, you must be filesharing" or the like. A lot 
of the box manufacturers are already looking into this. So, trench warfare 
again; I can see countermeasures to this as well.


The long term solution is of course to make sure that you can handle the 
traffic that the customer wants to send (because that's what they can 
control), perhaps by charging for it with some scheme that doesn't involve 
a flat fee.


Saying "p2p doesn't play nice with the rest of the network" and blaming 
p2p, only means you're congesting due to insufficient resources, and the 
fact that p2p uses a lot of simultaneous TCP sessions and individually 
they're playing nice, but together they're not when compared to web 
surfing.


The solution is not to try to change p2p, the solution is to fix 
the network or the business model so your network is not congesting.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Why do some ISP's have bandwidth quotas?

2007-10-13 Thread Mikael Abrahamsson


On Fri, 12 Oct 2007, Brandon Galbraith wrote:

Not to drag this too far off topic, but have serious studies been done 
looking at moving switching fabric closer to the DSLAMs (versus doing 
everything PPPoE)? I know this sort of goes opposite of how ILECs are 
setup to dish out DSL, but as more traffic is being pushed user to user, 
it may make economic/technical sense.


I know of some non-ILECs that do DSL bitstream via L3/MPLS IP VPN and IP 
DSLAMs, which, if they then implement multicast in their VPN, would be able 
to provide a service that could support multicast TV.


For me, any tunnel based bitstream doesn't scale for the future, and in 
competitive markets it's already been going away (mostly because ISPs 
buying the bitstream service can't compete anyway).


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Why do some ISP's have bandwidth quotas?

2007-10-10 Thread Mikael Abrahamsson


On Wed, 10 Oct 2007, Marshall Eubanks wrote:


Many people leave the TV on all the time, at least while they are home.

On the Internet broadcasting side, we (AmericaFree.TV) have some viewers 
that do the same - one has racked up a cumulative 109 _days_ of viewing 
so far this year. (109 days in 280 days duration works out to 9.3 hours 
per day.) I am sure that other video providers can provide similar 
reports. So, I don't think that things are that different here in the 
new regime.


If it's multicast TV I don't see the problem, it doesn't increase your 
backbone traffic linearly with the number of people doing it.


But this is of course a problem in a VOD environment, but on the other 
hand, people are probably less likely to just leave it on if it's actually 
programming they can have when they want. You don't need a TiVo when you 
have network based service that does the TiVo functionality for you.


Personally, I'd rather pay per hour I'm watching VOD, than paying nothing 
for channels filled with commercials where I have no control over when and 
what I could watch.


--
Mikael Abrahamsson    email: [EMAIL PROTECTED]


Re: Why do some ISP's have bandwidth quotas?

2007-10-10 Thread Mikael Abrahamsson


On Wed, 10 Oct 2007, Joe Greco wrote:


One of the biggest challenges for the Internet has got to be the steadily
increasing storage market, combined with the continued development of
small, portable processors for every application, meaning that there's
been an explosion of computing devices.


The one thing that scares me the most is that I have discovered people 
around me that use their bittorrent clients with rss feeds from bittorrent 
sites to download "everything" (basically, or at least a category) and 
then just delete what they don't want. Because they're paying for flat 
rate there is little incentive in trying to save on bandwidth.


If this spreads, be afraid, be very afraid. I can't think of anything more 
bandwidth intensive than video, no software updates downloads in the world 
can compete with people automatically downloading DVDRs or xvids of tv 
shows and movies, and then throwing it away because they were too lazy to 
set up proper filtering in the first place.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Why do some ISP's have bandwidth quotas?

2007-10-07 Thread Mikael Abrahamsson


On Mon, 8 Oct 2007, Mark Newton wrote:

Thought experiment:  With $250 per megabit per month transit and $30 - 
$50 per month tail costs, what would _you_ do to create the perfect 
internet industry?


I would fix the problem, i.e. get more competition into these two areas, where 
the prices are obviously way higher than in most parts of the civilised world, 
much higher than is justified by the location in the middle of an ocean.


Perhaps it's hard to get the transoceanic cost down to European levels, 
but a 25-times difference is just too much.


And the local tail is also 5-10 times more expensive than normal in the 
western world; I don't see that being justified by any fundamental 
difference.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Mikael Abrahamsson


On Sun, 7 Oct 2007, Mark Newton wrote:


We're living in an environment where European service providers use
DPI boxes to shape just about everyone to about 40 Gbytes per month,


This doesn't fit with my picture of european broadband at all. Most 
markets are developing into flat rate ones without per-minute or 
per-traffic charges, and residential broadband is closing in on 50-70% 
market penetration all across the continent, with the northern part being 
a bit ahead of the southern part.


Competition is so fierce that a lot of ISPs are electing to get out of 
certain markets due to uncertainty about future profits, even with quite slim 
organisations and tight technology budgets without ATM etc (IP DSLAMs). 
In France, for instance, it's hard to make any money unless you sell triple 
play and try to make your total profit on the combined services; just selling 
one doesn't work.


It's not uncommon for low-bandwidth (.25-.5 megabit/s) residential access 
to be in the USD15/month range and 24 meg costing USD30-50 per month 
including tax. This is without any monthly quota at all, ie flatrate.


5-10% of Swedish households can purchase 100/10 over CAT5 for USD50 a month 
including 25% sales tax, without any quota, and they can actually use those 
speeds. Some even have 100/100.


The recipe for this is a competitive market with copper deregulated and 
resold at a decent price. Bitstream with the incumbent providing access just 
doesn't work; new services such as multicast IPTV don't work over bitstream.


In a lot of continental Europe, ISPs can purchase wholesale internet in the 
gigabit range for USD6-15/meg/month, depending on the country and whether it 
includes national traffic or not.


Having a competitive market with a lot of players makes all the 
difference.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Congestion control train-wreck workshop at Stanford: Call for Demos

2007-09-05 Thread Mikael Abrahamsson


On Wed, 5 Sep 2007, Fred Baker wrote:

capacity. My ISP in front of my home does that; they configure my cable modem 
to shape my traffic up and down to not exceed certain rates, and lo and


Well, in case you're being DDoS:ed at 1 gigabit/s, you'll use more 
resources in the backbone than most, by some definition of "you".


So my take is that this is impossible to solve in the core because routers 
can't keep track of individual conversations and act on them, doing so 
would increase cost and complexity enormously.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


RE: "2M today, 10M with no change in technology"? An informal survey.

2007-08-28 Thread Mikael Abrahamsson


On Wed, 29 Aug 2007, Lincoln Dale wrote:


reason i ask is that since circa. 12.2(18)SXF9 (i.e. back in 2005), there has


One of the problems with this is that the people who tend not to know their 
hardware limitations are the same people who will be running SXD, because 
they haven't put CFs into their SUPs to handle the larger image sizes of SXE 
and later.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: "2M today, 10M with no change in technology"? An informal survey.

2007-08-27 Thread Mikael Abrahamsson


On Mon, 27 Aug 2007, Deepak Jain wrote:

Where do the FIBs break on older 12000 series and M-series routers? (or 
pick the *next* most popular piece of network equipment that is used in 
full-routes scenarios).


On the 12000, I'd give the following observations on the state of the 
older linecards for DFZ routing:


GRP that can't handle 512 meg memory has been useless for quite some time.
GRP-B with 512 megs of ram seems ok for at least 6-12 more months.
PRP needs 1 gig of ram.
All LCs need at least 256 megs of route memory.
4GE engine3 LC needs 512 megs of route memory.
10x1GE Engine 4 LC needs 512 megs of route memory.
Engine2 LCs are starting to run out of forwarding resources; Cisco states 
200k routes, and obviously they still work beyond that, but for how long?


OTOH, the SIP-601 comes with 2 gigs of route memory, which is really nice. 
The 12000 with recent hardware will most likely last quite some time, but 
hardware designed in the late '90s is (not surprisingly) running out of 
steam.


So if you have old hardware, you need to monitor your memory and table 
utilization on a monthly basis.
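
A back-of-the-envelope way to turn that monthly monitoring into a projection 
(a hedged Python sketch; the bytes-per-prefix figure and the growth rate are 
illustrative assumptions, not vendor numbers):

    def months_of_headroom(route_mem_bytes, bytes_per_prefix,
                           current_prefixes, prefixes_added_per_month):
        """Rough estimate of how long route memory lasts at the current growth rate."""
        capacity = route_mem_bytes // bytes_per_prefix
        return (capacity - current_prefixes) / prefixes_added_per_month

    # e.g. 256 MB of route memory, an assumed ~1 KB of structures per prefix,
    # 230k prefixes today, growing ~3k/month -> roughly 10 months in this example
    print(months_of_headroom(256 * 2**20, 1024, 230_000, 3_000))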


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-21 Thread Mikael Abrahamsson


On Tue, 21 Aug 2007, Alexander Harrowell wrote:


This is what I eventually upshot..

http://www.telco2.net/blog/2007/08/variable_speed_limits_for_the.html


You wrote in your blog:

"The problem is that if there is a major problem, very large numbers of 
users applications will all try to resend; generating a packet storm and 
creating even more congestion."


Do you have any data/facts to back up this statement? I'd be very 
interested to hear them, as I have heard this statement a few times before 
but it's a contradiction to the way I understand things to work.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-19 Thread Mikael Abrahamsson


On Sun, 19 Aug 2007, Perry Lorier wrote:

Many networking stacks have a "TCP_INFO" ioctl that can be used to query for 
more accurate statistics on how the TCP connection is fairing (number of 
retransmits, TCP's current estimate of the RTT (and jitter), etc).  I've 
always pondered if bittorrent clients made use of this to better choose which 
connections to prefer and which ones to avoid.  I'm unfortunately unsure if 
windows has anything similar.


Well, by design bittorrent will try to get everything as fast as possible 
from all peers, so any TCP session giving good performance (often low 
packet loss and low latency) will end up transmitting a lot of the data in 
the torrent. In that sense bittorrent is already kind of localised: it will 
utilise fast peers more than slow ones, and those are normally closer to 
you.


One problem with having clients only getting told about clients that are near 
to them is that the network starts forming "cliques".  Each clique works as a 
separate network and you can end up with silly things like one clique being 
full of seeders, and another clique not even having any seeders at all. 
Obviously this means that a tracker has to send a handful of addresses of 
clients outside the "clique" network that the current client belongs to.


The idea we pitched was that of the 50 addresses the tracker returns to the 
client, 25 (if possible) should be from the same ASN as the client itself, 
or a nearby ASN (by some definition). If there are a lot of peers (more than 
50) the tracker returns a random set of clients; we wanted 25 of those to be 
chosen by network proximity rather than at random.


You want to make hosts talk to people that are close to you, you want to make 
sure that hosts don't form cliques, and you want something that a tracker can 
very quickly figure out from information that is easily available to people 
who run trackers.  My thought here was to sort all the IP addresses, and send 
the next 'n' IP addresses after the client IP as well as some random ones. 
If we assume that IP's are generally allocated in contiguous groups then this 
means that clients should be generally at least told about people nearby, and 
hopefully that these hosts aren't too far apart (at least likely to be within 
a LIR or RIR).  This should be able to be done in O(log n) which should be 
fairly efficient.


Yeah, we discussed that the list of IPs should already be kept sorted (via 
insertion sort) in the tracker's data structures, so what you're describing 
is one way of defining proximity, and as you say, it would probably be 
quite efficient.
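
For what it's worth, the ASN-biased selection we pitched is simple to express; 
here is a minimal Python sketch (the peer-list format and the 25-of-50 split 
are just the numbers from the proposal above, and the ASN mapping is assumed 
to come from a BGP feed):

    import random

    def select_peers(peers, client_asn, total=50, local_share=25):
        """peers: list of (ip, asn) tuples the tracker knows about."""
        local  = [p for p in peers if p[1] == client_asn]
        remote = [p for p in peers if p[1] != client_asn]
        random.shuffle(local)
        random.shuffle(remote)
        chosen = local[:local_share]
        chosen += remote[:total - len(chosen)]   # fill up with non-local peers
        random.shuffle(chosen)                   # avoid clique-forming ordering
        return chosen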


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Mikael Abrahamsson


On Thu, 16 Aug 2007, Fred Baker wrote:

world, they're perfectly happen to move it around the world. Hence, moving a 
file into a campus doesn't mean that the campus has the file and will stop 
bothering you. I'm pushing an agenda in the open source world to add some 
concept of locality, with the purpose of moving traffic off ISP networks when 
I can. I think the user will be just as happy or happier, and folks pushing 
large optics will certainly be.


With the regular user's small TCP window size, you still get a sense of 
locality, as more data will flow in the same time from a source that is 
close to you RTT-wise than from one that is far away.


We've been pitching the idea to bittorrent tracker authors to include a 
BGP feed and prioritize peers that are in the same ASN as the user 
himself, but they're having performance problems already so they're not so 
keen on adding complexity. If it could be solved better at the client 
level that might help, but the end user who pays flat rate has little 
incentive to help the ISP in this case.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Mikael Abrahamsson


On Thu, 16 Aug 2007, Deepak Jain wrote:

Depends on your traffic type and I think this really depends on the 
granularity of your study set (when you are calculating 80-90% usage). If you 
upgrade early, or your (shallow) packet buffers convince to upgrade late, the 
effects might be different.


My guess is that the value comes from mrtg or alike, 5 minute average 
utilization.


If you do upgrades assuming the same amount of latency and packet loss on any 
circuit, you should see the same effect irrespective of buffer depth. (for 
any production equipment by a main vendor).


I do not agree. A shallow-buffer device will give you packet loss without 
any major latency increase, whereas a deep-buffer device will give you 
latency without packet loss (most users out there will not have a TCP window 
size sufficient to utilise 300+ ms of buffering-induced latency, so they 
will throttle back their usage of the link, and it can stay at 100% 
utilization without packet loss for quite some time).


Yes, these two cases will both enable link utilization to reach 100% on 
average, and in most cases users will actually complain less, as the packet 
loss will most likely be less noticeable to them in traceroute than the 
latency increase due to buffering.
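
The latency side of that trade-off is easy to put numbers on; a small Python 
sketch (the buffer sizes are illustrative, not any particular linecard's):

    def max_queueing_delay_ms(buffer_bytes, link_rate_bps):
        """Worst-case added latency when the output buffer is completely full."""
        return buffer_bytes * 8 * 1000.0 / link_rate_bps

    print(max_queueing_delay_ms(4_000_000, 1_000_000_000))     # shallow: ~32 ms on 1G
    print(max_queueing_delay_ms(625_000_000, 10_000_000_000))  # deep: ~500 ms on 10G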


Anyhow, I still consider a congested backbone an operational failure as 
one is failing to provide adequate service to the customers. Congestion 
should happen on the access line to the customer, nowhere else.


Deeper buffers allow you to run closer to 100% (longer) with fewer packet 
drops at the cost of higher latency. The assumption being that more congested 
devices with smaller buffers are dropping some packets here and there and 
causing those sessions to back off in a way the deeper buffer systems don't.


Correct.

Its a business case whether its better to upgrade early or buy gear that lets 
you upgrade later.


It depends on your bandwidth cost: if your link is very expensive, it might 
make sense to spend manpower opex and equipment capex to prolong the life of 
that link by squeezing everything you can out of it. In the long run there 
is of course no way to avoid the upgrade, as users will notice it anyhow.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


RE: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Mikael Abrahamsson


On Thu, 16 Aug 2007, [EMAIL PROTECTED] wrote:

How many people have noticed that when you replace a circuit with a 
higher capacity one, the traffic on the new circuit is suddenly greater 
than 100% of the old one. Obviously this doesn't happen all the time, 
such as when you have a 40% threshold for initiating a circuit upgrade, 
but if you do your upgrades when they are 80% or 90% full, this does 
happen.


I'd say this might happen on links connected to devices with small buffers, 
such as a 7600 with LAN cards, a Foundry device or the like. If you look at 
a deep packet buffer device such as a Juniper or Cisco GSR/CRS-1, the 
behaviour you're describing doesn't exist (at least not that I have 
noticed).


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Cisco CRS-1 vs Juniper 1600 vs Huawei NE5000E

2007-08-03 Thread Mikael Abrahamsson

On Fri, 3 Aug 2007, ALEJANDRO ESQUIVEL RODRIGUEZ wrote:


  Which equipment is better ( perfomance, availability, 
scalability, features, Support, and Price ($$$) ) ???


There is no single answer to your question. Looking at what the platforms 
offer NOW (if you want future you have to talk to the vendors), some key 
points:


CRS-1 scales to at least 4-8 linecard chassis with current software.
The Juniper T1600 doesn't have a multichassis solution.
The NE5000E is available in a two-linecard-chassis solution.

CRS-1 was designed from the beginning as a 64 (or 72, I don't remember) 
linecard chassis solution; Juniper and Huawei are working on their 
scalability.


If you need a lot of multicast you need to look into how the platforms do 
this, none of them will do wirespeed multicast on all packet sizes and 
they all have different ways of handling it internally. If you have less 
than 10% of your packets that are multicast, this is less of a worry.


Since Huawei is the challenger here, it's most likely they'll give you the 
most aggressive price.


If you need NetFlow, it might be good to know that the CRS-1 does it without 
needing anything additional; both the T1600 and the NE5000E need feature 
acceleration cards to do NetFlow properly, and the NE5000E will only do 
NetFlow in the ingress direction on a linecard, whereas the CRS-1 and T1600 
will do it bidirectionally.


When it comes to operational issues, my personal opinion:

If you know Juniper, the OS is of course identical on the T1600.
If you know IOS, IOS XR is fairly easy to learn.
The Huawei OS looks structurally like IOS configuration-wise, but with the 
commands deliberately changed ("show" is "display", etc).


There are a lot more things to say but a lot of it might be under NDA, so 
you need to talk to the vendors directly to get more details.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]

Re: Port 587 vs. 25 [was: DNS Hijacking by Cox]

2007-07-23 Thread Mikael Abrahamsson


On Mon, 23 Jul 2007, Jeroen Wunnink wrote:

It's a lot more trouble for hosting providers that provide customers with 
webhosting and E-mail services.


Why? What stops you from migrating them to TCP/587? I'd imagine direct 
TCP/25 access to your servers would be spotty at best anyway. Where I'm 
at, there are more ISPs blocking TCP/25 to anything but their own email 
servers than ISPs that do not block.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Port 587 vs. 25 [was: DNS Hijacking by Cox]

2007-07-23 Thread Mikael Abrahamsson


On Mon, 23 Jul 2007, Patrick W. Gilmore wrote:

They can, but they do not.  AFAIK, not a single ISP redirects port 587 to 
their own servers.


I work for an ISP that has totally blocked TCP/25 for all use. We require 
all users to use 587 (with authentication when connecting to our own mail 
system). We have substantially over 1M broadband users in 10-15 european 
countries (I don't know the exact number).


This took planning, lots of information and HOWTOs for users, and some 
helpdesk backing to get into place, but it's done, and it works. It was 
less painful than we dreaded.


Unfortunately we don't have internet operations in a native English-speaking 
country, so this will be in whatever language it autodetects for you.


http://www.tele2mail.com/

As you can see, there are configuration manuals for all major email 
clients, for instance outlook:


http://www.tele2mail.com/manual/outlook/

So my recommendation is for other ISPs to do the same thing. Yes, I know 
IP providers should only move IP packets and don't care about the 
contents, but... well... you know.
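
For anyone wanting to check what their own upstream actually permits before 
such a migration, a trivial Python probe (the host name is a placeholder; 
substitute your own submission server):

    import socket

    def can_connect(host, port, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # On a network that blocks direct-to-MX SMTP you would expect False/True here.
    print("tcp/25 :", can_connect("mail.example.net", 25))
    print("tcp/587:", can_connect("mail.example.net", 587))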


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


IEEE HSSG (was Re: peter lothberg's mother slashdotted)

2007-07-12 Thread Mikael Abrahamsson


On Fri, 13 Jul 2007, Randy Bush wrote:

sell us a merlin-like hack to deal with it until the ieee gets off its 
butt.


I hope everybody tells their vendors that 100GE is of high importance to 
them. The IEEE HSSG seems to be bogged down in 40GE vs 100GE right now, with 
some, mostly the fibre channel and server folks, arguing very strongly that 
40GE should be included alongside the 100GE goal.


If you want 100GE by 2009-2010, please read up at 
<http://www.ieee802.org/3/hssg/public/index.html> and voice your opinion 
either on the email list or via your vendors.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: peter lothberg's mother slashdotted

2007-07-12 Thread Mikael Abrahamsson


On Thu, 12 Jul 2007, Steve Gibbard wrote:

So, Peter, are you reading this?  I'm curious what the real story on the 
economics here is.  How affordable is affordable?  What should we be 
learning from this that would let those of us backwards people with DSL 
connections cheaply move into the modern world?


It's a demonstration of backbone technology, not really access technology.

It just makes its appeal better in the media if you put it in your mothers 
house.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: peter lothberg's mother slashdotted

2007-07-12 Thread Mikael Abrahamsson


On Thu, 12 Jul 2007, Patrick W. Gilmore wrote:


His bathroom, IIRC.  And I heard a rumor about his g/f's flat.


I believe that was a GSR. He has his picture albums linked from 
<http://www.stupi.se> you might be able to find something there. I know 
I've seen pictures from the flat.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: peter lothberg's mother slashdotted

2007-07-12 Thread Mikael Abrahamsson


On Thu, 12 Jul 2007, Stephen Wilcox wrote:

They cite this as worlds fastest home broadband but didnt Peter install 
an OC-768 to his basement a few years ago when he was testing some stuff 
for Sprint?


This story has spread a lot, I'm actually quite surprised, but Peter is 
good at PR.


The point he tried to make was that fiber can carry high speed 
communications as opposed to other people who seem to think radio is the 
future. The equipment used (the CRS-1 OC768 DWDMPOS linecard) has been 
shipping for over a year so that's not new. Sprint did test with beta 
versions of this LC in 2004:


<http://www.lightreading.com/document.asp?doc_id=53816>

What I like about it is that it uses single 50GHz wave that can traverse 
existing DWDM systems designed for 10G waves but that can now carry 40G.


So basically there is little new in this test, but from the amount of 
publicity it has received I gather that the technology is fairly unknown 
even to professionals in the business, so it might have been a good thing 
after all, as it made more people aware of what's possible.


I sat at the Stockholm end when he brought up the wave, and it was 
surprisingly easy to get it all to work. If this had been real production 
traffic I would have liked to do more verification and testing, but it 
shows that IP people can bring up new waves in a DWDM system using routers 
as end nodes; one doesn't need a huge transmission staff to do it, nor does 
one need SR-DWDM transponders.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Number of BGP routes a large ISP sees in total

2007-04-18 Thread Mikael Abrahamsson


On Tue, 17 Apr 2007, Yi Wang wrote:


sense about the average (e.g., about 5? 10? 20?), as for a "large" ISP.


Well, if you're interconnecting with other large ISPs in 5 places then 
you'll get each prefix at least 5 times. Having 5 eBGP sessions between 
two ASes is quite common if both are large ISPs. So yes, I'd say that 
between 5-10 is quite common.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


RE: Thoughts on increasing MTUs on the internet

2007-04-13 Thread Mikael Abrahamsson


On Fri, 13 Apr 2007, Leigh Porter wrote:

What would be good is if when a jumbogram capable path on the Internet 
exists, jumbograms can be used.


Yes, and it would be good if PMTUD worked, and ECN, oh and large 
UDP-packets for DNS, and BCP38, and... and... and.


The internet is a very diverse and complicated beast, and if end systems 
can properly detect the PMTU themselves, it might work. Requiring the core 
and distribution to change isn't going to happen overnight, so end systems 
first. Make sure they can properly detect the PMTU using nothing more than 
"is this packet size getting thru" (i.e. no ICMP need-to-frag required), and 
then we might see partial adoption of larger MTUs in some parts; if this 
becomes a major customer requirement then it might spread.
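
A minimal sketch of that kind of probing, using scapy and ICMP echoes with 
the DF bit set (needs root; it only works against a destination that answers 
large pings, so treat it as an illustration of the idea rather than a robust 
PLPMTUD implementation):

    from scapy.all import IP, ICMP, Raw, sr1

    def probe_pmtu(dst, payload_sizes=(8972, 4444, 1472, 1452, 1372)):
        """Return the largest probed packet size that gets through with DF set."""
        for payload in payload_sizes:
            pkt = IP(dst=dst, flags="DF") / ICMP() / Raw(load=b"x" * payload)
            reply = sr1(pkt, timeout=2, verbose=0)
            if reply is not None and reply.haslayer(ICMP) and reply[ICMP].type == 0:
                return payload + 28        # + 8 bytes ICMP + 20 bytes IP header
        return None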


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Thoughts on increasing MTUs on the internet

2007-04-12 Thread Mikael Abrahamsson


On Thu, 12 Apr 2007, Joe Loiacono wrote:


Window size is of course critical, but it turns out that MTU also impacts
rates (as much as 33%, see below):

Rate = (MSS / RTT) * (0.7 / sqrt(P))

MSS = Maximum Segment Size
RTT = Round Trip Time
P   = packet loss


So am I to understand that with 0 packetloss I get infinite rate? And TCP 
window size doesn't affect the rate?


I am quite confused by this statement. Yes, under congestion larger MSS is 
better, but without congestion I don't see where it would differ apart 
from the interrupt load I mentioned earlier?
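
Plugging numbers into the quoted formula shows both points: with loss it 
predicts a concrete ceiling, and as loss goes to zero the predicted rate 
grows without bound, which is exactly where window size takes over as the 
real limit (a small Python sketch, using the constant from the formula 
above; the example MSS/RTT/loss values are illustrative):

    from math import sqrt

    def tcp_rate_bps(mss_bytes, rtt_s, loss):
        # Rate ~= (MSS / RTT) * (0.7 / sqrt(P)), per the formula quoted above
        return (mss_bytes * 8 / rtt_s) * (0.7 / sqrt(loss))

    # 1460-byte MSS, 100 ms RTT, 0.1% loss -> roughly 2.6 Mbit/s
    print(tcp_rate_bps(1460, 0.100, 0.001))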


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Thoughts on increasing MTUs on the internet

2007-04-12 Thread Mikael Abrahamsson


On Thu, 12 Apr 2007, Joe Loiacono wrote:


Large MTUs enable significant throughput performance enhancements for
large data transfers over long round-trip times (RTTs.) The original


This is solved by increasing TCP window size, it doesn't depend very much 
on MTU.


A larger MTU is better for devices that do per-packet interrupts, like most 
end systems probably do. It doesn't increase long-RTT transfer performance 
per se (unless you have high packet loss, because you'll slow-start more 
efficiently).
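
To put a number on the window-size point, the bandwidth-delay product is the 
calculation that matters (a small sketch; the 45 Mbit/s / 70 ms figures are 
just an illustrative long-haul example):

    def required_window_bytes(bandwidth_bps, rtt_s):
        """Bytes that must be in flight to keep a path of this size full."""
        return bandwidth_bps * rtt_s / 8

    # a DS3 across a ~70 ms RTT needs ~394 KB of window,
    # far beyond a 64 KB default without window scaling
    print(required_window_bytes(45_000_000, 0.070))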


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Thoughts on increasing MTUs on the internet

2007-04-12 Thread Mikael Abrahamsson


On Thu, 12 Apr 2007, Saku Ytti wrote:


IXP peeps, why are you not offering high MTU VLAN option?


Netnod in Sweden offer MTU 4470 option.

OTOH it's not so easy operationally, since for instance Juniper and Cisco 
calculate MTU differently.


But I don't really see a benefit in trying to raise the end-system MTU above 
the standard ethernet MTU; if you think PMTUD is operationally troublesome 
now, imagine when everybody is running a different MTU.


The biggest benefit would be if the transport networks that people run PPPoE 
and other tunneled traffic over allowed for whatever MTU is needed to carry 
unfragmented 1500-byte tunneled packets, so we could ensure that all hosts 
on the internet actually get a 1500-byte IP MTU transparently.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


RE: Abuse procedures... Reality Checks

2007-04-11 Thread Mikael Abrahamsson


On Wed, 11 Apr 2007, Frank Bulk wrote:


It truly is a wonder that Comcast doesn't apply DOCSIS config file filters
on their consumer accounts, leaving just the IPs of their email servers
open.  Yes, it would take an education campaign on their part for all the
consumers that do use alternate SMTP servers, but imagine how much work it
would save their abuse department in the long run.


There are several large ISPs (millions of subscribers) that have done away 
with TCP/25 altogether. If you want to send email thru the ISP's own email 
system, you have to use TCP/587 (SMTP AUTH).


Yes, this takes commitment and resources, but it's been done 
successfully.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Abuse procedures... Reality Checks

2007-04-07 Thread Mikael Abrahamsson


On Sat, 7 Apr 2007, Chris Owen wrote:

And how do you know the difference?  The Cox IP address is SWIPed.  Its 
even sub-allocated.  The allocation is just a /19.


Exactly, so why not just block whatever the suballocation is? That would 
mean companies that properly SWIP their IP blocks and put in the effort to 
maintain them are given an advantage over companies that do not.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: On-going Internet Emergency and Domain Names (kill this thread)

2007-03-31 Thread Mikael Abrahamsson


On Sat, 31 Mar 2007, Jeff Shultz wrote:


Does that sound about right?


If ISPs cannot be forced into running a 24/7/365 response function, I 
don't see the registry/registrars doing it.


Solving this at the DNS level is just silly; if you want to solve it, you 
either get to the core (block IP access, perhaps by BGP blacklisting) or go 
to level 8, i.e. the human level, and get these infected machines off the 
net permanently.


So Gadi, to accomplish what you want, you need to convince ISPs all over the 
net that what you're trying to do is so important that all major ISPs should 
subscribe to a realtime BGP blackhole list published by some entity. You 
also need to argue that it's important enough to seriously compromise the 
distributed structure of the net, which is what has made it the raging 
success it is today. That structure isn't perfect, but it works, and it 
doesn't have a single point of failure.


... and people have very bad experiences from blacklists not being 
maintained properly.


--
Mikael Abrahamsson email: [EMAIL PROTECTED]



Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Mikael Abrahamsson


On Sat, 31 Mar 2007, Gadi Evron wrote:


In this case, we speak of a problem with DNS, not sendmail, and not bind.


The argument can be made that you're trying to solve a windows-problem by 
implementing blocking in DNS.


Next step would be to ask all access providers to block outgoing UDP/53 so 
people can't use open resolvers or machines set up to act as resolvers for 
certain DNS information that the botnets need, as per the same analysis 
that blocking TCP/25 stops spam.


So what you're trying to do is a pure stop-gap measure that won't scale in 
the long run. Fix the real problem instead of trying to bandaid the 
symptoms.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: TCP and WAN issue

2007-03-27 Thread Mikael Abrahamsson


On Tue, 27 Mar 2007, Philip Lavine wrote:

I have an east coast and west coast data center connected with a DS3. I 
am running into issues with streaming data via TCP and was wondering 
besides hardware acceleration, is there any options at increasing 
throughput and maximizing the bandwidth? How can I overcome the TCP 
stack limitations inherent in Windows (registry tweaks seem to not 
functions too well)?


You should talk to the vendor (Microsoft) and ask them how to tweak their 
product to work properly over the WAN.


Don't let them get away with a substandard product when it comes to WAN 
optimization. If you can get Microsoft to clean up their act, you'll have 
done ISPs a great service, because then we can stop trying to convince 
customers that it's not the ISP's fault that they get bad speeds with their 
Windows PCs.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Mikael Abrahamsson


On Tue, 13 Mar 2007, Joel Jaeggli wrote:


sell you 100/24 vdsl2 for around 80euro a month.


100/10 over CAT5 ethernet (and also 100/100) is available here in Sweden 
for around $35+tax in quite a lot of places. Weirdly enough, it's more 
commonly available in places where the real estate owner has a harder time 
renting out apartments, because it actually brings in people who wouldn't 
normally consider living there. Competitive advantage.


The real estate owner pays up front for the CAT5 cabling and then brings in 
one or more ISPs to provide IP connectivity and switches (well, a lot of 
different business models are available). The real estate owner invests a 
few hundred dollars and gets more apartments rented out; the ISP has to 
bring fiber into the building/area and can then reach a lot of people with 
high-speed connections that give high take rates.


Some ISPs prefer CAT5 because of lower maintenance and because VDSL(2) 
equipment is actually more expensive than CAT5 cabling plus ethernet 
switches in a lot of cases.


I think it's weird that cable (coax) is the premium service in the US, 
because here it's considered inferior to DSL, and it's the service you get 
when you don't care about performance and quality. Just the other month 
there was some kind of disruption on the cable system where I live, and 
when I called in to report it they first asked me to go check with my 
neighbors (beside me, and both upstairs and downstairs) before they would 
even take my fault report. Then they had to coordinate a time when both I 
and my upstairs neighbor could be home from work at the same time so the 
technician could try to find the fault. I ended up having basically no TV 
(almost unwatchable) or telephony (the cable modem wouldn't link up) for 10 
days. I'm glad I had my internet connectivity via other means. I'll take 
star topology any day of the week, thank you.


So to sum it all up, my take on the US problems is that there is too 
little competition in the market place. LLUB has brought a lot of 
competition into the marketplace here and to compete with the LLUB 
offerings, some other ISPs go directly with infrastructure to the curb or 
even directly into homes in some of the cases.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-06 Thread Mikael Abrahamsson


On Tue, 6 Mar 2007, [EMAIL PROTECTED] wrote:


On Tue, 06 Mar 2007 21:54:06 +0100, Mikael Abrahamsson said:

So instead I just drop their spoofed traffic and if they call and say that
their line is slow, I'll just say it's full and they can themselves track
down the offending machine and shut it off to solve the problem.


This doesn't sound very scalable.  You're almost certainly overcommitted on
the upstream side and likely looking at congestion if many customers are
spewing.


If they're spewing spoofed traffic I'm dropping it, so that's not a 
problem.



What do you tell the customer who calls and complains that *he* isn't a major
traffic source, but he's seeing dropped packets and delays on your upstream
link?  Do you tell him its full and they can track down which other customer
is the offender?


Do you usually design networks that can't handle customers using what they 
have paid for? I don't (for any reasonable amount of statistical 
oversubscription, of course).


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-06 Thread Mikael Abrahamsson


On Tue, 6 Mar 2007, Sean Donelan wrote:

Isn't this true of everything (bad source addresses, worms, abuse, etc). 
Does hiding/ignoring the problem just makes it worse because there is no 
incentive to fix the problem while it is still a small problem? If it 
isn't important enough to bother the customer, why bother to fix it?


Let's take a concrete example:

Customer gets hacked, one of their boxen starts spewing traffic with 
spoofed addresses. The way I understand your solution is to automatically 
shut their port and disrupt all their traffic, and have them call customer 
support to get any further.


Do you really think this is a good solution?

I don't see any customer with a choice continuing having a relationship 
with me if I treat them like that. It will cost me and them too much.


So instead I just drop their spoofed traffic and if they call and say that 
their line is slow, I'll just say it's full and they can themselves track 
down the offending machine and shut it off to solve the problem.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-06 Thread Mikael Abrahamsson


On Sun, 4 Mar 2007, Sean Donelan wrote:

When customers misconfigure their router, e.g. wrong BGP neighbor or 
ASN, wrong interface IP address, exceed max prefix limit, etc; don't 
they lose Internet connectivity until they fix it?


A properly configure router should never forward even a single bad 
packet. If it does, isn't it likely to have configuration problems so 
why continue to keep misconfigured routers connected?


Customers are unlikely to fix problems which don't cause them to lose 
service.


Even though the BOFH in me agrees with you, I also know that every cent on 
my paycheck comes from the customers, so I prefer not to treat them like 
crap. If I can protect the internet from my customers by doing uRPF or 
source IP based filtering, I achieve the same thing as you but with less 
customer impact.


Also, all the examples you give imply a BGP transit customer. I am 
imagining all kinds of customers, from colo customers where I am their 
default gateway to residential customers where it's the same way. 
Disabling their port and punting them to customer support is NOT a 
cost-efficient way of dealing with the problem, at least not in the market 
I am in.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Where are static bogon filters appropriate? was: 96.2.0.0/16 Bogons

2007-03-03 Thread Mikael Abrahamsson


On Sat, 3 Mar 2007, Sean Donelan wrote:

Instead of dropping packets with unallocated sources addresses, perhaps 
backbones should shutdown interfaces they receive packets from 
unallocated address space.  Would this be more effective at both 
stopping the sources of unallocated addresses; as well as sources that 
spoof other addresses because the best way to prevent your interface 
from being shutdown by backbone operators is to be certain you only 
transmit packets with your source addresses.


uRPF or plain source-based filtering on the IP blocks allocated to the 
customer is enough. Shutting the port down doesn't make any commercial 
sense; customers won't buy your service if their port is going to be shut 
down because of a single packet. They'll (likely) understand if you won't 
forward a packet from them with a source address not belonging to them, 
though.
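
The logic of that per-customer source filter is tiny; a Python sketch of the 
check (the prefixes below are documentation placeholders standing in for 
whatever is allocated to the customer):

    import ipaddress

    CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                         ipaddress.ip_network("198.51.100.0/25")]

    def is_valid_source(src_ip):
        """True if the packet's source address belongs to the customer's blocks."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in CUSTOMER_PREFIXES)

    # spoofed sources simply get dropped instead of triggering a port shutdown
    print(is_valid_source("192.0.2.55"), is_valid_source("10.1.2.3"))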


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-14 Thread Mikael Abrahamsson


On Sat, 13 Jan 2007, Roland Dobbins wrote:

again a la the warez community.  It's an interesting question as to whether 
or not the energy and 'professional pride' of this group of people could 
somehow be harnessed in order to provide and distribute content legally (as 
almost all of what people really want seems to be infringing content under 
the current standard model), and monetized so that they receive compensation 
and essentially act as the packaging and distribution arm for content 
providers willing to try such a model.  A related question is just how


You make a lot of very valid points in your email, but I just had to 
respond to the above. The only reason they have for ripping, ad-removing 
and distributing TV series over the internet is that there is no legal 
way to obtain them in the quality they provide. So you're right, they 
provide a service people want at a price they want (remember that people 
spend quite a lot of money on hard drives, broadband connections etc to 
get the experience they require).


If this same experience could be enjoyed via a VoD box from a service 
provider at a low enough price that people would want to pay for it (along 
the lines of the prices I mentioned earlier), I am sure a lot of regular 
people would switch away from getting their content via P2P and get it 
directly from the source. Why go via ripping, ad-removing, Xvid encoding, 
the warez scene and then P2P sites, and then have to unpack the content to 
watch it, perhaps on your computer, when the content creator is sitting on 
a perhaps 50-100 megabit/s MPEG stream of the content from which you could 
directly create a high-VBR MPEG4 stream via some replication system and 
then VoD it to your home over your broadband internet connection?


There is only one reason those people do what they do: the content owners 
want to control the distribution channel, without realising that they will 
never be able to do that. DRM has always failed; systems like Macrovision, 
region coding (DVD), encryption (DVD), and now I read the HD DVD system, 
are all broken, and future systems will be broken too.


So the key is convenience and quality at a low price, aka 
price/performance on the experience. Make it cheap and convenient enough 
that the current hassle is just not worth it.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Mikael Abrahamsson


On Sat, 13 Jan 2007, Marshall Eubanks wrote:

A technical issue that I have to deal with is that you get a 30 minute 
show (actually 24 minutes of content) as 30 minutes, _with the ads slots 
included_. To show it without ads, you actually have to take the show 
into a video editor and remove the ad slots, which costs video editor 
time, which is expensive.


Well, in this case you'd hopefully get the show directly from whoever is 
producing it without ads in the first place, basically the same content 
you might see if you buy the show on DVD.


In the USA at least, the cable companies make you pay for "bundles" to 
get channels you want. I have to pay for 3 bundles to get 2 channels we 
actually want to watch. (One of these bundle is apparently only sold if 
you are already getting another, which we don't actually care about.) 
So, it actually costs us $ 40 + / month to get the two channels we want 
(plus a bunch we don't.) So, it occurs to me that there is a business 
selling solo channels on the Internet, as is, with the ads, for order $ 
5 - $ 10 per subscriber per month, which should leave a substantial 
profit after the payments to the networks and bandwidth costs.


There is zero problem for the cable companies to immediately compete with 
you by offering the same thing, as soon as there is competition. Since 
their channel is the most established, my guess is that you would have a 
hard time succeeding where they already have a footprint and established 
customers.


Where you could do well with your proposal, is where there is no cable TV 
available at all.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Mikael Abrahamsson


On Sat, 13 Jan 2007, Marshall Eubanks wrote:


For the US, an analysis by Kenneth Wilbur
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885465   , table 1, from 
this recent meeting in DC

http://www.web.virginia.edu/media/agenda.html


Couldn't read the PDFs so I'll just go from your below figures:

shows that the cost per thousand per ad (the CPM) averaged over 5 networks 
and all nights of the week, was $ 24 +- 9; these
are 1/2 minute ads. The mean ad level per half-hour is 5.15 minutes, so 
that's 10.3 x $ 24 or $ 247 / hour / 1000. This is for the evening; rates and 
audiences at other times or less. So, for a 1/2 hour evening show, on average 
the VOD would need to cost at least $ 0.12 US to re-coup the ad revenues. 
Popular shows get a higher CPM, so they would cost more. The Wilbur paper and 
some of the other papers at this conference present a lot of breakdown of 
these sorts of statistics, if you are interested.


Thanks for the figures. So basically, if we can encode a 23-minute show (30 
minutes minus ads) into a gig of traffic, the network cost (precomputed HD 
1080i with high VBR) would be around $0.2 (figure from my previous email, on 
the margin), and if we pay $0.2 to the content owner they would make the 
same amount of money as they do now? So the marginal cost of this service 
would be around $0.4-0.5 per show, and double that for a 45-minute episode 
(the current 1-hour show format)?


So the question becomes whether people might be inclined to pay $1 to watch 
an ad-free TV show. If they're paying $1.99 to iTunes for the actual 
download right now, they might be willing to pay $0.99 to watch it over VoD?


As you said, it would of course take an enormous amount of time and effort 
to convince the content owners of this model. Whether ISPs would be 
interested at these levels is also a good question.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-13 Thread Mikael Abrahamsson


On Sat, 13 Jan 2007, Sean Donelan wrote:

What happens if a 100Mbps port is $19.95/month with $1.95 per GB 
transferred up and down?  Are P2P swarms as attractive?


$1.95 is outrageously expensive. Let's say we want to pass on our costs to 
the users with the highest usage:


1 megabit/s for a month is:

1/8 MB/s * 60*60*24*30 s = 324,000 MB = 324 gigabytes

Let's say this 1 megabit/s costs us $20 (which is fairly high in most 
markets); that means the price of a gigabyte transferred should be about 
$0.06. Let's increase that (because of peak usage, administrative costs 
etc) to $0.2.


Now, let's include 35 gigs of traffic in each user's allotment to get rid 
of usage-based billing for most users (100 kilobit/s average usage) and 
add that to your 100 meg port above, and we end up with around $28; let's 
make it $29.95 a month including the 35 gigs. Hey, make it 50 gigs for 
good measure.


Now, my guess is that 90% of the users will never use more than 50 gigs, 
and if they do, their excess usage will be quite marginal, but if 
someone actually uses 5 megabit/s on average (1.6 terabytes per month, not 
unheard of), that person will have to fork out some money ($300 extra per 
month).
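
The whole pricing model above fits in a few lines (a sketch using only the 
numbers assumed in this thread, i.e. $29.95 base, 50 GB included, $0.2/GB):

    def monthly_bill(gb_used, base_fee=29.95, included_gb=50, per_gb=0.20):
        overage = max(0.0, gb_used - included_gb) * per_gb
        return base_fee + overage

    print(monthly_bill(324))    # 1 Mbit/s sustained all month -> ~$84.75
    print(monthly_bill(1620))   # 5 Mbit/s sustained all month -> ~$343.95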


Oh, this model would also require that you pay for bw you PRODUCE, not 
what you receive (since you cannot control that (DDoS, scanning etc)). So 
basically anyone sourcing material to the internet would have to pay in 
some way, the ones receiving wouldn't have to pay so much (only their 
monthly fee).


The bad part is that this model would most likely hinder a lot of 
content producers from actually publishing their content, but on the other 
hand it might make it a better deal to distribute content closer to the 
customers, as carriers might be inclined to let you put servers in their 
network that can only send traffic to their network, not to anybody else. It 
might also preclude a model where carriers charge each other on the amount 
of incoming traffic they see from peers.


Personally, I don't think I want to see this, but it does make sense in an 
economic/technical way, somewhat like road tolls.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-12 Thread Mikael Abrahamsson


On Fri, 12 Jan 2007, Gian Constantine wrote:

I am pretty sure we are not becoming a VoD world. Linear programming is much 
better for advertisers. I do not think content providers, nor consumers, 
would prefer a VoD only service. A handful of consumers would love it, but 
many would not.


My experience is that when you show people VoD, they like it. A lot of 
people won't abandon linear programming because it's easy to just watch 
whatever is "on", but if you give them the possibility of watching VoD 
(DVD sales of TV series, for instance) some will definitely start doing 
both. Same thing with HDTV: until you show it to people they couldn't care 
less, but once you've shown them they do start to get interested.


I have been trying to find out the advertising ARPU for the cable 
companies for a prime-time TV show in the US, i.e. how much I would need to 
pay them to get the same content but without the advertising, and then add 
the cost of VoD delivery. This is purely theoretical, but it would give a 
rough indication of what a VoD distribution model might cost the end user 
if we were to add that distribution channel. Does anyone know any rough 
figures for advertising ARPU per hour in primetime? I'd love to hear them.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Mikael Abrahamsson


On Tue, 9 Jan 2007, [EMAIL PROTECTED] wrote:

between handling 30K unicast streams, and 30K multicast streams that 
each have only one or at most 2-3 viewers?


My opinion on the downside of video multicast is that if you want it in 
real time, your SLA figure for acceptable packet loss goes down from 
fractions of a percent to thousandths of a percent, at least with 
current implementations of video.


Imagine internet-wide multicast, with customers complaining about bad video 
quality, and trying to chase down that last 1/10 packet loss that 
makes people's video pixelate every 20-30 minutes, when the video stream 
doesn't even originate in your network.


For multicast video to be easier to implement we need more robust video 
codecs that can handle jitter and packet loss that are currently present 
in networks and handled acceptably by TCP for unicast.
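
The arithmetic behind that SLA point (a small Python sketch; the 4 Mbit/s 
stream rate and 1316-byte payload are assumed example values):

    def minutes_between_drops(stream_mbps, packet_bytes, loss_rate):
        pps = stream_mbps * 1_000_000 / (packet_bytes * 8)
        return 1.0 / (pps * loss_rate) / 60.0

    # at 0.001% loss a 4 Mbit/s stream loses a packet roughly every 4-5 minutes;
    # pixelating only every 20-30 minutes needs loss down around a couple per million
    print(minutes_between_drops(4, 1316, 1e-5))
    print(minutes_between_drops(4, 1316, 2e-6))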


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Router and Infrastructure Hacking (CCC conference last week)

2007-01-04 Thread Mikael Abrahamsson


On Fri, 5 Jan 2007, Gadi Evron wrote:

Speaking of IPv4, an interesting thing from the CCC presentation was 
that the IPV6 space used equaled (if I got this right) the entire EU 
IPv6 normal use.


Would this be that the 100-150 megabit/s of IPv6 used at 23C3 equaled the 
100-150 megabit/s of IPv6 used at AMS-IX? I think it was also mentioned 
that this was because some major news providers used IPv6 for their NNTP 
sessions.


But yes, I was surprised at the amount of IPv6 used at 23C3; I wonder if it 
was because local services were IPv6-enabled. There was no distinction 
between internal and external IPv6 traffic, so I don't know.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: NATting a whole country?

2007-01-03 Thread Mikael Abrahamsson


On Wed, 3 Jan 2007, Steven M. Bellovin wrote:



According to
http://www.nytimes.com/aponline/technology/AP-TechBit-Wikipedia-Block.html
all of Qatar appears on the net as a single IP address.  I don't know
if it's NAT or a proxy that you need to use to get out to the world,
but whatever the exact cause, it had a predictable consequence -- the
entire country was barred from editing Wikipedia, due to abuse by
(presumably) a few people.


I think I read on Wikipedia that this is their proxy server's IP address 
(or probably a proxy server farm).


Also, the only thing that was stopped was anonymous editing; editing after 
login and anonymous reading weren't stopped.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-25 Thread Mikael Abrahamsson


On Mon, 25 Dec 2006, Simon Leinen wrote:


Yes.  With Jeroen's suggestion, there's a risk that power-users'
consumption will only be reduced for off-peak hours, and then the ISP
doesn't save much.  A possible countermeasure is to not count off-peak
traffic (or not as much).  Our charging scheme works like that, but
our customers are mostly large campus networks, and I don't know how
digestible this would be to retail ISP consumers.


Also, doing fine grained measurements for all your customers (if you have 
millions of residential users) is a big pain. Yes, it makes sense to only 
count usage perhaps between 16.00-02.00 or so, but trying to do this in an 
efficient manner and handle the customer complaint calls might very well 
be more expensive than actually not doing it at all.


The cost of customer interaction should never be underestimated. The simpler 
the service, the fewer things can go wrong and the fewer customer service 
calls you get.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-24 Thread Mikael Abrahamsson


On Sun, 24 Dec 2006, Roland Dobbins wrote:

What I'm wondering is, do broadband SPs believe that this kind of system will 
become common enough to make a signficant difference in traffic paterns, and 
if so, how do they believe it will affect their access infrastructures in 
terms of capacity, given the typical asymmetries seen in upstream vs. 
downstream capacity in many broadband access networks?  If a user isn't doing


Experience from ISPs with high upstream bandwidth is that if you give 
customers high upstream bw, they'll use it. One example is a town where half 
of the customers were on ADSL (8/1 and 24/1 megabit/s) and half were on 
10/10 ethernet (in-building CAT5 or fiber converters). Downstream usage of 
these two populations was equal, at approx 100 kilobit/s average peak 
usage. The upstream bw usage was approx 50 kilobit/s for the ADSL crowd, 
but 200 kilobit/s for the ethernet crowd. These are roughly the figures I 
have heard from others as well.


This is largely from filesharing, and the difference in usage within the 
population is enormous. Some will average 5-10 kilobit/s over a month, if 
even that, some will run their upstreams full pretty much 100% of the 
time.


Customers expect unmetered usage but most ISPs have "normal use" clauses 
in their AUPs. If customers change their behaviour then I believe that 
ISPs will start to enforce this towards their biggest bw using users, 
just to try to prolong the usage of their existing investment (or actually 
their new investment).


For me this is actually a core problem, not an access problem. The core is 
getting faster (4x) every 4-5 years or so, but the traffic is increasing 
faster than that. Also, the cost of the core isn't really going down in any 
major fashion, and it can be cheaper per megabit to build a 10G core than 
to build a core capable of 100G (parallel links) with today's technology.


So to sum up, the upstream problem you're talking about is already here; 
it's just that instead of you using your own PVR box and then sharing that, 
someone somewhere in the world did it, encoded it into Xvid, and it is now 
shared between end users (illegally). I believe the problem is the 
same.


Also, trying to limit people's traffic based on L4 information or above is 
futile and won't work. The only information readily available for us ISPs to 
act on is L3 information and packet size. So in the future I see 
AUPs that limit traffic to 100-200G per month actually being enforced, 
because this will cap the power users without affecting most of the 
population.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: CWDM equipment (current favorites)

2006-10-30 Thread Mikael Abrahamsson


On Mon, 30 Oct 2006, Deepak Jain wrote:

We need to place a new order for some new fiber builds and were considering 
some other vendors. Especially in the nx2.5G and nx10G (are CWDM x-cievers 
even available in 10G yet?) range. Anyone have any new favorites?


I have recommended Transmode (www.transmode.com) to several people and not 
been flamed yet, so I think people are reasonably satisfied with them.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Extreme Slowness

2006-10-27 Thread Mikael Abrahamsson


On Fri, 27 Oct 2006, [EMAIL PROTECTED] wrote:


For the record, TCP traceroute and similar TCP based
tools rely on the fact that if you send a TCP SYN
packet to a host it will respond with either a
TCP RST (if the port is NOT listening) or a TCP
SYN/ACK. The round trip time of this provides useful
information which is unaffected by any ICMP chicanery
on the part of routers or firewalls. A polite application
such as TCP traceroute will reply to the SYN/ACK with
an RST packet so it is reasonably safe to use this tool
with live services.


Intermediate nodes are still discovered by "ICMP TTL Exceeded in transit" 
just like UDP based traceroute, ie the outgoing TCP SYN packet has a low 
TTL.


So yes, tcptraceroute is good for getting thru firewalls in the forward 
direction, but intermediate routers are discovered in the same way by you 
getting an ICMP back because the TTL ran out.
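
For reference, the whole mechanism fits in a few lines of scapy (a Python 
sketch, needs root; port 80 is just an example destination port):

    from scapy.all import IP, TCP, ICMP, sr1

    def tcp_traceroute(target, dport=80, max_hops=30):
        for ttl in range(1, max_hops + 1):
            reply = sr1(IP(dst=target, ttl=ttl) / TCP(dport=dport, flags="S"),
                        timeout=2, verbose=0)
            if reply is None:
                print(ttl, "*")
            elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
                print(ttl, reply.src)              # TTL exceeded: intermediate hop
            elif reply.haslayer(TCP) and not reply.haslayer(ICMP):
                print(ttl, reply.src, "reached")   # SYN/ACK or RST from the target
                break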


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: BCP38 thread 93,871,738,435 (was Re: register.com down sev0?)

2006-10-26 Thread Mikael Abrahamsson


On Thu, 26 Oct 2006, Fergie wrote:

The point I'm trying to make is that if the community thinks it is 
valuable, then the path is clear.


What is the biggest problem to solve? Would it be enough for ISPs to make 
sure that they will not send out packets with sources that don't belong 
within their PA blocks, or is it that one user shouldn't be able to spoof at 
all (even IPs adjacent to their own)? Would the global problem go away if 
global spoofing stopped working?


I of course realise that it's best if user cannot spoof at all, but it 
might be easier for ISPs to filter based on their PA blocks than to (in 
some cases) purchase new equipment to replace their current equipment that 
cannot do IP spoof filtering.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Refusing Pings on Core Routers??? A new trend?

2006-10-20 Thread Mikael Abrahamsson


On Thu, 19 Oct 2006, Jeremy Chadwick wrote:


I am absolutely fine with ICMP being prioritised last, but those
scenarios induce more questions; "so ICMP is prio'd last, which
would mean the router is busy processing other packets, which could
mean your router is over-utilised either CPU-wise or iface-wise
since we're seeing 250ms at your hop and beyond".  48 hours later,


No, that is not necessarily true. I know of at least one vendor that 
punts all ICMP (on certain versions of their HW) to the CPU, and the CPU is 
normally not otherwise involved in packet forwarding, so seeing latencies 
on ICMP (perhaps due to some housekeeping going on) doesn't at all mean 
other packets are being delayed. It might, of course.


Then we also have the problem of people not understanding how traceroute 
works, i.e. UDP with a low TTL going one way, then ICMP coming back, 
with the router expiring the TTL and generating the ICMP TTL-exceeded 
message, perhaps having to punt this to a CPU first, or perhaps processing 
it on a linecard. To understand what the output really means, you have 
to know all the platforms involved in all hops, both going there and coming 
back (the return path perhaps being asymmetrical to the path you're seeing 
in traceroute).


But to your question regarding filtering, I'd venture to guess that more 
and more people are going to filter access attempts to their 
infrastructure to hinder DoS attacks. If I were to build a brand new 
network today I'd use loopbacks and link addresses that I'd either filter 
at the border or not announce on the internet at all. Not announcing them 
at all of course brings the problem that people using "uRPF loose" will 
drop these packets, which breaks traceroute and other tools. Better might 
of course be to rate-limit traffic to the infrastructure addresses to a 
very low number, say 2 megabit/s or so. This will limit DoS attacks and 
break diagnostics during an attack, but will make traceroute work properly 
under normal conditions. I guess everybody has to make up their mind about 
these prioritizations when designing their networks. It's important to be 
aware of all the aspects, and it's good that we have these discussions so 
more people understand the ramifications.


Anyone know of a document that describes this operationally? This would be 
a good thing to include in an "ISP essentials" type of document.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Boeing's Connexion announcement

2006-10-15 Thread Mikael Abrahamsson


On Sun, 15 Oct 2006, Roland Dobbins wrote:

into that.  As others have indicated, AC is in fact available on Lufthansa in 
business class and higher.


And on SAS it's available on Economy Plus and higher.

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Boeing's Connexion announcement

2006-10-15 Thread Mikael Abrahamsson


On Sun, 15 Oct 2006, Patrick W. Gilmore wrote:

e-mail from the plane. :)  Lack of seat power was not an issue, I just had 
two batteries.  And this was BOS -> MUC, which ain't a short flight.


It's quite likely that, in the grander scheme of things, it's better 
economics for the few people who want to use their laptop the whole flight 
to bring two batteries than to make the investment of putting AC power in 
all seats.


Otoh, more batteries on planes increases the risk of fire from exploding 
batteries in the cabin :P


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: Market Share of broadband provider in Scandidavia

2006-09-08 Thread Mikael Abrahamsson


On Fri, 8 Sep 2006, Fredy Kuenzler wrote:



Could anyone point me to a market-share by-country overview of broadband 
provider in Scandinavia (Denmark, Sweden, Norway, Finland, Iceland). Any help 
would be appreciated.


For Sweden, you can go to www.pts.se, more exactly 
<http://www.pts.se/Sidor/sida.asp?Sectionid=1341&Itemid=&Languageid=EN>.


They publish in both Swedish and English as far as I can discern.

PTS is the regulatory entity in Sweden for Telecommunications (and Post, 
but that's beside the point here :) ).


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: NNTP feed.

2006-09-05 Thread Mikael Abrahamsson


On Wed, 6 Sep 2006, Jeroen Massar wrote:

seems to be loads of people doing a lot of posting and reading, where else 
would the volume of that traffic come from?


I guess experiences differ between organisations; when I discovered that 
server-to-server traffic was at least 10x more than what people actually 
read (server-to-client), I didn't feel like trying to get my (then) 
employer to continue running the NNTP server.


My feeling today (or rather, from 3-5 years ago) is that NNTP is used 
instead of bittorrent and other P2P protocols to move copyrighted material, 
and I'd say it probably makes more sense for some ISPs to let their users 
invest in that drive space than to run an NNTP server themselves and spend 
operational resources on keeping it running well.


I've also heard people complaining about a few NNTP users causing a lot of 
helpdesk tickets about a "single message missing", because they cannot 
download that 4.7 gig ISO correctly when a message gets lost somewhere in 
the middle.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: ams-ix - worth using?

2006-08-23 Thread Mikael Abrahamsson


On Wed, 23 Aug 2006, matthew zeier wrote:

Does it simply provide an easy way to privately connect to transit and 
peers? Or can I also go crazy and peer with anyone who wants to peer 
(like in the olden day!) ?


There are plenty of ISPs at both AMSIX and LINX and they're mostly very 
happy to peer with people that they would otherwise reach through a US 
transit partner. My guess is that you'll be able to offload quite a bit of 
your EU transit if you connect to AMSIX.


If you otoh purchase EU transit through someone, it's quite likely that a 
chunk of the bigger EU ISPs won't peer with you ("we already peer with 
your upstream"), so you probably want to think about how you're going to 
play the peering game! :)


Otoh, purchasing transit in Amsterdam will probably get you quite decent 
pricing, and with your low traffic volume it might make economic sense to 
hold off on peering until you have grown into higher volume.


I'd say AMS-IX is mostly for peering with a lot of people, if that answers 
your question.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: SORBS Contact

2006-08-09 Thread Mikael Abrahamsson


On Wed, 9 Aug 2006, Michael Nicks wrote:

themselves and their obviously broken practices. We should not have to 
jump through hoops to satisfy your requirements.


We were hit by the requirement to include the word "static" in our DNS 
names to satisfy their requirements. It wasn't enough to just say "this /17 
is only static IPs, one customer, one IP, no DHCP or other dynamics at 
all"; we actually had to change all PTR records to match this arbitrary 
"standard".


Took several weeks to get delisted even after that.

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: mitigating botnet C&Cs has become useless

2006-08-08 Thread Mikael Abrahamsson


On Tue, 8 Aug 2006, Rick Wesson wrote:

Last sunday at DEFCON I explained how one consumer ISP cost American business 
$29M per month because of the existence of key-logging botnets.


you want to talk economics? It's not complicated to show that mitigating 
key-logging bots could save American business 2B, or 4% of losses to 
identity theft -- using FTC loss estimates from 2003


just because an ISP loses some money over transit costs does not equate to 
the losses American business+consumers are suffering to fraud.


I am sure that the total cost would be less if everybody cleaned up their 
act. That doesn't change the fact that the individual ISP has to spend 
money it will never see a return on in order for this common good to 
emerge.


If the government wants to do this, then I guess it should start demanding 
responsibility from individuals as well; otherwise I don't see this 
happening anytime soon. Microsoft has a big cash reserve, so perhaps the US 
government should start demanding that they clean up their act and release 
more secure products, and start fining people who don't use their products 
responsibly. Oh, and go after the companies installing spyware, in earnest? 
And to find these, they would have to start wiretapping everybody to 
collect the information they need.


Otoh, this added security might add up to more than 2B per year in losses 
from reduced functionality and more administration and procedures 
(overhead), so perhaps that 2B is the price we pay for freedom and liberty 
in this space?


Always hard to find the balance.

--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: mitigating botnet C&Cs has become useless

2006-08-08 Thread Mikael Abrahamsson


On Tue, 8 Aug 2006, Simon Waters wrote:

However most big residential ISPs must be getting to the point where 10% 
bandwidth saving would justify buying in third party solutions for 
containing malware sources. I assume residential ISPs must be worse than


The problem here is that if you build your network "right", i.e. just IP 
routing and no tunneling, you don't get a natural choke point at which to 
put any kind of solution like the one you propose.


When I did the business calculations on a DSL solution, my math told me it 
costs approximately the same (or even less) to just provide internet 
capacity as it does to offer bitstream/tunneling. The devices involved in 
the tunneling cost more than simply providing global internet bandwidth and 
not doing any tunneling at all. It's also a much cleaner solution, with 
fewer places that can break or cause problems; you have a clean 1500-byte 
MTU all the way, etc. So in light of all this, if the 10% figure is 
correct, then it's cheaper for the residential ISP to just waste those 10% 
than to try to stop it, and I'd have to agree with the people in the thread 
who said that.


It might not be the right thing, but the economics for the residential ISP 
are such that it costs a lot to try to be proactive about these things, 
especially since botnets can send just a little traffic per host, which is 
hard to even detect.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]


Re: traffic from DE to DE goes via NL->UK->US->FR

2006-08-04 Thread Mikael Abrahamsson


On Fri, 4 Aug 2006, Andrius Kazimieras Kasparavičius wrote:

Just wondering if it is normal for traffic from DE to DE to flow through 
NL->UK->US->FR and so increase delay nearly 100 times? Traceroute here: 
http://pastebin.ca/115200 and there are only 4 ASes, so the ASPATH does not 
help a lot in finding such links with horrifying optimisation. I believe 
there are much worse links; any software to detect this? Something like 
scanning one IP from each larger IP block with ICMP and comparing the 
geographic trajectory via GeoIP?


You should direct the question to whoever you are a customer of.

These things usually happen when one party doesn't want to peer with 
another party, and the party that wants to peer routes traffic really far 
away to make sure that both parties are paying for the traffic, thus 
increasing the motivation for the other party to change their mind 
regarding peering.


--
Mikael Abrahamssonemail: [EMAIL PROTECTED]

