Re: Bandwidth issues in the Sprint network

2008-04-08 Thread Sam Stickland


Could be your TCP window size? A 17520 byte TCP window (Windows 2000) 
will cause a single flow to top out at 5Mbps at about 50ms. What is the 
latency on the link?


Try some figures here and see what limit you might be hitting:

http://www.wand.net.nz/~perry/max_download.php?bits_per_second=15500&ack_size=40&no_delayed_acks=2&mss=1460&rtt=35&wsize=17520&ploss=0
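
For a rough back-of-the-envelope version (ignoring loss, delayed ACKs and the 
ACK overhead that the calculator above accounts for), a single flow can't 
exceed window/RTT:

    # Ceiling on a single TCP flow with a fixed receive window:
    # throughput <= window / RTT (ignores loss, slow start and ACK overhead).
    def max_tcp_throughput_bps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds

    for rtt_ms in (35, 50):
        bps = max_tcp_throughput_bps(17520, rtt_ms / 1000.0)
        print(f"window 17520 B, RTT {rtt_ms} ms -> {bps / 1e6:.1f} Mbps")
    # -> roughly 4.0 Mbps at 35 ms and 2.8 Mbps at 50 ms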

Sam

Brian Raaen wrote:
I am currently having problems getting upload bandwidth on a Sprint circuit. I am 
using a full OC3 circuit.  I am doing fine on downloading data, but uploading 
data I can only get about 5Mbps with ftp or a speedtest.  I have tested 
against multiple networks and this has stayed the same.  Monitoring Cacti 
graphs and the router I do get about 30Mbps total traffic outbound, but 
individual (flows/ip?) tests always seem limited.  I would like to know if 
anyone else sees anything similar, or where I can get help.  The assistance I 
have gotten from Sprint up to this point is that they find no problems.  Due 
to the consistency of 5Mbps I am suspecting rate limiting, but wanted to know 
if I was overlooking something else.


  




Re: "Does TCP Need an Overhaul?" (internetevolution, via slashdot)

2008-04-07 Thread Sam Stickland


Kevin Day wrote:
Yeah, I guess the point I was trying to make is that once you throw 
SACK into the equation you lose the assumption that if you drop TCP 
packets, TCP slows down. Before New Reno, fast-retransmit and SACK 
this was true and very easy to model. Now you can drop a considerable 
number of packets and TCP doesn't slow down very much, if at all. If 
you're worried about data that your clients are downloading you're 
either throwing away data from the server (which is wasting bandwidth 
getting all the way to you) or throwing away your clients' ACKs. Lost 
ACKs do almost nothing to slow down TCP unless you've thrown them 
*all* away.
If this is true, surely it would mean that drop models such as WRED/RED are 
becoming useless?


Sam


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Sam Stickland


Sean Donelan wrote:

When 5% of the users don't play nicely with the rest of the 95% of
the users; how can network operators manage the network so every user
receives a fair share of the network capacity?
This question keeps getting asked in this thread. What is there about a 
scavenger class (based either on monthly volume or actual traffic rate) 
that doesn't solve this?


Sam


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Sam Stickland


Iljitsch van Beijnum wrote:

On 23-okt-2007, at 15:43, Sam Stickland wrote:

What I would like is a system where there are two diffserv traffic 
classes: normal and scavenger-like. When a user trips some 
predefined traffic limit within a certain period, all their traffic 
is put in the scavenger bucket which takes a back seat to normal 
traffic. P2P users can then voluntarily choose to classify their 
traffic in the lower service class where it doesn't get in the way 
of interactive applications (both theirs and their neighbor's).


Surely you would only want to set traffic that falls outside the 
limit as scavenger, rather than all of it?


If the ISP gives you (say) 1 GB a month upload capacity and on the 3rd 
you've used that up, then you'd be in the "worse effort" traffic class 
for ALL your traffic the rest of the month. But if you voluntarily 
give your P2P stuff the worse effort traffic class, this means you get 
to upload all the time (although probably not as fast) without having 
to worry about hurting your other traffic. This is both good in the 
short term, because your VoIP stuff still works when an upload is 
happening, and in the long term, because you get to do video 
conferencing throughout the month, which didn't work before after you 
went over 1 GB.
Oh, you mean to do this based on traffic volume, and not current traffic 
rate? I suppose an external monitoring/billing tool would need to track 
this and reprogram the necessary router/switch, but it's the sort of 
infrastructure most ISPs would need to have anyway.


I was thinking more along the lines of: everything above 512 kbps (that 
isn't already marked worse-effort) gets marked worse effort, all of the 
time.
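
As a rough illustration of that kind of rate trigger (a toy sketch only; the 
512 kbps threshold is the figure above, the bucket depth is an assumption, and 
a real deployment would do this in a policer on the access platform):

    import time

    RATE_BPS = 512_000        # per-subscriber threshold from the example above
    BURST_BYTES = 64_000      # assumed bucket depth

    class TwoClassMarker:
        """Token bucket: traffic within the contract stays 'normal';
        anything above it is re-marked 'worse effort' (scavenger)."""
        def __init__(self, rate_bps=RATE_BPS, burst=BURST_BYTES):
            self.rate = rate_bps / 8.0          # bytes per second
            self.burst = burst
            self.tokens = burst
            self.last = time.monotonic()

        def classify(self, packet_len):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return "normal"
            return "scavenger"

    marker = TwoClassMarker()
    print(marker.classify(1500))    # within the contract -> 'normal'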


Sam


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Sam Stickland


Iljitsch van Beijnum wrote:


On 22-okt-2007, at 18:12, Sean Donelan wrote:

Network operators probably aren't operating from altruistic 
principles, but for most network operators when the pain isn't spread 
equally across the the customer base it represents a "fairness" 
issue.  If 490 customers are complaining about bad network 
performance and the cause is traced to what 10 customers are doing, 
the reaction is to hammer the nails sticking out.


The problem here is that they seem to be using a sledge hammer: 
BitTorrent is essentially left dead in the water. And they deny doing 
anything, to boot.


A reasonable approach would be to throttle the offending applications 
to make them fit inside the maximum reasonable traffic envelope.


What I would like is a system where there are two diffserv traffic 
classes: normal and scavenger-like. When a user trips some predefined 
traffic limit within a certain period, all their traffic is put in the 
scavenger bucket which takes a back seat to normal traffic. P2P users 
can then voluntarily choose to classify their traffic in the lower 
service class where it doesn't get in the way of interactive 
applications (both theirs and their neighbor's). I believe Azureus can 
already do this today. It would even be somewhat reasonable to require 
heavy users to buy a new modem that can implement this.
Surely you would only want to set traffic that falls outside the limit 
as scavenger, rather than all of it?


S


Re: The next broadband killer: advanced operating systems?

2007-10-23 Thread Sam Stickland


Adrian Chadd wrote:

On Tue, Oct 23, 2007, Sam Stickland wrote:

  
I'm concerned that if Microsoft were to post this as a patch to Windows 
XP/2003 then we would see the effects of this "all at once", instead of 
the gradual process of Vista deployment. Anyone agree?



You need both ends to have large buffers so TCP window sizes can grow.

So a few possibilities:

* you're running content servers but you're on training wheels and you're just
  not aware of this. Windows default sizes are small, so you never notice
  as you never grow enough TCP windows to fill your set buffer size.
  These guys would notice if Windows XP was patched to use larger/adaptive
  buffering.
  
Yes. I was imagining a scenario where released patches mean that 
currently untuned servers and clients are suddenly adaptively tuning 
their TCP window sizes. According to the Web100 website 
(www.web100.org), their automatic TCP buffer tuning has already been 
merged into mainline Linux kernels. If Microsoft released an XP patch 
that enabled all the Windows-based clients out there to take advantage 
of this then there could be a lot of surprised faces.

* .. caveat to the above: until Linux goes and does what Linux does best
  and change system defaults; enabling adaptive socket buffers by default
  during a minor version increment. Anyone remember ECN? :P Then even some
  cluey server admins will cry in pain a little.
  

Is the adaptive buffer tuning in Linux not enabled by default?

* I don't think the proposals are changing TCP congestion avoidance/etc, are
  they?
  

Not as far as I know.

It's easily solvable - just drop the window sizes. In fact, I think the window
size increase/adaptive window size stuff would be much more useful for P2P over
LFN than average websites -> clients. General page HTTP traffic atm doesn't hit
window size before the reply has completed. Sites serving larger content
than HTML+images (say, Youtube, Music sites, etc) would've already given
this some thought and fixed their servers to not run out of RAM so easily.
Those are on a CDN anyway..
  
True. It would still be interesting to know if Microsoft were planning 
on patching all XP boxes to support this anytime soon though ;)


Sam


Re: The next broadband killer: advanced operating systems?

2007-10-23 Thread Sam Stickland


Mikael Abrahamsson wrote:


On Mon, 22 Oct 2007, Sam Stickland wrote:

Does anyone know if there are any plans by Microsoft to push this out 
as a Windows XP update as well?


You can achieve the same thing by running a utility such as TCP 
Optimizer.


http://www.speedguide.net/downloads.php

Turn on window scaling and increase the TCP window size to 1 meg or 
so, and you should be good to go.


The "only" thing this changes for ISPs is that all of a sudden 
increasing the latency by 30-50ms by buffering in a router that has a 
link that is full, won't help much, end user machines will be able to 
cope with that and still use the bw. So if you want to make the gamers 
happy you might want to look into that WRED drop profile one more time 
with this in mind if you're in the habit of congesting your core 
regularily.


I've already hand-adjusted the default TCP window size on my machine and 
it noticeably made quite a big difference to my transfer rates from my own 
tuned servers. From this little bit of evidence I can brazenly 
extrapolate to suggest that maximum bandwidth consumption is currently 
limited to some noticeable degree by the lack of widely deployed TCP 
window size tuning. Links that are currently uncongested might suddenly 
see a sizable amount of extra traffic.


I'm concerned that if Microsoft were to post this as a patch to Windows 
XP/2003 then we would see the effects of this "all at once", instead of 
the gradual process of Vista deployment. Anyone agree?


Sam


Re: The next broadband killer: advanced operating systems?

2007-10-22 Thread Sam Stickland


Interesting. I imagine this could have a large impact on the typical 
enterprise, where they might do large-scale upgrades in a short period 
of time.


Does anyone know if there are any plans by Microsoft to push this out as 
a Windows XP update as well?


S

Leo Bicknell wrote:

Windows Vista, and next week Mac OS X Leopard introduced a significant
improvement to the TCP stack, Window Auto-Tuning.  FreeBSD is
committing TCP Socket Buffer Auto-Sizing in FreeBSD 7.  I've also
been told similar features are in the 2.6 Kernel used by several
popular Linux distributions.

Today a large number of consumer / web server combinations are limited
to a 32k window size, which on a 60ms link across the country limits
the speed of a single TCP connection to 533kbytes/sec, or 4.2Mbits/sec.
Users with 6 and 8 Mbps broadband connections can't even fill their
pipe on a software download.

With these improvements in both clients and servers soon these
systems may auto-tune to fill 100Mbps (or larger) pipes.  Related
to our current discussion of bittorrent clients: as much as they are
"unfair" by trying to use the entire pipe, will these auto-tuning
improvements create the same situation?

  




Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Sam Stickland


Sean Donelan wrote:


Much of the same content is available through NNTP, HTTP and P2P. The 
content part gets a lot of attention and outrage, but network 
engineers seem to be responding to something else.


If it's not the content, why are network engineers at many university 
networks, enterprise networks and public networks concerned about the 
impact particular P2P protocols have on network operations?  If it was 
just a single network, maybe they are evil.  But when many different 
networks all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications 
cooperate and fairly share network resources.  NNTP is usually 
considered a very well-behaved network protocol.  Big bandwidth, but 
sharing network resources.  HTTP is a little less behaved, but still 
roughly seems to share network resources equally with other users. P2P 
applications seem to be extremely disruptive to other users of shared 
networks, and cause problems for other "polite" network applications.

What exactly is it that P2P applications do that is impolite? AFAIK they 
are mostly TCP based, so it can't be that they don't have any congestion 
avoidance; is it just that they utilise multiple TCP flows? Or is it the 
view that the need for TCP congestion avoidance to kick in is bad in 
itself (i.e. raw bandwidth consumption)?
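
For a sense of scale, the standard steady-state approximation for a single 
TCP flow (Mathis et al.: rate ~ MSS/(RTT*sqrt(p))) shows why a host opening N 
flows takes roughly N times the share at a congested bottleneck. The MSS, RTT 
and loss figures below are made-up illustration values:

    from math import sqrt

    def reno_throughput_bps(mss_bytes, rtt_s, loss_rate):
        # Single-flow TCP Reno approximation: (MSS/RTT) * (1.22/sqrt(p))
        return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

    mss, rtt, p = 1460, 0.050, 0.001          # assumed example values
    one_flow = reno_throughput_bps(mss, rtt, p)
    print(f"1 flow  : {one_flow / 1e6:.1f} Mbps")
    print(f"10 flows: {10 * one_flow / 1e6:.1f} Mbps aggregate at the same loss rate")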


It seems to me that the problem is more general than just P2P 
applications, and there are two possible solutions:


1) Some kind of magical quality is given to the network to allow it to 
do congestion avoidance on an IP basis, rather than on a TCP flow basis. 
As previously discussed on nanog there are many problems with this 
approach, not least the fact that the core ends up tracking a lot of flow 
information.


2) A QoS scavenger class is implemented so that users get a guaranteed 
minimum, with everything above this marked to be dropped first in the 
event of congestion. Of course, the QoS markings aren't carried 
inter-provider, but I assume that most of the congestion this thread 
talks about is occurring in the first AS?


Sam


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-17 Thread Sam Stickland


Ted Hardie wrote:

Fred Baker writes:

  
Hence, moving a file into a campus doesn't mean that the campus has the file 
and will stop bothering you. I'm pushing an agenda in the open source world to 
add some concept of locality, with the purpose of moving traffic off ISP 
networks when I can. I think the user will be just as happy or happier, and 
folks pushing large optics will certainly be.



As I mentioned to Fred in a bar once, there is at least one case where you 
have to be a bit careful with how you push locality.  In the wired campus 
case, he's certainly right: if you have the file topologically close to other 
potentially interested users, delivering it from that "nearer" source is a win 
for pretty much everyone. This is partly the case because the local wired 
network is unlikely to be resource constrained, especially in comparison to 
the upstream network links.

In some wireless cases, though, it can be a bad thing.  Imagine for a moment 
that Fred and I are using a p2p protocol while stuck in an airport.  We're 
both looking for the same file.  The p2p network pushes it first to Fred and 
then directs me to get it from him.  If he and I are doing this while we're 
both connected to the same resource-constrained base station, we may actually 
be worse off, as the same base station has to allocate data channels for two 
high data traffic flows while it passes from him to me.  If I/the second user 
gets the file from outside the pool of devices connected to that base station, 
in other words, the base station, I, and its other users may well be better 
off.

  
A similar (and far more common) issue exists in the UK where ISPs are 
buying their DSL 'last mile' connectivity via a BT central pipe. 
Essentially in this setup BT owns all the exchange equipment and the 
connectivity back to a central hand-off location - implemented as an L2TP 
VPDN. When a DSL customer connects, their realm is used to route 
their connection over the VPDN to the ISP. The physical hand-off point 
between BT and the ISP is what BT term a BT Central Pipe, which is many 
orders of magnitude more expensive than IP transit.


In this scenario it's more expensive for the ISP to have a customer 
retrieve the file from another customer on their network than it is to 
go off-net for the file.


(LLU (where the ISP has installed their own equipment in the exchange) 
changes this dynamic obviously).


S


Re: Routing public traffic across county boundaries in Europe

2007-07-27 Thread Sam Stickland


Scott Weeks wrote:


--- [EMAIL PROTECTED] wrote:

What (if any) are the legal implications of taking internet destined
traffic in one country and egressing it in another (with an ip block
correctly marked for the correct country).

Somebody mentioned to me the other day that they thought the Dutch
government didn't allow an ISP to take internet traffic from a Dutch
citizen and egress in another country because it makes it easy for the
local country to snoop.
--


That's funny.  I've always thought of the internet as a global, borderless 
entity where ideas and information are shared without restraint.  Perhaps it's 
time to whap the gov't with a clue bat?

scott
  
Yes, but laws dictate that not all information can be shared without 
restraint. The EU, for example, has laws preventing the export of 
personal information to countries deemed to have weaker privacy 
protection laws.


There are also grey areas (that may simply result from legal departments 
not having enough technical knowledge). For example, I've worked with 
companies before that have had the rights to stream certain sporting 
events to certain countries only. Even if you were only streaming to UK 
ISPs and UK IP addresses (via whatever checking mechanisms were deemed 
adequate), legal departments tend to have quite a lot to say on the 
matter if you were egressing that traffic at, say, AMS-IX.


Sam


Re: Security gain from NAT

2007-06-04 Thread Sam Stickland


Matthew Palmer wrote:

I can think of one counter-example to this argument, and that's
SSL-protected services, where having a proxy, transparent or otherwise, in
your data stream just isn't going to work.  

Not so. Look at: http://muffin.doit.org/docs/rfc/tunneling_ssl.html

S


Re: summarising [was: Re: ICANNs role]

2007-04-03 Thread Sam Stickland


Joseph S D Yao wrote:

On Mon, Apr 02, 2007 at 10:56:00PM -0500, Gadi Evron wrote:
...
  

I just posted this, and I believe it makes sense:

Title: Put Security Alongside .XXX

Isn't security as important to discuss as .XSS?

The DNS has become an abuse infrastructure, it is no longer just a
functional infrastructure. It is not just being used by malware, phishing and
other Bad Things [TM], it facilitates them.




Again - DNS is the infrastructure for EVERYTHING.  It facilitates
EVERYTHING.  If you threw it out and put something else in that was not
as clunky as editing hosts.txt files 'scp'ed from DARPA daily, then THAT
would be what was facilitating everything.
  
Maybe it would make sense for someone to reiterate what types of abuse 
DNS is facilitating? I believe what Gadi was getting at was mainly the 
ability to use fake details to register a domain, and then very rapidly 
cycle the A records through a wide range of hosts, attempting to avoid 
detection - as opposed to there actually being fundamental flaws open to 
abuse in a system that maps names to IP addresses.
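
For illustration, that pattern is easy to spot in principle (a crude sketch 
only - the polling interval, TTL cut-off and host-count threshold are 
arbitrary assumptions, and it needs the dnspython module):

    import time
    import dns.resolver

    def looks_fluxy(domain, polls=5, interval=60, ttl_limit=300, min_hosts=10):
        """Flag a domain whose A records churn behind a very small TTL."""
        seen, min_ttl = set(), None
        for _ in range(polls):
            answer = dns.resolver.resolve(domain, "A")
            ttl = answer.rrset.ttl
            min_ttl = ttl if min_ttl is None else min(min_ttl, ttl)
            seen.update(r.address for r in answer)
            time.sleep(interval)
        return min_ttl <= ttl_limit and len(seen) >= min_hosts

    # looks_fluxy("example.com")  # hypothetical use; a stable domain returns False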


Sam


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Sam Stickland


Will Hargrave wrote:

[EMAIL PROTECTED] wrote:

  

I have to admit that I have no idea how BT charges
ISPs for wholesale ADSL. If there is indeed some kind
of metered charging then Internet video will be a big
problem for the business model. 


They vary, it depends on what pricing model has been selected.

http://tinyurl.com/yjgsum has BT Central pipe pricing. Note those are
prices, not telephone numbers. ;-)

If you convert into per-megabit charges, they come out at least an order of
magnitude greater than the cost of transit, and at least a couple of orders of
magnitude more than peering/partial transit.
  
A cursory look at the document doesn't seem to show any prices above 
622Mbps, but for that you're looking at about £160,000 a year or 
£21/Mbps/month.


2GB per day equates to 190Kbps (assuming a perfectly even distribution 
pattern, which of course would never happen), which would be £3.98 a 
month per user. In reality I imagine that you could see usage peaking at 
about 3 times the average, or considerably greater if large flash crowd 
events occur.
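
The arithmetic, roughly (same assumptions as above, including the 3x 
peak-to-mean guess; 1 GB taken as 10^9 bytes):

    GB_PER_DAY = 2
    PRICE_PER_MBPS_MONTH = 21.0     # GBP, from the 622Mbps BT Central figure

    mean_kbps = GB_PER_DAY * 1e9 * 8 / 86400 / 1000       # ~185 kbps
    cost_mean = (mean_kbps / 1000) * PRICE_PER_MBPS_MONTH
    print(f"mean rate ~{mean_kbps:.0f} kbps -> ~GBP {cost_mean:.2f} per user per month")
    print(f"at 3x peak-to-mean -> ~GBP {3 * cost_mean:.2f} per user per month")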


I would say that in the UK market today, those sorts of figures are 
enough to destroy current margins, but certainly not high enough that 
the costs couldn't be passed onto the end user as part of an "Internet 
TV" package.

p2p is no panacea to get around these charges; in the worst case p2p
traffic will just transit your central pipe twice, which means the
situation is worse with p2p not better.

For a smaller UK ISP, I do not know if there is a credible wholesale LLU
alternative to BT.
  
Both Bulldog (C&W) and Easynet sell wholesale LLU via an L2TP handoff. 
It's been a while since I was in that game so any prices I have will be 
out of date by now, but IIRC both had the option to pay them per line 
_or_ for a central pipe style model. The per line prices were just about 
low enough to remain competitive, with the central pipe being cheaper for 
volume (but of course, only because you'd currently need to buy far less 
bandwidth than the total of all the lines in use; most ADSL users 
consume a surprisingly small amount of bandwidth and they aggregate very 
well).

Note this information is of course completely UK-centric. A more
regionalised model (21CN?!) would change the situation.

Will

  

S


Re: Lucent GBE (4 x VC4) clues needed

2006-09-21 Thread Sam Stickland


Saku Ytti wrote:

(oops technical question in nanog, wearing my asbestos suit)

Consider this topology

GSR - 3750 --(GE over 4xVC4) - NSE100 - NSE100 --(GE over 4xVC4) -- 3550 - GSR

All other fibres are dark fibres, except marked.

When we ping either NSE100 <-> GSR leg, when there is no background traffic
there is no packet loss. If there is even a few Mbps, let's say 10Mbps, of 
background traffic we get 1-5% packet loss on 1500 byte packets, and a bit
less packet loss on small packets. As background traffic increases
packet loss quickly increases.


[SNIP]


There isn't very much that can be configured in the Lucent, and we've
tried pretty much every setting. We've tried to set autonego on
and off in every gear in the path, without any changes to observed
behaviour. 


Did you try power cycling the Lucents after changing the auto-neg 
settings? I've seen some broken autoneg implementations in the past on 
managed media converters that didn't change settings immediately. It's 
worth a shot as you seem to be all out of other ideas ;)


Sam


Re: Multiple BGP Routes in FIB

2006-09-08 Thread Sam Stickland


Hi Glen,

Glen Kent wrote:


Hi,

There is an interesting discussion going on in the IDR WG and I am
cross-posting a mail on Nanog to hear from the operators whether what is
described below is a common practice followed by them:


>> I don't think it's correct to advertise one while using both for
>> forwarding.
>> NOTE: I am assuming that the routes share the same path length but
>> have different AS Paths (as mentioned by you earlier in this mail)
>
> I think this is being done by many providers.

Consider two paths for nlri X

as_path 1 {x y z} next_hop n1
as_path 2 {m n z} next_hop n2

Are you suggesting that providers are installing ecmp routes for X with
next-hops n1 and n2, while advertising only one of the paths to their
IBGP peers?


Yes.


Do providers really do this? Would they install multiple BGP Paths
with different AS Paths (but same length) in their FIB, and yet
advertise only one?

Is this the right thing to do?


I believe the problem is with the BGP withdrawal mechanism. When BGP 
withdraws a route it only specifies the prefix being withdrawn and not 
the path. In this case, if the peer advertised both paths {x y z} and {m 
n z} for a single prefix it would be impossible to withdraw only one of 
the paths. I guess, even when using ECMP, BGP still really only 
considers there to be one best route. Everything else is local FIB 
manipulations based on local policy (in a similar vein to policy routing 
- the BGP advertisements don't always reflect which way the traffic will 
actually be routed).
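
A toy illustration of the point (not real BGP code - just a per-peer 
Adj-RIB-In modelled as a dict keyed on prefix, exactly as withdraws are):

    adj_rib_in = {}

    def receive_update(prefix, as_path, next_hop):
        # A later UPDATE for the same prefix from the same peer implicitly
        # replaces the earlier one; a peer can only offer one path per prefix.
        adj_rib_in[prefix] = {"as_path": as_path, "next_hop": next_hop}

    def receive_withdraw(prefix):
        # A withdraw carries only the prefix - there is no way to say
        # "remove just the {x y z} path but keep {m n z}".
        adj_rib_in.pop(prefix, None)

    receive_update("192.0.2.0/24", ["x", "y", "z"], "n1")
    receive_update("192.0.2.0/24", ["m", "n", "z"], "n2")   # replaces, doesn't add
    print(adj_rib_in["192.0.2.0/24"])                       # only the last path survives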


Sam


Re: Router / Protocol Problem

2006-09-07 Thread Sam Stickland


Hi John,

John Kristoff wrote:

On Thu, 7 Sep 2006 07:27:16 -0400
"Mike Walter" <[EMAIL PROTECTED]> wrote:


Sep  7 06:50:20.697 EST: %SEC-6-IPACCESSLOGP: list 166 denied tcp
69.50.222.8(25) -> 69.4.74.14(2421), 4 packets

[...]
I'm not very familiar with NBAR or how to use it for CodeRed, but this
first rule:


access-list 166 deny   ip any any dscp 1 log


Seems dubious.  So I'm not sure what sets the codepoint to 01
by default, but apparently CodeRed does?  Nevertheless, this seems like
a very weak basis for determining whether something is malicious.


It's his NBAR config lower down that sets the dscp value:

class-map match-any http-hacks
match protocol http url "*default.ida*"
match protocol http url "*cmd.exe*"
match protocol http url "*root.exe*"

policy-map mark-inbound-http-hacks
class http-hacks
set ip dscp 1


So, there are probably two things that could happen here: one, NBAR is 
incorrectly identifying the SMTP traffic as Code Red, or two, the SMTP 
traffic is already marked with dscp 1. If you're using these values 
internally in your own network then they should be reset on all 
externally received traffic.


Sam


RE: text based netflow top ASN tool?

2006-08-04 Thread Sam Stickland

It's called Ehnt - the Extremely Happy Netflow Tool :)

http://ehnt.sourceforge.net/

S

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> matthew zeier
> Sent: 04 August 2006 07:05
> To: [EMAIL PROTECTED]
> Subject: text based netflow top ASN tool?
> 
> 
> 
> I recall using a text based netflow collector that would show me top
> destination ASNs.  I recall it being really simple to get working too.
> 
> But it's been some time since I used it and can't recall what it's called.
> 
> Can someone give me a hint?



RE: Hot weather and power outages continue

2006-07-25 Thread Sam Stickland



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> Sean Donelan
> Sent: 24 July 2006 13:27
> To: nanog@merit.edu
> Subject: Re: Hot weather and power outages continue
>
>
> I've always been a fan of being able to force 100% economizer and chiller
> loop bypass emergency operation; it won't keep you "cool" but will help
> keep your data center from turning into an Easy-Bake Oven(tm). But that
> failure operating mode is rarely part of the standard HVAC programming.

Sean,

Can you elaborate on what you mean by "force 100% economizer and chiller
loop bypass emergency operation"?

Thanks,

Sam



RE: DNS Based Load Balancers

2006-07-04 Thread Sam Stickland

Matt,

A few quick questions for you; if you've got the time to answer, it would be
appreciated (questions inline):

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> Matt Ghali
> Sent: 04 July 2006 07:21
> To: Patrick W. Gilmore
> Cc: nanog@merit.edu
> Subject: Re: DNS Based Load Balancers
> 
> 
> On Sun, 2 Jul 2006, Patrick W. Gilmore wrote:
> 
> > Would you mind giving us a little more to go on than "the love of
> > god" before making strategic architectural decisions?
> >
> > Just in case we like to decide things for ourselves. :)
> 
> Patrick, I am sorry if I have hit a nerve with you- it seems you've
> got a vested interest in the answer to this question, and I
> appreciate your position.
> 
> > For instance, was F5's implementation flawed, or do you have a reason to
> > dislike the basic idea?  And why?
> 
> For the record, what I _should_ have advised the OP was "for the
> love of god, don't try to do this yourself with an appliance." I
> wholeheartedly encourage him to give his local Akamai sales rep a
> call. I am sorry for the confusion and angst my brevity has caused.

We work with a couple of different technologies here - our own GSSs, cache
farms and also external CDNs (for overflow). This is an area that is
currently under evaluation for quite a significant expansion.

Are you able to give some kind of description as to the problems you
experienced whilst using your own appliances? It would be very useful to be
able to avoid making the same mistakes.

Sam



Re: IP Addresses from a different region

2006-01-19 Thread Sam Stickland


That's a fair point David.

I can't, of course, start naming our clients. I could harp on about how they 
are a multinational, running legitimate operations and blah blah blah... But 
you'd only have my word for it. So you'll just have to take my word for 
the fact that we run an operation that comes down hard on nefarious 
activities. Sorry.


Sam


On Thu, 19 Jan 2006, David Ulevitch wrote:


Be wary.

Who is this client?  Some of us in the security abuse world wouldn't mind a 
heads up...


-david


On Jan 19, 2006, at 6:20 AM, Sam Stickland wrote:



Hi,

Long story short... I'm under some considerable pressure from management to 
obtain a /24 of addresses from ARIN. We are a UK based ISP that are, of 
course, members of RIPE. Is this possible? If I approach one of ARIN's LIRs 
can they obtain ARIN PI space for our client, and are there any volunteers? 
Or is it the case that, being outside of ARIN's geographic reach, we 
can't approach them?


Short story long... This is for a client that currently has a USA based 
operation, that wants to consolidate everything in one of our colo centres. 
Their USA servers are involved in some kind of advertising system that 
attempts to do geographic location based on the IP address prefix. I am 
assured that traceroutes, SWIP data, and AS number play no part in this 
geographic lookup.


The client insists that they must have addresses from a prefix that appears 
to have been allocated by ARIN for this system to continue to work. The 
language barrier between us has been scuppering attempts by me to get more 
technical information from them.


As usual management are interested in "solutions not problems" ;) and this 
deal is worth quite a bit of money. I hate to be contributing to the ever 
increasing PI swamp space, but unfortunately I don't pay my own wages.


Any suggestions?

Sam





IP Addresses from a different region

2006-01-19 Thread Sam Stickland


Hi,

Long story short... I'm under some considerable pressure from management 
to obtain a /24 of addresses from ARIN. We are a UK based ISP that are, of 
course, members of RIPE. Is this possible? If I approach one of ARIN's 
LIRs can they obtain ARIN PI space for our client, and are there any 
volunteers? Or is it the case that, being outside of ARIN's geographic 
reach, we can't approach them?


Short story long... This is for a client that currently has a USA based 
operation, that wants to consolidate everything in one of our colo 
centres. Their USA servers are involved in some kind of advertising system 
that attempts to do geographic location based on the IP address prefix. I 
am assured that traceroutes, SWIP data, and AS number play no part in this 
geographic lookup.


The client insists that they must have addresses from a prefix that appears 
to have been allocated by ARIN for this system to continue to work. The 
language barrier between us has been scuppering attempts by me to get more 
technical information from them.


As usual management are interested in "solutions not problems" ;) and this 
deal is worth quite a bit of money. I hate to be contributing to the ever 
increasing PI swamp space, but unfortunately I don't pay my own wages.


Any suggestions?

Sam


Redux - RE: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-19 Thread Sam Stickland


Hi,

I've had a lot of emails asking me how I was getting on with this, so I 
figured I'd do a quick redux of the issues for the archives.


One of the main problems actually turned out to be a damaged strand in the 
CAT5e underfloor cabling, which meant that the connection would work at 
10/100, but not at 1000 (GE) because of the use of extra pairs on GE. So, 
leaving aside comments about castrating the cabling engineer who assured me 
it had tested fine ;) the other problems we encountered were:


Foundry's GE auto-negotiation seems very temperamental.

Another engineer here just told me there's a connection (on SX/MM) between a 
Foundry BigIron and a client's Dell switch in the network where both ends 
only get a link light when the Foundry is configured to neg-full-auto 
(Autonegotiation first, if failed try non-autonegotiation). auto-gig 
(Autonegotiation) and neg-off (Non-autonegotiation) don't work! Only with 
neg-full-auto is the connection fine and stable.


We ditched the copper GBIC and used a 1000Base-SX to 1000Base-TX 
convertor. The foundry won't form a real link (from a fixed SX/SC blade) 
with the 1000Base-SX convertor until "gig-default auto-gig" is entered. 
With the default settings both ends get a link light but the Foundry never 
sees any mac-addresses.


The cisco 2950T does support auto MDI/MDI-X on the gigabit copper ports 
despite this not being mentioned in any switch specification I could 
find. I never did have any luck with a GE cross-over cable (even when not 
using the damaged underfloor cabling).


Despite never being able to get a link-light between the Foundry copper 
GBIC and the cisco 2950T, I've since found another point in our network, 
using exactly the same model GBIC in this configuration, that's working 
fine. The GBIC has since been placed back into a cisco 6500 where it's 
working fine (hooked up to a 10/100/1000 port on another cisco 6500).


So, multi-vendor Copper GE - not always as easy as it sounds!

Sam

On Mon, 16 Jan 2006 [EMAIL PROTECTED] wrote:



Hopefully the cisco copper GBIC supports Auto-MDI though, so a straight cable 
should be good.


S

On Sun, 15 Jan 2006, David Prall wrote:


GigE 1000Base-T requires all 4 pairs of wire. Auto-MDI is not supported on 
the 2950 series, so it won't handle this automagically. I would test with 
speed set to 100 if the foundry can support 10/100/1000 with the Copper 
GBIC. Put the two next to each other and test with 1000Base-T crossover 
cable, removing all the extra stuff if that doesn't work.


1000Base-T requires that pairs 1 and 4 are also crossed along with 2 and 3.
http://en.wikipedia.org/wiki/Crossover_cable

David

--
David C Prall [EMAIL PROTECTED] http://dcp.dcptech.com



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Mark Smith
Sent: Sunday, January 15, 2006 5:44 PM
To: Randy Bush
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: Problems connectivity GE on Foundry BigIron to
Cisco 2950T


Hi Randy,

On Sun, 15 Jan 2006 11:10:04 -1000
Randy Bush <[EMAIL PROTECTED]> wrote:




You are using a crossover cable right?

I'm having a right mare trying to get a Foundry BigIron to
connect up to a cisco 2950T, via Gigabit copper.


i was under the impression that gige spec handled crossover
automagically



According to "Ethernet, The Definitive Guide", that feature is an
optional part of the spec.

One thing I've heard people encounter is that if they use a cross-over
cable, which probably really implies a 100BASE-TX cross-over, then the
ports only go to 100Mbps. A Gig-E rated straight through, in
conjunction
with the automatic crossover feature, was necessary to get to GigE.

Regards,
Mark.

--

"Sheep are slow and tasty, and therefore must remain
constantly
 alert."
   - Bruce Schneier, "Beyond Fear"










Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Thanks Mark - just found the same thing out myself :)

S


On Mon, 16 Jan 2006, Mark Smith wrote:



On Mon, 16 Jan 2006 00:24:35 + (GMT Standard Time)
Sam Stickland <[EMAIL PROTECTED]> wrote:



On Mon, 16 Jan 2006, Mark Smith wrote:


On Sun, 15 Jan 2006 23:50:07 + (GMT Standard Time)
Sam Stickland <[EMAIL PROTECTED]> wrote:



Hi,






The cabling arrangement is:

Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
  GBIC   Cable  Panel Straight Panel  Cable

If I replace the final crossover cable with a straight,

Just do that ^^^ and give it a try.


Will do.



Having done a bit more looking into this myself, one thing that might be
a cause is the cross-over, in the sense that if it is a 100BASE-T
crossover, only two of the pairs will be crossed, and the other two
pairs are usually wired straight.

A GigE cross over, assuming you need one if your ports don't support
automatic cross over, has all four pairs crossed over
(1-3,2-6,3-1,6-2,4-7,5-8,7-4,8-5). My guess would be that if a device
only detects two of the four pairs crossed, it drops back to 100BASE-T.
In other words, GigE cross overs are backwards compatible with
10/100BASE-T, but 10/100BASE-T crossovers aren't forward compatible with
GigE.

A GigE rated straight through path would be the first thing I'd test,
after that, possibly try a GigE crossover somewhere between the devices.

Regards,
Mark.


--

   "Sheep are slow and tasty, and therefore must remain constantly
alert."
  - Bruce Schneier, "Beyond Fear"



Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Wikipedia reveals all. http://en.wikipedia.org/wiki/Crossover_cable

Turns out that for 1000Base-T crossovers pairs 2 and 3, and 1 and 4, have 
to be swapped. A standard TIA-568B to TIA-568A crossover only swaps 2 and 3.


Auto MDI/MDI-X is optional in the 1000Base-T spec so the way forward is to 
try a straight cable and if that fails I'll have to make up a 1000Base-T 
crossover cable.


Thanks for all the help people,

S


On Mon, 16 Jan 2006, Sam Stickland wrote:



On Mon, 16 Jan 2006, Mark Smith wrote:


On Sun, 15 Jan 2006 23:50:07 + (GMT Standard Time)
Sam Stickland <[EMAIL PROTECTED]> wrote:



Hi,






The cabling arrangement is:

Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
  GBIC   Cable  Panel Straight Panel  Cable

If I replace the final crossover cable with a straight,

Just do that ^^^ and give it a try.


Will do.

If that fails I might have to dig out a non-passive 1000SX to 1000-BaseT 
media convertor that's on one of the other sites and give that a try.


Btw, several people have suggested "speed nonegotiate" on the cisco side. 
This command is only supported on GBIC slots on cisco, not on fixed 
1000Base-T interfaces (at least not on a Sup720/WS-X6748-GE-TX).


I also found this reference about the clock signals in use on 1000Base-T:

"Synchronous transmission
1000BASE-T is based on synchronous transmission to facilitate the cancelation 
of Echo/NEXT/FEXT interferences at the receivers. To achieve synchronous 
transmission between the two PHYs at the ends of a link, a master-slave 
clocking relationship is established by the PHYs. The master-slave 
relationship between two stations sharing a link segment is established 
during auto-negotiation. The master PHY uses an external clock to determine 
the timing of transmitter and receiver operations. This master clock is also 
provided to the other stations in the network. The slave PHY recovers the 
clock from the received signal and uses it to determine the timing of 
transmitter operations. In a typical network, the PHY at the repeater will 
become the master and the PHY at the data terminal equipment (DTE) will 
become the slave."


So it would appear that auto-negotiation is a requirement in 1000Base-T.

Sam



Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


On Mon, 16 Jan 2006, Mark Smith wrote:


On Sun, 15 Jan 2006 23:50:07 + (GMT Standard Time)
Sam Stickland <[EMAIL PROTECTED]> wrote:



Hi,






The cabling arrangement is:

Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
  GBIC   Cable  Panel Straight Panel  Cable

If I replace the final crossover cable with a straight,

Just do that ^^^ and give it a try.


Will do.

If that fails I might have to dig out a non-passive 1000SX to 1000-BaseT 
media convertor that's on one of the other sites and give that a try.


Btw, several people have suggested "speed nonegotiate" on the cisco side. 
This command is only supported on GBIC slots on cisco, not on fixed 
1000Base-T interfaces (at least not on a Sup720/WS-X6748-GE-TX).


I also found this reference about the clock signals in use on 1000Base-T:

"Synchronous transmission
1000BASE-T is based on synchronous transmission to facilitate the 
cancelation of Echo/NEXT/FEXT interferences at the receivers. To achieve 
synchronous transmission between the two PHYs at the ends of a link, a 
master-slave clocking relationship is established by the PHYs. The 
master-slave relationship between two stations sharing a link segment is 
established during auto-negotiation. The master PHY uses an external clock 
to determine the timing of transmitter and receiver operations. This 
master clock is also provided to the other stations in the network. The 
slave PHY recovers the clock from the received signal and uses it to 
determine the timing of transmitter operations. In a typical network, the 
PHY at the repeater will become the master and the PHY at the data 
terminal equipment (DTE) will become the slave."


So it would appear that auto-negotiation is a requirement in 1000Base-T.

Sam


Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Replying to my own email..

I've found some sites that suggest it's not possible to disable 
auto-negotiation on 1000Base-T since other operational parameters are 
negotiated including selection of the master clock signal. I was aware 
that flow control was negotiated, but not the clock signal.


Can anyone elaborate?

Sam


On Sun, 15 Jan 2006, Sam Stickland wrote:



Hi,

On Sun, 15 Jan 2006, Paul G wrote:


- Original Message - From: "Farrell,Bob" <[EMAIL PROTECTED]>
To: "Randy Bush" <[EMAIL PROTECTED]>; "David Hubbard" 
<[EMAIL PROTECTED]>

Cc: "Sam Stickland" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, January 15, 2006 4:45 PM
Subject: RE: Problems connectivity GE on Foundry BigIron to Cisco 2950T


Cisco commands-



speed 1000
duplex full


the bigiron wants (iirc):

spe 1000-full

i strongly suggest you peruse the cli reference for both devices.


On the foundry GBIC blades you can't configure the speed and duplex settings, 
they only support 1000-full.


(config-if-e1000-1/2)#speed-duplex 1000-full
Error - can't change speed and duplex mode

I've dug through as much information as I can about the cisco 2950T and 
802.3z/802.3ab and disabling the auto-negotiation. There appears to be no 
command at all available to do this.


The cabling arrangement is:

Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
GBIC   Cable  Panel Straight Panel  Cable

If I replace the final crossover cable with a straight, change the foundry to 
a 10/100 port, and plug the final end into a host NIC instead of the cisco I 
get a connection. Crossover cable has been changed twice now, and the RJ45 
GBIC was previously working in a cisco 6500.


I am extensively familiar (at least I believe I am) with both these models, 
and this one has me stumped.


If nobody else can see any configuration errors I guess I'm down to hardware 
issues.


Sam



Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Hi,

On Sun, 15 Jan 2006, Paul G wrote:


- Original Message - From: "Farrell,Bob" <[EMAIL PROTECTED]>
To: "Randy Bush" <[EMAIL PROTECTED]>; "David Hubbard" 
<[EMAIL PROTECTED]>

Cc: "Sam Stickland" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Sunday, January 15, 2006 4:45 PM
Subject: RE: Problems connectivity GE on Foundry BigIron to Cisco 2950T


Cisco commands-



speed 1000
duplex full


the bigiron wants (iirc):

spe 1000-full

i strongly suggest you peruse the cli reference for both devices.


On the foundry GBIC blades you can't configure the speed and duplex 
settings, they only support 1000-full.


(config-if-e1000-1/2)#speed-duplex 1000-full
Error - can't change speed and duplex mode

I've dug through as much information as I can about the cisco 2950T and 
802.3z/802.3ab and disabling the auto-negotiation. There appears to be no 
command at all available to do this.


The cabling arrangement is:

Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
 GBIC   Cable  Panel Straight Panel  Cable

If I replace the final crossover cable with a straight, change the foundry 
to a 10/100 port, and plug the final end into a host NIC instead of the 
cisco I get a connection. Crossover cable has been changed twice now, and 
the RJ45 GBIC was previously working in a cisco 6500.


I am extensively familiar (at least I believe I am) with both these models, 
and this one has me stumped.


If nobody else can see any configuration errors I guess I'm down to 
hardware issues.


Sam


RE: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Hi,

Yup, it's definitely a cross-over cable. ;) I had already tried this 
suggestion but the cisco 2950T doesn't appear to have the "no nego auto" 
command :/


(config)#int Gi0/2
(config-if)#no n?
% Unrecognized command
(config-if)#no n
(config-if)#no neg auto
   ^
% Invalid input detected at '^' marker.

Sam

On Sun, 15 Jan 2006, David Hubbard wrote:


You are using a crossover cable right?  If that's all set, you
do need to have neg-off on the Foundry and "no nego auto" on the
Cisco.  I haven't used the rj-45 gbics in the Foundry equipment
before, not sure if that could be an issue.  I would go with
the hard set 1000-full on both sides.

David

From: Sam Stickland


Hi,

I'm having a right mare trying to get a Foundry BigIron to
connect up to a cisco 2950T, via Gigabit copper.

The Foundry BigIron is using a cisco RJ45/copper GBIC that
was pulled from a live cisco 6500, where it was working
fine. The cisco 2950T has two fixed 10/100/1000 RJ45 ports.

The cables between the equipment have been tested and are fine.

The Foundry has three different gigabit negotiation modes:

   auto-gigAutonegotiation
   neg-full-auto   Autonegotiation first, if failed try
non-autonegotiation
   neg-off Non-autonegotiation

I've tried all three, complete with all the other
possibilities with the cisco 2950T (which has fixed full
duplex operation, but can be set to 'speed auto' or
'speed 1000').

None of these combinations bring up the link. The cisco 2950
never gets a link light. The Foundry gets a link light
regardless when its mode is set to 'gig-default neg-off'.

I'm at a bit of a loss to explain this. Does anyone know of any
configuration issues that can explain this, or is it time to start
swapping out hardware components?

Sam






Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Sam Stickland


Hi,

I'm having a right mare trying to get a Foundry BigIron to connect up to a 
cisco 2950T, via Gigabit copper.


The Foundry BigIron is using a cisco RJ45/copper GBIC that was pulled from 
a live cisco 6500, where it was working fine. The cisco 2950T has two 
fixed 10/100/1000 RJ45 ports.


The cables between the equipment have been tested and are fine.

The Foundry has three different gigabit negotiation modes:

  auto-gigAutonegotiation
  neg-full-auto   Autonegotiation first, if failed try non-autonegotiation
  neg-off Non-autonegotiation

I've tried all three, complete with all the other possibilities with the 
cisco 2950T (which has fixed full duplex operation, but can be set to 
'speed auto' or 'speed 1000').


None of these combinations bring up the link. The cisco 2950 never gets a 
link light. The Foundry gets a link light regardless when its mode is set 
to 'gig-default neg-off'.


I'm at a bit of a loss to explain this. Does anyone know of any 
configuration issues that can explain this, or is it time to start 
swapping out hardware components?


Sam


Best Practice where BGP router is "distance" from client

2005-06-16 Thread Sam Stickland


Hi,

I'm wondering what is seen as best practice in this network layout:

cisco6500  Network Cloud  cisco3550 --- Client

The client needs a full BGP feed, which of course the 3550 is unable to 
provide, but the cisco 6500 can. The network cloud is relatively simple, 
and is running IP.


There's a few options:

1) Create a VLAN all the way back from the client to the cisco 6500, and 
rely on STP/RSTP to provide redundancy over the cloud


2a) Get the client to form a BGP session with the cisco3550 and announce 
their network(s) to it. The cisco3550 announces our internal address range 
to the client. Over the top of this, another BGP session (multihop) is set 
up between the client and the 6500. Layer3 protocols (in this case OSPF) 
provide redundancy in the cloud. Traffic entering our network for the 
client will be routed straight to the cisco 3550. Traffic from the client 
will be backhauled all the way to the cisco 6500 before being sent on its 
way.


2b) Same as 2a) but with next-hop-unchanged configured on the cisco6500. 
This should mean that traffic leaving the client will be routed from the 
cisco3550 to the most appropriate network exit point. The only problem I 
can see with this scenario is if private loopback addresses are in use on 
the iBGP sessions.


Thoughts? Are there any nasty gotchas I've missed, or pain to be encountered 
later?


Re: BCP regarding TOS transparancy for internet traffic

2005-05-25 Thread Sam Stickland


On Wed, 25 May 2005, Eric A. Hall wrote:




On 5/25/2005 7:08 AM, Mikael Abrahamsson wrote:


I've been debating whether the TOS header information must be left
untouched by an ISP, or if it's ok to zero/(or modify) it for internet
traffic. Does anyone know of a BCP that touches on this?

My thoughts was otherwise to zero TOS information incoming on IXes,
transits and incoming from customers, question is if customers expect this
to be transparent or not.

Reading 
it looks like in the Diffserv terminology, it's ok to do whatever one
would want.

Any feedback appreciated.


Long ugly history here that I will try to avoid.

IP is end-to-end and you aren't supposed to muck with the packets that are
sent by your customers (or worse, sent by *their* customers). You don't
know what the bits mean to their applications (unless you are one of the
end-points of course) and screwing around with that stuff is a good way to
make people very angry. They're not your packets--leave them alone unless
you are being paid to do otherwise.


While it's true that IP is end-to-end, are fields such as TOS and DSCP 
meant to be end-to-end? A case could be argued that they are used by the 
actual forwarding devices en route in order to make QoS or even routing 
decisions, and that the end devices shouldn't actually rely on the values 
of these fields?


For example, if ISPA is paying ISPX for a different amount of guaranteed 
bandwidth than ISPB, how is ISPX meant to mark their traffic in such 
a way as to control them separately without using DSCP/TOS marking (assuming 
a non-MPLS network)?


Also, if you are using TOS in your network to mark VoIP traffic for 
guaranteed bandwidth then you're pretty much gonna have to zero it on entry 
into the network, or people are going to be able to eat into your VoIP 
buckets just by setting the right TOS bits.


Seems to me that the actual meaning of TOS and DSCP is utilised en route 
and not by the end nodes. What reason could the end nodes have to rely on 
these values?


Sam


Utilising upstream MED values

2005-03-18 Thread Sam Stickland
Hi,
We're looking at doing outbound traffic engineering based on upstream ("tier 1") 
MED values. But, of course, there's no standard for MED values. Assuming I 
can get definitions from the upstreams as to what their MED values mean, I 
have to rebase them into a common range.

However, a route-map (cisco, foundry) only allows you to set, add or 
subtract to an MED value, so this doesn't seem possible.

Is anyone doing this, and if so how?
Sam


E1 - RJ45 pinout with ethernet crossover cable

2005-02-25 Thread Sam Stickland
Hi,
Quick question: if I have two E1 ports (RJ45), then will running a 
straight ethernet cable between the two ports have the same effect as 
plugging a balun into each port and using a pair of coax (over a v. 
short distance)?

Likewise, would using an ethernet crossover cable have the same effect as 
swapping the pairs round on one balun?

Or are the pinouts different to ethernet? I tried googling but couldn't 
find anything (perhaps because I can't seem to spell balun :/ ).

Sam


IPv6, IPSEC and deep packet inspection

2004-12-31 Thread Sam Stickland
Since IPSEC is an integral part of IPv6, won't this have an effect on 
deep packet inspection firewalls? Is this type of inspection expected to 
work in IPv6?

Perhaps, using some kind of NAP, the firewall is allowed to speak on behalf 
of the host(s) it firewalls, so that to the client the firewall itself 
appears to be the IPSEC endpoint?

Sam


Re: Affects of rate-limiting at the far end of links

2004-12-13 Thread Sam Stickland
On Mon, 13 Dec 2004, Alex Bligh wrote:
--On 13 December 2004 13:18 + Sam Stickland <[EMAIL PROTECTED]> 
wrote:

doesn't lock out traffic for such long periods of time.
Could it be that buffers and flow-control over the 14ms third party leg
are causing the rate-limiting leaky bucket to continue to overflow long
after it's full?
Or you are losing line protocol keepalives of some sort (e.g. at L2), or
routing protocol packets. It may also be that your MPLS provider limits
the traffic at X kbps INCLUDING protocol overhead - if so it's going to
police out all sorts of important stuff (assuming you are running FR, ATM
or something rather than some sort of TDM over MPLS).
Hey Alex, thanks for your reply. It's all IP over MPLS AFAIK, and we're 
using static routes to the site so I can't imagine it's either of these.
We're going to traffic shape at the remote end, so this should alleviate 
the problem. I just really wanted to check that the line 'outages' could be 
caused by the nature of rate-limiting (looks like it can) and weren't 
indicative of another underlying problem.
indicitive of another underlying problem.

Thanks,
S


Affects of rate-limiting at the far end of links

2004-12-13 Thread Sam Stickland
Hi,
Just a quicky. We've got a leased line out to a remote site that's pretty 
much at capacity for remote-to-local site traffic, and from time to time 
it appears to lock up for periods of 30 seconds or more.

Investigating, it appears we traffic shape outbound and rate-limit ingress 
at the 'local' end of the line, but nothing is done at the 'remote' end. 
The remote end is some 14ms away across a third party MPLS network.

Obviously we need to shape at the remote end, but the current behaviour 
intrigues me. Rate-limiting, while bad compared to shaping, in my 
experience doesn't lock out traffic for such long periods of time.

Could it be that buffers and flow-control over the 14ms third party leg 
are causing the rate-limiting leaky bucket to continue to overflow long 
after it's full?
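
For what it's worth, the difference in miniature (a toy single-rate model 
with arbitrary numbers): a policer with an exhausted bucket just keeps 
discarding for as long as the offered load exceeds the contracted rate, 
whereas a shaper queues and delays instead:

    RATE = 128_000 / 8        # contracted rate in bytes/sec (assumed figure)
    BURST = 16_000            # bucket depth in bytes (assumed figure)

    def police(packet_sizes, dt):
        """packet_sizes arrive every dt seconds; return how many get dropped."""
        tokens, dropped = BURST, 0
        for size in packet_sizes:
            tokens = min(BURST, tokens + RATE * dt)
            if size <= tokens:
                tokens -= size
            else:
                dropped += 1          # no queue: excess is simply discarded
        return dropped

    # Offer 1500-byte packets every 10 ms (1.2 Mbps) against a 128 kbps contract:
    print(police([1500] * 1000, 0.010), "of 1000 packets dropped by the policer")
    # A shaper would instead hold the excess in a queue, trading loss for delay,
    # until the queue itself overflowed.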

Sam


Remote sites, aggregates and more-specific routes

2004-12-07 Thread Sam Stickland
Hi,
We currently announce our entire range as the largest possible aggregates. 
We are about to add the first site that's a sizable distance away.

The link to the remote site is relatively expensive, so we don't want to 
have to backhaul traffic between the sites if we can help it.

We seem to have the following options available:
1) Announce the greater aggregate at both ends, and risk having to haul 
traffic between the sites ourselves.

2) Deaggregate our ranges completely. I don't particularly want to do this, 
since the predicted 80/20 split in IP address usage across the sites will 
create quite a few new routes in the ever growing table.

3) Only announce more-specifics at the remote site, and tag the more 
specific routes NO-EXPORT if we peer with the AS in both locations.

Am I right in thinking that #3 is the best option? AFAICS it adds no 
new unnecessary routes to the global table (outside of our immediate peers 
and transit providers) and still keeps unnecessary traffic off the 
intersite link.

Are there any options I missed?
Sam


Weird MTU and TCP retranmission problem

2004-10-22 Thread Sam Stickland
I haven't seen anything like this before, so I'm hoping someone here could 
enlighten me.

We have a customer that has taken a single co-located server from us. They 
can download large files from this server to any machine, except the Mac 
OS X machines at the end of their shared leased line at their office 
premises. Windows and OS9 machines at this site can download these files 
fine, as can Mac OS X machines at the end of consumer ADSL lines 
(offsite).

Downloads just stall shortly after starting, which initially appears to be 
an MTU problem. Lowering the MTU on the affected Mac OS X machines fails to 
solve the problem until the MTU is set to 100 (yes, 100) bytes. Strangely, 
the Windows machines don't have this problem. These affected Mac OS X 
machines don't experience this problem if the exact same files are 
downloaded from a different server in the same datacentre, behind the same 
router.

An ethereal dump of a failed download shows the following:
  1.873772  Mac -> Server TCP [TCP Dup ACK 203#32] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598
  1.874145 Server -> Mac  HTTP Continuation
  1.885515  Mac -> Server TCP [TCP Dup ACK 203#33] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598
  1.885889 Server -> Mac  HTTP Continuation
  1.897384  Mac -> Server TCP [TCP Dup ACK 203#34] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598
  1.897758 Server -> Mac  HTTP Continuation
  1.909627  Mac -> Server TCP [TCP Dup ACK 203#35] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598
  1.921996  Mac -> Server TCP [TCP Dup ACK 203#36] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598
  1.933865  Mac -> Server TCP [TCP Dup ACK 203#37] 7798 > http [ACK] 
Seq=786 Ack=106053 Win=65535 Len=0 TSV=4064715650 TSER=2370598

which leaves me unsure as to whether the server is failing to receive the 
ACKs (hence the ACK retransmissions), or whether the Mac is failing to 
receive the next packet and so is retransmitting what it believes to be a 
lost ACK for the last packet?

The server is an HP Proliant running Windows 2003, setup and installed by 
HP. It's running the built-in windows firewall (ICF), but the effects are 
the same if this is disabled.

Any suggestions of where to continue to look would be very much 
appreciated.

Sam


Re: I-D on operational MTU/fragmentation issues in tunneling

2004-10-19 Thread Sam Stickland

On Thu, 14 Oct 2004, Joe Maimon wrote:
Sabri Berisha wrote:
On Mon, Oct 11, 2004 at 11:12:55AM +0300, Pekka Savola wrote:
Hi Pekka and others,

Please send comments to me by the end of this week, either on- of
off-list, as you deem appropriate.
With the risk of stating the obvious I would say that normally, PMTUD
should do the trick. 
On todays internet everything is more reliable than PMTUD.
How about replacing it completely with something more inband, less prone to 
firewall breakage?
You mean something like Packetization Layer Path MTU Discovery (PLPMTUD)?
http://www.ietf.org/internet-drafts/draft-ietf-pmtud-method-02.txt
http://www.psc.edu/~mathis/MTU/pmtud/
Sam


TDM over IP products

2004-09-07 Thread Sam Stickland
Hi,
I'm interested in experiences (good and bad) that people have had with 
various TDM over IP products.

If people can reply off-list I'll post a summary to the list in a day or 
two.

Sam


Re: low-latency bandwidth for cheap?

2004-08-06 Thread Sam Stickland

On Wed, 4 Aug 2004, Randy Bush wrote:

How much is "low latency"? I have 6ms RTT over my 8M/800k ADSL, it's
usually 6-8ms over an equivalent 2M g.shdsl line.
interesting question.  i have two adsl lines.  pinging the first hop
router
 verizon / lavanet (hawi to honolulu, 25 mins air time by plane)
   64 bytes from 64.65.95.73: icmp_seq=0 ttl=64 time=20.637 ms
   64 bytes from 64.65.95.73: icmp_seq=1 ttl=64 time=22.186 ms
   64 bytes from 64.65.95.73: icmp_seq=2 ttl=64 time=21.965 ms
   64 bytes from 64.65.95.73: icmp_seq=3 ttl=64 time=21.723 ms
   64 bytes from 64.65.95.73: icmp_seq=4 ttl=64 time=21.538 ms
 qwest / iinet (30 miles from bainbridge to hellview wa us)
   64 bytes from 209.20.186.1: icmp_seq=0 ttl=63 time=67.008 ms
   64 bytes from 209.20.186.1: icmp_seq=1 ttl=63 time=67.700 ms
   64 bytes from 209.20.186.1: icmp_seq=2 ttl=63 time=56.696 ms
   64 bytes from 209.20.186.1: icmp_seq=3 ttl=63 time=60.249 ms
i do not know why and can get no useful info on provisioning.
i know iinet is redback.
Looks like Qwest are using data interleaving on their connection, while 
Verizon aren't. It reduces data loss at the expense of increased latency: by 
interleaving bits over time, a short burst of signal-destroying noise can 
only remove part of any given larger block. Data blocks reserve some space 
for error-correction data, which can salvage a partially damaged block.
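
A toy sketch of the idea, with arbitrary block and depth sizes: symbols go 
out column-by-column, so a burst on the wire lands only one symbol deep in 
each block - shallow enough for the per-block error correction to repair:

# Toy block interleaver (block and depth sizes are arbitrary choices).
DEPTH, BLOCK = 8, 16   # interleave 8 blocks of 16 symbols each

def interleave(blocks):
    """Transmit column-by-column instead of block-by-block."""
    return [blocks[row][col] for col in range(BLOCK) for row in range(DEPTH)]

def deinterleave(stream):
    blocks = [[None] * BLOCK for _ in range(DEPTH)]
    for i, symbol in enumerate(stream):
        blocks[i % DEPTH][i // DEPTH] = symbol
    return blocks

blocks = [[(row, col) for col in range(BLOCK)] for row in range(DEPTH)]
stream = interleave(blocks)

for i in range(40, 48):       # a noise burst wipes out 8 consecutive symbols
    stream[i] = None

# After de-interleaving, each block has lost only one symbol, which a modest
# amount of per-block error-correction data can repair.
damaged = deinterleave(stream)
print([row.count(None) for row in damaged])   # -> [1, 1, 1, 1, 1, 1, 1, 1]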

I hear a lot of ISPs in the States are turning on interleaving by default 
these days, while in the UK I've never actually encountered it. Some ADSL 
modems also have an option to disable it.

Sam


What ever happened to... MARP (Multi-Access Reachability Protocol)

2004-07-27 Thread Sam Stickland

Last draft appeared to be 

http://www.watersprings.org/pub/id/draft-retana-marp-02.txt

which expired Sept 2003 (Abstract: defines a protocol to quickly determine
the existence or aliveness of devices attached to a shared media
(broadcast) subnet.)

First read about it in this presentation, where it was billed as an 
alternative to fast hellos:

http://routing.internet2.edu/wg-meetings/20021029-I2rwg-slides/20021029-daugherty-routing-opt.pdf

The idea was that a switch could notify connected routers of link failures 
immediately - there would be no need to wait for the dead and hold-timers 
to expire.

Is this idea still flying? There appears to be very little on the net 
about it, except what I've mentioned above.

Sam



RE: VeriSign's rapid DNS updates in .com/.net (fwd from ml)

2004-07-22 Thread Sam Stickland

I got forwarded this URL from Patrick McManus. I haven't had a chance to
read the paper myself yet so I won't comment on it. I've included the link
and the abstract below.

A choice quote is "these results suggest that the performance of DNS is
not as dependent on aggressive caching as is commonly believed, and that
the widespread use of dynamic, low-TTL A-record bindings should not
degrade DNS performance."

http://nms.lcs.mit.edu/papers/dns-imw2001.html



Abstract:

This paper presents a detailed analysis of traces of DNS and associated 
TCP traffic collected on the Internet links of the MIT Laboratory for 
Computer Science and the Korea Advanced Institute of Science and 
Technology (KAIST). The first part of the analysis details how clients at 
these institutions interact with the wide-area DNS system, focusing on 
performance and prevalence of failures. The second part evaluates the 
effectiveness of DNS caching. 

In the most recent MIT trace, 23% of lookups receive no answer; these 
lookups account for more than half of all traced DNS packets since they 
are retransmitted multiple times. About 13% of all lookups result in an 
answer that indicates a failure. Many of these failures appear to be 
caused by missing inverse (IP-to-name) mappings or NS records that point 
to non-existent or inappropriate hosts. 27% of the queries sent to the 
root name servers result in such failures. 

The paper presents trace-driven simulations that explore the effect of 
varying TTLs and varying degrees of cache sharing on DNS cache hit rates. 
The results show that reducing the TTLs of address (A) records to as low 
as a few hundred seconds has little adverse effect on hit rates, and that 
little benefit is obtained from sharing a forwarding DNS cache among more 
than 10 or 20 clients. These results suggest that the performance of DNS 
is not as dependent on aggressive caching as is commonly believed, and 
that the widespread use of dynamic, low-TTL A-record bindings should not 
degrade DNS performance. 

Sam

On Thu, 22 Jul 2004, Sam Stickland wrote:

> 
> I think I ought to qualify my earlier email - I certainly didn't mean to 
> suggest that this would happen. I meant to merely comment on what the 
> expected increase in load might be if we did see a trend towards lower 
> TTLs.
> 
> Any trend towards lower TTLs would be outside of Verisign's control 
> anyhow, and if it did happen, it would no doubt be a gradual effect. Which 
> brings me back to my original question - does anyone know of any stastics 
> for TTL values?
> 
> Sam
> 
> On Thu, 22 Jul 2004, Henry Linneweh wrote:
> 
> > 
> > Before a big panic starts, they can restore it back to
> > the way it was if there is an event of such proportion
> > to totally hoze the entire network or any major
> > portion of it, until they fix any major issue with
> > these changes
> > 
> > -Henry
> > 
> > --- Sam Stickland <[EMAIL PROTECTED]> wrote:
> > > 
> > > Well, a naive calculation, based on reducing the TTL
> > > to 15 mins from 24
> > > hours to match Verisign's new update times, would
> > > suggest that the number
> > > of queries would increase by (24 * 60) / 15 = 96
> > > times? (or twice that if 
> > > you factor in for the Nyquist interval).
> > > 
> > > Any there any resources out there there that have
> > > information on global 
> > > DNS statistics? ie. the average TTL currently in
> > > use.
> > > 
> > > But I guess it remains to be seen if this will have
> > > a knock on effect like 
> > > that described below. Verisign are only doing this
> > > for the nameserver 
> > > records at present time - it just depends on whether
> > > expection for such 
> > > rapid changes gets pushed on down.
> > > 
> > > Sam
> > > 
> > > On Thu, 22 Jul 2004, Ray Plzak wrote:
> > > 
> > > > 
> > > > Good point!  You can reduce TTLs to such a point
> > > that the servers will
> > > > become preoccupied with doing something other than
> > > providing answers.
> > > > 
> > > > Ray
> > > > 
> > > > > -Original Message-
> > > > > From: [EMAIL PROTECTED]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of
> > > > > Daniel Karrenberg
> > > > > Sent: Thursday, July 22, 2004 3:12 AM
> > > > > To: Matt Larson
> > > > > Cc: [EMAIL PROTECTED]
> > > > > Subject: Re: VeriSign's rapid DNS updates in
> > > .com/.net
> > > > > 
> > > > > 
> > > > &

RE: VeriSign's rapid DNS updates in .com/.net (fwd from ml)

2004-07-22 Thread Sam Stickland

I think I ought to qualify my earlier email - I certainly didn't mean to 
suggest that this would happen. I meant to merely comment on what the 
expected increase in load might be if we did see a trend towards lower 
TTLs.

Any trend towards lower TTLs would be outside of Verisign's control 
anyhow, and if it did happen, it would no doubt be a gradual effect. Which 
brings me back to my original question - does anyone know of any statistics 
for TTL values?

Sam

On Thu, 22 Jul 2004, Henry Linneweh wrote:

> 
> Before a big panic starts, they can restore it back to
> the way it was if there is an event of such proportion
> to totally hoze the entire network or any major
> portion of it, until they fix any major issue with
> these changes....
> 
> -Henry
> 
> --- Sam Stickland <[EMAIL PROTECTED]> wrote:
> > 
> > Well, a naive calculation, based on reducing the TTL
> > to 15 mins from 24
> > hours to match Verisign's new update times, would
> > suggest that the number
> > of queries would increase by (24 * 60) / 15 = 96
> > times? (or twice that if 
> > you factor in for the Nyquist interval).
> > 
> > Any there any resources out there there that have
> > information on global 
> > DNS statistics? ie. the average TTL currently in
> > use.
> > 
> > But I guess it remains to be seen if this will have
> > a knock on effect like 
> > that described below. Verisign are only doing this
> > for the nameserver 
> > records at present time - it just depends on whether
> > expection for such 
> > rapid changes gets pushed on down.
> > 
> > Sam
> > 
> > On Thu, 22 Jul 2004, Ray Plzak wrote:
> > 
> > > 
> > > Good point!  You can reduce TTLs to such a point
> > that the servers will
> > > become preoccupied with doing something other than
> > providing answers.
> > > 
> > > Ray
> > > 
> > > > -Original Message-
> > > > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> > > > Daniel Karrenberg
> > > > Sent: Thursday, July 22, 2004 3:12 AM
> > > > To: Matt Larson
> > > > Cc: [EMAIL PROTECTED]
> > > > Subject: Re: VeriSign's rapid DNS updates in
> > .com/.net
> > > > 
> > > > 
> > > > Matt, others,
> > > > 
> > > > I am a quite concerned about these zone update
> > speed improvements
> > > > because they are likely to result in
> > considerable pressure to reduce
> > > > TTLs **throughout the DNS** for little to no
> > good reason.
> > > > 
> > > > It will not be long before the marketeers will
> > discover that they do not
> > > > deliver what they (implicitly) promise to
> > customers in case of **changes
> > > > and removals** rather than just additions to a
> > zone.
> > > > 
> > > > Reducing TTLs across the board will be the
> > obvious *soloution*.
> > > > 
> > > > Yet, the DNS architecture is built around
> > effective caching!
> > > > 
> > > > Are we sure that the DNS as a whole will remain
> > operational when
> > > > (not if) this happens in a significant way?
> > > > 
> > > > Can we still mitigate that trend by education of
> > marketeers and users?
> > > > 
> > > > Daniel
> > > 
> > 
> > 
> 



RE: VeriSign's rapid DNS updates in .com/.net

2004-07-22 Thread Sam Stickland

Well, a naive calculation, based on reducing the TTL from 24 hours to 15 
minutes to match Verisign's new update times, would suggest that the number 
of queries would increase by (24 * 60) / 15 = 96 times (or twice that if you 
factor in the Nyquist interval).
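
In code, just to spell the arithmetic out (the doubling is simply sampling 
at twice the rate of change):

# Back-of-the-envelope: how the cache-miss (query) rate scales if TTLs drop
# from 24 hours to 15 minutes, assuming each resolver re-queries once per TTL.
old_ttl_minutes = 24 * 60
new_ttl_minutes = 15

increase = old_ttl_minutes / new_ttl_minutes
print(increase)        # -> 96.0 times as many queries per caching resolver
print(increase * 2)    # -> 192.0 if you sample at twice the rate of change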

Are there any resources out there that have information on global DNS 
statistics? i.e. the average TTL currently in use.

But I guess it remains to be seen if this will have a knock-on effect like 
that described below. Verisign are only doing this for the nameserver 
records at the present time - it just depends on whether the expectation of 
such rapid changes gets pushed on down.

Sam

On Thu, 22 Jul 2004, Ray Plzak wrote:

> 
> Good point!  You can reduce TTLs to such a point that the servers will
> become preoccupied with doing something other than providing answers.
> 
> Ray
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> > Daniel Karrenberg
> > Sent: Thursday, July 22, 2004 3:12 AM
> > To: Matt Larson
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: VeriSign's rapid DNS updates in .com/.net
> > 
> > 
> > Matt, others,
> > 
> > I am a quite concerned about these zone update speed improvements
> > because they are likely to result in considerable pressure to reduce
> > TTLs **throughout the DNS** for little to no good reason.
> > 
> > It will not be long before the marketeers will discover that they do not
> > deliver what they (implicitly) promise to customers in case of **changes
> > and removals** rather than just additions to a zone.
> > 
> > Reducing TTLs across the board will be the obvious *soloution*.
> > 
> > Yet, the DNS architecture is built around effective caching!
> > 
> > Are we sure that the DNS as a whole will remain operational when
> > (not if) this happens in a significant way?
> > 
> > Can we still mitigate that trend by education of marketeers and users?
> > 
> > Daniel
> 



Re: OT: xDSL hardware

2004-07-14 Thread Sam Stickland

On Wed, 14 Jul 2004, Joe Maimon wrote:
>
> Sam Stickland wrote:
> 
> >On Tue, 13 Jul 2004, Eric Kagan wrote:
> >
> >>There is a WIC-1ADSL for 1700/2600. Not sure about an SDSL WIC.  We have
> >>done a few T1/ADSL and ADSL/ISDN setups and it seems to work fairly well.  I
> >>also spoke to a computer integrator that claimed they were working with
> >>Cisco to develop a ping like action for determining if the next hop was
> >>alive and if not set the interface down so it would failover to secondary
> >>interface / route.  I assume it would be a 12.3(x) ish release.
> >
> >I believe you're talking about this, available in 12.3(8)T.
> >
> >Reliable Static Routing Backup Using Object Tracking
> >
> >http://www.cisco.com/univercd/cc/td/doc/product/software/ios123/123newft/123limit/123x/123xe/dbackupx.htm
> >
> >Sam
> >
> Yes but is 12.3(8)T available for the 1700/2600 routers?  IIRC, not.
> 
> Joe

Admittedly I haven't checked the feature navigator, but that page says 
"This feature is supported in all Cisco IOS software images for the Cisco 
1700 series modular access routers except the Cisco IOS IP Base image.", 
so I would imagine that it is.

S



Re: OT: xDSL hardware

2004-07-14 Thread Sam Stickland

On Tue, 13 Jul 2004, Eric Kagan wrote:

> 
> > > Is anyone aware of a WIC card that will work with the lower end Cisco
> gear
> > > (1700 or 2600 series) that will allow me to terminate an ADSL or
> > > preferably an SDSL line directly on the router?  The idea being that the
> > > router is then aware of link up/down status...
> 
> There is a WIC-1ADSL for 1700/2600. Not sure about an SDSL WIC.  We have
> done a few T1/ADSL and ADSL/ISDN setups and it seems to work fairly well.  I
> also spoke to a computer integrator that claimed they were working with
> Cisco to develop a ping like action for determining if the next hop was
> alive and if not set the interface down so it would failover to secondary
> interface / route.  I assume it would be a 12.3(x) ish release.

I believe you're talking about this, available in 12.3(8)T.

Reliable Static Routing Backup Using Object Tracking

http://www.cisco.com/univercd/cc/td/doc/product/software/ios123/123newft/123limit/123x/123xe/dbackupx.htm

Sam



RE: 802.17 RPR and L2 Ethernet interoperablity (Ethernet over RPR)

2004-07-07 Thread Sam Stickland

On Wed, 7 Jul 2004, Mikael Abrahamsson wrote:

> 
> On Wed, 7 Jul 2004, Sam Stickland wrote:
> 
> > One question about this, the Q-in-Q tunnelling would have to take place on
> > the switch connected to the ring - what happens if the packet has already
> > been placed in a dot1Q tunnel? I haven't really worked much with dot1Q
> > tunneling - are their any know problems with extra tags? (aside from MTU 
> > issues, but I imagine most rings will support at least 9bytes)
> 
> Most switches will only see the outer tag and will thus be transparent for 
> Q-in-Q:ed packets.

That was my worry - the definition of 'most'. 99% of switches, or 60%? This 
isn't actually a standard, is it? So I presume this behaviour is expected, 
but not required?

Sam



RE: 802.17 RPR and L2 Ethernet interoperablity (Ethernet over RPR)

2004-07-07 Thread Sam Stickland

Thanks for the reply. Pretty much everyone has told me that it's vendor 
specific, although the implementation mentioned below sounds nice. Any 
chance of naming that vendor?

One question about this: the Q-in-Q tunnelling would have to take place on 
the switch connected to the ring - what happens if the packet has already 
been placed in a dot1Q tunnel? I haven't really worked much with dot1Q 
tunnelling - are there any known problems with extra tags? (Aside from MTU 
issues, but I imagine most rings will support at least 9k byte frames.)
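
For reference, the frame-size arithmetic I'm assuming, using the standard 
802.3/802.1Q header sizes:

# Frame-size arithmetic for stacked 802.1Q tags.
PAYLOAD = 1500           # standard Ethernet MTU
ETH_OVERHEAD = 18        # dst MAC + src MAC + ethertype + FCS
TAG = 4                  # each 802.1Q / Q-in-Q tag adds 4 bytes

for tags in range(3):
    print(f"{tags} tag(s): {PAYLOAD + ETH_OVERHEAD + tags * TAG} bytes on the wire")
# 0 tags: 1518, 1 tag: 1522, 2 tags (Q-in-Q): 1526 - comfortably inside a
# ring that supports jumbo (roughly 9k byte) frames.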

Sam

On Tue, 6 Jul 2004, Michael Smith wrote:

> Hello:
> 
> I think this is pretty provider-specific.  However, we are doing this
> right now with a particular vendor using their flavor of RPR.  The ring
> uses Q in Q tunneling in the core and all switches communicate directly
> to one another using .1Q encapsulated frames.  
> 
> Mike
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> Of
> > [EMAIL PROTECTED]
> > Sent: Tuesday, July 06, 2004 11:50 AM
> > To: [EMAIL PROTECTED]
> > Subject: 802.17 RPR and L2 Ethernet interoperablity (Ethernet over
> RPR)
> > 
> > 
> > Hi,
> > 
> > This is probably a fairly simply question, I'm probably just not quite
> > groking the layers involved here.
> > 
> > If I had the following setup:
> > 
> > Endstation A -- Switch A === RPR Ring === Switch B -- Endstation B
> > 
> > could there be a VLAN setup such that Endstations A and B are both in
> it,
> > and can communicate as if they are on the same LAN segment? (And I
> mean
> > natively. ie. not using an MPLS VPN). ie. Will the switches involved
> > tranlate the different framing formats in use? Is this vendor
> dependent?
> > 
> > Sam
> > 
> 
> 
> 



Re: Open Source BGP Route Optimization?

2004-05-29 Thread Sam Stickland

Per Gregers Bilse <[EMAIL PROTECTED]> wrote:
> On May 28, 10:37am, "Sam Stickland" <[EMAIL PROTECTED]> wrote:
>> Are there any BGP extensions that would cause a BGP speaker to
>> foward all of it's paths, not just it best? I believe quagga had
>> made some recent attempts
>
> It has been discussed and been on wish lists, but:
>
>> in this direction. IIRC the problem isn't to do with the route
>> annoucements, it's the route withdrawals. I believe BGP only
>> specifies the prefix being withdrawn and not the path, so if it's
>> advertised multiple paths to a prefix it's impossible to know which
>> has been withdrawn.
>
> That is 100% correct, yes.  Selective withdrawal is not supported.
>
> Another issue is that there isn't much point, as far as regular BGP
> and routing considerations go.  Whichever is the best path for a
> border router is the best path; telling other routers about paths it
> will not use serves no (or at best very little) point in this context.

Well, something came up recently on a transit router. It takes multiple 
Tier-1 feeds, but management wanted to sell just MFN transit to a customer. 
It's possible to policy route all of their traffic to the MFN interface and 
only advertise their prefixes to MFN, but not possible to feed them only the 
MFN routes without starting to use VRFs etc.

Of course this is a great perversion of resources ;)

Sam




Re: Open Source BGP Route Optimization?

2004-05-29 Thread Sam Stickland

Are there any BGP extensions that would cause a BGP speaker to forward all 
of its paths, not just its best? I believe quagga has made some recent 
attempts in this direction. IIRC the problem isn't to do with the route 
announcements, it's the route withdrawals. I believe BGP only specifies the 
prefix being withdrawn and not the path, so if a speaker has advertised 
multiple paths to a prefix it's impossible to know which has been withdrawn.
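
As I understand the BGP-4 encoding, the Withdrawn Routes field is just 
<length, prefix> pairs - the sketch below is illustrative, not a full UPDATE 
builder - so there's simply nowhere to say which of several advertised paths 
is going away:

# Sketch of the Withdrawn Routes field in a BGP UPDATE: each entry is only
# <prefix length in bits, significant prefix octets>. No path information at
# all, which is why a withdrawal can't name one of several advertised paths.
import math

def encode_withdrawn(prefixes):
    """prefixes: list of (dotted_quad, masklen); returns the field as bytes."""
    out = bytearray()
    for addr, masklen in prefixes:
        out.append(masklen)
        octets = [int(o) for o in addr.split(".")]
        out += bytes(octets[: math.ceil(masklen / 8)])
    return bytes(out)

print(encode_withdrawn([("192.0.2.0", 24)]).hex())   # -> '18c00002'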

Sam

Per Gregers Bilse <[EMAIL PROTECTED]> wrote:
> At first I wasn't sure what a "route optimizer" was supposed to do --
> the term is rather generic and could have a lot of different
> interpretations.
>
> A multi-path traffic balancing solution in the style of Cisco's OER
> has to be tightly integrated with the routing infrastructure.
> Specifically, it needs first hand BGP peer data in order to work
> reliably.  There will be a number of cases where an add-on solution
> might be able to improve on certain things, but there is one major
> hurdle: a BGP speaker only forwards its own best paths, so an add-on
> analyzer might well never learn about alternative paths.  The only
> way for any implementation to reliably learn (all) alternative paths
> and otherwise maintain routing integrity is by receiving BGP data
> first hand, ie directly peer with transit providers and other peers.
>
> Best,
>
>   -- Per



Re: Open Source BGP Route Optimization?

2004-05-29 Thread Sam Stickland

Bruce Pinsky <[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Per Gregers Bilse wrote:
>
>> On May 28, 10:37am, "Sam Stickland" <[EMAIL PROTECTED]> wrote:
>>
>>> Are there any BGP extensions that would cause a BGP speaker to
>>> foward all of it's paths, not just it best? I believe quagga had
>>> made some recent attempts
>>
>>
>> It has been discussed and been on wish lists, but:
>>
>
> And as I said in my other post, there were two drafts submitted that
> never went anywhere.
>
>>
>>> in this direction. IIRC the problem isn't to do with the route
>>> annoucements, it's the route withdrawals. I believe BGP only
>>> specifies the prefix being withdrawn and not the path, so if it's
>>> advertised multiple paths to a prefix it's impossible to know which
>>> has been withdrawn.
>>
>>
>
> But the "optimizing" device is in need of receiving all potential
> paths from the border routers.  Essentially, it needs a complete
> picture of all viable paths, not just the best that each border has.
> It would not be advertising multiple paths.

No, that's not what I meant. I simply meant that the optimising device 
couldn't just be an iBGP peer - it wouldn't get enough information. It 
occurs to me now that walking the BGP4-MIB could be enough, but I wouldn't 
like to bet on the efficiency of detecting prefix withdrawals by constantly 
rewalking the table!
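
Roughly what I had in mind - driving net-snmp's snmpwalk from a script 
against BGP4-MIB's bgp4PathAttrTable (the router name and community below 
are placeholders):

# Rough sketch: dump a router's BGP path-attribute table via SNMP using
# net-snmp's snmpwalk. The router name and community string are placeholders.
import subprocess

BGP4_PATH_ATTR_TABLE = "1.3.6.1.2.1.15.6"   # BGP4-MIB::bgp4PathAttrTable

def walk_bgp_paths(router="router.example.net", community="public"):
    result = subprocess.run(
        ["snmpwalk", "-v2c", "-c", community, "-On", router, BGP4_PATH_ATTR_TABLE],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# Detecting withdrawals this way means diffing successive full walks, which
# is exactly the inefficiency I'm worried about.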

To bring this back on topic, I imagine I'd be happy with a tool that simply 
identified the top traffic flows and automatically provided me with 
traceroute and ping results. Though admittedly I'm not sure how useful it 
would end up being in real life, it sounds like it could be a relatively 
useful tool in the hands of someone who understands it.

Thinking about the potential problems with it, I wonder if there could be 
any mileage in the idea of performance beacons: points at key locations in 
an AS (possibly registered in an RIR) that are guaranteed to be usable for 
predefined traffic metrics. Hmm... maybe it's late and I haven't had enough 
coffee - one of the route optimisation tools does something like this 
already?

Sam



Re: Open Source BGP Route Optimization?

2004-05-29 Thread Sam Stickland

Andrew - Supernews <[EMAIL PROTECTED]> wrote:
>> "Per" == Per Gregers Bilse <[EMAIL PROTECTED]> writes:
>
>  Per> But that wasn't really the point.  If I telnet to all border
>  Per> routers and do 'sh ip b' I can get all tables too; likewise if I
>  Per> have a starting point and do a lot of LS traceroutes; and maybe
>  Per> even via SNMP (haven't checked what various MIBs support).
>
> You can get the received routes via SNMP. I've done this manually on
> occasion for the purposes of doing "what-if" analysis of potential
> traffic plans - take a dump of all available external routes via SNMP,
> apply to that the proposed policy with regard to selecting the best
> route, then correlate the resulting route choices with known traffic
> statistics to determine the resulting utilisation levels of each
> external link. This has proven useful in a number of situations where
> radical changes to external routing were being made, to avoid
> unexpectedly overloading particular links.

I would have liked to be able to do such things in the past. I was thinking 
about having a go at writing something, but it's not like I have enough time 
as it is, and our network isn't really big enough to warrant it.

Though I am interested in what tools already exist to support this (more out 
of curiosity than need). A quick Google search threw up a couple of research 
papers and IP/MPLSView (www.wandl.com), but I presume others on this list 
will know of more?

Sam



Re: Spamhaus Exposed

2004-03-18 Thread Sam Stickland

[EMAIL PROTECTED] wrote:

> So, the US gov't is "Satan" going after "innocent" hackers in Wales?
> It still boggles my mind how prevelant this shallow, trendy attitude
> is in Europe, even among supposedly educated people.  Why think when
> you can just join the crowd spewing ignorance, as long as it sounds
> Bohemian and anti-establishment?
>
> PS: Without Satan, there would be no Internet for you to express your
> considered opions on.

I think the US comes across as a very righteous country to the rest of the 
world, with all the good and bad connotations that brings with it.

right·eous
adj.
1. Morally upright; without guilt or sin: a righteous parishioner.
2. In accordance with virtue or morality: a righteous judgment.
3. Morally justifiable: righteous anger, righteous indignation.

"Fearless in his righteous cause."
  - Milton.

"I prefer the most unfair peace to the most righteous war."
  - Cicero.

"People must have righteous principals in the first, and then they will not
fail to perform virtuous actions."
  - Martin Luther

"Lord save us from the fury of the righteous"
  - Unknown

Anyway, I should get back to some actual network engineering and less
popcorn eating, diverting as this thread is.

Sam



Re: Counter DoS

2004-03-13 Thread Sam Stickland

Joel Jaeggli wrote:
> On Thu, 11 Mar 2004, Petri Helenius wrote:
>
>>
>> Gregory Taylor wrote:
>>
>>>
>>> Oh yes, lets not forget the fact that if enough sites have this
>>> 'firewall' and one of them gets attacked by other sites using this
>>> firewall it'll create a nuclear fission sized chain reaction of
>>> looping Denial of Service Attacks that would probably bring most
>>> major backbone providers to their knees.
>>>
>> Fortunately people with less clue usually have less bandwidth.
>
> When pricing structures and deployment of broadband in the US
> approaches that of Korea and Japan, I think you'll find that that
> isn't the case in the US anymore.

Out of interest, do people see much in the way of DDoS attacks from Japan? 
All that bandwidth and quite a sizable population (130 million) - but maybe 
the latency to US and European targets constrains it?

Sam



Re: dealing with w32/bagle

2004-03-05 Thread Sam Stickland

Curtis Maurand wrote:
> On Thu, 4 Mar 2004, Laurence F. Sheldon, Jr. wrote:
>
>>
>> Jeff Shultz wrote:
>>
>> There are others.
>> unquote
>>
>
> But nothing that's been developed.  Joe user's ip address changes on a
> regular basis.  One would still need to find that machine.  DNS gets
> cached (some go past TTL's I've set.)  and is too static to be an
> effective means to get a file.
>
> Most instant messengers have facilities for exchanging files, but both
> sides need to be connected at the same time.  Having that file in an
> email is better.
>
> I like SCP, too.  It works well, so well that I use that, instead of
> ftp. You still have to find the other end that has its address
> changed every day or two.  With email, only one end needs to be
> connected at any one time.  email is about the most convenient and
> easiest way that I know of to get pictures of little Johnnie to
> Grandmother in a way that is easy for her to understand.  Whatever
> anyone proposes needs to be that easy. Chances are that Grandma's not
> a geek like most of us.

In terms of whether the system is open to abuse or not, part of the problem 
is the simplicity you need to achieve for it to take off in the first place. 
If it's simple, it can be automated. If it can be automated, it's open to 
automated abuse.

(NB/OT: Perhaps the only solution is systems that can detect when they are 
being abused and do something to force manual intervention. That could take 
whatever form it needs to, from manual account reactivation to more 
passwords or reverse Turing tests - depending on which party is required to 
take action.

But I don't see systems like this being developed and deployed anytime soon
;) )



w32/bagle variants

2004-03-04 Thread Sam Stickland

For the people talking about how quickly the variants have been produced ;)
 
http://news.bbc.co.uk/1/hi/technology/3532009.stm
 
Seems the authors are taunting each other in the code.
 
 Sam
 



Re: Possibly yet another MS mail worm

2004-03-01 Thread Sam Stickland

Curtis Maurand wrote:
> On Mon, 1 Mar 2004, Todd Vierling wrote:
>
>> On Mon, 1 Mar 2004, Curtis Maurand wrote:
>>
>>> Sure they doits called COM/DCOM/OLE/ActiveX or whatever they
>>> want to call it this week.  Its on every windows system.
>>
>> No, my point was that the majority of newer trojan mail viruses
>> don't depend on ActiveX exploits -- they simply wait, dormant, for a
>> n00b to click on this mysterious-looking Zip Folder, and the
>> mysterious-looking EXE inside.
>>
>> It's as if the modern e-mail viruses are closer to human infections.
>> Only the clueful are immune.  8-)
>
> The latter is very true.
>
> My point is that the COM/DCOM/OLE/ActiveX is what allows for a script
> in an email message that gets executed to have access to the rest of
> the system, rather than executing within a protected sandbox.  Of
> course scripts within email messages shouldn't execute at all.  Once
> they do execute, they have access to the OLE objects on the machine.
> Its a security hole big enough to drive a tank through.

I don't think that defines the problem very well. The current Bagle.C virus
does the following:

"W32/Bagle-C opens up a backdoor on port 2745 and listens for connections.
If it receives the appropriate command it attempts to download and execute a
file. W32/Bagle-C also makes a web connection to a remote URL, thus
reporting the location and open port of infected computers.

Adds the value:

gouday.exe = \readme.exe

to the registry key:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run

This means that W32/Bagle-C runs every time you logon to your computer"

It also uses its own SMTP engine to replicate itself. So effectively it's 
opening a connection to port 80 (from an unprivileged port), listening on 
port 2745 (an unprivileged port), and opening connections to port 25 (from 
an unprivileged port).

Maybe I'm missing something here, but where does access to OLE objects come 
into play? Also, this virus would appear to function just as well even if a 
non-administrator user opened it.

Sam



Re: How relable does the Internet need to be? (Was: Re: Converged Network Threat)

2004-02-27 Thread Sam Stickland

[EMAIL PROTECTED] wrote:
>
> P.S. I think a solution lies in the general direction
> of converting the entire world to use 112 for emergency
> services and having the VoIP services set up an automated
> system that rings back whenever your phone connects using
> a different IP address and asks you where you are.

For what it's worth, I believe here in the UK dialing just 99 will also 
connect you to the emergency services. The rationale is that if you are 
behind a switchboard you have to dial 9 to get an outside line, and in the 
heat of the moment you might forget to dial four nines. That's definitely an 
advantage that 999 has that 911 and 112 don't?

Sam



Re: BGP, MED, Confederation presentation

2004-02-23 Thread Sam Stickland

Thanks Pete, that's exactly what I was looking for :)

Sam

Pete Templin wrote:
> This might be it: http://www.nanog.org/mtg-0006/confed.html
> 
> (It's certainly been a great reference to me!)
> 
> Sam Stickland wrote:
> 
>> Hi,
>> 
>> There was a link posted to this list about six months ago, of a
>> presentation that showed how to use additive MEDs to set up traffic
>> flows correctly between sites (where each site is it's own BGP
>> confederation) and showing animation of the resulting (example)
>> traffic flows. I remember that the technique it described was well
>> regarded and that it was reasonably unqiue. 
>> 
>> Sorry for the vague description, but my web searches and trawls
>> through the nanog archives have drawn a blank. I'm hoping that
>> someone here knows what I'm talking about :)
>> 
>> Sam



BGP, MED, Confederation presentation

2004-02-23 Thread Sam Stickland

Hi,

There was a link posted to this list about six months ago, of a presentation 
that showed how to use additive MEDs to set up traffic flows correctly 
between sites (where each site is its own BGP confederation), showing 
animation of the resulting (example) traffic flows. I remember that the 
technique it described was well regarded and that it was reasonably unique.

Sorry for the vague description, but my web searches and trawls through the
nanog archives have drawn a blank. I'm hoping that someone here knows what
I'm talking about :)

Sam



Re: in case nobody else noticed it, there was a mail worm released today

2004-01-29 Thread Sam Stickland

Christopher Bird wrote:
> Please pardon my ignorance, but I am *mightily* confused.
> In a message from Michel Py is the following:
> 
>>
>>
>>> and ISTR one patch for Outlook 2000 that blocked
>>> your ability to save executables was released)
>>
>> It default in Outlook XP and Outlook 2003, which has prompted large
>> numbers of persons to download Winzip, which as not stopped worms to
>> be propagated as you pointed out.
>>
>> Michel.
>
> The bit I don't get is how a zip file is created such that launching
> it invokes winzip and then executes the malware. When I open a normal
> .zip file, winzip opens a pane that shows me the contents. After that
> I can extract a file or I can "doubleclick" on a file to open it -
> which if it is executable will cause it to execute. I haven't seen a
> case where simply opening a zip archive causes execution of something
> in its contents unless it is a self extracting archive in which case
> it unzips and executes, but doesn't have the .zip suffix.
>
> Would anyone explain to me how this occurs (and if RTFM with a pointer
> to the M is the best way, then so be it!)

I don't think that was the point Michel was trying to make. I believe he 
meant that MS stopped the ability to _even_ save executables attached to 
emails to disk in some versions of Outlook, but this did nothing to stop the 
spread of viruses. People simply sent executables as zipped files, which 
recipients then had to extract to run. Despite the fact that an external 
program has to be used to get to the executable, people still run them.

Sam




Re: sniffer/promisc detector

2004-01-17 Thread Sam Stickland


- Original Message -
From: "Laurence F. Sheldon, Jr." <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 16, 2004 10:49 PM
Subject: Re: sniffer/promisc detector


>
> Gerald wrote:
> >
> > Subject says it all. Someone asked the other day here for sniffers. Any
> > progress or suggestions for programs that detect cards in promisc mode
or
> > sniffing traffic?
>
> I can't even imagine how one might do that.  Traditionally the only
> way to know that you have a mole is to encounter secrets that "had to"
> have been stolen.

In an all-switched network, sniffing can normally only be accomplished with 
MAC address spoofing (man-in-the-middle). Watching for MAC address changes 
(from every machine's perspective), along with scanning for separate 
machines answering ARP with the same MAC address, and using switches that 
can detect when a MAC address moves between ports, will go a long way 
towards detecting sniffing.

It can also be worthwhile setting up a machine on a switch to watch for 
non-broadcast traffic that isn't for it - sometimes older switches get 
'leaky' when they shouldn't be.

I'm not sure if it's still the case, but Linux in promiscuous mode used to 
answer TCP/IP packets sent to its IP address even if the MAC address on the 
packet was wrong. Sending TCP/IP packets to every IP address on the subnet 
with a deliberately wrong destination MAC will tell you which machines are 
Linux machines in promiscuous mode (the answer from those machines will be a 
RST packet).
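
A rough sketch of that probe using scapy (needs root; the interface, subnet 
and probe port are placeholders, and I make no promises about how other 
stacks behave):

# Bogus-MAC promiscuous-mode probe sketch (run as root). Interface, subnet
# and port below are placeholders.
from scapy.all import Ether, IP, TCP, srp1

BOGUS_MAC = "00:de:ad:be:ef:01"   # deliberately not the target's real MAC

def probe(ip, iface="eth0"):
    pkt = Ether(dst=BOGUS_MAC) / IP(dst=ip) / TCP(dport=31337, flags="S")
    resp = srp1(pkt, iface=iface, timeout=1, verbose=0)
    # A NIC that isn't in promiscuous mode should ignore the frame outright;
    # any TCP reply (e.g. a RST from the closed port) back is suspicious.
    return resp is not None and resp.haslayer(TCP)

suspects = [f"192.0.2.{h}" for h in range(1, 255) if probe(f"192.0.2.{h}")]
print(suspects)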

Some tools that google turned up (haven't tried them myself):

http://www.securityfriday.com/ToolDownload/PromiScan/promiscan_doc.html

http://www.packetstormsecurity.org/sniffers/antisniff/

Apparently man-in-the-middle attacks can also be detected by measuring the 
latency under different traffic loads, but I haven't looked too much into 
that.

Sam