Re: Unbelievable Spam.

2004-02-02 Thread Vadim Antonov


On 2 Feb 2004, Paul Vixie wrote:

 the spammers have nothing to fear from you, or us, or me, or anybody.  with
 the incredible number of bottomfeeders and antivirus companies polluting the
 ecosystem with their own various get-rich-quick schemes, there's no way to
 tell the difference between good and bad traffic, good and bad intent, good
 and bad providers, etc.  the spam/antispam battleground is all just mud now.

Everyone should be glad the Internet makes all of us feel rich and
famous.  A lot more people want our attention (and money) than we wish
to deal with.  And this is not only the spam problem - the
technology-related privacy and identity issues are merely the other side
of the same phenomenon - the rich & famous have had to fight off gossips,
paparazzi and various con artists for as long as there have been money,
power and fame.

And because the rich and famous have had this problem for a long, long time,
they managed to devise some solutions.  So everything we do about cyberage
problems like spam is going to be some automation of those old solutions.

Call me elitist, or an old-worlder, but my preferred way of dealing with it
is to choose whom you associate with.  Introductions.  In newspeak -
whitelists.

--vadim



Re: Unbelievable Spam.

2004-02-02 Thread Vadim Antonov


On Mon, 2 Feb 2004, Brian Bruns wrote:

 They are bold, and don't seem to fear anyone.  You can keep killing them, and
 they don't learn.

That's because nobody's _killing_ them.

There is an anecdotal story of some Russian ISP actually sending a few
toughs to beat up some HACK0R DUD3Z.  That ISP saw a dramatically
decreased number of attacks on its servers and customers.

--vadim



Re: Misplaced flamewar... WAS: RE: in case nobody else noticed it, there was a mail worm released today

2004-01-30 Thread Vadim Antonov

On Fri, 30 Jan 2004, Iljitsch van Beijnum wrote:

 Actually IMO putting all their crap in their own dir is a feature 
 rather than a bug. I really hate the way unix apps just put their stuff 
 all over the place so it's an incredible pain to get rid of it again.

Putting all the crap in the working directory is bad design (there's no way
to separate read-only stuff from mutable state).  The Unix/Linux design (all
over the place) is pure and simple lack of discipline, or the
hack-before-thinking approach.

Plan 9 nearly got it right, but for the lack of persistent mounts (it's 
all in an rc file, executed at each login).

 I think MacOS got it right: for most apps, installing just means 
 dumping the icon wherever you want it to be, deinstalling is done by 
 dropping it in the trash. The fact that the icon hides a directory with 
 a bunch of different files in it is transparent to the user.

That's UI.  Inside it's the same Unix crap.
 
 I think MS's tradeoffs are mainly time to market vs even faster time to 
 market.

It's mostly the "We don't care, we don't have to, we're The Microsoft"
mentality.

--vadim



Re: sniffer/promisc detector

2004-01-19 Thread Vadim Antonov


Criminal hackers _are_ stupid (like most criminals), for purely economic
reasons: those who are smart can make more money in various legal ways,
like holding a good job or running their own business.  Hacking into
other people's computers does not pay well (if at all).

Those who aren't in it for the money are either psychopaths or adolescents,
pure and simple.  Neither of those are smart.

The real smart ones - professionals - won't attack unless there's a chance
of a serious payback.  This excludes most businesses, and makes anything
but a well-known script-based attack a very remote possibility.

Honeypots are indeed a good technique for catching those attacks, and may be
quite adequate for the probable threat model of most people.  Of course,
if you're doing security for a bank, or a nuclear plant, then you may want
to adjust your expectations of the adversary's motivation and capabilities
and upgrade your defenses accordingly.  But, then, bribing an insider or
some other form of social engineering is going to be more likely than any
direct network-based attack.

For most other people a trivial packet-filtering firewall, lack of
Windoze, and a switch instead of a hub will do just fine.

--vadim


On Sat, 17 Jan 2004 [EMAIL PROTECTED] wrote:

 
 I think I'll pass this onto zen of Rob T. :)
 
 i think he said something along the lines of security industry is here for my
 amusement in the last nanog.
 
 so yea.. let's install bunch of honeypots and hope all those stupid hackers
 will get caught like the mouse.
 
 by the time you think your enemy is less capable than you, you've already lost
 the war.
 
 -J
 
 On Sat, Jan 17, 2004 at 02:31:06AM -0800, Alexei Roudnev wrote:
  
  The best anty-sniffer is HoneyPot (it is a method, not a tool). Create so
  many false information (and track it's usage) that hackers will be catched
  before they do something really wrong.



Re: PC Routers (was Re: /24s run amuck)

2004-01-15 Thread Vadim Antonov


I can project a nearly infinite rate of growth in my personal income when
I deposit a $3.95 rebate check.  It's a matter of defining the sampling
period.

The truth is, that kind of creative statistics is exactly what allowed
Worldcom (and the rest of the telecom industry) to get into the deep pile of
manure.

--vadim

On Thu, 15 Jan 2004, Randy Bush wrote:

  He also said that Internet is growing by 1000% a year.
  we're adding a DS3 per day [to the network]
 
 and, at the time, both statements were true.  
 
 randy
 



Re: interesting article on Saudi Arabia's http filtering

2004-01-15 Thread Vadim Antonov


On Thu, 15 Jan 2004, Randy Bush wrote:

 i was helping get the link up into kacst (their nsf equivalent) in
 ryadh back in '94, and a rather grownup friend there, Abdulaziz A.
 Al Muammar, who had his phd from the states and all that, explained
 it to me something like this way.
 
 yes, to a westerner, our ways of shielding our society seem silly,
 and sometimes even worse.  but tell me, how do we liberalize and
 open the culture without becoming like the united states [0]?
 
 not an easy problem.  considering the *highly* offensive material
 that arrives in my mailbox (and i do not mean clueless nanog
 ravings:-), my sympathy for abdulaziz increases monotonically.

Installing a whitelisting and challenge-response mail filter on my box
reduced the amount of spam to nearly zero.  I mostly get spam through the
e2e list nowadays.
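
For illustration only, the decision logic of such a filter fits in a few
lines; here is a minimal sketch in Python (the helper names are hypothetical,
and this is not the actual filter running on my box):

# Minimal whitelist + challenge-response decision logic (illustrative only;
# the helper names are hypothetical).
import secrets

whitelist = set()          # senders allowed straight through
pending = {}               # challenge token -> (sender, held message)

def handle_incoming(sender, message):
    if sender in whitelist:
        return ("DELIVER", message)
    token = secrets.token_hex(8)
    pending[token] = (sender, message)
    # A real filter would bounce a reply asking the sender to echo the token.
    return ("CHALLENGE", token)

def handle_challenge_reply(token):
    if token in pending:
        sender, message = pending.pop(token)
        whitelist.add(sender)          # one correct reply whitelists the sender
        return ("DELIVER", message)
    return ("DROP", None)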

The solution to high offensiveness is to grow up and stop behaving as if
the sight of some physiological function is going to kill us.  It is
offensive only because the offended party thinks that the world should be
a sterile place, and instead of concluding that the sender of the
offensive material is a tasteless moron and moving on, decides to wage a
war against human nature.
 
 so perhaps we should ask, rather than ranting, how do we, the
 self-appointed ubergeeks of the net, think we can clean up our own
 back yards, before we start talking about how others maintain
 theirs?

Maybe we should stop whining when others refuse to accept mail from total 
unknowns without those unknowns making a small token effort to prove their 
willingness to hold a civilized conversation?

I certainly don't care what they want to read or see. Or send, for that 
matter. None of my business.

 [0] - which, americans need to realize is, to much of the civilized
   world, the barbarian hordes, sodom, and gomorrah rolled into
   one

To much of the civilized world (and, besides Europe and Japan, no other
places qualify, sorry) Americans look like neurotic prudes who have a
peculiar hang-up about sex and a deep inferiority complex compelling them to
unceasingly seek affirmations of their superiority.

Much of what passes for offensive in the US won't get an eyebrow raised in
Paris or Amsterdam.  In fact, the more likely reaction would be "how
boringly lame".

As for the Arabian friend who seeks to control what his compatriots are
allowed to see, I'd say that his sensibilities are his own problem, and
that if he wished to impose them on _me_ I'd tell him to mind his own
business, possibly augmenting my message with an appropriate degree of
violence.

--vadim



RE: interesting article on Saudi Arabia's http filtering

2004-01-15 Thread Vadim Antonov


On Thu, 15 Jan 2004, H. Michael Smith, Jr. wrote:
 
 For the record... I have first hand knowledge that KSA's filtering is
 not too effective.

Good :) The more people are exposed to the humanity of the Great Satan, the
less likely they are to tolerate their own fanatics and zealots.

--vadim



Re: PC Routers (was Re: /24s run amuck)

2004-01-14 Thread Vadim Antonov

On Wed, 14 Jan 2004 [EMAIL PROTECTED] wrote:

 Getting to 1mpps on a single router today will probably be hard. However,
 I've been considering implementing a clustered router architecture,
 should scale pps more or less linearly based on number of PCs or
 routing nodes involved. I'm not sure if discussion of that is on-topic
 here, so maybe better to take it offline.

This is exactly what the Pluris PC-based proof-of-concept prototype did in
'97.  The PCs were single-board 133MHz P-IIs, running custom forwarding code
on bare metal, yielding about 120kpps per board, or 1.9Mpps per cage.

In the production box, CPU-based forwarding was replaced with ASICs, and the
1Gbps hybrid optical/electrical butterfly/hypercube interconnect was replaced
with a 12Gbps optical hypercube interconnect; otherwise the architecture was
unchanged.  That was total overkill, which was one of the reasons the
company went down.
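
A back-of-the-envelope check of the prototype numbers above (a sketch; the
16-boards-per-cage count is an inference from the 120kpps and 1.9Mpps
figures quoted here, not a published spec):

# Back-of-the-envelope linear-scaling arithmetic for a cluster of forwarding
# boards (boards-per-cage is assumed, not a published spec).
PPS_PER_BOARD = 120_000        # ~120 kpps per 133MHz board, as quoted above
BOARDS_PER_CAGE = 16           # assumed: 16 * 120 kpps ~= 1.9 Mpps

def cluster_pps(boards, pps_per_board=PPS_PER_BOARD):
    # Linear model: aggregate forwarding rate grows with the number of boards.
    return boards * pps_per_board

print(cluster_pps(BOARDS_PER_CAGE))    # 1920000, i.e. ~1.9 Mpps per cage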

--vadim



Re: PC Routers (was Re: /24s run amuck)

2004-01-14 Thread Vadim Antonov


He also said that the Internet is growing by 1000% a year.

In fact I think that it is an extremely bad idea to use clusters of
enterprise boxes to build a global network.

--vadim

On Wed, 14 Jan 2004, Randy Bush wrote:

 
  On the topic of PC routers, I've fully given in to the zen
  of Randy Bush.  I FULLY encourage my competitor to use
  them. :)
 
 actually, i stole it from mike o'dell.  
 
 he also said something on the order of let's not bother to
 discuss using home appliances to build a global network.
 
 randy



RE: /24s run amuck

2004-01-13 Thread Vadim Antonov


On Tue, 13 Jan 2004, Michael Hallgren wrote:
  
  On Jan 13, 2004, at 6:33 AM, Michael Hallgren wrote:
  
  Unfortunately, I've seen Peering Policies which require 
  things like Must announce a minimum of 5,000 prefixes. :(
 
 
 Wonderful...
 
 mh

Easy to fix by changing it to "covering N million IP addresses" - but, then,
that becomes an address space conservation issue.

--vadim



re: This may be stupid but

2003-11-13 Thread Vadim Antonov


On Thu, 13 Nov 2003, Don Mills wrote:

 But it would 
 be a tragic mistake on anyone's behalf to pre-assume that all those letters 
 means I don't know what I am talking about.  That's stereotyping, isn't it?

Don (take it as good-spirited needling, please), I'd like to point out
that this means you have way too much spare time and an employer
who doesn't care much about squeezing out of you all 110% of what you can
possibly do :)

--vadim



Re: This may be stupid but

2003-11-13 Thread Vadim Antonov


On Thu, 13 Nov 2003, Don Mills wrote:

 Nah.  I'm just a quick study and it's better than drinking all weekend.

Oh, you _do_ have weekends :)

--vadim



Re: This may be stupid but..

2003-11-10 Thread Vadim Antonov



Now the problem of finding a good recruiter is substituted for the
problem of finding a good engineer :)  Considering the monetary costs of
such an arrangement, the trade-off is good only if you're planning to hire
dozens of engineers.  Even better, if you're creating a large org, get a
headhunter on board and give him stock options - otherwise he has the wrong
incentives (i.e. when he works for himself he's better off with job-hopping,
upward-mobility types of guys - the more expensive, the better - while you
really want smart and trainable staff, don't give a damn about a perfect
resume, and need him to be cost-conscious).

--vadim

On Sun, 9 Nov 2003, Andy Walden wrote:

 Again, as with most things, there tends
 to be two ends to the spectrum.



Re: This may be stupid but..

2003-11-08 Thread Vadim Antonov



The only problem is that they have no clue about the profession they're
recruiting for, and tend to judge applicants not by whether they say
reasonable things but by their self-assuredness and by keywords in the resume.

Recruiters are only good for initial screening and attracting applicants,
and in this economic climate their services are nearly worthless, too. As
for presuming they actually read resumes... well, they may, but they never
seem to be able to distinguish between reality and exaggeration or
outright lies.  In the end, they screen out all the geeks and you end up with
a bunch of polished liars.

Better to use networking and referrals, and Internet-based resources.

--vadim

On Sat, 8 Nov 2003,  John Brown (CV) wrote:

 
 so negotiate with the recruiter.
 
 benifits of a recuriter are:
 
 * they take the twit calls
 * they read thru the resumes and sort the junk out
 * they do the screening
 * they do the reference and background checks
 * they have more resources to find people than you do
 
 this saves you time and money on your end.  time better
 spent building customer base, solving customer problems, etc.
 
 and if you do a good contract with the recruiter, if the
 person you hire is sacked, they find you a new one at no cost :)
 
 
 On Sat, Nov 08, 2003 at 05:16:46PM -0500, Fisher, Shawn wrote:
  
  If this question is inappropriate for this list I apoligize in advance.
  
  I have several open engineering positions that I am trying to fill without
  the use of a recruiter.  My thoughts on using a recruiter is they end up
  extracting a fee from the employer that would be better put to the future
  employee.  
  
  My question, what is the most effective way to recruit quality engineers?
  Does anyone have experience or opinions to share?
  
  TIA,
  
  Shawn




Re: Tomatoes for Verisign at NANOG 29

2003-10-16 Thread Vadim Antonov

 
 Ahem. Many of us are Star Trek experts, and it will take a LOT more 
 than this to get people to wear a red shirt.

A red EFF t-shirt (as a sign of recent donation) would be a good choice :)

--vadim



RE: Another DNS blacklist is taken down

2003-09-24 Thread Vadim Antonov


 RBLs Sounds like a great application for P2P.
 
 Perhaps, but it also seems like moving an RBL onto a P2P network would
 making poisoning the RBL far too easy...
 
 Andrew

USENET, PGP-signed files, 20 lines in perl.
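
Roughly what those 20 lines would do, sketched here in Python rather than
perl (the file names are made up; it assumes a stock gpg binary and the
publisher's key already in the local keyring):

# Sketch: verify a detached PGP signature on a blacklist pulled off USENET,
# then load the entries.  Assumes gpg is installed and the publisher's key
# is already in the local keyring; file names are made up.
import subprocess

def load_signed_rbl(list_file="rbl.txt", sig_file="rbl.txt.asc"):
    # gpg exits non-zero if the signature does not verify
    subprocess.run(["gpg", "--verify", sig_file, list_file], check=True)
    with open(list_file) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

blocked = load_signed_rbl()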

--vadim 



RE: Blacklisting: obvious P2P app

2003-09-24 Thread Vadim Antonov

On Wed, 24 Sep 2003, David Schwartz wrote:

 
 
  Each mailserver could keep a cryptographically verified list, the
  list is distributed via some P2P mechanism, and DoS directed at the
  'source' of the service only interrupts updates, and only does so until
  the source slips an updated copy of the list to a few peers, and then
  the update spreads. Spam is an economic activity and they won't DoS a
  source if they know it won't help their situation.
 
   If anyone who attempts to distribute such a list is DoSed to oblivion,
 people will stop being willing to distribute such a list. Yes, spam is an
 economic activity, but spammers may engage in long-term planning. You can't
 keep the list of distributors secret. I'd be very interested in techiques
 that overcome this problem. I've been looking into tricking existing
 widely-deployed infrastructures into acting a distributors, but this raises
 both ethical and technical questions.
 
   DS
 
 



Re: Verisign Responds

2003-09-23 Thread Vadim Antonov


On Tue, 23 Sep 2003, Randy Bush wrote:

 some engineers think that all social and business problems
 can be solved by technical hacks.  

Dunno about "some engineers", but engineers in general can do a lot to avoid
creating many problems in the first place.  This wildcard flop is a
perfect example of a bad design decision coming back to bite.

I'd say that engineers pay too little attention to the social and business
implications of their decisions.

--vadim




Re: Root Server Operators (Re: What *are* they smoking?)

2003-09-17 Thread Vadim Antonov


On Wed, 17 Sep 2003, John Brown wrote:

 speaking as a shareholder of Verisign, I'm NOT HAPPY
 with the way they handled this wildcard deal, nor
 am I happy about them doing it all.  As a *shareholder*
 I'd cast my vote that they *remove* it.

You have no control over the operations of the company.  However, you may
vote Verisign officers out of office... if you can get other shareholders
to see the benefits of giving business ethics preference over short-term
profits.

--vadim 



Re: News of ISC Developing BIND Patch

2003-09-17 Thread Vadim Antonov


If we take a step back, we could say that the whole Verisign incident
demonstrated pretty clearly that the fundamental DNS premise of having no
more than one root in the namespace is seriously wrong.  This is the
fallacy of universal classification so convincingly trashed by
J.L. Borges in "The Analytical Language of John Wilkins".  Single-root
classifications simply do not work in real-world contexts.

On a more practical plane, as long as there is a central chokepoint there
will be an enormous advantage for a commercial or political interest in
controlling that chokepoint.  As the Internet becomes more and more
important, the rewards for playing funny games with the top levels of the
name space are only bound to get higher.

I do not want to play Nostradamus, but it is pretty obvious that sooner
rather than later there will be an incident in which a bribed or planted
Verisign employee aids a massive identity theft on behalf of a criminal
group.  And we will see politically motivated removal of domain names (my
bet is that porn sites will be targeted first).  How about twiddling NS
records pointing to the sites of a political party not currently in power?
DNS is no longer a geeks' sandbox; it has lost its innocence.

The Name Service is engineered with this fatal weakness.  It cannot be
fixed as long as it depends on any central point.  It already has many
problems with trademark and fair-competition laws.  In some countries,
national DNS roots are controlled by the secret police.  It is a good time
to stop patching it, and start thinking about how to address the root cause
of the problem: namely, that there's no way for an end-user to choose (or
create) his own root of the search space.  (The implication is that names
become paths - which matches human psychology quite well, considering
that we possess an evolved ability to navigate using local landmarks.)

In fact, we do have an enormously useful and popular way of doing exactly
that - it is called search engines and bookmarks.  What is needed is
an infrastructure for allocating unique semantic-free end point
identifiers (to a large extent, MAC addresses may play this role, or, say,
128-bit random numbers), a way to translate EIDs to the topologically
allocated IP addresses (a kind of simplified, numbers-only DNS?) and a
coordinated effort to change applications and expunge domain names from
protocols, databases, webpages and such, replacing URLs containing domain
names with URLs containing EIDs.

This way, the whole meaning-to-address translation chain becomes
decentralized and absolutely resistant to any kind of deliberate
monopolization (except for the scale-free networking effect).  And, in any
case, I would trade Verisign for Google any day.
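
A toy sketch of the EID idea in Python (the in-memory dict below merely
stands in for whatever "simplified numbers-only DNS" would actually provide;
nothing here is a concrete proposal):

# Toy end-point-identifier (EID) scheme: semantic-free 128-bit IDs mapped to
# topologically allocated IP addresses by a trivial lookup service.
import secrets

def new_eid():
    return secrets.token_hex(16)      # 128 random bits, hex-encoded

registry = {}                         # EID -> current IP address

def publish(eid, ip):
    registry[eid] = ip                # updated when the host renumbers

def resolve(eid):
    return registry[eid]              # the only step needing a lookup service

eid = new_eid()
publish(eid, "10.200.254.1")
print(resolve(eid))                   # URLs would embed the EID, not a domain name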

--vadim



Re: News of ISC Developing BIND Patch

2003-09-17 Thread Vadim Antonov


On Wed, 17 Sep 2003, Mathias Körber wrote:

  If we take a step back, we could say that the whole Verisign incident
  demonstrated pretty clearly that the fundamental DNS premise of having no
  more than one root in the namespace is seriously wrong.  This is the
  fallacy of universal classification so convincingly trashed by
  J.L.Borges in The Analytical Language of John Wilkins.  Sigle-root
  classifications simply do not work in real-world contexts.

 ... for objects which are created outside said classification and need
 to/ want to/should be classified in it. However, the DNS does not
 pretend to classify anything existing outside it in the real-world but
 implements a namespace with the stated goal of providing unique
 identification (which still requires a single-root)

Technically, DNS encodes the authority delegation _and_ tries to attach
human-readable labels to every entity accessible via the Internet.

If the goal were unique identification, MAC addresses would do just fine.
No need for DNS.

The whole snake nest of issues around DNS revolves around the fact that the
labels themselves carry semantic load.  Semantic-free labels do not
generate trademark, fair-use, squatting, etc. controversies - and there are
quite a lot of those around us.

The issue with authority delegation is not clear-cut either, for it raises
the important questions of "who is the authority?" and "how is the authority
selected?".  This is pretty much the question most wars were fought over.

 So this argument is bogus IMHO...

I would say you should consider it more carefully. 

As is, we have an artificially contentious structure which cannot be
fixed.  There are known better methods of converting semantically loaded
labels into pointers to entities, which do not suffer from the
artificially imposed limitation of seeing everything as a strict hierarchy.
Most Internet users are well-versed in the use of those methods.  So the
question here is merely engineering, and convincing people to switch
over.

_Users_ have already voted with their patterns of use... how often do you
see them actually typing domain names in?  Address books, bookmarks, "my
favorites", Reply-To: etc. are used in most cases, as are Yahoo and Google.
Your statements, in effect, confirm my position that most people do not
even recognize the semantic value of domain names, considering them mere
unique IDs.

In other words - there are much better search engines than DNS.  Remove
it from the critical path, and the whole Verisign, ICANN, etc issue will
go away, with little practical change in the end-user experience.

--vadim



Re: News of ISC Developing BIND Patch

2003-09-17 Thread Vadim Antonov


I see that what it says is pretty much similar to what I was writing on the
matter of DNS some years ago :) It should be on record somewhere in the NANOG
archives.

I do not claim that I'm the author of this idea, though.  Unfortunately, I
cannot remember how I acquired it :(

Thank you for the pointer!

--vadim

On Wed, 17 Sep 2003, David G. Andersen wrote:

 On Wed, Sep 17, 2003 at 02:50:51AM -0700, Vadim Antonov quacked:
  
  In fact, we do have an enormously useful and popular way of doing exactly
  that - this is called search engines and bookmarks.  What is needed is
  an infrastructure for allocation of unique semantic-free end point
  identifiers (to a large extent, MAC addresses may play this role, or, say,
  128-bit random numbers), a way to translate EIDs to the topologically
  allocated IP addresses (a kind of simplified numbers-only DNS?) and a
  coordinated effort to change applications and expunge domain names from
  protocols, databases, webpages and such, replacing URLs containing domain
  names with URLs containing EIDs.
 
   Oh, you mean something like the Semantic Free Referencing project?
 
   http://nms.lcs.mit.edu/projects/sfr/
 
   (Blatant plug for a friend's research, yes, but oh my god does it
 seem relevant today)
 
   -Dave
 
 



Re: News of ISC Developing BIND Patch

2003-09-17 Thread Vadim Antonov


On Wed, 17 Sep 2003 [EMAIL PROTECTED] wrote:

  If the goal were unique identification, MAC addresses would do just fine.
  No need for DNS.
 
 MAC addresses are not without authority delegation. The IEEE is the ultimate
 authority in said case.

Yep... But have you seen any controversy about who gets which block of MAC
addresses recently?  They're not scarce, and every block is just as good
as any other block.
 
 Any solution which requires uniqueness also requires a singular ultimate
 authority.

Not really.  You can just take random numbers.  If you have enough bits
(and a good RNG) the probability of a collision is less than the
probability of an asteroid wiping out life on Earth in the next year.

There's no reason to use allocated MAC addresses, either; picking them
randomly on power-up is actually better from the privacy point of view...
however, an EEPROM programmed at manufacture time seems to be about
1 cent less expensive than a built-in hardware RNG :)
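
The birthday-bound arithmetic behind the collision claim above, as a quick
sketch (the ten-billion-device count is just an assumption for scale):

# Birthday-bound estimate of the collision probability for random 128-bit IDs:
# P(collision) ~= n*(n-1) / (2 * 2**bits) for n identifiers.
def collision_probability(n, bits=128):
    return n * (n - 1) / (2.0 * 2 ** bits)

print(collision_probability(10**10))  # ~1.5e-19 even with ten billion devices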

--vadim



RE: News of ISC Developing BIND Patch

2003-09-17 Thread Vadim Antonov


On Wed, 17 Sep 2003, David Schwartz wrote:

   In fact, you could just use an RSA public key as the identifier directly.
 This is likely not the best algorithm, but it's certainly an existence proof
 that such algorithms can be devised without difficulty.
 
   In fact, I'm going to call my patent attorney instead of sending this
 email. ;)

Too late. The details can be found in my final report for the US Army SBIR
program for developing "Security for Open Architecture Web-Centric
Systems".

:)

--vadim



Re: Change to .com/.net behavior

2003-09-15 Thread Vadim Antonov


I'm going to hack my BIND so it discards wildcard RRs in TLDs, as a
matter of reducing the flood of advertising junk reaching my desktop.

I think BIND & resolver developers would do everyone a service by adding
an option having the same effect.
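
One way such an option could work, sketched in Python with nothing but the
standard library: learn the wildcard's address by resolving a name that
cannot legitimately exist, then refuse any answer that matches it (the probe
label is obviously made up, and this is an illustration, not the actual BIND
hack):

# Sketch: detect a TLD wildcard A record and discard answers that match it.
import secrets
import socket

def learn_wildcard(tld="com"):
    # Resolve a random, certainly-unregistered name; if it resolves at all,
    # the answer is the wildcard address the registry injected.
    probe = "wildcard-probe-%s.%s" % (secrets.token_hex(12), tld)
    try:
        return socket.gethostbyname(probe)
    except socket.gaierror:
        return None                   # no wildcard in this TLD

WILDCARD = learn_wildcard()

def resolve(name):
    addr = socket.gethostbyname(name)
    if WILDCARD is not None and addr == WILDCARD:
        raise socket.gaierror("NXDOMAIN (wildcard answer discarded)")
    return addr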

Thank you, VeriSign, I will never do business with you again.  You are as
bad as any spammer lowlife, simply because you leave everyone with no
way to opt out of your advertising blitz.

--vadim

On Mon, 15 Sep 2003, Matt Larson wrote:

 
 Today VeriSign is adding a wildcard A record to the .com and .net
 zones.  The wildcard record in the .net zone was activated from
 10:45AM EDT to 13:30PM EDT.  The wildcard record in the .com zone is
 being added now.  We have prepared a white paper describing VeriSign's
 wildcard implementation, which is available here:
 
 http://www.verisign.com/resources/gd/sitefinder/implementation.pdf 
 
 By way of background, over the course of last year, VeriSign has been
 engaged in various aspects of web navigation work and study.  These
 activities were prompted by analysis of the IAB's recommendations
 regarding IDN navigation and discussions within the Council of
 European National Top-Level Domain Registries (CENTR) prompted by DNS
 wildcard testing in the .biz and .us top-level domains.  Understanding
 that some registries have already implemented wildcards and that
 others may in the future, we believe that it would be helpful to have
 a set of guidelines for registries and would like to make them
 publicly available for that purpose.  Accordingly, we drafted a white
 paper describing guidelines for the use of DNS wildcards in top-level
 domain zones.  This document, which may be of interest to the NANOG
 community, is available here:
 
 http://www.verisign.com/resources/gd/sitefinder/bestpractices.pdf
 
 Matt
 --
 Matt Larson [EMAIL PROTECTED]
 VeriSign Naming and Directory Services
 



Re: BMITU

2003-09-04 Thread Vadim Antonov


Communigate Pro is not a Windows mail server... It runs on nearly
everything and can handle millions of accounts (it has extensive
clustering support).  Check their website, www.stalker.com, for specs.

--vadim

On Thu, 4 Sep 2003, Robert Boyle wrote:

 
 At 11:02 AM 9/4/2003, you wrote:
 This is my first post so please be gentle.
 
 I would like to get some opinions on the Best Mailserver in the Universe.
 Is there a more appropriate list for this question?
 
 I have looked at Communigate Pro, IMAIL, and others.
 
 I am interested in integrated solution that can scale to handle 500k
 accounts
 
 Any experience good / bad would be great.
 
 None of the Windows mail servers listed above or the others such as 
 Mailsite, MDaemon, Merak, etc. are capable of more than 10-20k active 
 users. Forget about 500k with any you have listed. If you want a solid mail 
 server which WILL handle 500k users and will run on Windows and most *nix 
 platforms, look at Surgemail from http://www.netwinsite.com It is 
 incredibly scalable and VERY fast. It uses a spam assassin-like filter 
 which is written in C so it is at least 20-100 times faster than spam 
 assassin and 95% as effective. It includes support for AVAST anti-virus and 
 the webmail program is powerful, fast, and includes support for PGP. It is 
 an AWESOME product and the support and developers are top notch too. I 
 don't have any vested interest in the company, but I am a very happy 
 customer. (They also make DNews which many people here are probably 
 familiar with)
 
 -Robert
 
 
 Tellurian Networks - The Ultimate Internet Connection
 http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
 Good will, like a good name, is got by many actions, and lost by one. - 
 Francis Jeffrey
 



RE: What do you want your ISP to block today?

2003-09-02 Thread Vadim Antonov


On Mon, 1 Sep 2003, David Schwartz wrote:

  When you don't have liability you don't have to worry about quality.
 
  What we need is lemon laws for software.
 
   That would destroy the free software community. You could try to exempt
 free software, but then you would just succeed in destroying the 'low cost'
 software community. 

This is a somewhat strange argument; gifts are not subject to lemon laws,
AFAIK. The whole purpose of those laws is to protect consumers from
unscrupulous vendors exploiting the inability of consumers to recognize
defects in the products _prior to sale_.

The low-cost low-quality software community deserves to be destroyed,
because it essentially preys on the fact that in most organizations
acquisition costs are visible while maintenance costs are hidden.  This
amounts to a rip-off of unsuspecting customers; and, besides, the drive to
lower costs at the expense of quality is central to the whole story of
off-shoring and the decline of better-quality producers.  The availability
of initially indistinguishable lower-quality stuff means that the market
will engage in a race to the bottom, effectively destroying the
industry in the process.

 (And, in any event, since free software is not really free, you would
 have a hard time exempting the free software community. Licensing
 terms, even if not explicitly in dollars, have a cost associated with
 them.)

Free software producers make no implied representation of fitness of the
product for a particular purpose - any reasonable person understands that
a good-faith gift is not meant to make the giver liable.  Vendors,
however, are commonly held to imply such fitness if they offer a product
for sale, because they receive supposedly fair compensation. That is why
software companies have to explicitly disclaim this implied claim of
fitness and merchantability in their (often shrink-wrap) licenses.

 Any agreement two uncoerced people make with full knowledge of the
 terms is fair by definition.

A consumer of software cannot reasonably be expected to perform an
adequate pre-sale inspection of the offered product, and therefore the
vendor has the advantage of much better knowledge. This is hardly fair to
consumers.  That is why consumer-protection laws (and professional
licensing laws) are here in the first place.

 If I don't want to buy software unless the manufacturer takes
 liability, I am already free to accept only those terms.

There are no vendors of consumer-grade software who would assume any
liability in their end-user licensing agreements.  They don't have to, so
they don't, and doing otherwise would put them at an immediate
competitive disadvantage.

 All you want to do is remove from the buyer the freedom to negotiate
 away his right to sue for liability in exchange for a lower price.

You can negotiate only if you have a choice. In practice there is no freedom
to negotiate, so the choice is, at best, illusory. Go find a vendor which
will sell you the equivalent of Outlook _and_ assume liability.

 If you seriously think government regulation to reduce people's software
 buying choices can produce more reliable software, you're living in a
 different world from the one that I'm living in.

It definitely helped to stem the rampant quackery in the medical
profession, and significantly improved the safety of cars and appliances. I
would advise you to read some history of fake medicines and medical
devices in the US; some of them, sold as late as the 50s, were quite
dangerous (for example, home "water chargers" containing large quantities
of radium).

Regulation is needed to make the bargain more balanced - as it stands now,
consumers are at the mercy of software companies because of grossly
unequal knowledge and the inability of consumers to make a reasonable
evaluation of the products prior to commencing transactions.

(I am living in a country with an economic system full of regulation, and
it is so far the best-performing system around.  Are you suggesting that
radically changing it will produce better results?  As you may know, what
you offer as a solution was already tried and rejected by this same
country, leaving a lot of romantic, but somewhat obsolete, notions of
radical agrarian capitalism lingering around.)

 In fact, if all companies were required to accept liability for their
 software, companies that produce more reliable software couldn't
 choose to accept liability as a competitive edge. So you'd reduce
 competition's ability to pressure manufacturers to make reliable
 software.

I admire your faith in the almighty force of competition. Now would
you please explain how a single vendor of rather crappy software
came to thoroughly dominate the marketplace?  (Hint: there's a thing
called network externalities.)

An absolutely free market doesn't work, and that is why there are anti-trust,
securities, commercial, and consumer-protection laws - all of which were
created to address the actual problems 

RE: What do you want your ISP to block today?

2003-09-02 Thread Vadim Antonov


On Tue, 2 Sep 2003, David Schwartz wrote:

 this will be my last reply.

David, since all your arguments are variations on "you think you know
better than anyone else what they need" (whereby you, supposedly, extol
the virtues of a system which you don't yourself think is the best one), I
concur that further discussion makes no sense.

--vadim



Re: What do you want your ISP to block today?

2003-09-01 Thread Vadim Antonov


When you don't have liability you don't have to worry about quality.

What we need is lemon laws for software.

--vadim

On 1 Sep 2003, Paul Vixie wrote:

 
  ... Micr0$0ft's level of engineered-in vulnerabilities and wanton
  disregard for security in the name of features.  ...
 
 i can't see it.  i know folks who write code at microsoft and they worry
 as much about security bugs as people who work at other places or who do
 software as a hobby.  the problem microsoft has with software quality that
 they have no competition, and their marketing people know that ship dates
 will drive total dollar volume regardless of quality.  (when you have
 competition, you have to worry about quality; when you don't, you don't.)
 



Re: Hey, QWEST clean up your network

2003-08-29 Thread Vadim Antonov


On Fri, 29 Aug 2003, Randy Bush wrote:

 when folk want to pay $50/mb, how much clue do we think
 isps can pay for, especially to deal with peak clue loads
 such as this last week or two?
 
 yes, money talks.  but in many ways.

It doesn't work this way.  It is much better to have one clueful guy than to
keep three clueless ones.  It costs the same; the results are strikingly
different.

--vadim



Re: relays.osirusoft.com

2003-08-28 Thread Vadim Antonov


On Wed, 27 Aug 2003, Iljitsch van Beijnum wrote:

 I wouldn't recommend this. If you have two DNS servers on different 
 addresses, everyone can talk to #2 if #1 doesn't answer.

I've noticed that many Windoze mail servers don't bother to check the second
server if the primary's dead.

--vadim



Re: Fun new policy at AOL

2003-08-28 Thread Vadim Antonov


On Thu, 28 Aug 2003, Matthew Crocker wrote:

 Shouldn't customers that purchase IP services from an ISP use the ISPs 
 mail server as a smart host for outbound mail? 

They shouldn't.  There are privacy implications in having mail recorded
(even temporarily) on someone else's disk drive.
--vadim



Re: Fun new policy at AOL

2003-08-28 Thread Vadim Antonov


On Thu, 28 Aug 2003, Matthew Crocker wrote:

 If your ISP violates your privacy or has a privacy policy you don't 
 like, find another one.

How do I know that?

As a hobby, I'm running a community site for an often misunderstood
sexual/lifestyle minority.  Most of its patrons would be very unhappy if
there were an uncontrolled record of their affiliation with the community
(such as mail logs) - they may trust me, but not some anonymous tech at the ISP!

So, no third-party SMTP relays for me.

--vadim



Re: Dealing with infected users (Re: ICMP traffic increasing on most backbones Re: GLBX ICMP rate limiting

2003-08-28 Thread Vadim Antonov


It should be pointed out that the ISPs have their share of the blame for the
quick-spreading worms, because they neglected very simple precautions --
shipping customers pre-configured routers or DSL/cable modems with
firewalls disabled by default (instead of a standard end-user "let only
outgoing connections through" configuration), and providing insufficient
information to end-users on configuring those firewalls.

--vadim



Re: East Coast outage?

2003-08-18 Thread Vadim Antonov

On Sun, 17 Aug 2003 [EMAIL PROTECTED] wrote:

 Use hydrogen. One solar panel (which will last forever unless you drop 
 something on it) can split H2O into H and O.

Solar panels do not last forever. In fact, they degrade rather quickly due
to radiation damage to the semiconductor (older thin-film panels were
guaranteed to perform within specs for 2-5 years; newer crystalline ones
stay within nominal parameters for 20 years).  The lifetimes of hydrogen
storage products and electrolytic converters are also limited.  Note that
their operation involves the creation and eventual disposal of toxic compounds.

Making those panels requires energy and involves processes producing
pollution.  So does their disposal. Besides, solar panels convert
visible-light high-energy photons (used by the biosphere) into low-energy
(infrared) photons, which are a form of pollution and are useless to the
biosphere.  Fossil fuels and nuclear energy do not steal this source of
negative entropy from the biosphere (just a counterpoint - I'm no big fan
of those ways of producing energy, for different reasons).  Given the
relatively low power density of solar energy, the full-lifecycle
adjustments are much higher on a per-joule basis than for traditional energy
sources.

So when you talk about the advantages of solar (or any other renewable
power) you need to take into account the full energy budget (including
manufacturing and disposal) and the ecological impact of the entire lifecycle
of the product, not just the generation phase.  Such analysis will likely
show that renewables are not as green or renewable as they seem to be.

It seems to me that the debate on the superiority of different methods of
producing usable energy is high on emotion and very low on useful
data; it would be a horrible mistake to waste lots of time or resources on
an approach which may turn out to be worse than others in the final
analysis.

--vadim

PS My personal favourite option is to move power generation out to space,
   where pollution will not be a problem for a very long time.

   This option is technically feasible now; economics and political will
   are entirely different matters, however. Quoting from one of my
   favourite authors: "...most of people ... were quite unhappy for pretty
   much of the time. Many solutions were suggested for this problem, but
   most of these were largely concerned with the movements of small green
   pieces of paper, which is odd because on the whole it was not the small
   green pieces of paper that were unhappy."



Re: East Coast outage?

2003-08-15 Thread Vadim Antonov

On Sat, 16 Aug 2003, Petri Helenius wrote:

 Maybe we could attach the packets to hot air balloons and send them with the wind?

This seems to be a promising idea, given that the high-tech industry is
already adept at producing immeasurable quantities of hot air.

--vadim



Re: East Coast outage?

2003-08-15 Thread Vadim Antonov


On 15 Aug 2003, Scott A Crosby wrote:

 I also think that its hard to appreciate the stability differences
 between shipping power a few hundred feet and shipping power 1000
 miles. It looks like that long-distance shipping is the root cause of
 the half-dozen major outages over the past 30 years.

Yep. That's why DC power transmission is the way to go.  No potentially
harmful low-frequency EM emissions, either.

--vadim



Re: WANTED: ISPs with DDoS defense solutions

2003-08-10 Thread Vadim Antonov

On 5 Aug 2003, Paul Vixie wrote:

 i'd like to discuss these, or see them discussed.  networks have edges,
 even if some networks are edge networks and some are backbone networks.
 bcp38 talks about various kinds of loose rpf, for example not accepting
 a source for which there's no corresponding nondefault route.

When I proposed reverse-path filtering in the first place, I stated that
loose RPF is applicable to multi-homed networks which do not provide transit.

http://www.cctec.com/maillists/nanog/historical/9609/msg00321.html
http://www.cctec.com/maillists/nanog/historical/9609/msg00406.html
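
For reference, the loose-RPF check itself is tiny; a sketch in Python using
the stdlib ipaddress module (the routing table below is invented):

# Loose reverse-path filtering: accept a packet only if SOME non-default route
# covers its source address, regardless of the interface it arrived on.
import ipaddress

# Invented example routing table (non-default routes only).
routes = [ipaddress.ip_network(p) for p in ("10.200.254.0/24", "192.0.2.0/24")]

def loose_rpf_pass(src):
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in routes)

print(loose_rpf_pass("10.200.254.7"))   # True  - covered by a real route
print(loose_rpf_pass("203.0.113.9"))    # False - only a default would match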

--vadim



Re: WANTED: ISPs with DDoS defense solutions

2003-08-09 Thread Vadim Antonov

On Tue, 5 Aug 2003, Christopher L. Morrow wrote:

  Spoofed packets are harder to trace to the source than non-spoofed
  packets. Knowing where a malicious packet is very important to the
 
 this is patently incorrect: www.secsup.org/Tracking/ has some information
 you might want to review. Tracking spoofed attacks is infact EASIER than
 non-spoofed attacks, especially if your network has a large 'edge'.

Errr... you don't need to _track_ non-spoofed attacks - you _know_ where
the source is.  Instead of going box to box back to the source (most
likely across several providers) you can immediately go to _their_
provider.

--vadim



Re: WANTED: ISPs with DDoS defense solutions

2003-08-01 Thread Vadim Antonov


On Fri, 1 Aug 2003, Jack Bates wrote:

 There is nothing in C which guarantees that code will be unreliable or 
 insecure.

The lack of real strong typing and of built-in variable-size strings (so the
compiler can actually optimize string ops), plus uncontrollable pointer
operations, is enough to guarantee that any complicated program will have
buffer-overflow vulnerabilities.

 C has the advantage of power and flexibility.

So does assembler - way more than C.

 It does no hand holding, so any idiot coder claiming to be a
 programmer can slap together code poorly. This is the fault of the
 programmer, and not the language.

Presumably, a non-idiot can produce ideal code in significant
quantities.  May I politely inquire whether you have ever written anything
bigger than 10k lines - because anyone who has knows for sure that no program
is ideal, that humans forget, make mistakes and cannot hold an entire
project in mind, and that our minds tend to see how things are supposed to
be, not how they are - making overlooking silly mistakes a certainty.

Some languages help to catch mistakes.  Some help them stay unnoticed,
until a hacker kid with spermotoxicosis and too much free time comes
poking around.

 The syntax for C is just fine, and since any language is 
 nothing more than syntax, C is a workable language.

I'm afraid you're mistaken - a language is a lot more than syntax. Syntax
is easy; the semantics is hard.  To my knowledge, only one group ever
attempted to formally define the semantics of a real programming language -
and what they produced is the 300-something pages of the barely readable
"Algol 68: The Revised Report", filled with statements in a
context-dependent grammar.  All the _syntax_ for the same language fits in
6 pages.

C is a workable language, but it does not come close to a language which
would incorporate support for the known best practices of large-scale
software engineering.  C++ is somewhat better, but it fails horribly in
some places, particularly when you want to write reusable code (hint:
STLport.com was hosted on one of my home boxes for some time :)  Java is
overly restrictive and has no support for generic programming (aka
templates); besides, the insistence on garbage collection makes it nearly
useless for any high-performance stuff.

Anyway, my point is not that there is an ideal language which everyone
must use, but rather that the existing ones are inadequate, and no serious
effort is being spent on making them better (or even getting existing
better research languages into the field).  The lack of effort is simply a
consequence of the lack of demand.

 There are libraries 
 out there for handling arrays with sanity checks. The fact that people 
 don't use them is their own fault.

Overhead. To get reasonable performance from bounds-checked arrays you
need the compiler to do deeper optimization than is possible when calling
library routines (or even inlining them - because the semantics of a
procedure call is restrictive).

 For that matter, one can easily write their own. I don't know how many
 times I have gotten a vacant expression when mentioning the word
 flowchart; which is nothing more than the visual form of what any
 programmer should have going through their head (and on paper if they
 really want to limit mistakes).

I don't use flowcharts - they're less compact than text, so they hinder
comprehension of complex pieces of code (it is a well-known fact that
splitting text onto separate pages which need to be flipped back and forth
significantly degrades the speed and accuracy of comprehension - check any
textbook on cognitive psychology).  There have been many graphical
programming projects (this is a perennial mania in programming tool-smith
circles); none of them yielded any significant improvement in productivity
or quality.

 What I'd give to see a detailed flowchart for sendmail. I'd hang it on
 my walls (as I'm sure it'd take more than one).

Sendmail is a horrible kludge, and, frankly, I'm amazed that it is still
being supplied as the default MTA with Unix-like OSes.

 Write a small program in C and then write it in Perl.
 ... 
 Time both programs. For what it's worth,
 sorry Perl took so long.

Perl is interpreted, C is compiled. In fact, Perl is worse than C when it
comes to writing reliable programs, for obvious reasons.  If anything, I
wouldn't advocate using any of the new-fangled hack-and-run languages for
anything but writing 10-line scripts.

 If a programmer can write a process in any language, then naturally the 
 programmer should choose the language which provides the most 
 flexibility, performance, and diversity; or the right tool.

A professional programmer will choose a language which lets him do the
required job with minimal effort. Since quality is not generally a project
requirement in this industry (for reasons I already mentioned) the result
is predictable - use of languages which allow quick and dirty programming;
getting stuff to do something fast, so it can be shipped, and screw the

Re: WANTED: ISPs with DDoS defense solutions

2003-07-31 Thread Vadim Antonov


On 31 Jul 2003, Paul Vixie wrote:

 the anti-nat anti-firewall pure-end-to-end crowd has always argued in
 favour of every host for itself but in a world with a hundred million
 unmanaged but reprogrammable devices is that really practical?

Not everything can be hidden behind a firewall, particularly in this
world of increasingly mobile and transient connectivity.

Besides, firewalls only protect against outsiders, whereas most damaging
attacks are from insiders.

What we need is a new programming paradigm capable of actually producing
secure (and, yes, reliable) software.  C and its progeny (and the "program
now, test never" lifestyle) must go.  I'm afraid it will take laws which
actually make software makers pay for bugs and security vulnerabilities in
shipped code to make such a paradigm shift a reality.

--vadim



Re: WANTED: ISPs with DDoS defense solutions

2003-07-31 Thread Vadim Antonov

On Thu, 31 Jul 2003, Petri Helenius wrote:

  What we need is a new programming paradigm, capable of actually producing
  secure (and, yes, reliable) software.  C and its progeny (and program
  now, test never lifestyle) must go.  I'm afraid it'll take laws which
  would actually make software makers to pay for bugs and security
  vulnerabilities in shipped code to make such paradigm shift a reality.
 
 Blaming the tools for the mistakes programmers make is like saying guns kill people
 when the truth is that people kill people with guns.

Yep, it is people who choose tools and methods which produce code that is
guaranteed to be unreliable and insecure - simply because those tools
allow one to be lazy and cobble things together fast, without much design
or planning.
 
 Weve code running, where the core parts are C and has a track record better
 than the utopian five nines so many people mistakenly look for.

A real programmer can write a FORTRAN program in any language.  The problem
is that even the best programmers make mistakes.  Many of those
mistakes (particularly security-related ones, such as not checking for buffer
overflows) can be virtually eliminated by the right tools.
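
To make that concrete: the same unchecked copy that silently smashes memory
in C cannot go unnoticed in a bounds-checked runtime. A toy sketch (Python
standing in for "the right tools"):

# A fixed-size buffer plus an unchecked length from the wire: a classic
# overflow in C; with mandatory bounds checks it is an immediate, loud error.
def copy_into_buffer(payload, bufsize=8):
    buf = bytearray(bufsize)
    for i, b in enumerate(payload):   # no explicit length check, on purpose
        buf[i] = b                    # raises IndexError the moment i == bufsize
    return buf

copy_into_buffer(b"short")            # fine
try:
    copy_into_buffer(b"way too long for the buffer")
except IndexError as e:
    print("overflow caught:", e)      # instead of silent memory corruption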

As for "code running" - in the course of my current project I had to write
code interacting with ciscos - and immediately found a handful of bugs
(some of them serious) in supposedly stable and working code which has
hundreds of thousands of users. I'm afraid you're confusing code running
stably in a particular environment with good-quality code. (Excuse me for
being rude - but my notion of reliable code comes from my early
programming experience in an organization which produced systems
controlling high-energy industrial processes - where an average computer
crash causes the immediate death of anyone unlucky enough to be around the
controlled object, and prison terms for the manufacturer's management.)

 However, since improvements are always welcome, please recommend tools which
 would allow us to progress above and beyond C and its deficencies.

May I suggest Algol-68, for example? Or any other language which actually
supports boundary checks on arrays and strings, not added as an
afterthought?  Or using CPUs and OSes which won't let code execute from the
stack and data segments? Or doing event-driven programming instead of
practically undebuggable multithreading?

There's no market[*] for higher-quality software - therefore, there's no
pressure to improve tools and methods. If anything, the trend is to use
more and more languages lacking strong typing and full of implicit
conversions, specifically designed for "rapid prototyping" aka quick and
dirty hackery - all of which has been known to be dangerous for decades.

--vadim

[*] "No market" means that quality is not a differentiator, because it is
impossible to evaluate quality prior to purchase, and after purchase
manufacturers are shielded by the license language from any responsibility
for actually delivering on promises.



Re: Hollywood plot: Attack critical infrastructure while Presidentis in town

2003-07-28 Thread Vadim Antonov

On Mon, 28 Jul 2003, Sean Donelan wrote:

 But the President's movements creates its own vulnerabilities for the rest
 of the critical infrastructures nearby. If you know the President will be
 in the area (the FAA posts advance notice to airman)...

First of all, it creates vulnerabilities for the President himself.  If the
bad guys know that the President will be in the area (e.g. by
receiving advance notice from the FAA) they can mount an effective ambush.

I think this is more about a display of power and about overzealous
ass-licking within the security bureaucracy than about security.  It reminds
me of the good old Soviet days, when the daily trips of Politburo members
were a major nuisance for those unlucky enough to live along the most
frequent routes.

--vadim



Re: National Do Not Call Registry has opened

2003-07-02 Thread Vadim Antonov


On Wed, 2 Jul 2003, Steven M. Bellovin wrote:

 Oh, joy -- more spam instead of telemarketers.

Joy, actually, since e-mail is not prone to giving unsolicited wake-up
calls to those of us who live on the graveyard shift.

--vadim



RE: Router crash unplugs 1m Swedish Internet users

2003-06-23 Thread Vadim Antonov


On Mon, 23 Jun 2003, Jim Deleskie wrote:

 One router and it takes there entire network off-line... Maybe someone needs
 a Intro to Networks 101 class.

No matter what kind of technology or design you have, there are always
kinds of faults which may bring the entire system down.  The problem is
generally in recognizing when a fault has occurred, so that operation
may be switched over to a backup.

In particular, the present Internet routing architecture is (mis)designed
in such a way that it is incredibly easy for a local fault or human error
to bring a significant portion of the network down.  Even single-box
_hardware_ faults may lead to global crashes.

A long, long time ago I had to track down a problem which left the US and EU
pretty much disconnected for several hours. It turned out to be a
hardware problem in a 7000's SSE card, which happily handled packets
originating and terminating at the router itself, but silently dropped all
transit packets.  Voila!  Neighbouring boxes were convinced that this one
was working - because all routing protocols were happy - and kept trying to
send lots of traffic through it, which was simply going into a blackhole, to
the mighty annoyance of everyone.  I've got a speeding ticket showing over
100mph on the Dulles highway at 3am, too, as a memento of rushing to DC with
a spare card...

So, in the absence of details, I would reserve judgement on the soundness of
their design practices.

--vadim



RE: IPv6

2003-06-15 Thread Vadim Antonov



Well, since adding a simple option to the IPv4 header would solve all the
address space problems without any need to change the core routing
infrastructure (unlike introducing v6), I see little need to go for an
entirely new L3 protocol.

--vadim

On Sun, 15 Jun 2003, Deepak Jain wrote:

   1) Is IPV4 approaching an addressing limitation?
   2) Does IPV6 provide a significant buffer of new addresses (given current
 allocation policies) the way
   IPV4 did when it was new?
 
 If (1 & 2) = IPV6 is good
 If (1 | 2) = undefined
 If !(1 & 2) = who cares?



Re: IPv6

2003-06-14 Thread Vadim Antonov


On Sat, 14 Jun 2003, Nick Hilliard wrote:

 At least there is general consensus among pretty much 
 everyone - with the exception of a small number of cranks - that IPv6 is 
 good. 

Now I'm officially a crank, because I fail to see why IPv6 is any better
than a slightly perked-up IPv4 - except for the bottom line of the box
vendors, who'll get to sell more new boxes doing essentially the same thing.

--vadim




Re: AC/AC power conversion for datacenters

2003-06-04 Thread Vadim Antonov


Here's a 3KW one for EUR 389:

http://www.taunus-transformatoren.de/transformers/transformers_110_120_220_230_240.html

(Actually, you don't need a two-coil transformer - a one-coil transformer
with a tap in the middle will do, and those may be even cheaper).

Note that transformers do NOT change the frequency from 50Hz as in the EU to
60Hz as in the US; typically this is not a problem for electronics power
supplies, because the first thing they do is rectify the mains voltage to
DC, but you may want to check that against your equipment specs or with
their tech support.

Note also that if your higher-voltage equipment requires multiple phases,
you're out of luck. The 220V in the US is usually two 110V feeds shifted 180
degrees, so there's a neutral wire as well, allowing asymmetrical
loading.  This kind of supply could be provided through a transformer, but
I've never seen one like that.

The EU multi-phase power is typically 3-phase, with 230V supplies shifted
120 degrees; there's no way to convert to it from single- or dual-phase
power without an electronic inverter or a motor-generator combination.

--vadim



Re: AC/AC power conversion for datacenters

2003-06-04 Thread Vadim Antonov


On Tue, 3 Jun 2003, Vadim Antonov wrote:

http://www.pfsc-ice.com/bbo/b/magnet/get_a_110_220_voltage_transformer_11.htm

2KW - for less than $100 ... enough for 8.5 Amp at 230V.

--vadim



Re: NJ: Red alert? Stay home, await word

2003-03-19 Thread Vadim Antonov


There's only one thing worse than a government full of idiots: a government
full of scared idiots.

--vadim


On Wed, 19 Mar 2003, J.A. Terranson wrote:

 
 On Wed, 19 Mar 2003, Jeff Wasilko wrote:
 
  http://www.southjerseynews.com/issues/march/m031603e.htm
  
  If the nation escalates to red alert, which is the highest in
  the color-coded readiness against terror, you will be assumed by
  authorities to be the enemy if you so much as venture outside
  your home, the state's anti-terror czar says.
  
  ...
 
 You literally are staying home, is what happens, unless you are required to 
 be out. No different than if you had a state of emergency with a
 snowstorm. 
 
 
 Except that in a snow storm, I can go out if I want to, and not face criminal
 liability.  Are they planning to at least go through the farce of declaring
 martial law first? 
 
 



Re: Route Supression Problem

2003-03-12 Thread Vadim Antonov


On Wed, 12 Mar 2003, Randy Bush wrote:
 
  You need at least three flaps to trigger dampening.
 
 i guess you really need to look at that pdf.
 
 randy

Better Algorithms --

http://www.kotovnik.com/~avg/flap-rfc.txt
http://www.kotovnik.com/~avg/flap-rfc.ps

I didn't publish that one because I wanted to compare that with
penalty-based dampening on historical (pre-dampening) flap records, but
then got distracted by other projects.  Preliminary data (from frequency
analysis) indicates that unwarranted downtime (defined as suppression
after the last flap prior to entering stable state) is reduced by a factor
of 3 to 4 compared with penalty-based algorithm tuned to produce the same
post-dampening flap rate.
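
For readers who want a concrete baseline, here is a minimal Python sketch of
the classic penalty-based scheme the draft is being compared against
(parameter values are made up; this is not the algorithm from the draft):

    class DampenedRoute:
        """Penalty-based flap dampening: each flap adds a fixed penalty, the
        penalty decays exponentially with time, and the route is suppressed
        between the cutoff and reuse thresholds."""

        def __init__(self, penalty=1000, cutoff=2000, reuse=750, half_life=900.0):
            self.p, self.t, self.suppressed = 0.0, 0.0, False
            self.penalty, self.cutoff, self.reuse, self.half_life = \
                penalty, cutoff, reuse, half_life

        def _decay(self, now):
            self.p *= 0.5 ** ((now - self.t) / self.half_life)
            self.t = now

        def flap(self, now):
            self._decay(now)
            self.p += self.penalty
            if self.p >= self.cutoff:
                self.suppressed = True

        def usable(self, now):
            self._decay(now)
            if self.suppressed and self.p < self.reuse:
                self.suppressed = False
            return not self.suppressed

With these (illustrative) numbers it takes more than two closely spaced flaps
to suppress a route, and the suppression lingers until the penalty decays
below the reuse limit - which is exactly the "unwarranted downtime" being
measured above.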

--vadim



Re: Port 445 issues (was: Port 80 Issues)

2003-03-10 Thread Vadim Antonov



I'm just waiting for hakerz to finally figure out that making the port
number a hash of the host address will effectively render port-based
notch filtering useless.


On Sun, 9 Mar 2003, Sean Donelan wrote:
 
 Blocking ports in the core doesn't stop stuff from spreading.  There are
 too many alternate paths in the core for systems to get infected through.
 In reality, backbones dropped 1434 packets as a traffic management practice
 (excessive traffic), not as a security management practice (protecting
 users).
 



Re: BGP to doom us all

2003-02-28 Thread Vadim Antonov



Thank you very much, but no.

DNS (and DNSSEC) relies on working IP transport for its operation.

Now you effectively propose to make routing (and so operation of IP
transport) dependent on DNS(SEC).

Am I the only one who sees the problem?

--vadim

PS. The only sane method for routing info validation I've seen so far is
the plain old public-key crypto signatures.


On 1 Mar 2003, Paul Vixie wrote:
 
  It wouldn't be too hard for me to trust:
  
  4969.24.origin.0.254.200.10.in-addr.arpa returning something like true.
  to check whether 4969 is allowed to originaate 10.200.254.0/24.  ...
 
 at last, an application for dnssec!




RE: VoIP over IPsec

2003-02-18 Thread Vadim Antonov


Well, sloppy thinking breeds complexity -- what I dislike about standards
committees (IETF/IESG included) is that they always sink to the lowest
common denominator of the design talent or competence of their participants.

In fact, a method for encrypting small parcels of data efficiently has been
well known for decades.  It is called a stream cypher (surprise). Besides
LFSR-based and other dedicated stream cyphers, any block cypher can be used in
this mode. Its application to RTP is trivial and straightforward: just leave
the sequence number in clear text, so that the position in the key stream is
recoverable in case of packet loss. It also allows precomputation of the key
stream, adding nearly zero latency/jitter to the actual packet processing.
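
A minimal Python sketch of the idea, using AES in counter mode (assuming the
third-party 'cryptography' package; an illustration of the principle, not
SRTP or any standardized scheme):

    import os, struct
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY  = os.urandom(16)   # per-session key
    SALT = os.urandom(12)   # per-session salt, agreed out of band

    def rtp_crypt(seq, payload):
        # Counter block = 12-byte salt || 16-bit RTP sequence number || 16-bit
        # block counter.  The sequence number rides in the clear in the RTP
        # header, so the receiver can rebuild the keystream position even
        # after losses.  (Re-key before the 16-bit seq wraps.)
        ctr0 = SALT + struct.pack(">HH", seq & 0xFFFF, 0)
        xor = Cipher(algorithms.AES(KEY), modes.CTR(ctr0)).encryptor()
        return xor.update(payload)   # the same call encrypts and decrypts

    pkt = rtp_crypt(1234, b"\x00" * 160)       # one 20 ms G.711 frame
    assert rtp_crypt(1234, pkt) == b"\x00" * 160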

--vadim

On Wed, 19 Feb 2003, David Luyer wrote:

 ...leaving a dream of RTP as true and presumably light-weight
 protocol...






Re: VoIP over IPsec

2003-02-18 Thread Vadim Antonov

On Tue, 18 Feb 2003, Stephen Sprunk wrote:

  It also allows precomputation of the key stream, adding nearly zero
  latency/jitter to the actual packet processing.
 
 You fail to note that this requires precomputing and storing a keystream for
 every SA on the encrypting device, which often number in the thousands.
 This isn't feasible in a software implementation, and it's unnecessary in
 hardware.

You don't have to store the entire keystream, just enough to allow
on-the-fly packet processing.  Besides, memory is cheap: 100 msec of keystream
buffers for 100,000 simultaneous voice connections is an astonishing 80 MB.

More realistically, it's 10k calls and 30 msec of buffering.
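
The arithmetic, for the record (assuming 64 kbit/s per call - adjust for your
codec):

    # keystream buffer = bitrate * buffer_time, per call
    per_call = 64_000 * 0.100 / 8             # 800 bytes for 100 ms at 64 kbit/s
    print(per_call * 100_000 / 1e6)           # 80.0  -> ~80 MB for 100k calls
    print(64_000 * 0.030 / 8 * 10_000 / 1e6)  # 2.4   -> ~2.4 MB for 10k calls, 30 ms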

--vadim




Re: Cascading Failures Could Crash the Global Internet

2003-02-06 Thread Vadim Antonov


On Thu, 6 Feb 2003, N. Richard Solis wrote:

 The main cause of AC disruption is a power plant getting out of phase
 with the rest of the power plants on the grid.

This is typically a result of sudden load change (loss of transmission
line, short, etc) changing the electromagnetic drag in generators, and,
therefore, the speed of rotation of turbines.

 When that happens, the plant trips of goes off-line to protect the
 entire grid. 

Some difference in phase is tolerable, the resulting cross-currents
generate heat in the trasmission lines and transformers.

It is not sufficient to disconnect a generator from the grid. Since water
gates or steam supply can not be closed off fast, the unloaded turbine
would accelerate to the point of very violent self-destruction.  So the
generators are connected to the resistive load to dump the energy there.
Those resistors are huge, and go red-hot in seconds.  If a gate or valve
gets stuck, they melt down, with the resulting explosion of the turbine.

 You lose some generating capacity but you dont fry everything on the
 network either.

Well... not that simple.  A plant going off-line causes sudden load
redistribution in the network, potentially causing overload and phase
shifting in other plants, etc.  A cascading failure, in other words.

--vadim




Re: routing between provider edge and CPE routers

2003-01-29 Thread Vadim Antonov


On Wed, 29 Jan 2003, Christopher L. Morrow wrote:

 On Wed, 29 Jan 2003, Mike Bernico wrote:
 
  We currently use an IGP to route between our distribution routers and
  the CPE routers we manage. 
 
 So, if customers bounce your IGP churns away? And customers have access to
 your IGP data (provided they break into the CPE, which is trivial, eh?)

Worse yet, any customer which is able to feed routing information to the
backbone (be it any IGP or BGP), unless filtered properly, is able to
trivially create a man-in-the-middle (or trojan horse) attack on systems
protected with plain-text passwords.  Simply inject a longer-prefix route
to someone else's network, and then examine (or modify) and bounce the
source-routed packets to the ultimate destination. (Yes, Virginia, source
routing IS evil, and has virtually no legitimate use).

Even supposedly secure things like SSL-protected websites and SSH logins
are vulnerable, due to the simple fact that most people won't think twice
before saying yes to SSH complaining that it detected a new host key, or
won't notice that they're really talking to a different website (or that the
lock icon is not showing) - if it looks the same and its URL is similar-looking
(l-1, O-0, etc; and with newish Unicode URLs the fun is unlimited).

So, by accepting routes from CPE you create a huge security vulnerability
for your customers and other parties.  This practice has been understood to be
very bad network engineering for decades.

The additional problems created by taking routing information from CPE
are: increased amounts of route flap (because any bouncy tail circuit
or malfunctioning/misconfigured CPE box will cause a flood of routing
updates, potentially killing your entire network), and dramatically
increased incidence of bogus routes (interfering with connectivity of your
other customers, or some third parties).

(I've seen even stupider things - people configuring CPE boxes to
redistribute routes learned from customer's internal LANs! Any compromised
PC, and you're toast).

The solution is:

1) for single-homed sites use static routing, period.  Dynamic routing
does not add anything useful in this case (if the circuit is down, it's down;
there are no alternative ways to reach the customer's network).

The convenience of having to configure only the CPE box is no excuse. Invest
some resources in a rather trivial configuration management system which
keeps track of what network addresses were allocated to which customer and
produces the corresponding bits of router configuration automatically (a
minimal sketch follows after the next item).  Most respectable ISPs did that
a long time ago.  It will also reduce your tech support costs.

2) for multi-homed sites you have to use routing protocols. Use BGP (_NOT_ an
IGP!). Implement strict filtering on all routing updates you get from the
customer.  Manage these filters like you manage static routes.
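
The promised sketch of a trivial config generator (the allocation data and the
IOS-style syntax are only illustrations; adapt to your own platform and
database):

    # customer allocation table -> static-route configuration snippets
    ALLOCATIONS = [
        # (customer, prefix, netmask, CPE WAN address) - made-up example data
        ("acme",    "192.0.2.0",   "255.255.255.0",   "198.51.100.2"),
        ("example", "203.0.113.0", "255.255.255.128", "198.51.100.6"),
    ]

    def static_route_config(allocations):
        lines = []
        for customer, prefix, mask, next_hop in allocations:
            lines.append(f"! customer: {customer}")
            lines.append(f"ip route {prefix} {mask} {next_hop}")
        return "\n".join(lines)

    print(static_route_config(ALLOCATIONS))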


--vadim

PS. They should really require a test in defensive networking before
letting anyone touch a provider's routers...




RE: routing between provider edge and CPE routers

2003-01-29 Thread Vadim Antonov

On Wed, 29 Jan 2003, Mike Bernico wrote:

 Is there someplace I can find tidbits of information like this?  I
 haven't been alive decades so I must have missed that memo.  Other than
 this list I don't know where to find anyone with lots of experience
 working for a service provider.

Well, this list... in the old archives.  The current backbone design
issues were pretty much tossed around in 93-94, the defensive networking
concept included.
 
 I've never heard of software like that.  Do you have a recommended
 vendor?  Is it typically developed in house?

There's no sustainable market for those, so they're always home-built...
Often it is just a collection of scripts and some RCS to keep configs in.

 What can I say, I must work cheap!

:)

--vadim




RE: What could have been done differently?

2003-01-28 Thread Vadim Antonov


On Tue, 28 Jan 2003, Eric Germann wrote:

 
 Not to sound to pro-MS, but if they are going to sue, they should be able to
 sue ALL software makers.  And what does that do to open source?

A law can be crafted in such a way as to create a distinction between
selling for profit (and assuming liability) and giving away for free, as-is. In
fact, you don't have Goodwill sign papers to the effect that it won't
sue you if it decides later that you've brought junk - because you know
it wouldn't win in court. However, that does not protect you if you bring
them a bomb disguised as a valuable.

The reason for this is: if someone sells you stuff, and it turns out not
to be up to your reasonable expectations, you suffered a demonstrable loss
because the vendor misled you (_not_ because the stuff is bad).  I.e. the
amount of that loss is the price you paid, and, therefore, this is the
vendor's direct liability.

When someone gives you something for free, his direct liability is,
correspondingly, zero.

So, what you want is a law permitting direct liability (i.e. a lemon
law, like the ones regulating the sale of cars or houses) but setting much
higher standards (i.e. willfully deceptive advertisement, maliciously
dangerous software, etc) for suing for punitive damages.  Note that in
class actions it is often much easier to prove the malicious intent of a
defendant in cases concerning deceptive advertisement - it is one thing
when someone gets cold feet and claims he's been misled, and quite another
when you have thousands of independent complaints.  Because there's
nothing to gain suing non-profits (unless they're churches:) the
reluctance of class-action lawyers to work for free would protect
non-profits from that kind of abuse.

A lemon law for software may actually be a boost for proprietary
software, as people will realize that the vendors have an incentive to
deliver on promises.

--vadim




Re: uunet

2003-01-20 Thread Vadim Antonov


I have a suggestion for UUNET's backbone engineering folks:

Please, create a fake customer ID and publish it, so outside folks could
file trouble reports regarding routing issues within UUNET.

--vadim


On Sat, 18 Jan 2003, Scott Granados wrote:

 
 What's interesting is that I just tried to call the noc and was told
 We have to have you e-mail the group
 
 my response, I can't I have no route working to uunet
 
 Well you have to
 
 my response, ok I'll use someone elses mail box where do I mail?
 
 We can't tell you your not a customer
 
 My response its a routing issue do you have somewhere I can e-mail you.
 
 Your not my customer I really don't care  *click*
 
 Nice. professional too.
 
 Anyone have a number to the noc that someone with clue might answer?
 
 - Original Message -
 From: David Diaz [EMAIL PROTECTED]
 To: Scott Granados [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Saturday, January 18, 2003 4:35 PM
 Subject: Re: uunet
 
 
  Im not seeing anything coming from qwest.
 
 
 
  At 16:55 -0800 1/18/03, Scott Granados wrote:
  Is something up on uunet tonight?
  
  It looks to me that dns is broken forward and reverse but more likely it
  looks like a bad bogan fiilter popped up suddenly.  I have issue as soon
 as
  I leave mfn's network and hit uunet.
 
  --
 
  David Diaz
  [EMAIL PROTECTED] [Email]
  [EMAIL PROTECTED] [Pager]
  www.smoton.net [Peering Site under development]
  Smotons (Smart Photons) trump dumb photons
 
 
 
 




Re: FW: Re: Is there a line of defense against Distributed Reflectiveattacks?

2003-01-20 Thread Vadim Antonov


On Mon, 20 Jan 2003, Avleen Vig wrote:

 
 On Mon, 20 Jan 2003, Christopher L. Morrow wrote:
 
   I was refering specifically to end user workstations. For example home
   machines on dial up or broadband connections.
   A lot of broadband providers already prohibit running servers and block
   certain inbound ports (eg 21 and 80).
   *shrug* just seems like it would make more sense to block all incoming
   'syn' packets.
 
 Indeed it does break that. P2P clients: Mostly transfer illegal content.
 As much as a lot of people love using these, I'm sure most realise they're
 on borrowed time in their current state.

Well, blocking TCP SYNs is no way to block the establishment of sessions
between _cooperating_ hosts.

Simply make a small hack in the TCP stack to leave the SYN flag clear, and
use some other bit instead.

To really block something you need an application proxy... and then there
are always ways to subvert those. Elimination of covert channels is one of
the hardest problems. In any case, no sane provider will restrict traffic
only to applications which can be served by its proxies.

Going further, the growing awareness of the importance of security will
cause more and more legitimate apps to create totally indiscriminate
encrypted traffic... and it is a good idea to routinely encrypt all
traffic, to avoid revealing the importance of particular communications.
Leaving the identity of applications (distinct port #s) in the clear is also
a bad idea, security-wise.

--vadim




Re: Is there a line of defense against Distributed Reflective attacks?

2003-01-17 Thread Vadim Antonov

 
 Do we need te equivalent of a dog bite law for computers.  If your
 computer attacks another computer, the owner is responsible.  File a
 police report, and the ISP will give the results of the *57 trace to
 the local police.  The police can then put down the rabid computer,
 permanently.

Good in theory... in practice the police have more important things to do. Like
catching pot smokers.

--vadim




Re: FYI: Anyone seen this?

2003-01-15 Thread Vadim Antonov


This is not entirely a hoax.

I know for sure (first-hand) that such actions were contemplated by at
least some recording companies.

--vadim


On Wed, 15 Jan 2003, Marshall Eubanks wrote:

 The feeling in the music community is that this is almost certainly a 
 hoax.
 
 Of course, RIAA apparently tried to legalize such activities in the 
 Berman Bill.
 
  Of course, even if it were true, they'd probably want to deny it, since
  they haven't gotten their hack back legislation passed yet :)




Re: Operational Issues with 69.0.0.0/8...

2002-12-10 Thread Vadim Antonov


On Tue, 10 Dec 2002, Stephen J. Wilcox wrote:

  The better way of dealing with the problem of bogus routes is strong
  authentication of the actual routing updates, with the key being allocated
  together with the address block.  Solves the unused address space reclamation
  problem, too - when the key expires, it becomes unroutable.
 
 Of course, who would maintain the key databases and do you mean every route
 would need a key with the central registrar or would it be carved up to eg
 authority on /8 level or lir level which could be /22s.. seems at some point you
 still have to go back to a central resource and if you dont have a single
 resource you make it complicated?

There's a big difference: address allocation (and key distribution) is
off-line, and is not involved in the operation of the routing system.
Its failure doesn't cause a network malfunction, just aggravation for new
customers.

OTOH, invalid RADB data can easily prevent the network from operating, on a
massive scale.
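
A toy Python sketch of what signing routing data with a key handed out along
with the allocation might look like (assuming the third-party 'cryptography'
package; the field names are illustrative, not a protocol proposal):

    import json, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    alloc_key = ed25519.Ed25519PrivateKey.generate()   # issued with the address block
    alloc_pub = alloc_key.public_key()

    def sign_announcement(prefix, origin_as, valid_until):
        blob = json.dumps({"prefix": prefix, "origin_as": origin_as,
                           "valid_until": valid_until}, sort_keys=True).encode()
        return blob, alloc_key.sign(blob)

    def verify_announcement(blob, sig, pub):
        pub.verify(sig, blob)                    # raises InvalidSignature if forged
        data = json.loads(blob)
        if data["valid_until"] < time.time():    # expired key -> unroutable
            raise ValueError("announcement expired")
        return data

    blob, sig = sign_announcement("69.1.2.0/24", 65001, time.time() + 30 * 86400)
    print(verify_announcement(blob, sig, alloc_pub))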

--vadim




Re: Operational Issues with 69.0.0.0/8...

2002-12-10 Thread Vadim Antonov


On Tue, 10 Dec 2002, Harsha Narayan wrote:

 Key databases:
 Using cryptography to authenticate routing updates gets messy very soon.
 Then, there will again be the same problem of the Public Key Infrastucture
 not getting updated or something like that.

Hard to do it right, yes, but not impossible. (Actually I did strong
crypto for a living for the last few years, so I think I have a somewhat
informed opinion on the subject :)

--vadim




Re: Spam. Again.. -- and blocking net blocks?

2002-12-10 Thread Vadim Antonov

On Tue, 10 Dec 2002, Barry Shein wrote:

 The only solution to spam is to start charging for email (perhaps with
 reasonable included minimums if that calms you down for some large set
 of you) and thus create an economic incentive for all parties
 involved.

Absolutely unrealistic... micropayments never got off the ground, for a
number of good reasons - some of them having to do with the unwillingness of
national governments to forfeit financial surveillance.

Even if e-mail costs something, you'd still be getting a lot more spam
than useful mail.  Check your snail-mail box for empirical evidence :)

I'd say strong authentication of e-mail sources and appropriate sorting
at the receiving end should do the trick.  When I give someone my e-mail
address, I may just as well get their key fingerprint and put it in my
allowed database.

The question is, as always, convenience and usability - with a good
design that doesn't seem insurmountable.
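
A minimal sketch of the receiving-end sort, with key fingerprints exchanged
out of band (again assuming the third-party 'cryptography' package; purely
illustrative):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives import serialization

    def fingerprint(pub):
        raw = pub.public_bytes(serialization.Encoding.Raw,
                               serialization.PublicFormat.Raw)
        return hashlib.sha256(raw).hexdigest()

    alice = ed25519.Ed25519PrivateKey.generate()
    allowed = {fingerprint(alice.public_key())}    # filled when you hand out the address

    def accept(message, signature, sender_pub):
        if fingerprint(sender_pub) not in allowed:
            return False                           # unknown sender -> junk pile
        sender_pub.verify(signature, message)      # raises if the signature is forged
        return True

    msg = b"lunch?"
    print(accept(msg, alice.sign(msg), alice.public_key()))   # True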
 
 Face it folks, the party is over, the free-for-all was a nice idea but
 it simply did not work. See The Tragedy of the Commons.

Linux does not exist, science disappeared a long time ago, etc, etc.  Those
are commons, too.

In fact, the prevailing myth is that the property system is the primary driver
of progress.  As if.  It existed for several millennia (in fact, all higher
animals exhibit behaviour consistent with the notion of property, usually
territory and females) and not much happened for most of that time, aside from
endless wars.  Then the decidedly anti-proprietary gift economy of
science comes along and in a couple of hundred years completely changes the
world.

The free-for-all is a nice idea.  It should be preserved wherever possible.

Spam is not a tragedy of the commons (i.e. depletion of shared resources
because of uncontrolled, cost-free accessibility) - spam traffic does
not kill the network, last I checked (in fact, TCP's congestion control
provides basic fairness enforcement in the Internet - which explains why
the backbones aren't really prone to the tragedy of the commons, even when
demand is massively larger than supply).

Spam is theft (i.e. unauthorized use of private resources), and should be
fought as such - by prosecuting the perps, by installing locks, and by
checking IDs before granting access.

--vadim




Re: Operational Issues with 69.0.0.0/8...

2002-12-10 Thread Vadim Antonov


On Tue, 10 Dec 2002, Harsha Narayan wrote:
   Using cryptography to authenticate routing updates gets messy very soon.
   Then, there will again be the same problem of the Public Key Infrastucture
   not getting updated or something like that.

   It would require a PKI and also require every router to support it.

Not every... borders are enough.

PKI is not that complicated, either; and routers already have some
implementation of it embedded (in the VPN parts).

--vadim




Re: Spanning tree melt down ?

2002-12-01 Thread Vadim Antonov


On Fri, 29 Nov 2002, Stephen Sprunk wrote:

 This is a bit of culture shock for most ISPs, because an ISP exists to serve
 the network, and proper design is at least understood, if not always adhered
 to.  In the corporate world, however, the network and support staff are an
 expense to be minimized, and capital or headcount is almost never available
 to fix things that are working today.

I think you are mistaken.  In most ISPs engineers are considered an
unfortunate expense, to be reduced to the bare-bones minimum (defined as the
point where the network starts to fall apart and irate customers reach the CEO
through the layers of managerial defenses).  Proper design of corporate
networks is understood much better than that of backbones (witness the
unending stream of new magic backbone routing paradigms, which never seem
to deliver anything remotely as useful as claimed), so the only
explanation for having 10+ hops in a spanning tree is plain old
incompetence.

 It didn't take 4 days to figure out what was wrong -- that's usually
 apparent within an hour or so.  What takes 4 days is having to reconfigure
 or replace every part of the network without any documentation or advance
 planning.

Ditto.
 
 My nightmares aren't about having a customer crater like this -- that's an
 expectation.  My nightmare is when it happens to the entire Fortune 100 on
 the same weekend, because it's only pure luck that it doesn't.

Hopefully, not all of their staff is sold on the newest magical tricks
from OFRV, and most just did old fashioned L-3 routing design.

--vadim




Re: Risk of Internet collapse grows

2002-11-27 Thread Vadim Antonov


On Wed, 27 Nov 2002 [EMAIL PROTECTED] wrote:

 It depends which exchange point is hit.  There are a couple of buildings 
 in London which if hit would have a disasterous affect on UK and European 
 peering.

Why hit buildings, when removing a relatively small number of people would
render the Internet pretty much defunct?  It does not fly itself (courtesy of
the acute case of featuritis developed by the top vendors).

Feeling safer?

--vadim




Re: Cyberattack FUD

2002-11-22 Thread Vadim Antonov


On Thu, 21 Nov 2002, David Schwartz wrote:

   Suppose, for example, we'd had closed cockpit doors. The 9/11 terrorists 
 would have threatened the lives of the passengers and crew to induce the 
 pilots to open the doors. The pilots would have opened the doors because the 
 reasoning until that time was that you did whatever the hostages told you to 
 do until you could get the plane on the ground.
 
   It was the rules of engagement that failed. Nothing more, nothing less.

In a regular skyjacking the attackers want to get a ransom, or to divert the
airplane someplace.  They'll get cooperation from the pilots, too - without
any need to be present in the cockpit.  So if it is known that the policy
is not to let anyone in, no matter what happens to the passengers, the
attackers wouldn't even try.  In fact, they don't, on airlines which have
this policy.  Letting deranged people into the cockpit, in fact, places _all_
passengers at risk of an unintended crash (imagine an attacker getting
agitated and killing the pilots, or simply pulling knobs - there have been
incidents where _little kids_ allowed into the cockpit crashed commercial
planes).

The rules of engagement were patently absurd.
 
 and then by making life truly miserable for
 those who wish or have to travel, in a fit of post-disaster paranoia.
 
   The airline industry did that?

Your mileage may vary, but I do not find pleasure in being stripped in
public just because I've got long hair.  As a result I'm avoiding all air
travel if I can.  I'm sure a lot of other people do that too.

 It is not enemies who are savvy, it is managers who are stupid.  Like, the
 crash airplane into some high-value target scenario was well-aired more
 than decade ago
 
 Not the crash jetliner full of passengers into high-value target
 scenario.

Heh. Our friends the Chechens said in a TV interview back in 1995 that
they intended to do precisely that.  They identified the Kremlin as a target,
though.  And the Israelis, as a matter of fact, assume that attackers are on a
suicide mission.  And the fact that the US does not exactly inspire adoration
in the Middle Eastern parts of the world isn't news, either.
 
 If you were able to make the decision to shoot down or not shoot down the two 
 jetliners before either struck a building, knowing only that they were not 
 responding and probably hijaacked, what would you have done?

I'd have the doors in place, so as to avoid the whole situation. As I said, it
is standard procedure (keep the cockpit doors closed) in much of the world
outside the US.

   Again, it's the rules of engagement that failed.

Rules are formulated by someone; they are not God-given.  That someone was
patently incompetent - both in failing to notice explicit early warnings
and in failing to follow the best practices of his peers.
 
   So tell me what they should have done differently. Not allowed knives on the 
 plane? The terrorists would have used their bare hands. Strip searched every 
 passenger? Arm their pilots -- they weren't allowed to.

I repeat: have the doors closed, period. As for the 'they weren't allowed'
part - don't be ridiculous.  This is an oligopoly situation, and so they can
pretty much get their terms from the government - just look at those
multibillion-dollar handouts.

  I hope that US airlines
 go out of business and El Al moves in; isn't that what competition is
 supposed to be about?
 
   Except that there is no competition. Airlines don't get to make their own 
 security rules, they're largely preempted by the government ownership and 
 control of airports and the FARs.

It takes two to tango. If those large businesses cannot get reasonable
rules from the government, their lobbying groups are incompetent (and so
they deserve to go out of business).  More likely, they didn't ask.

Competition is not only about having seats filled - it is also about
dealing with governments, courts, media, etc.

 The same holds for the Internet (with special thanks to the toothless
 antimonopoly enforcement which allowed operating systems to become a
 monoculture).
 
   This is a great bit of double-think. It has nothing to do with the fact that 
 people overwhelmingly prefer to have compatible operating systems, it's the 
 fact that nobody forced them to diversify against their will.

Huh?  MS was found guilty of monopolistic practices - repeatedly.  They
are also quite ruthless about going out and strangling competition (just
watch their anti-Linux FUD campaign).  If you think they have been deterred,
just take a look at the Palladium thingie - a sure-fire public-domain-OS
killer.

In fact, given the enormous positive network externalities associated with
operating systems, it would make a lot of sense for the government to
level the playing field with affirmative action - for example, by
differential taxation of dominant and sub-dominant vendors.  Government
procurement could've been more intent on having a second supplier
of compatible OS software, too - that'd

Re: Bin Laden Associate Warns of Cyberattack

2002-11-19 Thread Vadim Antonov

On Tue, 19 Nov 2002, Richard Irving wrote:

 To Paraphrase the -OLD- KGB:
 
 Quick Comrade, we will protect you, sign here
  What ? You want to be Safe, Comrade, don't you ?
 
 s/Comrade/Citizen/

Naive :)  They didn't have to ask you to sign anything - you had to, to get a
better job, an education, etc.  Not that those signatures meant anything, as
they could just issue an invitation which you couldn't refuse.

--vadim




Re: Even the New York Times withholds the address

2002-11-19 Thread Vadim Antonov


Just to keep it off-topic :)  The kinetic water-based accumulating
stations (pumped-storage plants) actually do exist, though they use elevated
reservoirs to store the water.  The water is pumped up during off-peak hours,
and then electricity is generated during peaks.  This is not common, though,
because most energy sources can be throttled to save fuel, or to
accumulate in-flowing water naturally.  However, I think we will see more
of those accumulating stations augmenting green energy sources (wind,
solar, geothermal, tidal), which have erratic performance on shorter time
scales, unless things like very large supercapacitors or electrolyzers/fuel
cells become a lot cheaper.

In some cases accumulating stations are useful in places remote from any
regular power sources, because they can minimize energy loss in long
transmission lines (the loss is proportional to the current squared, while the
delivered power is linear in the current).
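
A back-of-the-envelope illustration with made-up line parameters - shipping
the same daily energy at a flat current loses far less in the wire than
shipping it in a peak:

    R, V = 5.0, 110e3            # line resistance (ohm) and voltage (V), illustrative

    def line_loss_kwh(power_w, hours):
        i = power_w / V                   # current needed to deliver that power
        return i * i * R * hours / 1e3    # I^2 * R loss, in kWh

    print(line_loss_kwh(5e6, 24))    # constant 5 MW all day  -> ~248 kWh lost
    print(line_loss_kwh(10e6, 12))   # 10 MW for half the day -> ~496 kWh lost

Same energy delivered, twice the loss - which is what a local accumulating
station near the load can shave off.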

--vadim

On Tue, 19 Nov 2002, blitz wrote:

 One last addition to this idiotic water idea.. since the water doesn't get 
 up there to the reservoir on the roof by itself, add your costs of huge 
 pumps, plus the cost of pumping it up there, and a less than 100% 
 efficiency in converting falling water to electricity. Also, add heating it 
 in the winter to keep it liquid instead of solid, decontamination chemicals 
 (cant have any Leigonella bacillus growing in there in the summer) Its all 
 moot, as the weight factor makes this a non-starter.




Re: PAIX

2002-11-18 Thread Vadim Antonov


I definitely would NOT want to see my doctor over a video link when I need
him.  The technology is simply not up to providing realistic telepresence,
and a lot of diagnostically relevant information is carried by things like
smell, touch, and little details.  So telemedicine is a poor substitute
for having a doctor on site, and should be used only when it is
absolutely the only option (i.e. an emergency on an airplane, etc).

(As a side note - that also explains the reluctance of doctors to rely on
computerized diagnostic systems: they feel that the system does not have
all the relevant information (which is true) and that they have to follow its
advice anyway, or run the chance of being accused of malpractice.  This is
certainly the case with textbooks - if a doctor does something clearly
against textbook advice, with a negative outcome, the lawyers have a feast -
but doctors never get rewarded for following their common sense when the
outcome is positive.  And automated diagnostic systems are a lot more
specific in their recommendations than textbooks!)

Emergency situations, of course, require some pre-emptive engineering to
handle, but by no means require a major investment to allow a major
percentage of traffic to be handled as emergency traffic.

As with VoIP, simple prioritization is more than sufficient for
telemedicine apps.  (Note that radiology applications are simply bulk file
transfers, no interactivity).

--vadim

On Mon, 18 Nov 2002, Stephen Sprunk wrote:

 
 Thus spake David Diaz [EMAIL PROTECTED]
  I agree with everything said Stephen except the part about the
  medical industry.  There are a couple of very large companies doing
  views over an IP backbone down here.  Radiology is very big on
  networking.  They send your films or videos over the network to where
  the Radiologist is.  For example one hospital owns about 6 others
  down here, and during off hours like weekends etc, the 5 hospitals
  transmit their films to where the 1 radiologist on duty is.
 
 I meant my reply to be directed only at telemedecine, where the patient is at
 home and consults their general practitioner or primary care physician via
 broadband for things like the flu or a broken arm.  While there's lots of talk
 about this in sci-fi books, there's no sign of this making any significant
 inroads today, nor does it qualify as a killer app for home broadband.
 
 I do work with several medical companies who push radiology etc. around on the
 back end for resource-sharing and other purposes.  This is quite real today, and
 is driving massive bandwidth upgrades for healthcare providers.  However, I
 don't think it qualifies under most people's idea of telemedecine.
 
 S
 




Re: PAIX

2002-11-18 Thread Vadim Antonov


On Mon, 18 Nov 2002, Jere Retzer wrote:

 Maybe it is a function of the origin and destination location + network.
 Since Portland is not a top 25 market our service has never been very 
 good that's why we started an exchange

Yep, Internet service quality is very uneven, and it does not seem to be an
easily quantifiable factor allowing consumers and businesses to select a
provider.  So, with all providers looking the same, they choose the
lowest-priced ones, thus forcing providers to go the way of air transport
(i.e. ultimately destructive price wars).

With full understanding of the political infeasibility of the proposal, I
think that the best thing ISPs could do is to fund some independent company
dedicated to publishing comprehensive regional ISP quality information -
in a format allowing apples-to-apples comparison.  Then they could justify
the price spread by having facts to back it up.

--vadim




Re: PAIX

2002-11-18 Thread Vadim Antonov

On Mon, 18 Nov 2002, Jere Retzer wrote:

 It's potentially even more important with elderly shut-ins, because
 bringing them in can be difficult and expensive and their immune
 systems are typically weaker so you should try to minimize their
 exposure to people with contagious diseases.

What happened to the good ol' house calls?

--vadim





Re: PAIX

2002-11-16 Thread Vadim Antonov

On Fri, 15 Nov 2002, Jere Retzer wrote:

 Some thoughts:
 
 - Coast-to-coast guaranteed latency seems too low in most cases that
 I've seen. Not calling CEOs and marketers liars but the real world
 doesn't seem to do as well as the promises. As VOIP takes off local
 IP exchanges will continue/increase in importance because people won't
 tolerate high latency.  What percentage of your phone calls are local?

Who cares? Voice is only 56 or so kbps. Just give it absolute queueing 
priority, and suddenly you have negligible jitter...
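
In sketch form, absolute priority is just two queues (illustrative Python; a
real scheduler would also police the voice queue):

    import collections

    class StrictPriorityLink:
        def __init__(self):
            self.voice = collections.deque()
            self.data = collections.deque()

        def enqueue(self, pkt, is_voice):
            (self.voice if is_voice else self.data).append(pkt)

        def dequeue(self):
            # voice always drains first; data gets whatever capacity is left,
            # so a voice packet never waits behind more than the one data
            # packet already on the wire
            if self.voice:
                return self.voice.popleft()
            return self.data.popleft() if self.data else None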

 - Yes, we do various kinds of video over Internet2.

People are doing various kinds of video over Internet 1; works fine.

 - While we're on the topic of local video, what happens when
 television migrates to IP networks?  

Why should it?  There's a cheap, ubiquitous, widely deployed broadcasting 
medium already.  I never understood network integration for the sake of 
network integration.

In any case, TV (of all things) does not have problems with latency or
jitter below 10s of seconds.  All TV content is pre-packaged.

--vadim





Re: PAIX

2002-11-14 Thread Vadim Antonov


On Thu, 14 Nov 2002, David Diaz wrote:

 2) There is a lack of a killer app requiring peering every 100 sq Km. 

Peering every 100 sq km is absolutely infeasible.  Just think of the
number of alternative paths the routing algorithms will have to consider.

Anything like that would require a serious redesign of the Internet's routing
architecture.

--vadim




Re: PAIX

2002-11-14 Thread Vadim Antonov


On Fri, 15 Nov 2002, Rafi Sadowsky wrote:
 VA  2) There is a lack of a killer app requiring peering every 100 sq Km. 
 VA 
 VA Peering every 100 sq km is absolutely infeasible.  Just think of the 
 VA number of alternative paths routing algorithms wil lhave to consider.
 VA 
 VA Anything like that would require serious redesign of Internet's routing 
 VA architecture.
 
   What about:
 
  IPv6 with hierarchial(sp?) geographical allocation ?
 
  BGP with some kind of tag limiting it to N AS hops ?
 ( say N=2 or N=3? )

I can think of several ways to do it, but all of them amount to 
significant change from how things are being done in the current 
generation of backbones.

--vadim




Re: BGP security in practice

2002-11-04 Thread Vadim Antonov


On Mon, 4 Nov 2002, Eric Anderson wrote:

 Time for a new metaphor, methinks.

There's one.  Defensive networking :)

--vadim




Re: More federal management of key components of the Internet needed

2002-10-24 Thread Vadim Antonov


On Wed, 23 Oct 2002, Alan Hannan wrote:

  I don't understand how giving the US federal government management control
  of key components of the Internet will make it more secure. 
 
   It worked for airline security.

Yeah... removing shoes and randomly searching peace activists while
allowing glass bottles containing unknown liquids to be carried on board.

Holding the air companies liable for lax security could've been a lot more
effective.

--vadim




Re: Cogent service

2002-09-21 Thread Vadim Antonov



On Fri, 20 Sep 2002, Joe Abley wrote:

 On Fri, Sep 20, 2002 at 06:40:56PM -0700, Vadim Antonov wrote:
 
 This is all obvious stuff, of course. However, the derived rule of
 thumb long traceroute bad, short traceroute good is the kind of
 thing that can induce marketing people to require engineers to
 deploy MPLS, and is hence Evil and Wrong, and Most Not Be Propagated
 Without Extreme Caution.

Feeding any information to clueless people should be done with Extreme 
Caution :)  A fool with a little knowledge is a lot more dangerous than 
just a fool.

How MPLS




software routers (was: Cogent service)

2002-09-21 Thread Vadim Antonov



On Fri, 20 Sep 2002, Stephen Sprunk wrote:

 If you think you can make a gigabit router with PC parts, feel free.

You may be surprised to learn that the BBN folks did practically that
(with a different CPU) in their 50 Gbps box (MGR). They had OC-48C line cards
and used an Alpha 21164 CPU, with its pretty small 8KB/96KB on-chip caches, to
do packet forwarding.

See C.Partridge et al in IEEE/ACM Transactions on Networking, 
6(3):237-248, June 1998.

CPUs are quite a bit faster nowadays, and you can get things like a _quad_
300MHz PPC core on an FPGA plus 20+ 3.2Gbps serdes I/Os - all on one chip.
So building a multigigabit software router is a no-brainer.

(16-4-4-4-4 was in Pluris proof-of-concept; the smaller branching factor 
in the radix tree was to get a better match to Pentium's L-1 cache line 
size, and to make prefix insertion/deletion faster).

--vadim

PS.  We were talking about the _mid_ 90s.  Back then the SSE did about 110kpps
 (not the advertised 250kpps) in real life, and 166MHz Pentiums
 were quite available at Fry's.

PPS. I had exactly that argument with the ex-cisco hardware folks who came to
 Pluris; they prevailed, and fat gobs of luck it brought them.  They
 ended up building a box with exactly the same fabric and comparable
 mechanical and power-dissipation parameters as the concept I drew as
 a starting point in '98.  They wasted time (and $260M) on developing those
 silly ASICs when CPUs and some FPGAs could have done just as nicely. I'm
 glad that I wasn't around to participate in that folly.




Re: Cogent service

2002-09-20 Thread Vadim Antonov



On Fri, 20 Sep 2002, Stephen Stuart wrote:

 Regarding CPU cycles for route lookups:
 
 Back in the mid-1990s, route lookups were expensive. There was a lot
 of hand-wringing, in fact, about how doing route lookups at every hop
 in larger and larger FIBs had a negative impact on end-to-end
 performance. One of the first reasons for the existence of MPLS was to
 solve this problem.

This was a totally bogus reason from the very beginning. Given that real
backbones carry no prefixes longer than 24 bits, the worst-case lookup in a
16-4-4-4-4-bit radix tree takes 5 memory reads.  Given that a lot of
routes are aggregated, and given the ubiquity of fast data caches (which can
safely hold the top 3 levels of a full FIB), the average number of memory reads
needed by the general-purpose CPUs available in the mid-90s to do a route
lookup is (surprise) - about 1.2
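
For the curious, here is a small Python model of such a 16-4-4-4-4 stride trie
(longest-prefix match only, naive prefix expansion on insert - an illustration
of the lookup cost, not a real FIB implementation):

    STRIDES = (16, 4, 4, 4, 4)

    class Node:
        def __init__(self, stride):
            size = 1 << stride
            self.nh    = [None] * size   # best next hop ending in this stride
            self.plen  = [-1] * size     # length of the prefix that set nh
            self.child = [None] * size   # pointer to the next level

    def insert(root, prefix, plen, nexthop):
        node, consumed = root, 0
        for level, stride in enumerate(STRIDES):
            index = (prefix >> (32 - consumed - stride)) & ((1 << stride) - 1)
            if plen <= consumed + stride:
                # prefix ends inside this stride: expand over all covered slots
                span = 1 << (consumed + stride - plen)
                base = index & ~(span - 1)
                for i in range(base, base + span):
                    if plen > node.plen[i]:
                        node.nh[i], node.plen[i] = nexthop, plen
                return
            if node.child[index] is None:
                node.child[index] = Node(STRIDES[level + 1])
            node, consumed = node.child[index], consumed + stride

    def lookup(root, addr):
        node, consumed, best = root, 0, None
        for stride in STRIDES:
            index = (addr >> (32 - consumed - stride)) & ((1 << stride) - 1)
            if node.nh[index] is not None:
                best = node.nh[index]        # longest match seen so far
            node = node.child[index]
            if node is None:
                return best                  # a /24-only table never goes past read 3
            consumed += stride
        return best

    root = Node(STRIDES[0])
    insert(root, 0x0A000000, 8, "if1")       # 10.0.0.0/8
    insert(root, 0x0A010200, 24, "if2")      # 10.1.2.0/24
    print(lookup(root, 0x0A010203), lookup(root, 0x0A630000))   # if2 if1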

In fact, full-featured IP forwarding (including Fair Queueing and packet
classification) at 120 kpps was demonstrated using a 133MHz Pentium MMX.
It beat the crap out of the 7000's SSE.  It wasn't that hard, either; after
all, that is more than 1000 CPU cycles per packet.  The value proposition of
ASIC-based packet routing (and the use of CAMs) was always quite dicey.

For the arithmetically inclined, I'd suggest doing the same calculation for a
P-4 running at 2 GHz, assuming 1000 cycles per packet and an average packet
size of 200 bytes.  Then estimate the cost of that hardware and spend some time
wondering about the prices of OC-192 cards and about the virtues of duopoly :)
Of course, doing line-rate forwarding of no-payload Christmas-tree
packets is neat, but does anyone really need it?
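
The invited arithmetic, spelled out (same assumptions: 1000 cycles per packet,
200-byte average packets):

    cycles_per_second = 2.0e9           # 2 GHz P-4
    pps  = cycles_per_second / 1000     # 2 Mpps
    gbps = pps * 200 * 8 / 1e9          # ~3.2 Gbit/s of average-sized packets
    print(pps, gbps)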

 In the case of modern hardware, forwarding delay due to route lookup
 times is probably not a contributing factor to latency.

In real life (as opposed to benchmark tests) nobody cares about
forwarding delay.  Even with dumb implementations it is 2 orders of
magnitude smaller than the speed-of-light delays in long-haul circuits (and on
short hops end-host performance is limited by bus speed anyway).

 More hops can mean many other things in terms of delay, though - in
 both the more delay and less delay directions, and countering the
 more hops means more delay perception will (I think) always be a
 challenge.

More hops mean more queues, i.e. more potential congestion points.  With the
max queue size selected to be RTT*BW, the maximal packet latency is,
therefore, Nhops*RTT.  In steady state, with the whole network uniformly
loaded, the mean latency is (Nhops*MeanQueueLength + 1)*RTT, where
MeanQueueLength runs from 0 (empty queues) to 1 (full queues).  In real
networks, queueing delays tend to contribute mostly to delay variance, not to
the mean delay (in an underloaded network the average queue size is < 1
packet).

In other words, a large number of hops makes life a lot less pleasant for 
interactive applications; file transfers generally don't care.
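
The same estimate as a two-line helper, if you want to play with the numbers
(the values below are made up):

    def mean_latency(n_hops, mean_queue_occupancy, rtt):
        # occupancy 0.0 = empty queues, 1.0 = full RTT*BW queues at every hop
        return (n_hops * mean_queue_occupancy + 1) * rtt

    print(mean_latency(12, 0.05, 0.070))   # 12 lightly loaded hops, 70 ms RTT -> ~0.112 s
    print(mean_latency(12, 1.00, 0.070))   # same path, all queues full       -> ~0.91 s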

--vadim




Re: How do you stop outgoing spam?

2002-09-10 Thread Vadim Antonov



<heresy>

Or unless we design a network which does not rely on the good will of its
users for proper operation.

</heresy>

--vadim

On Tue, 10 Sep 2002 [EMAIL PROTECTED] wrote:

 Most spam-fighting efforts on the technical side make the basic assumption
 that spam has similar characteristics to a properly designed TCP stack - that
 dropped/discarded spam-grams will trigger backoff at the sender.  Unfortunately,
 discarding a high percentage of the grams will trigger a retransmit multiple
 times.
 
 Spam is likely going to be a problem until we either hire some thug muscle from
 pick ethnic organized crime group, or the government does it for us...




Re: How do you stop outgoing spam?

2002-09-10 Thread Vadim Antonov



On Tue, 10 Sep 2002, Iljitsch van Beijnum wrote:

 Or we throw out SMTP and adopt a mail protocol that requires the sender to
 provide some credentials that can't be faked. Then known spammers are easy
 to blacklist.

Credentials that can't be faked are a rather hard-to-implement
concept, simply because there's no way to impose a single authority on
the entire world.  The question is whom to trust to certify the sender's
authenticity.  I have correspondents in parts of the world where I'd be
very reluctant to trust the proper authorities.  It'd be so very easy to
silence anyone by _not_ issuing credentials.

Besides, anonymous communication has its merits.  So what's needed is
zero-knowledge authentication and a web-of-trust model.  And don't forget
key revocation and detection of fake-identity factories.  Messy, messy,
messy.

--vadim




Re: How do you stop outgoing spam?

2002-09-10 Thread Vadim Antonov



On Tue, 10 Sep 2002, Barry Shein wrote:

 And, although some won't like me saying this, having the technical
 community deal with these new criminals is a bit like sending the boy
 scouts after Al-Qaida.
 
 Unfortunately it's going to take a much harsher view of reality than
 maybe this regexp will stop crime.


Last time I checked, policemen weren't designing door locks.  They're not even
in the business of selling them.

What we have is a lot of open doors with prominent "come in and
take whatever you please" signs on them.  This can and should be fixed by the
technical community.

The US is not going to send troops to Nigeria just to catch some spammers
anyway.  Consider that a harsher view of reality :)

--vadim

PS. Criminals are criminals because they are stupid.  If they were smart
they could make a good living legally.  Governments avoid competition,
too.




Re: Unrecognised packets

2002-08-20 Thread Vadim Antonov



Q.931 is built into H.323 (a VOIP call control protocol). Bellhead 
standards are weird.

Hope this helps...

--vadim

On Tue, 20 Aug 2002, cw wrote:

 I'm not familiar with all the protocols involved, so if my searches
 are correct Q.931 is an ISDN control protocol. This is odd because
 this is coming over a lan and neither machines have any ISDN hardware
 or software.
 




Re: Dave Farber comments on Re: Major Labels v. Backbones

2002-08-17 Thread Vadim Antonov



On 17 Aug 2002, Paul Vixie wrote:

 Am I the only one who finds it odd that it's illegal to export crypto
 or supercomputers to certain nations or to sell such goods with
 prior knowledge that the goods are going to be resold in those
 nations... or even to travel to certain nations... yet no law
 prohibits establishing a link and a BGP session to ISP's within those
 nations, or to ISP's who are known to have links and BGP sessions to
 ISP's within those nations?

Well... it is not always legal.  The Trading with the Enemy Act may
prohibit ISPs from connecting with countries on the list.  Back in the old
days I had a discussion on the subject with Steve Goldstein (regarding
Iran).
 
 I'm not sure I'd be opposed to it, since economic blockades do appear
 to have some effect, and since data is a valuable import/export
 commodity.  I think homeland security is a good thing if it means a
 mandate for IPsec, DNSSEC, edge RPF, etc... but if we *mean* it, then
 why are US packets able to reach ISP's in hostile nations?

This is silly, because:

a) no one can deny connectivity to the bad guys.  You can merely create a
minor annoyance for them, in the form of having to use a proxy somewhere in
Europe.

b) all you can really achieve is to restrict access for their populace,
effectively making the bad guys' job easier (hint: governments in
non-friendly countries aggressively filter access to Western
networks themselves).

It is a known phenomenon that given the Western cultural dominance in the
net, it is one of the best pro-Western propaganda tools around. Propaganda
(in the right direction) is good, because if you can convince someone to
come to your side, you don't have to kill him to prevail.

I can only hope that H.S. Dept will see it this way.

 I want to know what the homeland security department is likely to do
 about all this, not what is good/bad for the citizens of hostile
 nations or even nonhostile nations.

Likely nothing, unless they are complete incompetents.  The point is:  
there's no feasible way to achieve any gains by restricting access on
per-country basis.

It is a lot more useful to suppress the enemy propaganda by going after
its sources which are easily located.  I would suggest going after CNN
first [sarcasm implied].

--vadim




RE: $400 million network upgrade for the Pentagon

2002-08-13 Thread Vadim Antonov


On Wed, 14 Aug 2002, Brad Knowles wrote:

 
 At 5:13 PM -0500 2002/08/13, Blake Fithen wrote:
 
   Is this sensitive info?   Couldn't someone (theoretically) aim a
   beam at an unoccupied office and another at their objective
   office then filter out the 'noise'?
 
   Actually, I don't know for sure how it's implemented.  They may 
 have separate sound streams for each window.  Moreover, this was a 
 few years ago (I left in 1995), and there may have been changes since 
 then.  It would certainly be a lot easier to use individual speakers 
 fed by electrical wiring, than pumping a lot of air around from a 
 central location.

Even easier is to glue a piezoelectric transducer to the glass and feed it
some noise, modulated to look like speech, from a gadget which may cost all
of $30 in parts.  Detecting IR laser emissions and sounding an alarm is
also a good idea :)

--vadim




Re: Microslosh vision of the future

2002-08-11 Thread Vadim Antonov




Microsoft already duped software consumers into buying into fully
proprietary software.  Given the prevalent time horizon of the average IT
manager's thinking, I fully expect Microsoft to get that stuff deployed
before the poor saps start realizing they're being ripped off.  After that
Microsoft will leverage their market power to exclude any competition -
exactly like they did before, on numerous occasions.

Their PR budget is bigger than the GDP of some nations.  They're ruthless and
show a remarkable lack of respect for the notions of fairness or the common
good.  Be afraid.

--vadim

On Sun, 11 Aug 2002, David Schwartz wrote:

   Microsoft can have whatever vision of the future they want and can use any 
 resources at their disposal to bring their vision to light. Everybody has 
 that right. If I don't like it, I won't buy it. If they convince customers 
 that they gain more than they lose, only a gun will make them buy it. I don't 
 see Bill Gates packing heat any time soon.
 
   *yawn*
 
 -- 
 David Schwartz
 [EMAIL PROTECTED]




Re: endpoint liveness (RE: Do ATM-based Exchange Points make sensean ymore?)

2002-08-10 Thread Vadim Antonov



It makes little sense to detect transient glitches.  Any possible reaction
to those glitches (i.e. withdrawal of exterior routes with subsequent
reinstatement) is more damaging than the glitches themselves.

--vadim

On Fri, 9 Aug 2002, Lane Patterson wrote:

 
 BGP keepalive/hold timers are configurable even down to granularity 
 of link or PVC level keepalives, but for session stability reasons, 
 it appears that most ISPs at GigE exchanges choose not to
 tweak them down from the defaults.  IIRC, Juniper is 30/90 and Cisco is
 60/180.  My gut feel was that even something like 10/30 would be 
 reasonable, but nobody seems compelled that this is much of an
 issue.
 
 Cheers,
 -Lane
 
 -Original Message-
 From: Petri Helenius [mailto:[EMAIL PROTECTED]]
 Sent: Friday, August 09, 2002 3:07 PM
 To: Mikael Abrahamsson; [EMAIL PROTECTED]
 Subject: Re: Do ATM-based Exchange Points make sense anymore?
 
 
 
 
  What functionality does PVC give you that the ethernet VLAN does not?
 
 That's quite easy. Endpoint liveness. A IPv4 host on a VLAN has no idea
 if the guy on the other end died until the BGP timer expires.
 
 FR has LMI, ATM has OAM. (and ILMI)
 
 Pete
 




Re: Do ATM-based Exchange Points make sense anymore?

2002-08-10 Thread Vadim Antonov



On 10 Aug 2002, Paul Vixie wrote:

 why on god's earth would subsecond anything matter in a nonmilitary situation?

Telemedicine, tele-robotics, etc, etc.  Actually, there are a lot of cases
where you want subsecond recovery.  The current Internet routing
technology is not up to the task, so people who need it have to build
private networks and pay an arm and a leg for it, too.




Re: PSINet/Cogent Latency

2002-07-23 Thread Vadim Antonov




A long, long time ago I wrote a small tool called snmpstatd.  Back
then Sprint management was gracious enough to allow me to release it as
public-domain code.

It basically collects usage statistics (in 30-sec peaks and 5-min
averages), memory and CPU utilization from routers, by performing
_asynchronous_ SNMP polling.  I believe it can scale to about 5000-1
routers.  It also performs accurate time-base interpolation for 30-sec
sampling (i.e. it always requests the router's local time and uses it for
computing accurate 30-sec peak usage).
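
The asynchronous-polling structure, sketched in modern Python terms
(snmpstatd itself predates all of this; snmp_get below is a hypothetical stub
standing in for whatever SNMP library you use):

    import asyncio

    async def snmp_get(host, oid):
        ...   # hypothetical: issue one SNMP GET and return the value

    async def poll_router(host, oids, timeout=5):
        try:
            vals = await asyncio.wait_for(
                asyncio.gather(*(snmp_get(host, o) for o in oids)), timeout)
            return host, vals
        except asyncio.TimeoutError:
            return host, None        # a dead router delays nobody else

    async def poll_all(routers, oids):
        # every router is polled concurrently, so the 30-sec sampling grid
        # holds even when some boxes are down (unlike serial, MRTG-style polls)
        return await asyncio.gather(*(poll_router(r, oids) for r in routers))

    # asyncio.run(poll_all(["r1.example.net", "r2.example.net"], ["ifInOctets.1"]))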

The data is stored in text files which are extremely easy to parse.

The configuration is text-based; it also includes compact status alarm
output (i.e. which routers/links are down), a PostScript chart generator,
and a troff/nroff-based text report generator, with summary downtime and
usage figures plus significant events.  The tool was routinely used to
produce reporting on ICM-NET performance for the NSF.

This thing may need some hacking to accommodate latter-day IOS bogosities,
though.

If anyone wants it, I have it at www.kotovnik.com/~avg/snmpstatd.tar.gz

--vadim

On Mon, 22 Jul 2002, Gary E. Miller wrote:

 
 Yo Alexander!
 
 On Tue, 23 Jul 2002, Alexander Koch wrote:
 
  imagine some four routers dying or not answering queries,
  you will see the poll script give you timeout after timeout
  after timeout and with some 50 to 100 routers and the
  respective interfaces you see mrtg choke badly, losing data.
 
 Yep.  Anything gets behind and it all gets behind.
 
 That is why we run multiple copies of MRTG.  That way polling for one set
 of hosts does not have to wait for another set.  If one set is timing
 out the other just keeps on as usual.
 
 RGDS
 GARY
 ---
 Gary E. Miller Rellim 20340 Empire Blvd, Suite E-3, Bend, OR 97701
   [EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676
 
 




Re: No one behind the wheel at WorldCom

2002-07-16 Thread Vadim Antonov



On Mon, 15 Jul 2002, Pedro R Marques wrote:

  From a point of view of routing software the major challenge of
 handling a 256k prefix list is not actually applying it to the
 received prefixes. The most popular BGP implementations all, to my
 knowledge, have prefix filtering algorithms that are O(log2(N)) and
 which probably scale ok... while it would be not very hard to make
 this a O(4) algorithm that is probably not the issue.

Mmmm... There's also the issue of applying AS-path filters, which are (in the
cisco world) regular expressions.  Although it is possible to compile
several REs together into a single FSM (lex does exactly that), I'm
not sure IOS and/or JunOS do that.
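
For illustration, the several-REs-in-one-pass idea in Python (its re engine
backtracks rather than building a lex-style DFA, and real IOS AS-path regexps
have their own dialect, but the interface is the point):

    import re

    FILTERS = [r"^65001(_\d+)*$",      # anything originated behind AS 65001
               r"^65010_65020$",       # one specific two-hop path
               r"_6500[5-9]_"]         # any path transiting AS 65005-65009

    COMBINED = re.compile("|".join(f"(?:{f})" for f in FILTERS))

    def path_matches(as_path):         # as_path given as "65001_65100_65200"
        return COMBINED.search(as_path) is not None

    print(path_matches("65001_65100"), path_matches("65020_65010"))   # True False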
 
--vadim




Re: No one behind the wheel at WorldCom

2002-07-16 Thread Vadim Antonov




 I would still contend that the number 1 issue is how you do express
 the policy to the routing code. One could potentially attempt to
 recognise the primary key is a route-map/policy-statement and compile
 it as you suggest. It is an idea that ends up being tossed up in the
 air frequently, but would that solve anything ?

Actually, expressing routing policy on a per-router basis is kind of silly,
and an artifact of the enterprise-box mentality.  A useful design would allow
formulation of the policy for the entire network, with subsequent
synchronization of the routers with the policy repository.

 Is there the ability in the backend systems to manage that effectivly 
 and if so is text interface via the CLI the most apropriate API ?

The Cisco-style CLI is extremely annoying and silly.  There's no useful way to
perform a switch-over to a new config, and there's no good way to compare
two configs and produce an applicable difference.
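
A naive illustration of producing an applicable difference between two flat
configs (real configs are hierarchical and order-sensitive, so treat this
strictly as a sketch of the idea):

    def config_diff(old, new):
        old_lines, new_lines = set(old.splitlines()), set(new.splitlines())
        removals  = [f"no {line}" for line in sorted(old_lines - new_lines)]
        additions = sorted(new_lines - old_lines)
        return removals + additions

    old = "ip route 192.0.2.0 255.255.255.0 198.51.100.2\nsnmp-server community foo RO"
    new = "ip route 192.0.2.0 255.255.255.0 198.51.100.6\nsnmp-server community foo RO"
    print(config_diff(old, new))
    # ['no ip route 192.0.2.0 255.255.255.0 198.51.100.2',
    #  'ip route 192.0.2.0 255.255.255.0 198.51.100.6']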
 
That said, I think a well-designed CLI is a powerful thing, and can be 
used to integrate routers with provider-specific NMSes.

--vadim




Re: All-optical networking Was: [Re: Notes on the Internet for Bell Heads]

2002-07-12 Thread Vadim Antonov




The discussion is certainly entertaining, but -- 

1) All-optical networking is a bunch of nonsense until optical processing
   ability includes a complete set of logic and storage elements - i.e.
   until fully blown optical computing is achieved.

   Rationale for the statement: telecom is fundamentally a multiplexing
   game, and w/o stochastic multiplexing a network won't be able to
   achieve price/performance comparable to that of a stochastically muxed
   network.  Stochastic multiplexing requires logic and storage.

   The current optical gates are all electrically controlled, and either
   mechanical (and they wear out rather quickly, so you can't switch them
   per-packet or whatever), or inherently slow (liquid crystals), or
   potentially fast (poled LiNbO3 structures, for example) but requiring
   tens of kV per mm, which makes them slow to charge/discharge.

   Besides, yours truly invented, years ago, a practical way to achieve
   nearly infinite switching capacity in electronics.  Too bad Pluris didn't
   survive the WorldCom scandal, as some investors suddenly got cold feet.

2) Wiretapping does not require storage of the entire traffic stream; and 
   filtering for the target sessions can be done relatively easily at wire 
   speed.

3) Nitpicking:

 I think you may be thinking about quantum-entangled pairs. That
 phenomena is better suited to cryptography than general networking.
 
 In an entangled system, both recipients would know pretty quickly that they
 did not receive their photons as there would be an early 'measurement' on
 one end, and a missing photon on the other.

   You cannot detect a measurement per se.  What you get is skewed
   statistics: the entangled pairs violate the Bell inequalities, which no
   classical system can.  This gives an opportunity to detect the insertion of
   anything destroying the entanglement of the pair - but only statistically.
   You need to send enough pairs to distinguish normal noise from intrusion
   reliably.

   Besides, quantum entanglement cannot be used to send any information at 
   all.  What it gives is the ability to get co-ordinated sets of 
   measurements at the ends, but the actual results of those measurements 
   are random.  I.e. you can generate identical vectors of random bits at the 
   ends, but cannot send any useful message across using only 
   entanglement.

   Therefore quantum entanglement (aka the Einstein-Podolsky-Rosen
   paradox) does not violate the central postulate of special relativity
   (that no information can propagate faster than the speed of light in
   vacuum, in any non-accelerating reference frame).
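
The toy sketch promised in point 1 (all parameters invented) - how much
capacity a statistically multiplexed link needs compared to a
circuit-per-source design, under a crude independent-bursty-source model:

    from math import comb

    N, p, eps = 100, 0.1, 1e-4   # sources, per-source activity factor, overflow target

    def tail(n, prob, k):
        """P(more than k of n sources are simultaneously active)."""
        return sum(comb(n, i) * prob**i * (1 - prob)**(n - i)
                   for i in range(k + 1, n + 1))

    # A circuit-per-source (TDM) design needs one unit of capacity per
    # source: N units.  A stat-muxed link only needs enough capacity that
    # overflow is rare - but deciding, per packet, what to forward and what
    # to buffer is exactly the logic and storage an all-optical fabric lacks.
    stat_cap = next(k for k in range(N + 1) if tail(N, p, k) < eps)
    print(f"TDM capacity: {N}, stat-mux capacity for overflow < {eps}: {stat_cap}")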

--vadim




Re: Vixie puts his finger squarely on the key issue Re: Sprint peering policy

2002-06-30 Thread Vadim Antonov




Oh, no. If anyone has illusions that politicos can somehow fix the
situation, he ought to do a serious reality check.  If anything, they made
that mess in the first place by creating ILEC monopolies and allowing
those supposedly regulated monopolists to strangle the emerging last-mile
broadband providers - with the obvious result that the backbones lost the
projected traffic and revenue streams.

(Of course, they were also very lax in policing conflicts of interest and
accounting practices).

The serious economic trouble means that some of those tier-1 providers
will go into Chapter 11 and emerge with paid-for capacity and no crippling
debts (and with sane executives, too).  At least 4 will survive; the
potential value of the remaining ones to their creditors will grow as
competitors die off.

--vadim

On Sat, 29 Jun 2002, Gordon Cook wrote:

 We are now halfway through 2002.  the build out is complete and most 
 of the builders are  either in chapter 11 or in danger of going 
 there.  Does anyone believe that the non regulation arguments of the 
 build out phase still hold?  If so other than for reasons of blind 
 ideology (all regulation by definition is bad), why?




RE: remember the diameter of the internet?

2002-06-18 Thread Vadim Antonov



On Tue, 18 Jun 2002, Martin, Christian wrote:

 Regarding the diameter of the Internet - I'm still trying to 
 figure out 
 why the hell anyone would want to have edge routers (instead of dumb 
 TDMs) if not for inability of IOS to support large numbers of virtual 
 interfaces.  Same story goes for clusters of backbone routers.
 
 When ANY router becomes as reliable as a dumb TDM device, then maybe we can
 begin collapsing the POP topology.  However, the very nature of the Internet
 almost prevents this reliability from being achieved (having a shared
 control and data plane seems to be the primary culprit).

Uhm. Actually, control and data planes are rather separate inside modern
routers. What is flaky is the router software.  That's what you get when your
router vendor sells you 1001 ways of screwing up your routing :)

 There are routers out there today that can single-handedly replace
 entire POPs at a fraction of the rack, power, and operational cost.  
 Hasn't happened, tho.

I know two boxes like that - one is broken-as-designed, with a copper
distributed fabric; the other (courtesy of VCs who managed to lose nearly
the entire engineering team mid-way but hired a bunch of marketers long
before there was anything to ship) is still in beta.

 I don't like wasting ports for redundant n^2 or log(n^2) interconnect
 either, but router and reliability mix like oil and water...

Actually, not so.  A router is a hell of a lot simpler than a Class-5 switch,
particularly if you don't do ATM, FR, X.25, MPLS, QoS, multicast, IPv6,
blah, blah, blah.

Demonstrably (proof by existence), those switches can be made reasonably
reliable.  So can routers.  It's the fabled computer-tech culture of "be
crappy, ship fast, pile features sky-high, test after you ship" - aka OFRV's
Micro$oft envy - which is the root evil.

--vadim




Re: China's cable firms fight deadly turf war

2002-05-30 Thread Vadim Antonov




 eh, thats nothing. Try doing work in some of the buildings in NY without a
 Union card ;)

Trade unions are schools of communism. 
- Vladimir Il'yich Lenin

--vadim




Re: IP renumbering timeframe

2002-05-30 Thread Vadim Antonov



On Thu, 30 May 2002, Richard A Steenbergen wrote:

 Yes, demonstrating things to ARIN is remarkably annoying.

Demonstrating as in getting rid of monstrosities? :)

--vadim 



