Re: Color vision for network techs

2012-09-01 Thread Vadim Antonov
The simple solution for color-perception issues is to carry some cheap 
red/green 3D glasses... they would make discriminating between LED colors as 
easy as closing one eye:)


Re: F-ckin Leap Seconds, how do they work?

2012-07-05 Thread Vadim Antonov
On Thu, 2012-07-05 at 14:00 +0400, Dmitry Burkov wrote:
> On Jul 5, 2012, at 1:35 PM, Vadim Antonov wrote:
> 
> > On Wed, 2012-07-04 at 20:48 -0700, Owen DeLong wrote:
> >> 
> >> Given that we don't seem to be able to eliminate the absurdity of DST,
> >> I doubt that either of those proposals is likely to fly.
> > 
> > Russian govt. did eliminate DST.
> > 
> > http://www.rt.com/news/daylight-saving-time-abolished/
> 
> :)
> http://themoscownews.com/vote/20120629/189902272-results.html

75.9% of people are dimwits :)

--vadim



Re: F-ckin Leap Seconds, how do they work?

2012-07-05 Thread Vadim Antonov
On Wed, 2012-07-04 at 20:48 -0700, Owen DeLong wrote:
> 
> Given that we don't seem to be able to eliminate the absurdity of DST,
> I doubt that either of those proposals is likely to fly.

Russian govt. did eliminate DST.

http://www.rt.com/news/daylight-saving-time-abolished/

--vadim



Re: F-ckin Leap Seconds, how do they work?

2012-07-03 Thread Vadim Antonov

On 7/3/2012 6:28 PM, Steve Allen wrote:

On 2012 Jul 3, at 18:13, Vadim Antonov wrote:

PS. I would vote for using TAI instead of UTC as the
non-relativistic time base in computer systems.


A problem with the use of TAI is that the BIPM and CCTF (who make
TAI) expressed strongly that they do not want it used as a system
time in document CCTF09-27
http://www.bipm.org/cc/CCTF/Allowed/18/CCTF_09-27_note_on_UTC-ITU-R.pdf
so strongly that they end by contemplating the discontinuation
of TAI.


There's always the possibility of using pseudo-TAI internally by 
reconstructing it from UTC. This is not the best solution (because it 
requires systems to have a long-term memory of past leap seconds, or
the ability to access reliable storage of such), but at least it removes 
the burden of doing complicated time handling from application software.
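As a sketch of what that reconstruction looks like (my own toy code; the leap table here is abridged, and a real system needs the complete, current list, e.g. from IERS Bulletin C):

```python
# Sketch: derive a pseudo-TAI from a POSIX/UTC-style timestamp using a
# table of past leap seconds.  Table is abridged for illustration.
import calendar

# (UTC instant the offset took effect, cumulative TAI-UTC after it)
LEAP_TABLE = [
    (calendar.timegm((1972, 1, 1, 0, 0, 0)), 10),
    (calendar.timegm((2009, 1, 1, 0, 0, 0)), 34),
    (calendar.timegm((2012, 7, 1, 0, 0, 0)), 35),
]

def pseudo_tai(utc_epoch):
    """Shift a UTC epoch by the TAI-UTC offset in force at that time."""
    offset = 0
    for leap_at, tai_minus_utc in LEAP_TABLE:
        if utc_epoch >= leap_at:
            offset = tai_minus_utc
    return utc_epoch + offset

# Just after the 2012-06-30 leap second, TAI-UTC is 35 s.
t = calendar.timegm((2012, 7, 5, 12, 0, 0))
print(pseudo_tai(t) - t)  # 35
```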


Actually, what they are saying is that they would discontinue TAI *if* the 
definition of UTC is amended to remove future leap seconds.  The 
document makes it clear that they recognize the necessity of a continuous 
coordinate time standard.


--vadim



Re: F-ckin Leap Seconds, how do they work?

2012-07-03 Thread Vadim Antonov

On 7/3/2012 4:15 PM, Tony Finch wrote:

Vadim Antonov  wrote:


But in theory, if you can get the technical wrinkles worked out, you can
derive the same frequency standard in your lab with a single instrument.

(One more issue is that non-relativistic time is not only the frequency of
oscillators, but also a reference point).


Your parenthetical point explains why TAI does not tick at the same rate
as the SI second in your lab, especially if your lab is (for example) in
Colorado. You have to adjust the frequency depending on your difference in
gravitational potential from the geoid.

Tony.



I'm afraid I didn't express my thoughts clearly... I meant that besides 
the agreement on what a second is, there is also an agreement on when the 
zeroth second was, a fixed reference point in time. *That* cannot be 
recreated in a lab. (You can correct for the relativistic effects of local 
gravity and a moving frame of reference, though, to match conditions on 
the Earth and thus the SI definition of the second).


However, the whole concept of a universal standard of _time_ (as opposed 
to a standard of the second) is thoroughly non-relativistic, because it 
claims to have clocks at different locations ticking simultaneously.  
Special relativity, of course, makes it clear that simultaneity is in 
the eye of the observer:)  In the end, you can only do limited 
Einstein-Poincare synchronization within a chosen reference frame.


An interesting factoid: the notion of synchronized time differs if you 
synchronize clocks East-to-West versus West-to-East, due to the 
Sagnac effect:)


--vadim

PS. I would vote for using TAI instead of UTC as the non-relativistic 
time base in computer systems. The idea of expressing UTC as a single 
number (instead of a (date, time-of-day) tuple) is silly 
because it creates aliases or gaps.  You cannot do simple interval 
arithmetic over UTC, any more than you can do that over local daylight 
saving time; and doing accurate time computation for events in the 
future is impossible in both, because they depend on unpredictable 
factors (Earth rotation rate, politics, etc).
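A tiny illustration of the interval-arithmetic problem, using the 2012-06-30 leap second (my example; Python's leap-second-ignorant calendar arithmetic stands in for POSIX time):

```python
# Two UTC instants one minute either side of the 2012-06-30 leap second.
# POSIX-style arithmetic (which calendar.timegm mimics) says 120 seconds
# elapsed; in reality 121 SI seconds did, because 23:59:60 existed on the
# clock but has no distinct representation in the single-number encoding.
import calendar

a = calendar.timegm((2012, 6, 30, 23, 59, 0))
b = calendar.timegm((2012, 7, 1, 0, 1, 0))

print(b - a)   # 120 -- the leap second simply vanishes from the interval
```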


TAI is also not a fixed given, because the standards are being refined, 
but at least the refinements tend to be predictably in the direction of 
improved accuracy, so they don't break things.




Re: F-ckin Leap Seconds, how do they work?

2012-07-03 Thread Vadim Antonov

On 7/3/2012 2:35 PM, Tony Finch wrote:

Peter Lothberg  wrote:


As the definition of an atomic second is 9192631770 complete
oscillations of cesium-133 between energy levels 3 and 4, "everyone" can
make a second in their lab, that's TAI.


No, TAI isn't based on the SI second you realise in your lab. It's the SI
second realised on the geoid by a large fleet of clocks.


I think if anyone here is well aware of that, it'd be Peter:)

The reason for the fleet of clocks is partly political, partly practical 
(cesium clocks are not the most precise... so averaging between a bunch 
of them is used to calibrate better master clocks).  But in theory, if 
you can get the technical wrinkles worked out, you can derive the same 
frequency standard in your lab with a single instrument.


(One more issue is that non-relativistic time is not only the frequency 
of oscillators, but also a reference point).


--vadim



Re: Megaupload.com seized

2012-01-20 Thread Vadim Antonov



  "Without the permission of the copyright holder" _is_ contrary to
  statute, and thus 'against the law'.  As such 'illegal' is _not_
  an incorrect term to apply to the situation.

  It may not be a _criminal_ violation, but it is still proscribed by law.

  "Illegal" and "criminal" -- _these_ are different things.



Storing copyrighted material in *any* place, file-sharing server or not, 
is _not_ illegal under the current law as it stands.  There is no law 
which dictates the location of a file with legally obtained content that I 
keep for my personal use.  I have no obligation to prevent unauthorized 
access to copyrighted material by third parties. I don't need the 
permission of the copyright owner to make copies for my own personal use, 
and I don't need permission to entrust the keeping of those copies in any 
place to any agent - as long as that agent does not *use* those copies.


What is illegal is the act of publishing this material (making a public 
performance of it) and making copies for use by other people without 
permission from the copyright holder.  In the digital world that is, 
basically, publishing a reference (and a decryption password) in a 
public forum, or otherwise sharing it with others.


That's the dirty secret behind all that PIPA/SOPA lawmaking - as things 
stand now, as long as file-sharing services refrain from *publishing* 
the material (as opposed to merely storing it and allowing the rightful 
owner(s) to download it - without any obligation to actually verify 
possession of ownership rights) and have a procedure for dealing 
with takedowns, they are in the clear, legally.


This places the burden of finding infringing content and proving 
infringement on the copyright holders.  They cannot do that efficiently, 
and so they want to off-load that burden onto the user-content hosters.


The less charitable interpretation is that PIPA/SOPA is a massive 
shakedown attempt by Hollywood: by threatening to shut down 
social networks and user-generated-content hosters, they'll be able to 
hold the business of some very wealthy companies hostage.  If the law 
passes, these large companies will have to come to terms with Hollywood 
and the music industry by purchasing blanket licenses (it is 
impossible to monitor all user content for copyright violations), 
resulting in a transfer of billions of dollars from high-tech to Hollywood.


The worst part is that companies like Google and Facebook may end up 
seeing PIPA/SOPA or future bills of the same nature as beneficial to 
them - after all, they already have enough money to pay copyright 
extortionists off, but their upstart competitors won't be able to get 
into the field at all.  Paying a portion of their income in exchange for 
exclusion of future competition may be looked at as a good bargain, 
without negative P.R. normally associated with explicit attempts to 
cartelize.


--vadim



Re: Whacky Weekend: Is Internet Access a Human Right?

2012-01-05 Thread Vadim Antonov

Nathan Eisenberg  wrote:

> > There are no such rights. Each positive right is somebody else's obligation.
>
> This is antisocial nonsense.

If you want to be a slave, that's your right.  But leave me out of your 
schemes, please.  May I ask you to remove the guns and violence your 
"representatives" are threatening me with if I refuse to "participate"? 
Because I don't think it's possible to have a civilized discussion when 
one party insists on forcing the other to obey.


By the way, it takes a really twisted mindset to consider violence 
towards people who didn't do anything bad to you as socially acceptable.


--vadim



Re: Whacky Weekend: Is Internet Access a Human Right?

2012-01-05 Thread Vadim Antonov
There are no such rights. Each positive right is somebody else's 
obligation.

Being forced to feed, clothe, and house somebody else is called slavery. So
is providing Internet access, TV, or whatever else. It doesn't matter if
this slavery is part-time; the principle remains the same -- some people
gang up on you and force you to work for their benefit.

On the other hand the ability to exchange any information with any other
consenting parties and at your own expense - without being censored,
interfered with, or snooped upon - is indeed a basic human right.

--vadim

On 01/05/2012 07:45 AM, Zaid Ali wrote:

I agree with Vint here. Basic human rights are access to food, clothing
and shelter. I think we are still struggling with that around the world. By
your logic one would expect radio and TV to be basic human rights, but
they are not; they are and will remain powerful media which can be enablers
of something else, and the Internet would fit there.

Zaid




RE: next-best-transport! down with ethernet!

2011-12-30 Thread Vadim Antonov
On Fri, 2011-12-30 at 14:00 +0100, Vitkovsky, Adam wrote:
> Well hopefully we won't need to worry about the speed of light anymore

Nope. The laws of physics as currently understood prohibit sending information
faster than the speed of light. (As for the reality of the FTL-neutrino
thingie, it's still too early to tell.)

> Basically when 2 photons or electrons are emitted from the same source
> they are somehow bound/entangled together - that means if we change the
> spin on one photon to "up" the other photon will have its spin changed to
> "down" immediately - and it doesn't matter whether the photons are next
> to each other or light years away - this happens instantly (no energy is
> transferred yet the information is passed) - this was already tested
> between two cities

That's not what happens: the entangled particles are in a superposition
state (i.e. they are carrying both |0> and |1> simultaneously).  When
the measurement on one of them is made, their common wavefunction
collapses, leaving each in a random but definite state.  I.e. if you
measured |0> on one, the other will be |1>, or vice versa.  Changing the
quantum state of an entangled particle to a known state will simply break
the entanglement (the story is more complicated, but I don't want to get
into arcana).  Because of that, quantum entanglement *cannot be used to
transmit information* between the endpoints, so this non-local action at a
distance doesn't break the relativistic prohibition on FTL information
transmission.

However, this effect is still useful because it is a way to generate
random encryption keys, which will "just happen" to be the same at both
ends, hence the quantum cryptography.  Anybody trying to snoop on the
entangled photons in transit will cause premature wavefunction collapse
which can be statistically detected (in practice sources of entanglement
and phase detectors are not perfect, so quantum cryptography is not
unbreakable).
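A classical toy model of the point above (mine, and deliberately not a quantum simulation - shared randomness stands in for the collapsed wavefunction): each side alone sees pure coin flips, and only comparing both streams reveals the correlation, which requires a classical channel anyway.

```python
# Toy model of anti-correlated measurement outcomes.  Each pair shares a
# random bit; Alice reads it, Bob reads its complement.  The streams are
# perfectly anti-correlated, yet Bob's stream alone is indistinguishable
# from coin flips -- no message can be encoded into it.
import random

random.seed(1)
pairs = [random.randint(0, 1) for _ in range(10000)]
alice = pairs                    # Alice's results
bob = [1 - p for p in pairs]     # Bob's results: perfect anti-correlation

assert all(x != y for x, y in zip(alice, bob))
print(sum(bob) / len(bob))       # close to 0.5: Bob's marginal is random
```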




Re: estimation of number of DFZ IPv4 routes at peak in the future

2011-03-12 Thread Vadim Antonov
On Sat, 2011-03-12 at 08:00 -0500, William Herrin wrote:

> You're either building a bunch of big TCAMs or a radix trie engine
> with sufficient parallelism to get the same aggregate lookup rate. If
> there's a materially different 3rd way to build a FIB, one that works
> at least as well, feel free to educate me. And while RIB churn doesn't
> grow in lockstep with table size, it does grow.

Radix trie traversal can be pipelined, with every step in the search
being done in a separate memory bank.  The upper levels of the trie are
small, and the lower levels contain a lot of gunk which is rarely
referenced - so the hot upper levels can be cached on-chip.
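For concreteness, a minimal (unpipelined) sketch of the longest-prefix match such a trie performs - each loop iteration below descends one level, which is the unit of work a pipeline stage would do against its own memory bank:

```python
# Minimal binary-trie longest-prefix match over 32-bit IPv4 addresses.
# Illustrative sketch only; real FIBs compress paths (Patricia, multibit).
class TrieNode:
    __slots__ = ("children", "nexthop")
    def __init__(self):
        self.children = [None, None]
        self.nexthop = None

def insert(root, prefix, plen, nexthop):
    node = root
    for i in range(plen):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.nexthop = nexthop

def lookup(root, addr):
    node, best = root, None
    for i in range(32):
        if node.nexthop is not None:   # remember longest match so far
            best = node.nexthop
        node = node.children[(addr >> (31 - i)) & 1]
        if node is None:
            break
    else:
        if node.nexthop is not None:
            best = node.nexthop
    return best

def ip(s):  # dotted quad -> 32-bit int
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

root = TrieNode()
insert(root, ip("10.0.0.0"), 8, "if0")
insert(root, ip("10.1.0.0"), 16, "if1")
print(lookup(root, ip("10.1.2.3")))   # if1 -- the longer match wins
print(lookup(root, ip("10.9.9.9")))   # if0
```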

FIB lookup is much easier than executing instructions like CPUs do,
precisely because packets are not dependent on each other, so you don't
need to stall the pipeline (like CPUs do on jumps; I'll skip the
discussion of things like branch prediction and speculative execution).

This didn't stop the folks at Intel from producing cheap silicon which
executes instructions at astonishing speeds.

Where TCAMs really shine is packet classification - but you don't
generally need a huge TCAM to hold ACLs.

> Your favorite router manufacturer has made vague assertions about how
> they would build one given sufficient customer demand. So make a
> demand.

OFRV has a track record of producing grossly over-engineered devices,
hardware-wise.  I've heard a very senior hardware guy who came from OFRV
claim that they do it deliberately to increase barriers to entry for the
competition, though that doesn't make sense to me.

--vadim




Re: IPv4 address shortage? Really?

2011-03-09 Thread Vadim Antonov
On Tue, 2011-03-08 at 07:37 -0500, Steven Bellovin wrote:
> > 
> > ...well, kind of. What you don't mention is that it was thought to be
> > ugly and rejected solely on the aesthetic grounds.  Which is somewhat
> > different from being rejected because it cannot work.

> No.  It  was rejected because routers tended to melt down into quivering
> puddles of silicon from seeing many packets with IP options set -- a fast
> trip to the slow path.

Let me get this right... an important factor in the architectural decision
was that the then-current OFRV implementation of a router was
buggy-by-design?

Worse: given a choice between something which already worked (slowly, as
it were - the IPv4 options) and something which didn't exist at all
(the new L3 frame format), the chosen one was the thing which didn't
exist.

Any wonder it took so long to get IPv6 into any shape resembling
working?

> It also requires just as many changes to applications
> and DNS content, and about as large an addressing plan change as v6.  There
> were more reasons, but they escape me at the moment.

Not really. The DNS change is trivial; and if a 64-bit extended IPv4
address had been chosen (instead of a new address family), 80% of
applications would only have needed to be recompiled with a different
header file having long long instead of int in s_addr.  Most of the rest
would only need a change in a data type and maybe in custom
address-to-string formatting.
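A toy illustration of the point (in Python rather than C, and the 8-byte format is purely my strawman): code that treats an address as an opaque fixed-width integer needs only the width changed.

```python
# Code that packs/unpacks addresses as opaque fixed-width integers only
# needs the width constant changed -- "!I" (4 bytes, IPv4 today) would
# become "!Q" (8 bytes) under the hypothetical 64-bit extension.
import struct

ADDR_FMT = "!I"   # the one-line change would be: ADDR_FMT = "!Q"

def pack_addr(n):
    return struct.pack(ADDR_FMT, n)

def unpack_addr(b):
    return struct.unpack(ADDR_FMT, b)[0]

a = 0x0A000001                       # 10.0.0.1
assert unpack_addr(pack_addr(a)) == a
print(len(pack_addr(a)))             # 4 today; 8 after the width change
```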

Compare that with the try-one-address-family-and-if-that-fails-try-the-other
logic which you need to build into every app with the dual-stack approach.

Do you remember the mighty trouble with changing from 32-bit file sizes
to a 64-bit off_t in Linux? No? That's the point.

valdis.kletni...@vt.edu wrote:

> Steve, you of all people should remember the other big reason why:
> pathalias tended to do Very Bad Things like violating the Principle of
> Least Surprise

As the guy who implemented the country-wide domain-name e-mail router
over UUCP, I remember this issue pretty well.  In any case, it is not
applicable if you structure the 32-bit address spaces into a tree - which
maps very nicely onto the real-life Internet topology.

Steven Bellovin wrote:

> And then some other dim bulb will connect one of those 5 layers to the
> outside world...

A dim bulb has infinite (and often much subtler) ways of screwing up
routing in his employer's network.  Protecting against idiots is the
weakest argument I've ever heard for an architectural design.

(Now, I don't deny value of designing UIs and implementation logic in a
way which helps people to avoid mistakes... how could I, having been
doing GPS Z to SQL just a few hours ago, in IMC:)

So. You pretty much confirmed my original contention: the choice was
made not on the technical merits of the LSRR or the IPv4 extended
address option, but merely because people wanted to build a beautifully
perfect Network Two - at the expense of compatibility and ease of
transition.

Well, I think IPv4 will outlive IPv6 for precisely this reason.  The
real-life users don't care about what's under the hood - but they do
care that the stuff they used to have working will keep working.  And
the real-life net admins would do whatever it takes to keep the users
happy - even if it is ugly as hell.

--vadim




Re: IPv4 address shortage? Really?

2011-03-08 Thread Vadim Antonov
Christopher Morrow  wrote:

> Gbqq Haqrejbbq jbhyq ybir lbhe fbyhgvba! Cebcf!

I'm sure he would:)  Though I can't claim credit for the idea... it's
way too old; so old, in fact, that many people have forgotten all about
it.

Mark Andrews  wrote:

> This has been thought of before, discussed and rejected.

Of course, it was Discussed and Rejected.  I fall to my knees and beg
the forgiveness from those On High who bless us with Their Infinite
Wisdom and Foresight.  How could I presume to challenge Their Divine
Providence? Mea culpa, mea maxima culpa.

...well, kind of. What you don't mention is that it was thought to be
ugly and rejected solely on the aesthetic grounds.  Which is somewhat
different from being rejected because it cannot work.

Now, I'd be first to admit that using LSRR as a substitute for
straightforward address extension is ugly.  But so is iBGP, CIDR/route
aggregation, running interior routing over CLNS, and (God forbid, for it
is ugly as hell) NAT.

Think of it: dual stack is even uglier.  At least with the LSRR-based
approach you can still talk to legacy hosts without building a completely
new routing infrastructure and indefinitely maintaining the legacy one in
parallel.

Scott W Brim  wrote:

> There are a number of reasons why you want IP addresses to be
> globally unique, even if they are not globally routed.

And do you have it now?  The last time I checked, NAT was all over the
place. Ergo - global address uniqueness (if defined as having unique
interface address labels) is not necessary for practical data
networking.

In fact, looking at two or more steps of the source route taken together
as a single address gives you exactly what you want - global uniqueness -
as long as you take care to alternate disjoint address spaces along the
path and designate one of those spaces (the existing publicly routable
space) as the root from which addressing starts.
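For reference, the LSRR option itself is just a few bytes per RFC 791: type 131, length, pointer, then the list of 4-byte hop addresses. A sketch of building one (the addresses are examples; note that real-world routers nowadays commonly ignore or drop source-routed packets):

```python
# Build the IPv4 loose-source-route option (RFC 791): type 131 (0x83),
# total option length, pointer (minimum 4), then the hop addresses.
# Reading (gateway, inner address) together is what acts as one longer
# address in the scheme described above.
import socket
import struct

def lsrr_option(hops):
    """Return LSRR option bytes for a list of dotted-quad hop addresses."""
    data = b"".join(socket.inet_aton(h) for h in hops)
    length = 3 + len(data)              # type + length + pointer + data
    return struct.pack("!BBB", 131, length, 4) + data

opt = lsrr_option(["198.51.100.1", "10.0.0.2"])
print(opt.hex())
print(len(opt))    # 11: 3-byte header plus two 4-byte addresses
```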

Bill Manning  wrote:

> just a bit of renumbering...

Ah, that's nice, but I don't propose expanding the use of NAT.  Or
renumbering on a massive scale.  In fact, I want to remind everyone that
NAT was never a necessity.  It's a temporary fix which gave IPv4 a lot of
extra mileage, and it became popular precisely because it didn't break
networking too much while allowing folks to keep using their existing
stuff.

The real problem with NAT is called "P2P" (and I think P2P will become
important enough to be the death of NAT).

Jima  wrote:

> This seems like either truly bizarre trolling, 

I guess you haven't been around NANOG (and networking) very long, or
you'd be more careful about calling me a troll:)

What I want is to remind people that with a little bit of lateral
thinking we can get a lot more mileage out of good old IPv4. Its
death has been predicted many times already. (Let me remember... there was
the congestion collapse, then it was the routing table overwhelming the
IGPs, then there was the shortage of class Bs and routing tables
outgrowing the RAM in ciscos, then there was a heated battle over IP
address ownership, and there was the Big Deal about n^2 growth of the iBGP
mesh.) I don't remember what the deal was with Bob Metcalfe and his
(presumably eaten) hat. Something about Moore's Law?

> or the misguided idea of someone who's way too invested in IPv4 and
> hasn't made any necessary  plans or steps to implement IPv6.

"Too invested in IPv4"? Like, the Internet and everybody on it?

You know, I left the networking soapbox years ago, and I couldn't care
less about the religious wars regarding the best ways to shoot
oneself in the foot.  The reason I moved to different pastures
was sheer boredom.  The last interesting development in networking
technology was when some guy figured out that you can shuffle IP packets
around faster than you can convert a lambda from photons to electrons -
and thus showed that there's no technological limitation to the
bandwidth of Internet backbones.

> you'd have to overhaul software on many, many computers, routers,
> and other devices.  (Wait, why does this sound familiar?) 

You probably missed the whole point - which is that, unlike the dual-stack
solution, using LSRR leverages existing, installed, and paid-for
infrastructure.

> too bad we don't have a plan that could be put into action sooner

The cynical old codgers like yours truly predicted, back when it was
beginning, that the whole IPv6 saga would come to precisely this. The
reason is called the Second System Effect, of which IPv6 is a
classical example.

A truly workable and clean solution back then would have been to simply
add more bits to IPv4 addresses (that's what options are for).  Alas, a
lot of people thought it would be very neat to replace the whole piston
engine with a turbine powerplant, instead of limiting themselves to
changing the spark plugs and continuing on the way to the real job
(namely, making moving bits from place A to place B as cheap and fast as
possible).

Now, we don't have a problem of running out of IPv4 addresses - NAT
takes 

Re: A BGP issue?

2011-03-08 Thread Vadim Antonov
On Tue, 2011-03-08 at 09:25 +0200, Hank Nussbacher wrote:
> At 21:49 07/03/2011 -0500, Patrick W. Gilmore wrote:
> >On Mar 7, 2011, at 14:27, Greg Ihnen  wrote:
> >
> > > I run a small network on a mission base in the Amazon jungle which is 
> > fed by a satellite internet connection. We had an outage from Feb 25th to 
> > the 28th where we had no connectivity with email, http/s, ftp, Skype 
> > would indicate it's connected but even chatting failed, basically 
> > everything stopped working except for ICMP. I could ping everywhere just 
> > fine. I started doing traceroutes and they all were very odd, all not 
> > reaching their destination and some hopping all over creation before 
> > dying. But if I did traceroute with ICMP it worked fine. Does this 
> > indicate our upstream (Bantel.net) had a BGP issue? Bantel blamed 
> > Hughesnet which is the service they resell. I'm wondering what kind of 
> > problem would let ping work fine but not any of the other protocols. It 
> > also seems odd that I could traceroute via UDP part way to a destination 
> > but then it would fail if the problem was my own provider. Thanks.
> > >
> > > If this is the wrong forum for this post I'm sorry and please just hit 
> > delete. If this is the wrong forum but you'd be kind enough to share your 
> > expertise please reply off-list. Thanks!
> >
> >Honestly, I would rate this as one of the most on-topic posts in a while.
> 
> +1.
> When you have http working I suggest running:
> http://netalyzr.icsi.berkeley.edu/index.html
> to give you a benchmark of what your connection can do in the way of 
> protocols.
> 
> Regards,
> Hank


Greg - you may want to try doing pings with large packets. You may have an
MTU mismatch, or some other problem with a link which lets small ICMP pings
through but mangles or discards large packets.
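For the arithmetic behind "large packets": an ICMP echo that exactly fills the path MTU carries MTU minus 20 bytes of IP header (assuming no options) minus 8 bytes of ICMP header - so with iputils ping on Linux, something like `ping -M do -s 1472 host` probes a 1500-byte MTU with DF set. A trivial sketch:

```python
# Largest ICMP echo payload that fits a given MTU in one fragment:
# MTU - 20 (IPv4 header, no options) - 8 (ICMP echo header).
def max_ping_payload(mtu, ip_header=20, icmp_header=8):
    return mtu - ip_header - icmp_header

for mtu in (1500, 1492, 576):
    print(mtu, max_ping_payload(mtu))
```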

--vadim




IPv4 address shortage? Really?

2011-03-07 Thread Vadim Antonov
I'm wondering (and that shows that I have nothing better to do at 3:30am
on a Monday...) how many people around here realize that plain old
IPv4 - as widely implemented and specified in the standard RFCs - can be
easily used to connect a pretty much arbitrary number (arbitrary meaning
>2^256) of computers WITHOUT NETWORK ADDRESS TRANSLATION.  Yes, you heard
me right.

And, no, it does not require any changes at all in the global routing
infrastructure as implemented now, and most OS kernels (those which
aren't broken-as-designed, grin) would do the trick just fine.  None of
that dual-stack stupidity, and, of course, no chicken-and-egg problem,
provided the servers and gateways can be made to respect some really old
and well-established standards.

DNS and most applications would need some (fairly trivial) updating,
though, to work properly with the extended addressing; and sysadmins
would need to tweak their configs, since some mythology-driven
"security" can get in the way.  But they don't have to do that en masse
and all at once.

The most obvious solution to the non-problem of address space shortage
is the hardest to notice, ain't it?

--vadim

P.S. Hfr YFEE gb ebhgr orgjrra cevingr nqqerff fcnprf bire choyvpnyyl
ebhgrq fcnpr, Yhxr. Guvax bs cevingr nqqerff ovgf nf n evtug-fvqr
rkgrafvba gb gur sbhe-bpgrg choyvp nqqerff.

P.P.S. Gb rkgraq shegure, nygreangr gjb qvfgvapg cevingr nqqerff fcnprf,
nf znal gvzrf nf lbh pna svg vagb gur urnqre.




Re: [NANOG] Re: U.S. officials deny technical takedown of WikiLeaks

2010-12-04 Thread Vadim Antonov
This nonsense is only non-operational until you suddenly find yourself in 
dire need to evade military patrols on the street while dragging a bag 
full of equipment to your "backup" NOC.

Been there, done that.

What are your contingency plans for the event of a government order (illegal, 
of course, but that'd be the least of your worries) to shut the network down? 
Putting your head in the sand, saying "it can't happen here"?  Yes, it can.

In the Soviet Union, just emptying the datacenters and phone exchanges of all 
personnel other than security guards - with the technical people making 
themselves unreachable - was sufficient to keep the networks running. The 
goons, apparently, had no clue which switches to turn.

(There was also a capacity problem caused by the surge in traffic; this isn't 
likely to be an issue in modern networks, but arranging local caches for 
highly demanded videos and "alternative" news sites - all mainstream 
outlets will be playing the equivalent of Swan Lake - may be necessary in order 
to keep the service running.)

--vadim

John R. Dennison wrote:
> On Sun, Dec 05, 2010 at 02:53:22AM +, Michael Sokolov wrote:
>   
>> Factoid: we outnumber the pigs by 1000 to 1.  Even if only 1% of us were
>> to go out and shoot a pig, we would still outnumber them 10 to 1!  We
>> *CAN* win -- wake up, people!
>> 
>
>   Is there really any need for this nonsense on this list?  Can
>   all the rhetoric and politics be kept off and return the list
>   to technical issues?
>
>   There are venues much better suited for those discussions.
>
>
>
>
>   John
>   




Re: Lightly used IP addresses

2010-08-13 Thread Vadim Antonov
John - you do not get it...

First of all, I don't want your organization to have ANY policy at all.

Being just a title company for IP blocks is well and good - and can be
easily done at maybe 1% of your budget. Title companies do not tell
people if they can buy or sell, they just record the current ownership. 
They do not create controversy or conflict - in fact, their sole reason
for existence is to reduce expenses involved in figuring out who has
rights to what.

Secondly, if you have the delusion that you somehow represent me (or other
Internet users), get rid of it. You simply don't.  I didn't vote for you,
and I gave your organization no right to claim my consent - individually
or collectively.  I'm not bound by any of your policies, and, as a matter
of fact, I have no say in them (nor any wish to participate). Writing a
petition to have a policy changed is something a serf does towards his
lord, so I'll spare you the embarrassment of reading what I have to say to
anybody suggesting that to me.

ARIN as a policy-making body exists solely due to the cluelessness of telco
management.  If the execs had any clue, they'd realize that there is NO
such thing as owning a block of IP addresses - the real object is the
contractual right to send packets to that address block over their
networks.  Because their customers generally want universal
connectivity, they are forced to cooperate with each other - but, as
everybody in this age of NATs, firewalls, and Great Walls knows,
universal connectivity is just a myth. Coverage of connectivity can (and
should) be a competitive discriminator, rather than an absolutist
one-size-fits-all regulatory pipe dream.

What they have done is give control over this right to a third party for
no good reason whatsoever.  (Yes, Randy, it did seem like a good idea -
but, just like any other idea involving giving some people policy-making
powers, it was bound to go sour and start serving the interests of those
people rather than the interests of the subjects of their rule-making.)

ISPs could increase their revenues by selling this right, rather than
_paying_ ARIN to be able to exercise it. All it takes is a bunch of
reciprocity agreements saying, essentially, "we'll carry yours if you
carry ours".  As soon as one large ISP figures that out, this particular
political house of cards will come down, quickly.

With due respect,

--vadim


John Curran wrote:
> On Aug 13, 2010, at 4:35 PM, Randy Bush wrote:
>   
>>> How come ARIN has any say at all if A wants to sell and B wants to
>>> buy? Trying to fend off the imaginary monopolistic hobgoblin?
>>>   
>> self-justification for arin's existence, flying people around to lotso
>> meetings, fancy hotels, ...
>>
>> at the rirs, income and control are more important than the health and
>> growth of the internet.  basically, the rirs are another case of seemed
>> like a good idea at the time.
>> 
>
> Vadim - We'll run the database anyway you (collectively) want us to... 
> what policy would you prefer, and can you submit it as a proposal?
>
> (and to answer Randy - the only control over the administration is based 
> on the policies adopted.  Reduce the corpus of applicable policy if that
> is your desire.)
>
> /John
>
> John Curran
> President and CEO
> ARIN
>
>
>
>
>
>   




Re: Lightly used IP addresses

2010-08-13 Thread Vadim Antonov
"Those who do not understand the market are doomed to reimplement it, badly."

How come does ARIN have any say at all if A wants to sell and B wants to buy? 
Trying to fend off an imaginary monopolistic hobgoblin?

Or simply killing the incentive to actually do something about conservation 
and, yes, re-engineering the network to fix the address space shortage and 
prefix explosion?

Re: What is "The Internet" TCP/IP or UNIX-to-UNIX ?

2010-04-05 Thread Vadim Antonov

It wasn't Moscow State U.  It was a privately-owned network (called RELCOM)
from day one (which was in 1990, not 1987... in 1987, connecting a
dial-up modem to the phone network was still illegal in the USSR), built by
the DEMOS co-op (that company is still alive, by the way). Moscow State U was
one of the first customers (the guy responsible for connecting MSU later
founded Stalker Inc., which makes high-performance e-mail servers).

It was UUCP-based initially, though I decided to avoid pathalias (it being 
a horrible kludge) and wrote a UUCP message router which translated domain 
hostnames into UUCP next-hops - this is why email to .SU never used bang 
paths.

The ability to build dirt-cheap networks over crappy phone lines, using 
no-name PCs as message and packet routers, was noticed; see for 
example: "Developing Networks in Less Industrialized Nations" by Larry 
Press (IEEE Computer, vol 28, no 6, June 1995, pp 66-71) 
http://som.csudh.edu/cis/lpress/ieee.htm

--vadim


On Sun, 4 Apr 2010, Barry Shein wrote:

> 
> I remember around 1987 when Helsinki (Univ I believe) hooked up
> Talinn, Estonia via uucp (including usenet), who then hooked up MSU
> (Moscow State Univ) and the traffic began flowing.
> 
> You could just about see the wide-eyed disbelief by some as they saw
> for example alt.politics, you people just say almost *anything!*, with
> your real name and location attached, and NOTHING HAPPENS???
> 
> I still believe that had as much to do with the collapse of the Soviet
> Union as the million other politicians who wish to take credit.
> 
> It's arguable that UUCP (and Usenet, email, etc that it carried) was
> one of the most powerful forces for change in modern history. All you
> needed was some freely available software, a very modest computer, a
> modem, a phone line, and like so many things in life, a friend.
> 
> And then once you "got it", you looked towards connecting to the
> "real" internet, you knew just what you were after.
> 
> 
> 




Re: legacy /8

2010-04-04 Thread Vadim Antonov

> Zaid
> 
> P.s. Disclaimer: I have always been a network operator and never a dentist.

I would have thought the opposite.

People who have been on this list longer would probably remember when I 
was playing in this sandbox.

The real wisdom about networks is "never try to change everything,
everywhere, at once".  You either do a gradual migration, or you end up in a
big pile of poo.  Which is exactly where the IPv6 transition is.

--vadim




Re: legacy /8

2010-04-03 Thread Vadim Antonov

With all that bitching about IPv6 how come nobody wrote an RFC for a very 
simple solution to the IPv4 address exhaustion problem:

Step 1: specify an IP option for extra "low-order" bits of the source & 
destination addresses.  Add handling of these to the popular OSes.

Step 2: make NATs which directly connect extended addresses but also NAT 
them to non-extended external IPs.

Step 3: leave backbones unchanged.  Gradually reduce the size of allocated 
blocks, forcing people to NAT as above.

Step 4: watch people migrate their apps to extended addresses to avoid 
dealing with NAT bogosity and the resulting tech-support calls & costs.

Step 5: remove NATs.
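The extended-address encoding in step 1 could look something like this - a minimal sketch, assuming an invented option kind (0x9E) and a 16-bit extension field per address; none of this comes from any RFC, the layout is purely illustrative:

```python
# Hypothetical sketch of the "extended address" idea: a standard 32-bit
# IPv4 address plus extra low-order bits carried in an IP option.
# Option kind and field widths are invented for illustration only.
import struct
import ipaddress

EXT_ADDR_OPT = 0x9E  # hypothetical option kind, not IANA-assigned


def build_ext_addr_option(src_extra: int, dst_extra: int) -> bytes:
    """Pack the extra 16 low-order bits for source and destination into
    a type-length-value IP option: kind, length, then two 16-bit fields."""
    return struct.pack("!BBHH", EXT_ADDR_OPT, 6, src_extra, dst_extra)


def extended_address(base: str, extra: int) -> str:
    """Render a 32+16 bit extended address for display."""
    return f"{ipaddress.IPv4Address(base)}.{extra}"


opt = build_ext_addr_option(0x0001, 0x002A)
assert opt == bytes([0x9E, 6, 0x00, 0x01, 0x00, 0x2A])
print(extended_address("192.0.2.1", 42))  # → 192.0.2.1.42
```

Hosts that understand the option would route on all 48 bits; legacy backbones (step 3) would see only the ordinary 32-bit header addresses, which is what keeps them unchanged.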

--vadim




RE: Revisiting the Aviation Safety vs. Networking discussion

2009-12-25 Thread Vadim Antonov

> I can see situations in the future where people's lives could be
> dependent on networks working properly, or at least endangered if a
> network fails.

Actually, it's not the future. My father's design bureau has been making
hardware (including network gear) since the 70s for running industrial
processes of a kind where a software crash or a network malfunction was
usually associated with casualties.  Gas pipelines, power plants, electric
grids, stuff like that.

That's a completely different class of hardware, more of a kind you'd find
in avionics - modules in triplicate, voting, pervasive error correction,
etc.  Software was also designed differently, with a lot more review
processes, and with data structures designed for integrity checking (I
still use this trick in my work, which saves me a lot of grief during
debugging) and recovery from memory corruption and such.
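The integrity-checking trick can be sketched like this (my own minimal illustration, not the bureau's actual design): guard each record with a magic number and a checksum, and verify both before trusting the data, so corruption is caught early instead of propagating:

```python
# Minimal sketch of integrity-checked data records: a magic number
# catches stray pointers landing in the wrong place, a CRC32 catches
# bit-level corruption of the payload.  Values here are illustrative.
import struct
import zlib

MAGIC = 0x5AFEC0DE  # arbitrary sentinel chosen for this sketch


def seal(payload: bytes) -> bytes:
    """Prefix the payload with a magic number and a CRC32 of the payload."""
    return struct.pack("!II", MAGIC, zlib.crc32(payload)) + payload


def check(record: bytes) -> bytes:
    """Validate both guards; raise if the record looks corrupted."""
    magic, crc = struct.unpack("!II", record[:8])
    payload = record[8:]
    if magic != MAGIC or crc != zlib.crc32(payload):
        raise ValueError("integrity check failed")
    return payload


rec = seal(b"telemetry sample")
assert check(rec) == b"telemetry sample"
```

In a long-running system the check runs on every read, turning silent memory corruption into an immediate, debuggable failure.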

I'd be seriously loath to put any of the current crop of COTS network
boxes into a life-critical network.

--vadim




RE: Revisiting the Aviation Safety vs. Networking discussion

2009-12-25 Thread Vadim Antonov

Just clearing a small point about pilots (I'm a pilot) - the
pilot-in-command has ultimate responsibility for his a/c and can ignore
whatever ATC tells him to do if he considers that to be contrary to the
safety of his flight (he may be asked to explain his actions later,
though). Now, usually ignoring ATC or keeping it in the dark about one's
intentions is not very clever - but dispatchers are not in the cockpit and 
may misunderstand the situation or be simply mistaken about something (so 
a pilot is encouraged to decline ATC instructions he considers to be in 
error - informing ATC about it, of course).

But one of the first things a pilot does in an emergency is pull out
the appropriate emergency checklist.  It is easy to forget to check 
obvious things when the situation gets hectic (one of the distressingly 
common causes of accidents is simply running out of fuel - either because 
the pilot didn't do his homework on the ground, such as checking the actual 
fuel level in the tanks, or because, when the engine suddenly got quiet, he
forgot to switch to another, non-empty, tank).

The mantra about priorities in both normal and emergency situations is 
"Aviate-Navigate-Communicate" meaning that maintaining control of a/c 
always comes first, no matter what. Knowing where you are and where you 
are going (and other pertinent situational awareness such as condition of 
the a/c and current plan of actions) come second.  Talking is lowest 
priority.

The pre-planned emergency checklists may be a good idea for network
operators.  Try the obvious (when you're calm, that is) actions first; if
they fail to help, try to limit the damage.  Only then file the ticket and
talk to people who can investigate the situation in depth and develop a 
fix.

The way the aviation industry comes up with these checklists is, basically,
experience - it pays to debrief after recovery from every problem not
adequately fixed by existing procedures, find the common ones, and develop
a diagnostic procedure one can follow step-by-step in those situations. 
(Non-punitive error and incident reporting, which actually shields 
pilots from FAA enforcement actions in most cases, also helps to collect
real-world information on where and how pilots get into trouble.)

The all-too-common multistep ticket escalation chains (which merely work
as delay lines in a significant portion of cases) are something to be
avoided.

Even better is to drill front-line personnel in diagnosing and recovering
from common problems - starting with following the checklist on a simulated 
outage in the lab, and then getting it down to what pilots call "the flow" - 
a habitual memorized procedure, which is performed first and then checked 
against the checklist.

Note that the use of checklists, drilling, and flows does not make pilots a 
kind of robot - they still have to make decisions, and to recognize and deal 
with situations not covered by the standard procedures; what it does is 
speed up common tasks, reduce mistakes, and free up mental processing for 
thinking ahead.

The ISP industry has a long way to go until it reaches the same level of 
sophistication in handling problems as aviation has.

--vadim

On Fri, 25 Dec 2009, George Bonser wrote:

> I think any network engineer who sees a major problem is going to have a
> "Houston, we have a problem" moment.  And actually, he was telling the
> ATC what he was going to need to do, he wasn't getting permission so
> much as telling them what he was doing so traffic could be cleared out
> of his way. First he told them he was returning to the airport, then he
> inquired about Peterburough, the ATC called Peterburough to get a runway
> and inform them of an inbound emergency, then the Captain told the ATC
> they were going to be in the Hudson.  And "I hit birds, have lost both
> engines, and am turning back" results in a whole different chain of
> events these days than "I have two guys banging on the cockpit door and
> am returning" or simply turning back toward the airport with no
> communication.  And any network engineer is going to say something if he
> sees CPU or bandwidth utilization hit the rail in either direction.
> Saying something like "we just got flooded with thousands of /24 and
> smaller wildly flapping routes from peer X and I am shutting off the BGP
> session until they get their stuff straight" is different than "we just
> got flooded with thousands of routes and it is blowing up the router and
> all the other routers talking to it.  Can I do something about it?"
> 
>  
> 
> And that illustrates a point that is key.  In that case the ATC was
> asking what the pilot needed and was prepared to clear traffic, get
> emergency equipment prepared, whatever it took to get that person
> dealing with the problem whatever they needed to get it resolved in the
> best way forward.  The ATC isn't asking him if he was sure he set the
> flaps at the right angle and "did you try to restart the 

Re: Telephones for Noisy Data Centers

2009-06-17 Thread Vadim Antonov

Try noise-canceling aviation headsets (GA or helicopter models have truly
amazing noise suppression).  High-end models come with a cellphone
interface. I don't think cellphones will work in many data centers, but I
think rigging an interface from a normal cordless phone to the headset is
pretty simple.

The better of these headsets (Bose X, Sennheiser HMC 460, Lightspeed Zulu,
etc.) have additional digital signal processing for pulling voice out of
noise - if you don't mind the expense:)

--vadim

> Michael J McCafferty wrote:
> > All,
> > I'd be OK if we were in a facility that was only average in terms of
> > noise, but we are not. I need an exceptional phone for the data center.
> > Something that doesn't transmit the horrible background noise to the
> > other end, and something that is loud without being painful for the user
> > of this phone. Cordless would be very fine, headset is excellent.
> > Ordinary desk phone is OK... but the most important thing is that it
> > works for clear communication. A loud ringer would great too... but if
> > the best phone doesn't have one, I'll get an auxiliary ringer.
> > 
> > Does anyone have a phone model that they find to be excellent in a
> > louder than usual data center?