Re: sfp "computer"?

2015-10-21 Thread Andriy Bilous
There are also modules for the ISR G2 (quite powerful) which can host an
OS/hypervisor:
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-e-series-servers/data_sheet_c78-705787.pdf

On Tue, Oct 20, 2015 at 2:01 PM, Saku Ytti  wrote:

> On 20 October 2015 at 01:42, Chip Marshall  wrote:
>
> Hey,
>
> > See page 4 on the spec sheet:
> > http://www.juniper.net/assets/us/en/local/pdf/datasheets/1000531-en.pdf
> >
> > No idea what's involved with packaging the VM and getting it there, but
> > it should open up some interesting possibilities.
>
> What are those possibilities? How can you leverage a VM in your
> router/switch? Do you have access to the high-performance NPU? Or to some
> high-performance link to the forwarding plane?
>
> If it's just a plain old VM in a server, why would you want to save the
> 1kUSD of installing compute in the rack and add complexity/risk to your
> network infrastructure? JunOS and IOS-XR are very fickle already and fail
> on the darnedest things; I'd be very hesitant to put a random VM there
> without an extremely compelling justification.
>
> --
>   ++ytti
>


Re: MPLS VPN design - RR in forwarding path?

2015-01-02 Thread Andriy Bilous
Given that you assign a unique RD per PE, an RR out of the forwarding path
provides you with a neat trick for fast convergence (and for debugging
purposes) when a CE has redundant paths to different PEs. Routes to those
CEs will be seen as different routes on the RR.
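
To make that concrete, here is a minimal Python sketch (toy names and
values, not vendor code) of why the RR's view changes with shared vs.
unique RDs - VPNv4 NLRI is (RD, prefix), and BGP best-path selection only
collapses paths whose NLRI is identical:

def routes_seen_by_rr(advertisements):
    # Group PE advertisements by NLRI, the way a route reflector would
    # before running best-path selection per unique NLRI.
    table = {}
    for pe, rd, prefix in advertisements:
        table.setdefault((rd, prefix), []).append(pe)
    return table

ce_prefix = "10.0.0.0/24"

# Shared RD: both PE paths collapse into one NLRI, so the RR reflects a
# single best path and remote PEs never learn the backup exit.
shared = routes_seen_by_rr([("PE1", "65000:1", ce_prefix),
                            ("PE2", "65000:1", ce_prefix)])
print(len(shared))   # 1 -> the backup path is hidden behind best-path

# Unique RD per PE: two distinct NLRI survive on the RR, so every ingress
# PE holds both paths and can converge locally when one PE fails.
unique = routes_seen_by_rr([("PE1", "65000:101", ce_prefix),
                            ("PE2", "65000:102", ce_prefix)])
print(len(unique))   # 2 -> both paths visible, hence faster convergence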

On Wed, Dec 31, 2014 at 1:08 PM, Marcin Kurek not...@marcinkurek.com
wrote:

 Hi everyone,

 I'm reading Randy Zhang's BGP Design and Implementation and I found the
 following guidelines about designing an RR-based MPLS VPN architecture:
 - Partition RRs
 - Move RRs out of the forwarding path
 - Use a high-end processor with maximum memory
 - Use peer groups
 - Tune RR routers for improved performance.

 Since the book is a bit outdated (2004), I'm curious if these rules still
 apply to modern SP networks.
 What would be the reasoning behind keeping RRs out of the forwarding path?
 Is it only a matter of performance and stability?

 Thanks,
 Marcin



Re: Need trusted NTP Sources

2014-02-09 Thread Andriy Bilous
Best practice is five. =) I don't remember if it's in the FAQ on ntp.org or
in David Mills' book. Your local clock is a kind of gullible push-over which
will vote for the party providing the most reasonable data. The algorithm
will filter out insane sources which drift too far from the rest and then
group the sane sources into 2 parties - your clock will follow the one whose
runners are closer to each other. That is why an odd number of trustworthy
sources, at least at the start, is required. With 2 sources you will blindly
follow the one which is closer to your own clock. You also run the risk of
degrading into this situation when you lose 1 out of 3 sources. Four is
again 2:2, and only with five do you have a good chance to start
disciplining your clock in the right direction at the right pace, so that
when 1 source is lost you (most probably) won't run into insanity.
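
To illustrate the voting in Python (a toy simplification, not the actual
ntpd selection and clustering algorithm; the offsets are made up):

def pick_truechimers(offsets, tolerance=0.05):
    # Keep the largest group of sources that agree within tolerance;
    # everything outside the winning group is treated as a falseticker.
    best = []
    for anchor in offsets:
        group = [o for o in offsets if abs(o - anchor) <= tolerance]
        if len(group) > len(best):
            best = group
    return best

# Five sources, one insane: the majority of four outvotes it.
print(pick_truechimers([0.010, 0.020, -0.010, 0.000, 7.300]))

# Two sources: each forms a "group" of one, so there is no majority and
# the winner is effectively arbitrary (the first anchor keeps the tie).
print(pick_truechimers([0.010, 7.300]))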


On Sun, Feb 9, 2014 at 9:03 AM, Saku Ytti s...@ytti.fi wrote:

 On (2014-02-08 19:43 -0500), Jay Ashworth wrote:

  In the architecture I described, though, is it really true that the odds
  of the common types of failure are higher than with only one?

 I think so. Let's assume arbitrarily that the probability of an NTP server
 not starting to give incorrect time is 99% over 1 year.
 Then the probability that neither of two servers gives incorrect time is
 0.99**2, i.e. 98%, so two NTP servers are 1 percentage point more likely
 than one to give incorrect time over 1 year (a quick numeric check follows
 after this message).

 Obviously the chance of working is more than 99% - maybe it's something
 like 99.999%? And is that really the typical failure mode, or is the
 typical failure mode complete loss of connectivity? Two NTP servers would
 protect from this, a single one would not.
 However, loss of connectivity has minor impact on clients, while wrong time
 has major impact on clients.
 Maybe if loss of connectivity is typically fixed in a somewhat short period
 of time, a single NTP server always wins; if loss of connectivity is
 typically fixed only over a very long period of time, a single NTP server
 loses.

 I don't really have exact data, but best practice is 2. Matthew said 4,
 which gives the advantage that after a single failure you are still
 operating redundantly and have no urgency to fix it; with 3, after a single
 failure another failure must not occur before the first is fixed.
 I think 3 is enough; networks are typically designed to handle 1 arbitrary
 failure at a time, and in most networks 2 arbitrary failures, when chosen
 correctly, will cause SLA-breaking faults (it is cheaper to pay SLA
 compensations than to recover from any 2 failures).
 But NTP servers are cheap, so if you want to be robust and recover from n
 falsetickers, have 3+n.



 --
   ++ytti
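
A quick Python check of the two-server arithmetic in Saku's message above
(the 99% figure is his arbitrary assumption, not measured data):

p_ok = 0.99              # assumed chance one server stays correct per year
print(p_ok ** 2)         # 0.9801: chance both of two servers stay correct
print(1 - p_ok ** 2)     # ~0.0199: chance at least one of the two goes wrong
print(1 - p_ok)          # ~0.01: the same risk with a single server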




Re: Need trusted NTP Sources

2014-02-09 Thread Andriy Bilous
Unfortunately I don't have the book handy, so maybe I am wrong too. Just
checked, and 4 looks to be a valid solution for 1 falseticker according to
the Byzantine Generals' Problem.
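
For reference, the classic bound from Lamport, Shostak and Pease: agreement
tolerates f Byzantine faults only with n >= 3f + 1 participants, so one
falseticker indeed needs 4 sources. A quick Python tabulation:

# n >= 3f + 1: minimum sources needed to tolerate f Byzantine falsetickers.
for f in range(1, 4):
    print(f, "falseticker(s) ->", 3 * f + 1, "sources")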


On Sun, Feb 9, 2014 at 10:03 PM, Saku Ytti s...@ytti.fi wrote:

 On (2014-02-09 21:08 +0100), Andriy Bilous wrote:

  Best practice is five. =) I don't remember if it's in the FAQ on ntp.org
  or in David Mills' book. Your local clock is a kind of gullible push-over
  which will vote for the party providing the most reasonable data. The
  algorithm will filter out insane sources which drift too far from the rest
  and then group the sane sources into 2 parties - your clock will follow
  the one whose runners are closer to each other. That is why an odd number
  of trustworthy sources, at least at the start, is required. With 2 sources
  you will blindly follow the one which is closer to your own clock. You
  also run the risk of degrading into this situation when you lose 1 out of
  3 sources. Four is again 2:2, and only with five do you have a good chance
  to start disciplining your clock in the right direction at the right pace,
  so that when 1 source is lost you (most probably) won't run into insanity.

 I'm having a bit of difficulty understanding the issue with 4.

 Is the implication that you have two groups which each agree internally
 reasonably well, but do not agree between the groups? That would mean that
 4 cannot handle the situation where 2 develop a problem where they agree
 with each other but are wrong.
 But even in that case, you'd still recover from 1 of them being wrong. So

 3 = correct time, no redundancy
 4 = correct time, 1 can fail
 5 = correct time, 2 can fail
 and so forth?

 But not sure here, just stabbing in the dark. For the fun of it, I threw an
 email to Mills; if he replies, I'll patch it back here.

 --
   ++ytti




Re: job screening question

2012-07-10 Thread Andriy Bilous
I think Ivan covered that
http://blog.ioshints.info/2012/03/knowledge-and-complexity.html
And also about hiring in general
http://blog.ioshints.info/2009/12/certifications-and-hiring-process.html

Many say that everything happens in the first 5 minutes of an interview -
the right chemistry, if you like. For the rest of the hiring process you're
looking for reasons to hire the person you like, or for reasons to reject
someone you don't.

On Tue, Jul 10, 2012 at 1:05 PM, David Coulson da...@davidcoulson.net wrote:

 On 7/10/12 6:56 AM, Bret Clark wrote:


 Hence the reason he mentioned skilled person...


 Right. A skilled person knows not to commit to anything in a meeting, or to
 at least validate what they think before they open their mouth. Depends on
 the audience, of course.

 At least in my environment, there is not an expectation for someone to be
 able to rattle off technical specifics from memory on demand - I've got an
 iPad and Google for that. General concepts and
 functionality/limitations/whatever are great in that setting, but no one
 asks for the level of detail that takes 30 minutes to research and digest in
 a meeting. The ability to remember obscure command-line arguments or parts
 of a protocol header doesn't have much value when you can look it up in
 about 10 seconds.

 Anyone else noticed their memory has gotten worse since Google came along?
 :)

 David




Re: Cisco Update

2012-07-05 Thread Andriy Bilous
I suspect it'll be Corporations control Internet and our private
life well before tomorrow. Domestic operators do that for ages with
their branded routers and AFAIK DOCSIS is unimaginable without (part
of) this functionality. I went berzerk when discovered such a checkbox
in my home router, two days later I checked it on again and never
looked back. How often do I check for firmware upgrades for for my
home router? Almost never. Do I backup my config? No. Do I disassemble
binary blob before upgrade. No. And I consider myself above-average
Internet user. It doesn't really matter how do I brick my hardware and
implementing authentication on the vendor site to download the
firmware does a better job with gathering sensitive data honestly.
Automatic updates is pretty much a common feature these days, it's
good to know what it means for a user but is hardly game-breaking.