We generally recommend 150ms to most customers. The added benefit of going from 150ms down to 50ms is rarely enough to warrant the move.
-----Original Message-----
From: juniper-nsp-boun...@puck.nether.net [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of Andy Harding
Sent: Thursday, March 03, 2011 10:07 AM
To: David Ball
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] BFD timers for OSPF - MX80 - 10.3R2.11

We are using BFD on MX80 with 300ms timers and no problems. Only 2 or 3 sessions per box, however.

--
Regards
Andy Harding
Internet Connections Ltd
Phone: 0870 803 1868
Mobile: 07813 975459
Fax: 0870 803 1781
Web: www.inetc.co.uk
Email: a...@inetc.co.uk

On 3 Mar 2011, at 17:53, David Ball <davidtb...@gmail.com> wrote:

> Ah, that might help explain it. And shame on me for not checking
> 'sh pfe statistics traffic protocol bfd', which of course shows none
> received or absorbed.
> I'll only have 2 sessions on each MX80, so I think I might leave it
> enabled, but may toy with the interval. I'm expecting the control
> plane to be fairly bored on these boxes, so we'll see what it can
> handle.
> Thanks, Egor.
>
> David
>
>
> On 3 March 2011 10:42, Egor Zimin <les...@gmail.com> wrote:
>> Hello, David
>>
>> It looks like the BFD implementation on the MX80 is not distributed.
>> I have a JTAC case on this; the case is still open, but it
>> _looks_like_ BFD is not distributed.
>> That is probably also why BFD echo mode is not supported, and handling
>> 30ms timers for BFD control packets may be no easy task for the RE's CPU.
>>
>> Because of this, I don't see much point in using BFD on the MX80 at the moment.
>>
>> 2011/3/3 David Ball <davidtb...@gmail.com>:
>>> MX80s running 10.3R2.11
>>>
>>> For those of you using BFD for OSPF, how low have you been able to
>>> set your minimum-interval timer? I have a pair of MX80s connected via
>>> XFPs and 1m patch cables, and with my hellos set to 30ms and multiplier
>>> set to 3, I'm seeing failures. I haven't disabled distributed ppm.
>>> Moving to 50ms hellos seems to settle things down.
>>> The reason I'm wondering why I can't get away with lower timers is
>>> that when Juniper proof-of-concepted (yeah, that's a verb) Trio for
>>> us (albeit using MX960s), they used 15ms hellos with a multiplier of 3.
>>>
>>> Mar  3 10:06:06  router bfdd[1129]: BFDD_TRAP_STATE_DOWN: local
>>> discriminator: 1, new state: down
>>> Mar  3 10:06:06  router rpd[1257]: RPD_OSPF_NBRDOWN: OSPF neighbor
>>> 172.16.1.22 (realm ospf-v2 xe-0/0/2.0 area 0.0.0.0) state changed from
>>> Full to Down due to InActiveTimer (event reason: BFD session timed out
>>> and neighbor was declared dead)
>>>
>>>
>>> me@router> show configuration groups bfd-defaults-core-ospf
>>> protocols {
>>>     ospf {
>>>         area 0.0.0.0 {
>>>             interface <*> {
>>>                 bfd-liveness-detection {
>>>                     version automatic;
>>>                     minimum-interval 30;
>>>                     multiplier 3;
>>>                     full-neighbors-only;
>>>                 }
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>> me@router> show configuration protocols ospf area 0.0.0.0
>>> interface lo0.0 {
>>>     passive;
>>> }
>>> interface xe-0/0/2.0 {
>>>     apply-groups bfd-defaults-core-ospf;
>>>     node-link-protection;
>>> }
>>> interface xe-0/0/3.0 {
>>>     apply-groups bfd-defaults-core-ospf;
>>>     node-link-protection;
>>> }
>>>
>>>
>>> David
>>> _______________________________________________
>>> juniper-nsp mailing list
>>> juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>> --
>> Best regards,
>> Egor Zimin
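[Editor's note: the failure behavior in this thread follows directly from the BFD detection-time arithmetic in RFC 5880: a neighbor is declared down after roughly minimum-interval × multiplier with no control packet received. A quick sketch comparing the timer values discussed above (all with multiplier 3):]

```python
# BFD asynchronous mode (RFC 5880): the session times out after the
# detection interval -- transmit interval times the detect multiplier --
# passes with no control packet received from the peer.
def detection_time_ms(minimum_interval_ms: int, multiplier: int) -> int:
    """Worst-case failure detection time for a BFD session."""
    return minimum_interval_ms * multiplier

# Timer values mentioned in this thread, all with multiplier 3:
for interval in (15, 30, 50, 150, 300):
    print(f"{interval:>3} ms hellos x3 -> neighbor down after "
          f"{detection_time_ms(interval, 3)} ms")
```

So the 30ms×3 config that fails here would detect a fault in 90ms, while the 150ms×3 value recommended at the top of the thread still detects in 450ms, which is why the extra headroom on a non-distributed (RE-driven) BFD implementation is usually worth it.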