As I had access to a lab MX960, I made a similar packet capture as
described in my Apr 25 e-mail. The only difference is that I didn't
capture packets on the forwarding-plane Ethernet interface facing the RE,
but on the RE Ethernet interface facing the forwarding plane, i.e. "start
shell sh" in Junos and
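For reference, such a capture from the RE shell might look like the session below. The internal interface name (em0 here) is an assumption; it differs per platform and Junos version, and the capture would normally be written to a file for offline analysis:

```
user@mx960> start shell sh
% tcpdump -ni em0 -c 100 -w /var/tmp/re-side.pcap icmp
```

The resulting pcap can then be copied off-box and inspected with Wireshark to compare timestamps against the forwarding-plane-side capture.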
On Thu, 25 Apr 2019 at 08:49, Tarko Tikan wrote:
>
> hey,
>
> > Please let me know if anything was unclear or if someone has other
> > ideas or theories.
>
> Been following this thread and do not have anything to contribute at
> this point but wanted to say I (and I hope many others) appreciate
Hi Saku,
> So 80% of the time, work needed to compete for access to the CPU.
Yes. I hesitated because when I compare the ping results and vFP CPU
utilization plus microkernel thread CPU usage with and without the
route churn, then while CPU utilization clearly
increased, it did
hey,
Please let me know if anything was unclear or if someone has other
ideas or theories.
Been following this thread and do not have anything to contribute at
this point but wanted to say I (and I hope many others) appreciate this
type of proper debugging given the tools we have available.
On Thu, 25 Apr 2019 at 00:19, Martin T wrote:
> in PFE) could indeed be affected. On the other hand, the overall LC
> CPU usage according to "sh linux cpu usage" did not exceed 80% even
> during the route churn, but I actually do not know what exactly this
> utilization means..
Like Ethernet,
Hi,
I built a setup where vMX (local-as 64512) has the following 20 eBGP
neighbors over point-to-point connections:
AS64513: vmx1[ge-0/0/6.13] <-> [ge-0.0.6.13]PC
AS64514: vmx1[ge-0/0/6.14] <-> [ge-0.0.6.14]PC
AS64515: vmx1[ge-0/0/6.15] <-> [ge-0.0.6.15]PC
AS64516: vmx1[ge-0/0/6.16] <->
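For reference, one such eBGP session might be configured along these lines; the unit numbering follows the interface names above, but the /30 addressing, group name, and VLAN tagging are assumptions, not taken from the original setup:

```
set interfaces ge-0/0/6 vlan-tagging
set interfaces ge-0/0/6 unit 13 vlan-id 13
set interfaces ge-0/0/6 unit 13 family inet address 10.0.13.1/30
set routing-options autonomous-system 64512
set protocols bgp group as64513 type external
set protocols bgp group as64513 peer-as 64513
set protocols bgp group as64513 neighbor 10.0.13.2
```

The remaining nineteen groups would follow the same pattern with the peer AS and addressing incremented per unit.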
Also, it's now different again, because it's Linux KVM running a FreeBSD
guest. A lot of things are very slow now due to the Linux=>FreeBSD
limit.
And then again different with Junos Evolved.
But certainly the LC CPU doing something else and needing to send ICMP
towards the RE will cause some jitter. Of course
For those of you interested in all the details around how the transit as
well as host-inbound and host-outbound traffic is handled on the Juniper MX3D
Trio architecture, I'd recommend reading the following FREE book in its
entirety.
> Raphael Mazelier
> Sent: Tuesday, April 16, 2019 3:51 PM
>
> On 16/04/2019 15:52, Saku Ytti wrote:
> > On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
> >
> >> Can you confirm that rpd is answering ICMP echo requests? I find this
> >> surprising as I would have expected the FreeBSD kernel
On 16/04/2019 15:52, Saku Ytti wrote:
On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
Can you confirm that rpd is answering ICMP echo requests? I find this
surprising as I would have expected the FreeBSD kernel to do that.
You're probably right. So it is more likely that the LC CPU is busy
On Tue, 16 Apr 2019 at 16:35, Vincent Bernat wrote:
> Can you confirm that rpd is answering ICMP echo requests? I find this
> surprising as I would have expected the FreeBSD kernel to do that.
You're probably right. So it is more likely that the LC CPU is busy doing
the programming RPD asked it to do,
❦ 15 April 2019 15:09 +03, Saku Ytti:
>> I'm afraid this is not a valid test to prove the effect of relative process
>> priorities within the RE, doing this you're slowing down the clock on the
>> complete RE simulation as a whole (all simulated processes slowed down
>> equally).
> ..
>
>>
On Mon, 15 Apr 2019 at 14:37, wrote:
> I'm afraid this is not a valid test to prove the effect of relative process
> priorities within the RE, doing this you're slowing down the clock on the
> complete RE simulation as a whole (all simulated processes slowed down
> equally).
..
> Again this
> Martin T
> Sent: Monday, April 15, 2019 11:47 AM
>
> Hi Saku,
>
> thanks for the reply!
>
> > > This is well-known behavior and documented in several KB articles.
> > > However, what exactly causes this?
> >
> > I think it's just the CPU doing something else before being given time to do the ICMP
> > packets.
Hi Saku,
thanks for the reply!
> > This is well-known behavior and documented in several KB articles.
> > However, what exactly causes this?
>
> I think it's just the CPU doing something else before being given time to do the ICMP
> packets. Like busy running some RPD task.
I also thought that it has something to
Hey Martin,
> This is well-known behavior and documented in several KB articles.
> However, what exactly causes this?
I think it's just the CPU doing something else before being given time to do
the ICMP packets. Like busy running some RPD task. You are facing an uphill
battle if you need to rely on precise ICMP
Hi,
The ping utility (or an "icmp-ping" RPM probe without hardware timestamping) in
Junos shows occasional high RTT even when pinging, for example,
directly connected devices in the LAN. Example:
64 bytes from 10.55.55.1: icmp_seq=40 ttl=64 time=0.441 ms
64 bytes from 10.55.55.1: icmp_seq=41 ttl=64
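To quantify how often such spikes occur over a long ping run, the captured output can be parsed offline. A minimal sketch in Python; the regex assumes the Unix-style ping output format shown above, and the 5 ms threshold is an arbitrary example value:

```python
import re

# Matches lines such as:
#   64 bytes from 10.55.55.1: icmp_seq=40 ttl=64 time=0.441 ms
# Lines without a time field (e.g. truncated output) are skipped.
LINE_RE = re.compile(r"icmp_seq=(\d+) ttl=\d+ time=([\d.]+) ms")

def rtt_outliers(ping_output, threshold_ms=5.0):
    """Return (icmp_seq, rtt_ms) pairs whose RTT exceeds threshold_ms."""
    outliers = []
    for line in ping_output.splitlines():
        m = LINE_RE.search(line)
        if m:
            seq, rtt = int(m.group(1)), float(m.group(2))
            if rtt > threshold_ms:
                outliers.append((seq, rtt))
    return outliers
```

Feeding it the saved output of a long run (e.g. `ping 10.55.55.1 count 1000` redirected to a file) gives the sequence numbers of the spikes, which can then be correlated against route-churn or CPU-usage timestamps.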