FWIW, I am seeing some very similar behavior with 10.3R1, which I'm going to be 
ticketing next week....

 


On Sep 12, 2010, at 08:46, Matthias Brumm wrote:

> Hi!
> 
> Just a short information, to close this matter:
> 
> ATAC was able to reproduce this problem in lab. Upgrade to 10.2 has 
> eliminated the problem.
> 
> Matthias
> 
> On 07.09.10 01:49, Michael Damkot wrote:
>> This sounds like a bug; have you contacted JTAC?
>> 
>> 
>> On Sep 4, 2010, at 08:44, Matthias Brumm wrote:
>> 
>>> Hi!
>>> 
>>> Sorry to write again... I may have found a clue:
>>> 
>>> After committing, this happens:
>>> 
>>> PID USERNAME  THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
>>> 1058 root        1 132    0   607M   607M RUN     21:07 91.94% flowd_hm
>>> 
>>> This is a system without traffic!
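>>> 
>>> (For reference, and without assuming any particular output, the usual
>>> Junos checks at this point would be:
>>> 
>>>   show system processes extensive | match flowd
>>>   show security flow session summary
>>> 
>>> the first to confirm it really is flowd burning the CPU after the commit,
>>> the second to see whether the flow session table is growing even though
>>> the box carries no traffic.)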
>>> 
>>> On the main router:
>>> 64 bytes from x.x.x.x: icmp_seq=917 ttl=64 time=4.748 ms
>>> 64 bytes from x.x.x.x: icmp_seq=918 ttl=64 time=4.402 ms
>>> 64 bytes from x.x.x.x: icmp_seq=919 ttl=64 time=4.484 ms
>>> 64 bytes from x.x.x.x: icmp_seq=920 ttl=64 time=4.658 ms
>>> 64 bytes from x.x.x.x: icmp_seq=921 ttl=64 time=4.411 ms
>>> 64 bytes from x.x.x.x: icmp_seq=922 ttl=64 time=4.746 ms
>>> 64 bytes from x.x.x.x: icmp_seq=923 ttl=64 time=4.607 ms
>>> 64 bytes from x.x.x.x: icmp_seq=924 ttl=64 time=4.604 ms
>>> 64 bytes from x.x.x.x: icmp_seq=925 ttl=64 time=11.607 ms
>>> 64 bytes from x.x.x.x: icmp_seq=926 ttl=64 time=50.762 ms
>>> 64 bytes from x.x.x.x: icmp_seq=927 ttl=64 time=5.482 ms
>>> 64 bytes from x.x.x.x: icmp_seq=928 ttl=64 time=15.932 ms
>>> 64 bytes from x.x.x.x: icmp_seq=929 ttl=64 time=14.699 ms
>>> 64 bytes from x.x.x.x: icmp_seq=930 ttl=64 time=17.192 ms
>>> 
>>> It stays like this until the ping stops getting replies and the BGP
>>> session goes down.
>>> 
>>> Back to a pre-flowd release? Should I use packet-based routing only? We are
>>> using the Js only as routers.
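>>> 
>>> If the boxes really only route, one possible workaround (a sketch only;
>>> the filter name and interface are placeholders, and it assumes the
>>> selective packet-mode filter action is supported in this release) is to
>>> bypass flow processing for IPv4 with a packet-mode firewall filter:
>>> 
>>>   set firewall family inet filter bypass-flow term all then packet-mode
>>>   set firewall family inet filter bypass-flow term all then accept
>>>   set interfaces ge-0/0/0 unit 0 family inet filter input bypass-flow
>>> 
>>> Traffic matching that filter is then forwarded packet-based instead of
>>> going through flowd.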
>>> 
>>> Matthias
>>> 
>>> On 04.09.10 12:50, Matthias Brumm wrote:
>>>> Hi!
>>>> 
>>>> We have a very strange problem on two chassis clusters with 10.0R3.10
>>>> (will try updating to R4.7 today).
>>>> 
>>>> One chassis cluster (2x J6350) is our main system.
>>>> The other (2x J4350) is located at our customer's site.
>>>> 
>>>> The two clusters speak BGP with each other. For the customer
>>>> system, this is the only BGP session. Our main system has a full BGP
>>>> mesh to our other locations and edge systems. To understand the
>>>> problem, I will reduce this to three BGP sessions:
>>>> 
>>>> A) BGP session to AMS-IX over VLAN 1
>>>> B) BGP session to ECIX over VLAN 1
>>>> C) BGP session to ECIX over VLAN 2
>>>> 
>>>> Two switches are involved. VLAN 1 is configured on both switches to make
>>>> it available in Amsterdam and Düsseldorf. VLAN 2 is configured only on
>>>> the switch facing Düsseldorf, as a backup in case the first
>>>> switch dies.
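>>>> 
>>>> As a rough sketch of that layout (reth number, VLAN IDs, addresses and
>>>> group names are all placeholders, not our actual configuration):
>>>> 
>>>>   set interfaces reth0 vlan-tagging
>>>>   set interfaces reth0 unit 10 vlan-id 10 family inet address 192.0.2.1/24
>>>>   set interfaces reth0 unit 20 vlan-id 20 family inet address 198.51.100.1/24
>>>>   set protocols bgp group ix-vlan1 neighbor 192.0.2.254 peer-as 64500
>>>>   set protocols bgp group ix-vlan2 neighbor 198.51.100.254 peer-as 64500
>>>> 
>>>> where unit 10 stands for the VLAN carrying the AMS-IX and ECIX sessions
>>>> (VLAN 1 above) and unit 20 for the ECIX backup VLAN (VLAN 2).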
>>>> 
>>>> The day before yesterday, I started two pings to the ECIX router: one
>>>> from my local workstation, the other from the main cluster.
>>>> 
>>>> If I configure something on the redundant interfaces, then as soon as I
>>>> commit, the first ping stays normal while the second jumps to +30 ms
>>>> (normally around 6 ms). 2-3 minutes later, both pings stop and the BGP
>>>> session drops. It is the only BGP session that is dropped, due to hold
>>>> time expiration. After a few minutes, the pings and the BGP session come
>>>> back. Every other BGP session, even the one to Düsseldorf over VLAN 2,
>>>> stays up.
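>>>> 
>>>> For completeness, since the drop is a hold-time expiration: the negotiated
>>>> hold time can be checked and, as a stopgap while debugging, raised on just
>>>> that group (the group name below is a placeholder, and the value actually
>>>> used still depends on what the peer offers):
>>>> 
>>>>   show bgp neighbor x.x.x.x | match Holdtime
>>>>   set protocols bgp group ecix-peering hold-time 180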
>>>> 
>>>> I switched the main load to Düsseldorf over VLAN 2. That time, that BGP
>>>> session was dropped while the other stayed up. The session to Düsseldorf
>>>> carries the main load with around 260,000 prefixes.
>>>> 
>>>> Matthias
> 


_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
