FWIW, I am seeing some very similar behavior with 10.3R1, which I'm going to be
ticketing next week.
On Sep 12, 2010, at 08:46 , Matthias Brumm wrote:
Hi!
Just a short note to close this matter:
ATAC was able to reproduce this problem in the lab. Upgrading to 10.2
eliminated the problem.
Matthias
On 07.09.10 01:49, Michael Damkot wrote:
This sounds like a bug; have you contacted J-TAC?
On Sep 4, 2010, at 08:44, Matthias Brumm wrote:
Of course, nothing so far. I will get our sales person to look after the
ticket; perhaps he can ring some bells to get it escalated.
Matthias
On 07.09.10 01:49, Michael Damkot wrote:
This sounds like a bug; have you contacted J-TAC?
On Sep 4, 2010, at 08:44 , Matthias Brumm wrote:
Hi!
Sorry to write again; I may have found a clue.
After committing, this happens:
 PID  USERNAME  THR  PRI  NICE  SIZE   RES   STATE  TIME   WCPU    COMMAND
1058  root        1  132     0  607M  607M   RUN    21:07  91.94%  flowd_hm
This is a system without traffic!
On the main router:
64 bytes from
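For readers hitting the same symptom: the runaway flowd in the top output above can also be checked from the Junos CLI. These are standard operational commands on SRX/J-series platforms (exact output formatting varies by release); the `| match` pipe just filters the listing:

```
> show system processes extensive | match flowd    # per-process CPU and memory, filtered to flowd
> show chassis cluster status                       # redundancy-group state of both nodes
> monitor interface traffic                         # confirm whether the box is really passing no traffic
```

If flowd stays pinned near 100% WCPU on an idle system, as in the output quoted above, that points at the daemon itself rather than at forwarding load.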
Hi!
Updating the customer system (currently not in use) was catastrophic.
Not only does the cluster seem not to come up correctly, I also have
difficulty reaching the system. The flowd process shows high CPU usage.
Might that have something to do with it?
I have not written why I wrote abo
Hi!
We have a very strange problem on two chassis clusters running 10.0R3.10
(I will try updating to R4.7 today).
One chassis cluster (2x J6350) is our main system.
The other (2x J4350) is located at our customer's site.
The two clusters speak BGP with each other. For the cust
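To make the topology concrete, here is a minimal configuration sketch of the kind of setup described above: a chassis cluster with a redundant Ethernet interface peering EBGP with the remote cluster. All interface names, addresses, and AS numbers are hypothetical illustrations, not values from the original report:

```
# Hypothetical sketch only; reth0, 192.0.2.x, and AS 64512 are invented.
set chassis cluster reth-count 2
set interfaces ge-0/0/2 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.0.2.1/30
set protocols bgp group customer-site type external
set protocols bgp group customer-site peer-as 64512
set protocols bgp group customer-site neighbor 192.0.2.2
```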