Re: [j-nsp] root@re1 as root: cmd='/sbin/sysctl net.inet.ip_control_plane messages

2011-12-05 Thread sthaug
> We're looking at moving to 10.4R8.5 as it's now out. Is that 
> what you're on? Anyone else had any experience with it?

Running 10.4R8.5 on one MX80 here for the last three days. LDP-based MPLS
PE, no fancy features used. So far everything seems to work.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] CoS error

2011-12-05 Thread STEVEN

The IQ2 PIC supports only one scheduler set to "strict-high", "high" or "medium-high" per interface.

Please modify your configuration accordingly.




STEVEN via foxmail

Sender: Mark Tinka
Date: Tuesday, December 6, 2011, 12:26 PM
To: juniper-nsp
Subject: Re: [j-nsp] CoS error
On Sunday, December 04, 2011 07:56:01 PM Muhammad Adnan 
Mohsin wrote:

> Hi Experts,
> I am getting the following error while applying the CoS
> configuration.
> 
> [edit class-of-service interfaces]
> 
>   'ge-1/0/2'
> 
> More than one scheduler is configured as
> "strict-high" or "high" or "medium-high" in sch_mp-core
> for ge-1/0/2. Ifd ge-1/0/2 supports only one scheduler
> with "strict-high" or "high" or "medium-high".
> 
> error: configuration check-out failed
> 
> It's an M320 router with an 8-port IQ2 PIC. The PIC details
> are below.

We hit a similar issue in the past, on an M320 with an IQ2 
10Gbps PIC as well.

Basically, you cannot configure more than one 'strict-high' 
or 'high' scheduler for an interface; the commit check only 
accepts one such scheduler per interface.

In a way, it sort of makes sense, since there's no point in 
scheduling traffic with priority on multiple queues on the 
same interface. It would defeat the purpose :-). Also, the 
only difference between 'high' and 'strict-high' is that 
'high' respects the 'transmit-rate', while 'strict-high' 
does not.

I can't recall whether the behaviour was different on the 
MX, but we decided to go with the least common denominator 
so we can have configuration and behaviour consistency 
across the entire backbone.

Cheers,

Mark.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] root@re1 as root: cmd='/sbin/sysctl net.inet.ip_control_plane messages

2011-12-05 Thread Mark Tinka
On Tuesday, December 06, 2011 12:37:36 PM Keegan Holley 
wrote:

> 10.4R5.5 on 1G and 10G DPCE's.  Our MPC hardware doesn't
> seem to log this message either.

Yes, that was my next question. We have MPC's here.

The DPC's we have are running 10.2 (yes, I know... we've 
been waiting for 10.4R8.5), so this message isn't logged 
there.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] root@re1 as root: cmd='/sbin/sysctl net.inet.ip_control_plane messages

2011-12-05 Thread Keegan Holley
10.4R5.5 on 1G and 10G DPCE's.  Our MPC hardware doesn't seem to log this
message either.

Thanks.


2011/12/5 Mark Tinka 

> On Monday, December 05, 2011 12:39:54 AM Keegan Holley
> wrote:
>
> > I'm seeing these come in once every few seconds after
> > upgrading some M/MX boxes to 10.4.  Has anyone else run
> > into this problem?  I don't personally agree with it but
> > we log any any right now and filter on the syslog
> servers.  I'll probably open a JTAC case on Monday, just
> > wondering if anyone else had run into this and solved
> > it.
>
> Which flavour of 10.4? We've been running 10.4R4.5 since it
> started shipping. We've come across all sorts of logs, but
> not this one.
>
> We're looking at moving to 10.4R8.5 as it's now out. Is that
> what you're on? Anyone else had any experience with it?
>
> MX480 with MPC2 3D cards here.
>
> Mark.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] root@re1 as root: cmd='/sbin/sysctl net.inet.ip_control_plane messages

2011-12-05 Thread Mark Tinka
On Monday, December 05, 2011 12:39:54 AM Keegan Holley 
wrote:

> I'm seeing these come in once every few seconds after
> upgrading some M/MX boxes to 10.4.  Has anyone else run
> into this problem?  I don't personally agree with it but
> we log any any right now and filter on the syslog
> servers.  I'll probably open a JTAC case on Monday, just
> wondering if anyone else had run into this and solved
> it.

Which flavour of 10.4? We've been running 10.4R4.5 since it 
started shipping. We've come across all sorts of logs, but 
not this one.

We're looking at moving to 10.4R8.5 as it's now out. Is that 
what you're on? Anyone else had any experience with it?

MX480 with MPC2 3D cards here.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] CoS error

2011-12-05 Thread Mark Tinka
On Sunday, December 04, 2011 07:56:01 PM Muhammad Adnan 
Mohsin wrote:

> Hi Experts,
> I am getting the following error while applying the CoS
> configuration.
> 
> [edit class-of-service interfaces]
> 
>   'ge-1/0/2'
> 
> More than one scheduler is configured as
> "strict-high" or "high" or "medium-high" in sch_mp-core
> for ge-1/0/2. Ifd ge-1/0/2 supports only one scheduler
> with "strict-high" or "high" or "medium-high".
> 
> error: configuration check-out failed
> 
> It's an M320 router with an 8-port IQ2 PIC. The PIC details
> are below.

We hit a similar issue in the past, on an M320 with an IQ2 
10Gbps PIC as well.

Basically, you cannot configure more than one 'strict-high' 
or 'high' scheduler for an interface; the commit check only 
accepts one such scheduler per interface.

In a way, it sort of makes sense, since there's no point in 
scheduling traffic with priority on multiple queues on the 
same interface. It would defeat the purpose :-). Also, the 
only difference between 'high' and 'strict-high' is that 
'high' respects the 'transmit-rate', while 'strict-high' 
does not.
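
To make this concrete, here's a rough sketch of a scheduler 
setup that passes the check. Only the map name 'sch_mp-core' 
is taken from the original error; the scheduler names and 
forwarding-class assignments below are made up for 
illustration. One queue gets 'strict-high', everything else 
stays at 'low':

class-of-service {
    schedulers {
        sch-ef {
            /* the single strict-high queue per interface;
               transmit-rate is not respected at this priority */
            transmit-rate percent 10;
            buffer-size percent 10;
            priority strict-high;
        }
        sch-af {
            transmit-rate percent 60;
            buffer-size percent 60;
            priority low;
        }
        sch-be {
            transmit-rate remainder;
            buffer-size remainder;
            priority low;
        }
    }
    scheduler-maps {
        sch_mp-core {
            forwarding-class expedited-forwarding scheduler sch-ef;
            forwarding-class assured-forwarding scheduler sch-af;
            forwarding-class best-effort scheduler sch-be;
        }
    }
}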

I can't recall whether the behaviour was different on the 
MX, but we decided to go with the least common denominator 
so we can have configuration and behaviour consistency 
across the entire backbone.

Cheers,

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Resource Temporarily Unavailable - Juniper MX

2011-12-05 Thread Paul Stewart
Thanks - that actually makes a lot of sense ;)  We don't see any load to
speak of on our side but it does typically occur when a BGP session is reset
and we're sending out a full table to a customer...

Appreciate it,

Paul


-Original Message-
From: Alexandre Snarskii [mailto:s...@snar.spb.ru] 
Sent: Monday, December 05, 2011 10:09 AM
To: Paul Stewart
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Resource Temporarily Unavailable - Juniper MX

On Mon, Dec 05, 2011 at 07:48:22AM -0500, Paul Stewart wrote:
> Can anyone shed some light on these log messages?
> 
>  
> 
> Nov 30 04:48:21  core2.toronto1 rpd[1359]: bgp_send: sending 19 bytes to
> xx.xxx.52.50 (External AS x) blocked (no spooling requested): Resource
> temporarily unavailable
> 
> We get these every so often .. Presuming it has to do with load on 
> the system for a short period of time?

More likely it's caused by load on the remote system (or link congestion, or
whatever other reason the remote system can't receive updates fast enough).
Then, when the socket buffer is full of unacknowledged data, your system
tries to send another update/keepalive message and the write(2) syscall
returns EAGAIN (not actually an error, just an indication of 'no data sent,
try again later'), which translates to the "Resource temporarily
unavailable" message. 

> 
> Platform is Juniper MX boxes running 10.0R3.10
> 
>  
> 
> Thanks,
> 
>  
> 
> Paul
> 
>  
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> https://puck.nether.net/mailman/listinfo/juniper-nsp

--
In theory, there is no difference between theory and practice. 
But, in practice, there is. 


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Resource Temporarily Unavailable - Juniper MX

2011-12-05 Thread Alexandre Snarskii
On Mon, Dec 05, 2011 at 07:48:22AM -0500, Paul Stewart wrote:
> Can anyone shed some light on these log messages?
> 
>  
> 
> Nov 30 04:48:21  core2.toronto1 rpd[1359]: bgp_send: sending 19 bytes to
> xx.xxx.52.50 (External AS x) blocked (no spooling requested): Resource
> temporarily unavailable
> 
> We get these every so often .. Presuming it has to do with load on the
> system for a short period of time?

More likely it's caused by load on the remote system (or link
congestion, or whatever other reason the remote system can't receive
updates fast enough). Then, when the socket buffer is full of
unacknowledged data, your system tries to send another update/keepalive
message and the write(2) syscall returns EAGAIN (not actually an error,
just an indication of 'no data sent, try again later'), which
translates to the "Resource temporarily unavailable" message. 
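
As a rough standalone illustration (plain C, nothing to do with rpd's
actual code): fill the send buffer of a non-blocking socket and write(2)
starts returning EAGAIN, whose strerror() text is exactly the "Resource
temporarily unavailable" you see in the log:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char buf[4096];
    ssize_t n;

    /* A connected socket pair stands in for the BGP TCP session;
     * nobody reads from sv[1], so sv[0]'s send buffer fills up. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;
    fcntl(sv[0], F_SETFL, O_NONBLOCK);

    memset(buf, 0, sizeof(buf));
    for (;;) {
        n = write(sv[0], buf, sizeof(buf));
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            /* This is the condition rpd is reporting: no data sent,
             * try again later once the peer drains its side. */
            printf("write blocked: %s\n", strerror(errno));
            break;
        }
        if (n < 0)
            return 1;
    }
    return 0;
}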

> 
> Platform is Juniper MX boxes running 10.0R3.10
> 
>  
> 
> Thanks,
> 
>  
> 
> Paul
> 
>  
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

-- 
In theory, there is no difference between theory and practice. 
But, in practice, there is. 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Resource Temporarily Unavailable - Juniper MX

2011-12-05 Thread Michael Hare
I would appreciate any on-list replies as well.  We also see this, 
running 10.4R on MX gear.  Given other outstanding cases, I haven't 
burnt a JTAC resource on this one.


-Michael

On 12/5/2011 6:48 AM, Paul Stewart wrote:

Can anyone shed some light on these log messages?



Nov 30 04:48:21  core2.toronto1 rpd[1359]: bgp_send: sending 19 bytes to
xx.xxx.52.50 (External AS x) blocked (no spooling requested): Resource
temporarily unavailable



We get these every so often .. Presuming it has to do with load on the
system for a short period of time?



Platform is Juniper MX boxes running 10.0R3.10



Thanks,



Paul




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Resource Temporarily Unavailable - Juniper MX

2011-12-05 Thread Paul Stewart
Can anyone shed some light on these log messages?

 

Nov 30 04:48:21  core2.toronto1 rpd[1359]: bgp_send: sending 19 bytes to
xx.xxx.52.50 (External AS x) blocked (no spooling requested): Resource
temporarily unavailable

 

We get these every so often .. Presuming it has to do with load on the
system for a short period of time?

 

Platform is Juniper MX boxes running 10.0R3.10

 

Thanks,

 

Paul

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp