Re: [j-nsp] DDoS protection for J-series and SRX

2013-04-11 Thread Mark Kamichoff
On Thu, Apr 11, 2013 at 10:57:55AM +0200, James Howlett wrote:
 I have a small network with a J6350 as a border router (BGP) and two
 SRX240Hs in a cluster.  For the past few days my network has been the
 victim of DDoS attacks, the majority of them high-pps attacks.
 Are there any methods to protect my network against such attacks?  My
 J-series can handle quite a lot of pps, but my SRXes die after getting
 more than 8000 new sessions per second.
 
 Is there anything I can do here?

Definitely SCREENs, as other folks have said.
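
If you haven't set them up yet, a minimal SCREEN configuration might
look something like the following (zone name and thresholds are made
up; tune them to your normal traffic levels):

set security screen ids-option untrust-screen icmp flood threshold 1000
set security screen ids-option untrust-screen udp flood threshold 5000
set security screen ids-option untrust-screen tcp syn-flood attack-threshold 1000
set security screen ids-option untrust-screen tcp syn-flood source-threshold 100
set security zones security-zone untrust screen untrust-screen

The syn-flood thresholds are the ones that matter most for high-pps
session-setup attacks like yours.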

However, in the corner case where you're getting traffic for a
particular service or destination IP that isn't in use (maybe not
applicable in this instance), a quick way of keeping that traffic from
ever hitting the flow module is a firewall filter with a discard action.
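
For example, something along these lines on the J6350's upstream
interface (filter name, interface, and addresses are all made up) would
drop the junk before the SRXes ever see it:

set firewall family inet filter ddos-drop term junk from destination-address 203.0.113.10/32
set firewall family inet filter ddos-drop term junk then count junk-pps
set firewall family inet filter ddos-drop term junk then discard
set firewall family inet filter ddos-drop term default then accept
set interfaces ge-0/0/0 unit 0 family inet filter input ddos-drop

The counter is optional, but it lets you watch the attack volume with
show firewall filter ddos-drop.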

Just something to keep in your toolbox...

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


signature.asc
Description: Digital signature
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] IPv6 VRRP issue on SRX100

2012-12-29 Thread Mark Kamichoff
On Sat, Dec 29, 2012 at 09:28:53PM +0700, Try Chhay wrote:
 Problem: *Both SRX100s are in the IPv6 VRRP master role.*
 
 The topology is that two SRX100s are connected to a Cisco 2950 switch.
 After configuring IPv4 and IPv6 VRRP, IPv4 VRRP works as expected, but
 IPv6 VRRP does not.  A PC is able to ping the IPv6 address on each
 SRX100, but it is unable to ping the virtual IPv6 address.  Please
 advise or comment on how to get IPv6 VRRP working on the SRX100.
 Thanks!

The knee-jerk reaction is to tell you to configure host-inbound-traffic
correctly to allow VRRP.  However, I remember running into a similar
situation several months back on a pair of SRX210HEs, and the
conclusion was that IPv6 VRRP is not supported on the SRX.  That said,
I believe it was only because we were running the boxes in flow mode.
It's possible switching to packet mode may work (I can't tell from the
configuration what mode you're in), although that may not be an
acceptable solution in your environment.
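
For completeness, the host-inbound-traffic bit I mean is the following,
assuming the VLAN interface lives in a zone called trust:

set security zones security-zone trust interfaces vlan.90 host-inbound-traffic protocols vrrp

Worth double-checking before chasing the flow-vs-packet-mode angle.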

- Mark

 *SRX100-A# show interfaces vlan unit 90
 family inet {
 address 192.168.147.2/24 {
 vrrp-group 1 {
 virtual-address 192.168.147.1;
 priority 110;
 preempt;
 accept-data;
 }
 }
 }
 family inet6 {
 address fe80::2/64;
 address 2001::2/64 {
 vrrp-inet6-group 2 {
 virtual-inet6-address 2001::1;
 virtual-link-local-address fe80::1;
 priority 110;
 preempt;
 accept-data;
 }
 }
 }
 
 
 SRX100-B# show interfaces vlan unit 90
 family inet {
 address 192.168.147.3/24 {
 vrrp-group 1 {
 virtual-address 192.168.147.1;
 preempt;
 accept-data;
 }
 }
 }
 family inet6 {
 address fe80::3/64;
 address 2001::3/64 {
 vrrp-inet6-group 2 {
 virtual-inet6-address 2001::1;
 virtual-link-local-address fe80::1;
 priority 100;
 preempt;
 accept-data;
 }
 }
 }
 Result:
 SRX100-A# run show vrrp
 Interface State   Group   VR state VR Mode   Timer    Type   Address
 vlan.90   up  1   master   Active  A  0.196 lcl
 192.168.147.2
 vip
 192.168.147.1
 vlan.90   up  2   master   Active  A  0.369 lcl
 2001::2
 vip
 fe80::1
 vip
 2001::1
 
 SRX100# run show vrrp
 Interface State   Group   VR state VR Mode   Timer    Type   Address
 vlan.90   up  1   backup   Active  D  3.310 lcl
 192.168.147.3
 vip
 192.168.147.1
 mas
 192.168.147.2
 vlan.90   up  2   master   Active  A  0.734 lcl
 2001::3
 vip
 fe80::1
 vip
 2001::1
 *

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


Re: [j-nsp] Securing management access to Juniper gear

2011-09-02 Thread Mark Kamichoff
Hi Matthew -

On Fri, Sep 02, 2011 at 02:28:03PM -0400, Matthew S. Crocker wrote:
 What is the recommended/preferred way to secure the SSH & Web access
 to a piece of JunOS gear?  I have a couple routers (MX80) and switches
 (EX4200) that are remote.  Can I attach packet filters to the system
 services (HTTP, SSH)?  Do I attach the packet filter to the lo0
 interface?

You typically attach a firewall filter to the lo0 interface to secure
the routing engine.
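
As a rough (and deliberately incomplete) sketch, assuming your
management hosts live in 198.51.100.0/24:

set firewall family inet filter protect-re term mgmt from source-address 198.51.100.0/24
set firewall family inet filter protect-re term mgmt from protocol tcp
set firewall family inet filter protect-re term mgmt from destination-port [ ssh https ]
set firewall family inet filter protect-re term mgmt then accept
set firewall family inet filter protect-re term everything-else then discard
set interfaces lo0 unit 0 family inet filter input protect-re

A real filter also needs terms for your routing protocols, NTP, DNS,
and so on, or you'll break them the moment you commit.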

For more information I highly recommend the following day one book,
which goes over this in detail:

http://www.juniper.net/us/en/community/junos/training-certification/day-one/fundamentals-series/securing-routing-engine/

I'm not an EX guru, but I believe the same concepts can be applied.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Taking full BGP table and filtering on SRX210

2011-01-25 Thread Mark Kamichoff
On Mon, Jan 24, 2011 at 06:02:13PM +, Maqbool Hashim wrote:
 I'm guessing most people are going to say this is a bad idea, but I
 wasn't sure so I thought I'd ask.  Basically if I was to receive a
 full BGP routing table and filter the routes on the SRX210, would the
 SRX be able to handle this operation from a resource perspective?  I'm
 not planning on installing all the routes I'm receiving, only a
 selection.

If you're talking about the IPv4 DFZ, I'd agree with most people in
this case, and avoid it.

BGP routes on Junos are stored in the adjacency-rib-in table (show route
receive-protocol bgp $peer) even if you filter them from being installed
into inet.0 or inet6.0.  Even though the SRX210H has 1GiB of RAM, the
majority of it is reserved for other things (UTM, etc.) rather than BGP
routes.
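
One knob worth knowing about here is BGP's keep none, which tells Junos
not to retain routes rejected by import policy in the Adj-RIB-In (at
the cost of needing a route refresh if you later loosen the policy).
Something like this, with made-up group/neighbor names:

set protocols bgp group upstream neighbor 192.0.2.1 keep none

Even with that, I'd be wary of pointing a full table at a 210.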

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Juniper SRX and ssh freeze

2010-12-22 Thread Mark Kamichoff
On Wed, Dec 22, 2010 at 07:43:30PM +0100, Maciej Jan Broniarz wrote:
 {primary:node0}
 p...@orb show configuration applications
 application junos-ssh inactivity-timeout 3600;
 
 Does junos-ssh apply to any SSH traffic - both traffic to the SRX
 itself and traffic to the servers behind the SRX firewall?

In my experience, both.

(unless you're connected via the fxp0 interface in a cluster, which I
believe is excluded from the flow/state tracking)

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Juniper SRX and ssh freeze

2010-12-20 Thread Mark Kamichoff
On Mon, Dec 20, 2010 at 10:18:27AM -0600, Chris Adams wrote:
 I don't know about the SRX, but I know with the SSG, the ScreenOS
 default timeout for TCP sessions was way too low (IIRC something like
 5 minutes) and would cause that.  I turned on SSH keepalives to avoid
 the timeout.

Yep, the SRX does the same thing with regards to timeouts.  The default
timeout for SSH is 30 minutes, but you can extend it by adding a custom
inactivity-timeout to the junos-ssh application:

{primary:node0}
p...@orb show configuration applications 
application junos-ssh inactivity-timeout 3600;

The above configuration increases the inactivity timeout to an hour.
For me, I had one session built before I made that change, and one after
(look at the timeout value):

{primary:node0}
p...@orb show security flow session destination-prefix 10.3.8.18/32 node 0 
node0:
--

Session ID: 8824, Policy name: inbound/4, State: Active, Timeout: 1796, Valid
  In: 10.3.7.149/63197 --> 10.3.8.18/22;tcp, If: reth0.0, Pkts: 61, Bytes: 6901
  Out: 10.3.8.18/22 --> 10.3.7.149/63197;tcp, If: reth2.0, Pkts: 37, Bytes: 9556

Session ID: 8832, Policy name: inbound/4, State: Active, Timeout: 3594, Valid
  In: 10.3.7.149/63198 --> 10.3.8.18/22;tcp, If: reth0.0, Pkts: 55, Bytes: 6445
  Out: 10.3.8.18/22 --> 10.3.7.149/63198;tcp, If: reth2.0, Pkts: 34, Bytes: 7288
Total sessions: 2

Alternatively, you can set the tcp-rst option on the appropriate
zone(s), which causes the SRX to send a TCP reset when data arrives for
an SSH session that has already timed out, so the client disconnects
immediately instead of hanging:

{primary:node0}[edit]
p...@orb# show security zones security-zone trust   
tcp-rst;
[...]

Hope this helps!

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] SRX3400: DNS ALG on 10.2R1

2010-08-12 Thread Mark Kamichoff
On Thu, Aug 12, 2010 at 04:01:47PM -0700, Quoc Hoang wrote:
 IMHO, ALGS should be disabled by default.

From what I've seen, Juniper started disabling over half of the ALGs in
recent ScreenOS releases (probably the ones that JTAC has indicated
cause more problems than they solve).

I'm a little surprised they haven't done the same with the SRXes.  A
default install on my 210 w/10.2R2.11 shows all ALGs enabled except
IKE-ESP, strangely enough.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] SRX3400/3600 Stable Code Recommendations?

2010-07-24 Thread Mark Kamichoff
On Sat, Jul 24, 2010 at 10:00:36PM +0100, Mike Williams wrote:
 Hi, could you possibly expand on lacks V6 please?

The one big change in 10.2 for the SRX platforms is the addition of IPv6
flow mode.  The SRXes will still pass IPv6 traffic in earlier releases,
but without any policy evaluation.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


Re: [j-nsp] SRX3400/3600 Stable Code Recommendations?

2010-07-24 Thread Mark Kamichoff
On Sat, Jul 24, 2010 at 05:31:13PM -0400, Scott T. Cameron wrote:
 Only on the low-end models; the 3400 and higher have no support.

Oh, that's unfortunate.  I've only had experience with the smaller
SRXes, where the set security forwarding-options family inet6 mode
packet-based command was all that was needed to enable IPv6.
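
For anyone searching the archives later, the full sequence on the small
boxes is roughly:

set security forwarding-options family inet6 mode packet-based

followed by a commit and a reboot, since the forwarding-mode change
doesn't take effect until the box restarts.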

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/


Re: [j-nsp] Is putting an IP on an l2circuit possible?

2010-07-22 Thread Mark Kamichoff
Hi Jason - 

On Thu, Jul 22, 2010 at 01:49:55PM -0400, Jason Lixfeld wrote:
 I'm trying to test some C to J EoMPLS interoperability, but the only J
 box that I have doesn't have any free interfaces on it, so I have
 nowhere to connect a test CE and use the CE to ping the far end.  Is
 there any way to stick a subnet on to an l2circuit directly instead of
 having to use a physical interface and a physical CE?

I've done this on an MX by using logical routers and lt interfaces to
connect them.  You can specify an lt interface (with vlan-ccc encap.)
under the l2circuit configuration and assign another lt interface to a
logical router acting as the CE on the same box with vlan encapsulation
and an IP address.  Assign VLAN IDs and match up the peer-units for the
lt interfaces, and you're good to go.
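
A rough sketch of the pairing (unit numbers, VLAN ID, logical-system
name, and neighbor address are all made up):

set interfaces lt-2/0/10 unit 0 encapsulation vlan-ccc
set interfaces lt-2/0/10 unit 0 vlan-id 600
set interfaces lt-2/0/10 unit 0 peer-unit 1
set protocols l2circuit neighbor 192.0.2.2 interface lt-2/0/10.0 virtual-circuit-id 600
set logical-systems CE interfaces lt-2/0/10 unit 1 encapsulation vlan
set logical-systems CE interfaces lt-2/0/10 unit 1 vlan-id 600
set logical-systems CE interfaces lt-2/0/10 unit 1 peer-unit 0
set logical-systems CE interfaces lt-2/0/10 unit 1 family inet address 192.0.2.5/30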

That being said, this only works on the MX because it supports logical
routers and has the tunnel services PIC built-in.

You might be able to do the same thing on a J with an additional virtual
router instead of a full-blown logical router, but I haven't tried it.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Juniper support site and Chrome

2010-06-06 Thread Mark Kamichoff
On Sun, Jun 06, 2010 at 03:04:04PM -0500, Richard A Steenbergen wrote:
 Has anybody else noticed problems with the Juniper support website and
 the Google Chrome browser? At least for the last couple days (and
 maybe longer) it kicks me back to the main screen as soon as I try to
 type anything in any box when updating a case. It works fine in
 firefox, but it seems like they did something to break it in chrome
 (I'm assuming javascript related).

Same behavior, here.  Actually, if I just click in the text box to give
it focus, I'm kicked back to the case details page.

Google Chrome 5.0.307.11-r39572 on Linux x86_64.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Logical Tunnels IPv6

2010-05-31 Thread Mark Kamichoff
Hi Chuck - 

On Sun, May 30, 2010 at 09:01:10PM -0400, Chuck Anderson wrote:
 Yes, and I believe the reason why this is the case is because 
 logical-tunnels use the same MAC address on each end.  Since IPv6 uses 
 the MAC address to generate the link-local address by default, that 
 may be why they prevent you from configuring inet6 on lt.

Actually, with my setup, it looks like the MACs are unique.  It
certainly reuses MAC addresses, but they are unique per peering of
logical interfaces.  For example:

{master}
l...@mx240-lab01-re0 show arp
MAC Address       Address   Name      Interface     Flags
[...snip]
00:22:83:32:cd:35 10.0.4.1  10.0.4.1  lt-2/0/10.6   none
00:22:83:32:cd:34 10.0.4.2  10.0.4.2  lt-2/0/10.7   none
00:22:83:32:cd:34 10.0.4.5  10.0.4.5  lt-2/0/10.1   none
00:22:83:32:cd:35 10.0.4.6  10.0.4.6  lt-2/0/10.0   none
[...snip]

lt-2/0/10.0 and lt-2/0/10.1 are connected, and so are lt-2/0/10.6 and
lt-2/0/10.7.  /30s.

For another data point, I've got an SRX 210 (10.1R1.8) doing something
similar, it just uses different addresses:

p...@orb show arp no-resolve 
MAC Address   Address Interface Flags
00:26:88:e9:54:80 10.3.7.160  lt-0/0/0.1 none
00:26:88:e9:54:81 10.3.7.161  lt-0/0/0.0 none
00:26:88:e9:54:81 10.3.7.162  lt-0/0/0.2 none
00:26:88:e9:54:80 10.3.7.163  lt-0/0/0.3 none
[...snip]

/31s.

At least for this (simple) setup, it shouldn't be a reason why IPv6
can't be used.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



[j-nsp] Logical Tunnels IPv6

2010-05-30 Thread Mark Kamichoff
Hi - 

I just ran into what looks like an interesting limitation with logical
tunnels on JUNOS.  It seems that using logical tunnels with an
encapsulation type of ethernet results in the inability to use IPv6 on
such interfaces.

I tried the following on an MX240 running 9.5R1.8:

{master}[edit logical-systems]
l...@mx240-lab01-re0# show r1 interfaces lt-2/0/10.0  
encapsulation ethernet;
peer-unit 1;
family inet {
address 10.0.4.5/30;
}
family inet6 {
address fec0:0:4:4::/64 {
eui-64;
}
}

{master}[edit logical-systems]
l...@mx240-lab01-re0# commit check  
[edit logical-systems r1 interfaces lt-2/0/10 unit 0]
  'family'
 family INET6 not allowed with this encapsulation
error: configuration check-out failed

(yes, I know, those are deprecated site-local addresses -  this config
is straight out of the ancient JNCIE study guide)

Just for kicks, I tried switching to encapsulation vlan, added a vlan-id
to both sides, but JUNOS still complained about the inet6 family not
being supported.

Am I hitting some limitation of the built-in tunnel PIC on the
MX-series?  Or, maybe this is a code issue?  I can upgrade this box to
anything if needed, since it's just used for lab testing.

Anyone else run into this?

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] Logical Tunnels IPv6

2010-05-30 Thread Mark Kamichoff
On Sun, May 30, 2010 at 05:59:59PM -0500, Richard A Steenbergen wrote:
 It's always been like this, and Juniper has ignored all requests to add
 support for IPv6 with ethernet encapsulation on the LT. The only
 work-around is to use frame-relay encapsulation instead of ethernet,
 which works for most but not all use cases.

Thanks guys.  I'll give the frame-relay encapsulation a try!

Perhaps we just need a few large carriers to help nudge Juniper on
this.  I suppose it'll be added eventually though, as more folks start
to add IPv6 to existing IPv4 configurations.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] IPv6 Outer header tunnels

2009-12-09 Thread Mark Kamichoff
Hi Brandon - 

On Wed, Dec 09, 2009 at 04:16:33PM -0700, Brandon Bennett wrote:
 Does anyone know if JunOS supports IPv6 as the outer header for GRE or
 IPinIP tunnels?

I'm guessing not, considering they only state IPv6 over IPv4 in the
datasheet for the tunnel services PIC:

http://www.juniper.net/products/modules/100092.pdf

Maybe we'll see a new tunnel services PIC in the future?

 I have tried this on a J-series and it doesn't seem to be supported.
 [edit interfaces gr-0/0/0 unit 1 tunnel source]
 'source 2001:1234:4561::1; '
 invalid ip address or hostname: 2001:1234:4561::1

I suspect the emulated tunnel services in the J-series (and SRX, which I
just tried myself) suffer from the same limitations, in the quest to
provide an exact emulation.

- Mark

-- 
Mark Kamichoff
p...@prolixium.com
http://www.prolixium.com/



Re: [j-nsp] SSG Issue

2008-10-06 Thread Mark Kamichoff
On Mon, Oct 06, 2008 at 01:23:02PM -0400, Stefan Fouant wrote:
 Can you issue the following:
 
 debug flow basic
 set ffilter ip 10.1.2.6
 clear dbuf
 clear sessions

Be careful when issuing commands in the order listed above - you can
easily brick your device if the session ramp-up rate is high, as the
firewall will essentially generate debugging data for all connections.
I suggest issuing the set ffilter ip 10.1.2.6 before any debug
commands, then following up with an undebug all after you have
reproduced the issue:

ssg550-> set ffilter src-ip 10.1.2.6
ssg550-> set ffilter dst-ip 10.1.2.6
ssg550-> clear db
ssg550-> debug flow basic

  <reproduce the issue>

ssg550-> undebug all
ssg550-> get db str

Additionally, what version of ScreenOS are you running?  There was a
strange policy evaluation/compilation issue I ran into earlier this year
that sporadically prevented certain policies from being hit (PR #308459,
iirc).  According to JTAC, it is fixed in >= 6.0.0r6.0 - so if you have
support for the device, I'd suggest running at least that version of
ScreenOS, just to be safe.

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://www.prolixium.com/



Re: [j-nsp] OpenSSH V5.1 with ScreenOS

2008-09-09 Thread Mark Kamichoff
On Mon, Sep 01, 2008 at 11:53:25AM -0400, Ross Vandegrift wrote:
 Looks like something changed during a recent upgrade to OpenSSH V5.1.
 When connecting to ScreenOS firewalls, the firewalls closes the
 connection as soon as authentication has passed.
 
 We've got a ticket open with JTAC, but I'm not sure it's going to go
 anywhere quickly.  I've run into different quirks with Netscreen-SSH
 before, so I'm guessing there's some new option that confuses the
 firewall.  Anyone run into this and found a workaround?

I just received a working patch from JTAC built for SSG 5/20 that fixes
this issue: ssg5ssg20.6.0.0r6-fq4.0.  Just ask JTAC for this patch, and
reference PR# 312992...

JTAC told me that this would probably be incorporated in the 6.0.0r8.0
release.

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://www.prolixium.com/



Re: [j-nsp] OpenSSH V5.1 with ScreenOS

2008-09-02 Thread Mark Kamichoff
On Tue, Sep 02, 2008 at 04:51:51PM +0200, Marek Lukaszuk wrote:
 I tried different settings with OpenSSH, always the same results. It
 looks like a bug in ScreenOS.

I opened up a JTAC case on this, too, and posted it to the Debian
GNU/Linux bug report that was opened:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=495917

It has to do with the session window size being increased in OpenSSH
5.1, supposedly (details in the URL).  ScreenOS apparently rejects this
option (or can't handle it, and disconnects the client as a security
measure).

The right fix (imo) would be for ScreenOS to handle this option, as I'm
guessing it's part of the SSHv2 protocol.  I have a feeling that the
OpenSSH team is going to have to add ScreenOS to their list of broken
SSH implementations so this window size option is disabled for servers
matching the NetScreen welcome banner.

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://www.prolixium.com/



Re: [j-nsp] Session utilization is 90% of the system capacity

2008-03-15 Thread Mark Kamichoff
Hi Vincent - 

I saw someone mention the FSA tool (http://tools.juniper.net/fsa/);
I'd also recommend it.

Also, if you use MRTG to monitor link utilization, you may want to add a
graph to track software sessions, too.  Insert something like this into
mrtg.cfg, replacing 'public' with your SNMP community string and 'host'
with the IP/hostname of your firewall.

Title[host.sessions]: Software Sessions on host
Target[host.sessions]: 1.3.6.1.4.1.3224.16.3.2.0&1.3.6.1.4.1.3224.16.3.2.0:public@host
# tune the following to the Session soft limit number from 'get sys-cfg' output
MaxBytes[host.sessions]: 32000
Options[host.sessions]: gauge, growright, nopercent
YLegend[host.sessions]: Sessions
Legend1[host.sessions]: Current Sessions
Legend2[host.sessions]: Current Sessions
LegendI[host.sessions]: Sessions:
LegendO[host.sessions]:
PageTop[host.sessions]: <H1>Software Sessions on host</H1>
<p>This summary page shows the number of software sessions on host.</p>
ShortLegend[host.sessions]: Sessions

- Mark

On Fri, Mar 14, 2008 at 05:25:48PM +0100, Vincent De Keyzer wrote:
 Hello,
 
 we have a Netscreen 25 at our office (30 people), that we use for 
 Internet access and VoIP.
 
  From time to time the firewall goes bananas: traffic does not go 
 through anymore, ping success rate to default gateway is very low, and 
 if we succeed to login, we see very high CPU and messages in the log 
 that say:
 
 2008-03-13 15:08:31 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:29 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:28 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:27 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:26 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:24 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:19 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:18 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 2008-03-13 15:08:16 system crit 00051 Session utilization has reached 28857, which is 90% of the system capacity!
 
 How do I troubleshoot this? What are those sessions? How do I identify 
 them? How do I limit them? Is it a good thing to limit them?
 
 I don't know where to start, so any idea will be appreciated.
 
 Thanks
 
 Vincent
 
 

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://prolixium.com/
Rensselaer Polytechnic Institute, Class of 2004



Re: [j-nsp] ScreenOS bgp filtering

2008-02-21 Thread Mark Kamichoff
On Tue, Feb 19, 2008 at 05:24:29AM +0200, Screen OS wrote:
 I have a ScreenOS device multihomed to two ISPs using BGP.  I would
 like to receive only a default route from one of them.  When I
 configure an ACL to permit 0.0.0.0/0, ScreenOS treats this as all
 routes and not as an exact match of the default route.  How can I
 configure an ACL to only allow the default route?

Yeah, 0/0 will match everything.  ScreenOS has a special option to match
on 0/0 exact:

set access-list 1 permit default-route 1

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://prolixium.com/
Rensselaer Polytechnic Institute, Class of 2004



Re: [j-nsp] Setting source address for local DNS queries on JunOS (8.5R1.14)?

2008-02-15 Thread Mark Kamichoff
On Fri, Feb 15, 2008 at 04:09:02PM +0100, Johannes Resch wrote:
 is it possible to specify which local IP address a router will use for
 originating DNS queries, either globally or per name server?  JunOS is
 8.5R1.14 on J6350.
 
 Couldn't find anything related in the tech docs, and there don't seem
 to be suitable config options in the system name-server hierarchy.
 
 Per default, it seems as if the router picks the numerically lowest IP
 address as source IP (in this particular case, there is no IP address
 configured on the loopback interface).

I was curious about this, too.  It looks like the _only_ options are to
let it pick the IP on the egress interface automatically, or to source
everything from lo0.  It seems strange that it's not more flexible.

This is the only thing I could find, although it won't help if there's
no IP on the loopback interface:

http://www.juniper.net/techpubs/software/junos/junos85/swconfig85-system-basics/id-10937192.html
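
The knob that page describes is:

set system default-address-selection

which makes locally generated traffic (DNS queries included) default to
the lo0 address - but as noted, that only helps once lo0 actually has
an address configured.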

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://prolixium.com/
Rensselaer Polytechnic Institute, Class of 2004



Re: [j-nsp] IPv6 subnetting

2008-02-03 Thread Mark Kamichoff
On Sun, Feb 03, 2008 at 05:58:26PM -0800, snort bsd wrote:
 So the statements above is what you refer to?

 The subnet prefix in an anycast address is the prefix that
 identifies a specific link. This anycast address is syntactically the
 same as a unicast address for an interface on the link with the
 interface identifier set to zero.

Yes.

 Then how does that work out for a /126?  It should give me four
 addresses, with the first one being the network ID/anycast address,
 and I could use the rest of the three, right?  Honestly, it doesn't
 sound right to me:
 
 So, we subnet the address fec0::/126 according to the rules of IPv4:
 0~3, 4~7, 8~11, 12~15, and so on...  fec0::14/126 is not the first
 address of that subnet.

You're thinking in decimal.  It's hex, and should go:

fec0::0/126: 0-3 (0 reserved)
fec0::4/126: 4-7 (4 reserved)
fec0::8/126: 8-b (8 reserved)
fec0::c/126: c-f (c reserved)

And, to keep going...

fec0::10/126
fec0::14/126 --- (your example)
fec0::18/126
fec0::1c/126

So yes, fec0::14/126 is actually the first address.

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://prolixium.com/
Rensselaer Polytechnic Institute, Class of 2004



Re: [j-nsp] IPv6 subnetting

2008-02-02 Thread Mark Kamichoff
On Fri, Feb 01, 2008 at 01:32:49PM +0700, a. r.isnaini. rangkayo sutan wrote:
 Yes, you cannot assign 10::14/126 - the 4, I believe, is the network
 ID for the /126 (a /30 in IPv4 terms); before 10::14/126 there should
 be a 10::/126.

The first address in any IPv6 subnet is reserved for subnet-router
anycast.  Section 2.6.1 of RFC 2373 defines this.

This also includes the first address of /127's.  Reading RFC 3627 (Use
of /127 Prefix Length Between Routers Considered Harmful) is probably
worthwhile.

- Mark

-- 
Mark Kamichoff
[EMAIL PROTECTED]
http://prolixium.com/
Rensselaer Polytechnic Institute, Class of 2004

