Sun V20, dual Opteron, PCIe bus, 2 GB memory.  We're not using
flow-portscan; we're using portscan2 until we move to a newer version of Snort.
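
For reference, a minimal portscan2 setup looks something like the sketch below. The thresholds are illustrative only, and note that portscan2 depends on the conversation preprocessor being declared first in snort.conf:

```
# Sketch only -- tune limits to your traffic levels.
# portscan2 requires the conversation preprocessor to be loaded first.
preprocessor conversation: allowed_ip_protocols all, timeout 60, max_conversations 32000
preprocessor portscan2: scanners_max 3200, targets_max 5000, target_limit 5, port_limit 20, timeout 60
```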

It's probably the Oracle code and the custom rules that we use that are
pulling us down.

Thanks

Dennis 

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> Brad Doctor
> Sent: Thursday, October 27, 2005 11:42 AM
> To: [email protected]
> Subject: Re: [Ntop-misc] High End QoS
> 
> What hardware are you using?  What kind of bus is the NIC on?
> 
> snort.conf:
> preprocessor flow: stats_interval 0 hash 2
> preprocessor frag3_global: max_frags 65536
> preprocessor frag3_engine: policy first detect_anomalies
> preprocessor stream4: disable_evasion_alerts
> preprocessor stream4_reassemble
> preprocessor http_inspect: global iis_unicode_map unicode.map 1252
> preprocessor http_inspect_server: server default profile all ports { 80 8080 8180 } flow_depth 0 oversize_dir_length 500
> preprocessor flow-portscan: talker-sliding-scale-factor 0.50 talker-fixed-threshold 30 talker-sliding-threshold 30 talker-sliding-window 20 talker-fixed-window 30 scoreboard-rows-talker 30000 server-watchnet [$HOME_NET] server-ignore-limit 200 server-rows 65535 server-learning-time 14400 server-scanner-limit 4 scanner-sliding-window 20 scanner-sliding-scale-factor 0.50 scanner-fixed-threshold 15 scanner-sliding-threshold 40 scanner-fixed-window 15 scoreboard-rows-scanner 30000 src-ignore-net [127.0.0.1/32] dst-ignore-net [127.0.0.1/32] alert-mode once output-mode msg tcp-penalties on
> preprocessor rpc_decode: 111 32771
> preprocessor sfportscan: proto { all } memcap { 10000000 } sense_level { low }
> 
> 
> output unified: filename snort_unified, limit 10
> 
> include $RULE_PATH/classification.config
> include $RULE_PATH/reference.config
> include $RULE_PATH/attack-responses.rules
> include $RULE_PATH/backdoor.rules
> include $RULE_PATH/bad-traffic.rules
> include $RULE_PATH/chat.rules
> include $RULE_PATH/ddos.rules
> include $RULE_PATH/dns.rules
> include $RULE_PATH/dos.rules
> include $RULE_PATH/experimental.rules
> include $RULE_PATH/exploit.rules
> include $RULE_PATH/finger.rules
> include $RULE_PATH/ftp.rules
> include $RULE_PATH/icmp-info.rules
> include $RULE_PATH/icmp.rules
> include $RULE_PATH/imap.rules
> include $RULE_PATH/info.rules
> include $RULE_PATH/local.rules
> include $RULE_PATH/misc.rules
> include $RULE_PATH/multimedia.rules
> include $RULE_PATH/mysql.rules
> include $RULE_PATH/netbios.rules
> include $RULE_PATH/nntp.rules
> include $RULE_PATH/oracle.rules
> include $RULE_PATH/other-ids.rules
> include $RULE_PATH/p2p.rules
> include $RULE_PATH/policy.rules
> include $RULE_PATH/pop2.rules
> include $RULE_PATH/pop3.rules
> include $RULE_PATH/porn.rules
> include $RULE_PATH/rpc.rules
> include $RULE_PATH/rservices.rules
> include $RULE_PATH/scan.rules
> include $RULE_PATH/smtp.rules
> include $RULE_PATH/snmp.rules
> include $RULE_PATH/sql.rules
> include $RULE_PATH/telnet.rules
> include $RULE_PATH/tftp.rules
> include $RULE_PATH/virus.rules
> include $RULE_PATH/web-attacks.rules
> include $RULE_PATH/web-cgi.rules
> include $RULE_PATH/web-client.rules
> include $RULE_PATH/web-coldfusion.rules
> include $RULE_PATH/web-frontpage.rules
> include $RULE_PATH/web-iis.rules
> include $RULE_PATH/web-misc.rules
> include $RULE_PATH/web-php.rules
> include $RULE_PATH/x11.rules
> 
> /etc/modprobe.conf:
> 
> options ring bucket_len=9000 num_slots=9000 sample_rate=3 transparent_mode=1 enable_tx_capture=1
> options sk98lin LowLatency=On,On RlmtMode=DualNet,DualNet FlowCtrl_A=None FlowCtrl_B=None
> 
> I can also include the Ixia test case if you have the IxChariot
> software.
> 
> -brad
> 
> > I would like to know what settings you are using to get 
> that level of 
> > performance.
> > 
> > With PF_RING and Snort running a healthy production rule set, plus
> > conversation, portscan2, and Oracle support, we can only muster
> > around 200 Mbps before we start losing packets.
> > 
> > 
> > Thanks
> > 
> > 
> > Dennis
> > 
> > > -----Original Message-----
> > > From: [EMAIL PROTECTED]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of Brad 
> > > Doctor
> > > Sent: Thursday, October 27, 2005 10:55 AM
> > > To: [email protected]
> > > Subject: Re: [Ntop-misc] High End QoS
> > > 
> > > First, I don't have much experience with QoS - this is to 
> comment on 
> > > the hardware and bridge.
> > > 
> > > For hardware, I would stay away from Intel at the moment.  We
> > > presently have two systems that are dual-socket, dual-core
> > > Opteron machines:
> > > 
> > > Dual Core AMD Opteron(tm) Processor 275
> > > 2199.995 MHz
> > > L2 cache: 1024 KB
> > > 
> > > NIC: SysKonnect SK-9E22 (dual-port gig, PCI Express)
> > > 
> > > Using Ixia to test throughput, the box can L2 bridge 980 Mbps
> > > all day long and you would never know it was doing it.
> > > Adding additional endpoints gets us to 1900 Mbps of bridging -
> > > again, no perceptible load on the system.  Ixia reports average
> > > latency of .081 at the 980 level and .1xx at the 1900 level.
> > > MTU doesn't matter for the bridging part -- but an MTU of 9000
> > > is required for PF_RING for the statement below.
> > > 
> > > Using Snort, PF_RING can monitor 1600-1800 Mbps with no packet
> > > loss, for the record :) And I have numbers to prove it!
> > > 
> > > As for bridge stability and the 2.6 kernel - my company has been
> > > shipping this solution since about this time last year with no
> > > problems at all.  The number of deployed units is very high, with
> > > no field issues whatsoever.
> > > 
> > > -brad
> > > 
> > > >   Hi all,
> > > > 
> > > >   I'm writing to this list as it's full of networking / QoS experts.
> > > > 
> > > >   A client asked if it was possible to replace a very expensive
> > > > QoS appliance with a Linux box to do QoS and NetFlow on a big
> > > > network.  Sustained traffic is around 400 Mbps and they need
> > > > around 1000 QoS classes.
> > > > 
> > > >   Some thoughts on this:
> > > > 
> > > >   1) Of course we will purchase the fastest box we can find
> > > > around, dual Xeon and such.
> > > > 
> > > >   2) As the system runs as a bridge, we are kind of scared to
> > > > use the 2.6 kernel, as it seems quite unstable in that mode.
> > > > 
> > > >   3) Instead of using standard (linear) QoS classification, we
> > > > were thinking about using the CLASSIFY target in the firewall
> > > > and some more complex tree.  That way we still have all those
> > > > classes, but they are not read linearly; some logic is applied
> > > > in the tree.
> > > > 
> > > >   4) As this box would ideally include a NetFlow probe (nprobe
> > > > 4), we were thinking about using the pf_ring kernel patch.  Any
> > > > experience on the list using this patch with a system that is
> > > > both a probe and doing QoS?  Of course, we would like to
> > > > purchase ncap for this :)
> > > > 
> > > >   5) We were thinking about using hipac, but we don't know if
> > > > it supports the CLASSIFY target - do you know if it does?
> > > > 
> > > >   Any ideas will be REALLY appreciated.
> > > > 
> > > >   Thanks in advance. Regards.
> > > > 
> > > > --
> > > > Jaime Nebrera - [EMAIL PROTECTED]
> > > > IT Consultant - ENEO Tecnologia SL
> > > > Tel. 619 04 55 18
> > > > 
> > > > _______________________________________________
> > > > Ntop-misc mailing list
> > > > [email protected]
> > > > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> > > > 
> > > 
> > > --
> > > Brad Doctor, CISSP
> 
> --
> Brad Doctor, CISSP