Re: Quarantine your infected users spreading malware
On 21 Feb 2006, at 16:26, Jason Frisvold wrote: Key words there.. "Large Provider" .. I don't think A/V companies have any interest whatsoever in smaller providers.. Just not a big enough customer base, I guess... It would be nice to see an A/V provider willing to take that first step and offer something like this to providers, regardless of size. Anti-virus is already offered directly to end users ... for free! http://free.grisoft.com/ And they don't care! How is someone else telling them that they need a virus checker going to change anything? -a
Re: The Domain Name Service as an IDS
Chris Brookes wrote: On 22/02/06, Gadi Evron <[EMAIL PROTECTED]> wrote: and help us take it to the next level of DNS monitoring. In the large corporation I work for, I have a home grown DNS monitoring system I put together. I have the luxury of full control over every DNS server used by network connected devices, and so each and every single query is logged for a duration of 30 days to a SQL database. Amongst others, I've developed the following services with it for my internal customers: Hi Chris, thanks for your reply. I was just told by the admin team to keep DNS operational issues off-list. Would you mind if we take this to the DNS operations mailing list run by the ISC OARC? Gadi.
Re: The dissention grows towards AOL and pay per message
On Feb 22, 2006, at 3:30 PM, Nicole wrote: This was sent to me on another mailing list. I am on a number of smaller and/or community mailing lists who feel very threatened by this. Only because they don't understand it. Pretty much all of what you included is simply untrue. Whether it's because the folks behind it are illiterate, don't understand the issue, or are putting FUD out for their own reasons I'll let you judge. But it's pretty much all simply false. Cheers, Steve (No, I've no connection with the goodmail folks, but I've actually looked at the details of the system).
RE: How do you (not how do I) calculate 95th percentile?
Database triggers are a marvelous thing. Is this wrong? ((InOctetsCurrent - InOctetsLastTime) * 8) / (TimeCurrent - TimeLastTime) = Inbound bits/sec We chose this because it doesn't matter if it is 30 seconds or 8 minutes between sample points, it is all normalized in that period. I do realize that it is an average within that window, but all we need to do is tune the cron job (how often we poll) to increase resolution. Robin
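A minimal sketch of Robin's computation (assuming, as the "normalized in that period" claim implies, that the bit delta is divided by the time delta; the function name and the single-wrap correction are my own additions, not part of the original post):

```python
def rate_bps(octets_now, octets_last, t_now, t_last, counter_bits=32):
    """Average bits/sec between two SNMP ifInOctets samples.

    Octet counts are converted to bits; a single counter wrap
    between polls shows up as a negative delta and is corrected.
    """
    delta = octets_now - octets_last
    if delta < 0:  # counter wrapped once since the last poll
        delta += 2 ** counter_bits
    return (delta * 8) / (t_now - t_last)
```

Because the delta is divided by the actual elapsed time, the result is the same kind of average whether the poller ran 30 seconds or 8 minutes ago, exactly as described.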
Re: The Domain Name Service as an IDS
On 22/02/06, Gadi Evron <[EMAIL PROTECTED]> wrote: > and help us take it to the next level of DNS monitoring. In the large corporation I work for, I have a home-grown DNS monitoring system I put together. I have the luxury of full control over every DNS server used by network-connected devices, and so each and every query is logged for a duration of 30 days to a SQL database. Amongst others, I've developed the following services with it for my internal customers: 1) Malicious activity detection/monitoring We baseline what queries each device normally makes, so when a device suddenly starts trying to resolve n% more destinations than usual, it's often malicious code such as spyware. In addition, each destination name appearing in the database is analysed to see how many devices are querying for it. If a new name pops up, and in the last n minutes it's being resolved by a significant number of devices, it's almost always a virus or worm outbreak. Once malicious activity is confirmed and dealt with by desktop groups, the system is then used to provide additional verification that a given client really has been cleaned up. 2) Server move impact assessment All devices on the network invariably use DNS to find each other, and from a device's DNS queries you can reasonably assume an IP connection was made from one device to the other. With all queries logged, we generate surprisingly detailed reports on exactly which devices have a relationship with which other devices. The value of relationships is determined by a variety of factors, including: does the resolution happen in a recurring daily pattern? Do both devices in the relationship try to resolve each other? What percentage of the overall queries made by the source is for this specific target? The reports easily draw out issues such as which web servers will be impacted by taking a given app server down.
In addition, by cross-referencing with our QIP environment we can work out which IP addresses belong to users and which ones don't. When a server is being taken offline we can report on exactly which users will be impacted, and where they are geographically. 3) Server footprint info Devices on the network are named in a somewhat intelligent fashion, so we produce quick reports that reveal server characteristics such as: is the machine keeping up to date with antivirus (is it making recurring queries for the AV update server), is the machine Unix- or Windows-based (is it resolving our NIS environment or our AD systems), is the machine monitored by OpenView (are the polling stations resolving it every day), etc. 4) Hard-coded IP analysis Our internal customers shuffle client-server based applications around. Sadly, IP addresses get used in configurations all too often, and IP addresses change. So, we take a sniff of TCP/IP connections made to a given system, and then run it through the query database, taking each TCP/IP connection and checking whether the client resolved the name of the destination IP. When there's no evidence of the source querying for the target, but the source is querying for other targets, that typically points firmly to a hard-coded IP. 5) DNS delete validation/server retirement analysis Nothing is deleted from DNS unless the query database clearly shows a complete lack of resolution for the given name. ... and those were just the first 5 things I came up with. Mining our DNS data is providing all kinds of opportunities for our security, server, and change management groups. I'd be very interested to hear from anyone else who's working on this sort of DNS log mining. Regards Chris
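The outbreak heuristic in item 1 — a name that suddenly appears and is resolved by many distinct devices within a short window — can be sketched roughly as below. The log-row schema, function name, and all thresholds are hypothetical illustrations, not Chris's actual implementation:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def suspicious_names(rows, window, min_clients, baseline_names):
    """rows: iterable of (timestamp, client_ip, queried_name) tuples.

    Flag names outside the known baseline that at least min_clients
    distinct devices resolved within the trailing time window.
    """
    latest = max(ts for ts, _, _ in rows)
    cutoff = latest - window
    clients = defaultdict(set)
    for ts, client, name in rows:
        if ts >= cutoff and name not in baseline_names:
            clients[name].add(client)
    return {name for name, ips in clients.items() if len(ips) >= min_clients}
```

In a real deployment this would be a query against the SQL database rather than an in-memory scan, but the grouping logic (distinct clients per new name per window) is the same.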
Re: The dissention grows towards AOL and pay per message
This is a done deal. They may just now be announcing it, but they have been doing it for several months. Nicole wrote: This was sent to me on another mailing list. I am on a number of smaller and/or community mailing lists who feel very threatened by this.
The dissention grows towards AOL and pay per message
This was sent to me on another mailing list. I am on a number of smaller and/or community mailing lists who feel very threatened by this. Nicole -- Hi, I just signed an important online petition because the very existence of online civic participation and the free Internet as we know it are under attack by America Online, and we need to fight back quickly. The petition's at: http://civic.moveon.org/emailtax/ AOL recently announced what amounts to an "email tax." Under this pay-to-send system, large emailers willing to pay an "email tax" can bypass spam filters and get guaranteed access to people's inboxes--with their messages having a preferential high-priority designation. Charities, small businesses, civic organizing groups, and even families with mailing lists will inevitably be left with inferior Internet service unless they are willing to pay the "email tax" to AOL. The petition says: "AOL, don't auction off preferential access to people's inboxes to giant emailers, while leaving people's friends, families, and favorite causes wondering if their emails are being delivered at all. The Internet is a force for democracy and economic innovation only because it is open to all Internet users equally--we must not let it become an unlevel playing field." AOL's proposed pay-to-send system is the first step down the slippery slope toward dividing the Internet into two classes of users--those who get preferential treatment and those who are left behind. We must preserve the Internet for everybody. Can you sign this emergency petition to America Online? http://civic.moveon.org/emailtax/ Thanks! -- [EMAIL PROTECTED] - Powered by FreeBSD - -- "The term "daemons" is a Judeo-Christian pejorative. Such processes will now be known as "spiritual guides" - Politically Correct UNIX Page
RE: How do you (not how do I) calculate 95th percentile?
Title: How do you (not how do I) calculate 95th percentile? I think that we have two (partially) unrelated issues in this thread: 1) how often you should sample and 2) what do you do with the results. I personally think that 5 minute sampling is so last century because it is better suited for batch load types that do not change very quickly than for interactive web applications. If your users' web performance is being affected by a particular link, they are going to notice it in the 10 second range. Congestion events lasting 1-3 minutes can be a problem. After five minutes they have forgotten what they were doing :) How often you check the counter should be driven by how granular you want to measure the network. Pick the right counter so that it does not wrap on you during your sampling interval. The initial downside is that you have 10-30 times as much data. Network data has chaotic (aka self-similar) characteristics that make simple statistics such as max, min or average somewhat useless. My understanding of the reason to calculate a 95th percentile is to try to reduce the dataset size and to make some sense out of the random performance data. For example, I could take some range of data and figure out the 95% threshold and save that as a data point (e.g., 95% of the samples are less than X Mbps). Read the counter value, compute the rate for the interval, then compute the 95th % threshold for 20+ samples and save that as the value for that longer period. The basic assumption is that you can ignore or not bill the 5% of the time that you had higher values. That's 30 minutes during a 10 hour business window, or 72 minutes over a 24 hour period. One could argue that 95 should be 98 or 92, or that it matters whether the 5% is continuous. But it's a reasonable starting point for making a decision about whether link utilization is too high.
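The threshold step described above — sort the interval rates and discard the top 5% — is simple enough to state in a few lines. This is a sketch of the common billing convention, not any particular provider's code:

```python
import math

def percentile(samples, fraction=0.95):
    """Return the sample below which `fraction` of the data falls,
    i.e. the value left after discarding the top (1 - fraction)."""
    ordered = sorted(samples)
    index = math.ceil(fraction * len(ordered)) - 1  # 0-based rank
    return ordered[index]
```

With 20+ rate samples per period, this reduces each period to one number while ignoring the 5% of intervals with the highest rates, as described.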
David Russell From: [EMAIL PROTECTED] on behalf of Jo Rhett Sent: Wed 2/22/2006 1:12 PM To: nanog@merit.edu Subject: How do you (not how do I) calculate 95th percentile? I am wondering what other people are doing for 95th percentile calculations these days. Not how you gather the data, but how often you check the counter? Do you use averages or maximums over time periods to create the buckets used for the 95th percentile calculation? A lot of smaller folks check the counter every 5 min and use that same value for the 95th percentile. Most of us larger folks need to check more often to prevent 32bit counters from rolling over too often. Are you larger folks averaging the retrieved values over a larger period? Using the maximum within a larger period? Or just using your saved values? This is curiosity only. A few years ago we compared the same data and the answers varied wildly. It would appear from my latest check that it is becoming more standardized on 5-minute averages, so I'm asking here on Nanog as a reality check. Note: I have AboveNet, Savvis, Verio, etc calculations. I'm wondering if there are any other odd combinations out there. Reply to me offlist. If there is interest I'll summarize the results without identifying the source. -- Jo Rhett senior geek SVcolo : Silicon Valley Colocation
Re: How do you (not how do I) calculate 95th percentile?
David W. Hankins wrote: On Wed, Feb 22, 2006 at 12:50:34PM -0600, Tom Sands wrote: A lot of smaller folks check the counter every 5 min and use that same value for the 95th percentile. Most of us larger folks need to check more often to prevent 32bit counters from rolling over too often. Actually, a lot of people do 5 minutes... and I would say that larger companies don't check them more often because they are using 64 bit counters, as should anyone with over about 100Mbps of traffic. Counter size is an incomplete reason for polling interval. Possibly incomplete, but a reason for some nonetheless, if all they can do is 32 bit counters. If you need a 5 minute average and poll your routers once every five minutes, what happens if an SNMP packet gets lost? No one said it was "needed", just what is done.. and I agree with your reason for more frequent polling, rather than doing it because of counter roll. In the best case, a retransmission over Y seconds sees it through, but now you've got 300+Y seconds in what was supposed to be a 300 second average...your next datapoint will also now be a 300-Y average unless you schedule it into the future. In the worst case, you've lost the datapoint entirely. This loses not just the one datapoint ending in that five minute span, but also the next datapoint. Sure, you can synthesize two 5 minute averages from one 10 minute average (presuming your counters wouldn't roll), but this is still a loss in data - one of those two datapoints should have been higher than the other. In our setup, as with a lot of people likely, any data that is older than 30 days is averaged. However, we store the exact maximums for the most current 30 days. You keep no record? What do you do if a customer challenges their bill? Synthesize 5 minute datapoints out of the larger averages? This isn't for customer billing. We don't bill customers on Mbps, but rather on total volume of GB transferred.
That is an easy number to collect and doesn't depend on 5 minute intervals being successful. Right up until someone clears the counters ;) I recommend keeping the 5 minute averages in perpetuity, even if that means having an operator burn the data to CD and store it in a safe (not under his desk in the pizza boxes, nor under his soft drink as a coaster). -- -- Tom Sands Chief Network Engineer Rackspace Managed Hosting (210)447-4065 --
Re: How do you (not how do I) calculate 95th percentile?
Doh! You are 100% correct. I didn't take into account the fact that the counters are if(In|Out) *Octets* and NOT if(In|Out)*Bits*. The point is that 64-bit counters are not likely to roll :-) Warren On Feb 22, 2006, at 12:24 PM, Alex Rubenstein wrote: (I did this fast, and, who knows; I could be off by an order or two of magnitude) Most people are using 64 bit counters. This avoids the wrapping problem (assuming you don't have 100GE and poll more than once every 5 years :-)). 2^64 is 18,446,744,073,709,551,616 bytes. 100 GE (100,000,000,000 bits/sec) is 12,500,000,000 bytes/sec. It would take 1,475,739,525 seconds, or 46.79 years for a counter wrap. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben Net Access Corporation, 800-NET-ME-36, http://www.nac.net
Re: How do you (not how do I) calculate 95th percentile?
On Wed, Feb 22, 2006 at 12:50:34PM -0600, Tom Sands wrote: > >A lot of smaller folks check the counter every 5 min and use that same > >value for the 95th percentile. Most of us larger folks need to check more > >often to prevent 32bit counters from rolling over too often. > > Actually, a lot of people do 5 minutes... and I would say that larger > companies don't check them more often because they are using 64 bit > counters, as should anyone with over about 100Mbps of traffic. Counter size is an incomplete reason for polling interval. If you need a 5 minute average and poll your routers once every five minutes, what happens if an SNMP packet gets lost? In the best case, a retransmission over Y seconds sees it through, but now you've got 300+Y seconds in what was supposed to be a 300 second average...your next datapoint will also now be a 300-Y average unless you schedule it into the future. In the worst case, you've lost the datapoint entirely. This loses not just the one datapoint ending in that five minute span, but also the next datapoint. Sure, you can synthesize two 5 minute averages from one 10 minute average (presuming your counters wouldn't roll), but this is still a loss in data - one of those two datapoints should have been higher than the other. At a place of previous employ, we solved this problem by using a 30 second (!) polling interval, and a home-written (C, linking to the UCD-SNMP library (now net-snmp)) polling engine that did its best to emit and receive as many queries in as short a space of time as it was able to (without flooding monitored devices). In these circumstances, we could lose several datapoints and still construct valid 5-minute averages from the pieces (combinations of 30, 60, 90 etc second averages, weighting each by the number of seconds it represents within the 300-second span). Our operations staff also enjoyed being able to see graphical response to changes in traffic balancing within half a minute...better, faster feedback. 
Another factor that makes 'counter size' a bad indicator for polling interval. > In our setup, as with a lot of people likely, any data that is older > than 30 days is averaged. However, we store the exact maximums for the > most current 30 days. You keep no record? What do you do if a customer challenges their bill? Synthesize 5 minute datapoints out of the larger averages? I recommend keeping the 5 minute averages in perpetuity, even if that means having an operator burn the data to CD and store it in a safe (not under his desk in the pizza boxes, nor under his soft drink as a coaster). -- David W. Hankins, Software Engineer, Internet Systems Consortium, Inc. "If you don't do it right the first time, you'll just have to do it again." -- Jack T. Hankins
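The reconstruction Hankins describes — combining 30-, 60-, 90-second partial averages into a valid 5-minute average by weighting each by the number of seconds it represents — amounts to a duration-weighted mean (my sketch, not his polling engine's actual code, which was written in C against the UCD-SNMP library):

```python
def combine_averages(pieces):
    """pieces: [(average_bps, seconds_covered), ...] within one span.

    A duration-weighted mean of partial averages equals the average
    over the whole span, because it reconstructs the underlying
    byte counts (avg * seconds) before re-dividing by total time.
    """
    total = sum(secs for _, secs in pieces)
    return sum(avg * secs for avg, secs in pieces) / total
```

This is why losing a few 30-second datapoints is recoverable: the surviving pieces still sum to the correct traffic volume for the 300-second span they cover.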
Re: How do you (not how do I) calculate 95th percentile?
(I did this fast, and, who knows; I could be off by an order or two of magnitude) Most people are using 64 bit counters. This avoids the wrapping problem (assuming you don't have 100GE and poll more than once every 5 years :-)). 2^64 is 18,446,744,073,709,551,616 bytes. 100 GE (100,000,000,000 bits/sec) is 12,500,000,000 bytes/sec. It would take 1,475,739,525 seconds, or 46.79 years for a counter wrap. -- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben Net Access Corporation, 800-NET-ME-36, http://www.nac.net
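The arithmetic above checks out, and the same formula shows why 32-bit counters force frequent polling (a quick sketch; the 46.79-year figure comes out using 365-day years):

```python
def seconds_to_wrap(counter_bits, link_bps):
    """Worst-case time for an octet counter to wrap at full line rate."""
    return (2 ** counter_bits) / (link_bps / 8)

# A 32-bit counter on a saturated 100 Mb/s link wraps in ~343.6 seconds,
# so a 5-minute poller is already cutting it close.
# A 64-bit counter at 100 Gb/s wraps in ~1,475,739,526 s, ~46.79 years.
```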
Re: How do you (not how do I) calculate 95th percentile?
On Feb 22, 2006, at 10:12 AM, Jo Rhett wrote: A lot of smaller folks check the counter every 5 min and use that same value for the 95th percentile. Most of us larger folks need to check more often to prevent 32bit counters from rolling over too often. Are you larger folks averaging the retrieved values over a larger period? Using the maximum within a larger period? Or just using your saved values? Most people are using 64 bit counters. This avoids the wrapping problem (assuming you don't have 100GE and poll more than once every 5 years :-)). This is curiosity only. A few years ago we compared the same data and the answers varied wildly. It would appear from my latest check that it is becoming more standardized on 5-minute averages, so I'm asking here on Nanog as a reality check. Yup, 5 min seems to be the accepted time. Note: I have AboveNet, Savvis, Verio, etc calculations. I'm wondering if there are any other odd combinations out there. Reply to me offlist. If there is interest I'll summarize the results without identifying the source. -- Jo Rhett senior geek SVcolo : Silicon Valley Colocation
Re: How do you (not how do I) calculate 95th percentile?
Jo Rhett wrote: I am wondering what other people are doing for 95th percentile calculations these days. Not how you gather the data, but how often you check the counter? Do you use averages or maximums over time periods to create the buckets used for the 95th percentile calculation? We use maximums, every 5 minutes. A lot of smaller folks check the counter every 5 min and use that same value for the 95th percentile. Most of us larger folks need to check more often to prevent 32bit counters from rolling over too often. Actually, a lot of people do 5 minutes... and I would say that larger companies don't check them more often because they are using 64 bit counters, as should anyone with over about 100Mbps of traffic. Are you larger folks averaging the retrieved values over a larger period? Using the maximum within a larger period? Or just using your saved values? In our setup, as with a lot of people likely, any data that is older than 30 days is averaged. However, we store the exact maximums for the most current 30 days. This is curiosity only. A few years ago we compared the same data and the answers varied wildly. It would appear from my latest check that it is becoming more standardized on 5-minute averages, so I'm asking here on Nanog as a reality check. Note: I have AboveNet, Savvis, Verio, etc calculations. I'm wondering if there are any other odd combinations out there. Reply to me offlist. If there is interest I'll summarize the results without identifying the source. -- -- Tom Sands Chief Network Engineer Rackspace Managed Hosting (210)447-4065 --
How do you (not how do I) calculate 95th percentile?
I am wondering what other people are doing for 95th percentile calculations these days. Not how you gather the data, but how often you check the counter? Do you use averages or maximums over time periods to create the buckets used for the 95th percentile calculation? A lot of smaller folks check the counter every 5 min and use that same value for the 95th percentile. Most of us larger folks need to check more often to prevent 32bit counters from rolling over too often. Are you larger folks averaging the retrieved values over a larger period? Using the maximum within a larger period? Or just using your saved values? This is curiosity only. A few years ago we compared the same data and the answers varied wildly. It would appear from my latest check that it is becoming more standardized on 5-minute averages, so I'm asking here on Nanog as a reality check. Note: I have AboveNet, Savvis, Verio, etc calculations. I'm wondering if there are any other odd combinations out there. Reply to me offlist. If there is interest I'll summarize the results without identifying the source. -- Jo Rhett senior geek SVcolo : Silicon Valley Colocation
Re: anybody here from verizon's e-mail department?
On 2/22/06, Christopher L. Morrow <[EMAIL PROTECTED]> wrote: > On Wed, 22 Feb 2006, Suresh Ramasubramanian wrote: > > > > http://www.irbs.net/internet/nanog/0312/0009.html > > message 2 on that page is interesting: (and apropos to previous threads) > http://www.irbs.net/internet/nanog/0312/0008.html > Oh Yes. And I do know that uab.edu (U.Alabama at Birmingham) has had some smtp redirection stuff that they've been doing for a while - or were doing a few years ago, when I last discussed it with their postmaster, to stop rootkitted *nix workstations and infected windows boxes spamming out their network. What they did struck me as quite interesting - still strikes me as interesting from what I remember of it now 5 yrs later. If someone from uab is reading this and can describe it to nanog that'd be great. As for broadband ISPs I think charter has been putting a walled garden in place even though they, unlike AOL, don't control the user client etc. Saw a preso about this at MAAWG in San Diego last year. -- Suresh Ramasubramanian ([EMAIL PROTECTED])
Re: anybody here from verizon's e-mail department?
On Wed, 22 Feb 2006, Suresh Ramasubramanian wrote: > > http://www.irbs.net/internet/nanog/0312/0009.html message 2 on that page is interesting: (and apropos to previous threads) http://www.irbs.net/internet/nanog/0312/0008.html
Re: Cisco 3550 replacement
Perhaps this thread would be more appropriate for the Cisco-NSP list? Warren On Feb 22, 2006, at 5:44 AM, Aaron Daubman wrote: And no hierarchical QoS, which was a requirement of the original poster; of course the 3550 offers no such thing either. IIRC, the only switch to currently support HQF is the 3750 Metro Series: http://www.cisco.com/en/US/products/hw/switches/ps5532/products_qanda_item09186a00801eb822.shtml """ Q. What is the difference between the Cisco Catalyst 3750 Metro Series and the Cisco Catalyst 3750 Series? The Cisco Catalyst 3750 Metro Series is built for Metro Ethernet access in a customer location, enabling the delivery of more differentiated Metro Ethernet services. These switches feature bidirectional hierarchical QoS and Traffic Shaping; intelligent 802.1Q tunneling with class-of-service (CoS) mutation; VLAN translation; MPLS, EoMPLS, and Hierarchical Virtual Private LAN Service (H-VPLS) support; and redundant AC or DC power. They are ideal for service providers seeking to deliver profitable business services, such as Layer 2, Layer 3, and MPLS VPNs, in a variety of bandwidths and with different SLAs. With flexible software options, the Cisco Catalyst 3750 Metro Series offers a cost-effective path for meeting current and future service requirements from service providers. The standard Cisco Catalyst 3750 Series is an innovative product line for midsize organizations and enterprise branch offices. Featuring Cisco Systems(r) StackWise™ technology, Cisco Catalyst 3750 Series products improve LAN operating efficiency by combining industry-leading ease of use and high resiliency for stackable switches. """ 32Gbps Backplane (Counted packet-in, packet-out, each direction, with all packets the same size, multicast?) and 52 GE interfaces. Not exactly non-blocking. Gotsta do the CiscoMath.
The 1U with the best blocking ratio is the 4948: http://www.cisco.com/en/US/products/ps6021/products_data_sheet0900aecd8017a72e.html "96 Gbps nonblocking switch fabric" However, I'm unsure of the details of its QoS support. Regards, ~Aaron
Re: anybody here from verizon's e-mail department?
On 2/22/06, Joe Maimon <[EMAIL PROTECTED]> wrote: > Dave Pooser wrote: > > Something I've seen before is a lot of mail servers will wait 10-45 seconds > > before presenting an SMTP prompt to remote hosts; spambots typically won't > > wait that long and give up. But since Verizon's sender verification (as of a > What about sender verification of validity discourages spammers? > The only reason it works is that they are too lazy to actually use some > random VALID forged return-path. Viruses, virus generated spam - both often hijack a guy's Outlook and pump email through it. With his VALID from in the return path. Lots and lots of spammers register valid domains. Thousands of them. And send out email with randomized addresses at that domain in the from, all of which do exist (in that there's an smtpsink instance running on that domain's MX to accept and bitbucket all email) > IOW why isn't this technique (not pioneered by verizon, afaik the > milter-sender was first I saw of it) short sighted and dangerous in the > long run? It has interesting side effects when you combine it with graylisting as Dave pointed out. And the sender verification stuff has other consequences too - see this nanog thread with Randy getting ... upset ... with verizon. http://www.irbs.net/internet/nanog/0312/0009.html > And yes, put this together with sender-id/domainkeys/spf whathaveyou and > then it's valuable. However that's not the world we live in now. No. All you get is a Dibbler sausage. Lots of weird shit mixed together and forced into a sausage skin (or into a 1U pizzabox spamfilter appliance) -- Suresh Ramasubramanian ([EMAIL PROTECTED])
Re: anybody here from verizon's e-mail department?
Dave Pooser wrote: Which probably means Paul is blocking whatever server Verizon is using for its sender verification Something I've seen before is a lot of mail servers will wait 10-45 seconds before presenting an SMTP prompt to remote hosts; spambots typically won't wait that long and give up. But since Verizon's sender verification (as of a couple months ago; haven't checked recently) times out after 30 seconds, that technique can have the side effect of making Verizon customers unreachable. What about sender verification of validity discourages spammers? The only reason it works is that they are too lazy to actually use some random VALID forged return-path. I for one would not like to force spammers to start using valid return-paths. I don't need that blowback load. That would affect my ability to read NANOG, hence its on-topicness. IOW why isn't this technique (not pioneered by verizon, afaik the milter-sender was first I saw of it) short sighted and dangerous in the long run? And yes, put this together with sender-id/domainkeys/spf whathaveyou and then it's valuable. However that's not the world we live in now. Joe
Re: anybody here from verizon's e-mail department?
> Which probably means Paul is blocking whatever server Verizon is using for its > sender verification Something I've seen before is a lot of mail servers will wait 10-45 seconds before presenting an SMTP prompt to remote hosts; spambots typically won't wait that long and give up. But since Verizon's sender verification (as of a couple months ago; haven't checked recently) times out after 30 seconds, that technique can have the side effect of making Verizon customers unreachable. -- Dave Pooser, ACSA, CCNA Manager of Information Services Alford Media http://www.alfordmedia.com
RE: anybody here from verizon's e-mail department?
Or he hasn't "paid his fair share" to ride our pipes! :-P - Wayne > -----Original Message----- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On > Behalf Of Suresh Ramasubramanian > Sent: Wednesday, February 22, 2006 1:29 AM > To: Dennis Dayman > Cc: nanog@merit.edu > Subject: Re: anybody here from verizon's e-mail department? > > > On 2/22/06, Dennis Dayman <[EMAIL PROTECTED]> wrote: > > > > No, but I have forwarded this to the abuse team I used to > work in. Some of > > them are also on Z. > > > > Normally this is because the MAIL FROM: failed or rejected sender > > verification. > > > > Which probably means Paul is blocking whatever server Verizon is using > for its sender verification > > -- > Suresh Ramasubramanian ([EMAIL PROTECTED]) >
Re: MLPPP over MPLS
For more specific discussion we can move it over to cisco-nsp, but here is a general document on it. http://cco/en/US/products/sw/iosswrel/ps5207/products_feature_guide09186a00801f26c8.html#wp1045653 Rodney On Tue, Feb 21, 2006 at 02:00:01PM -0600, Hyunseog Ryu wrote: > > Overall, MLPPP may work fine with MPLS as long as you have a single > virtual circuit from each physical circuit. > Such as a T1 channel from a channelized DS3... > But if you use a sub-interface (logical interface) rather than a > sub-channel from a channelized circuit, you may have some problems. > If you want to use QoS with MLPPP, in some cases you may have to disable > CEF because of side effects. > > Overall, what I was recommended by a Cisco source is, if possible, to use > MLFR instead of MLPPP for MPLS integration. > > If you need more information, you can contact your local Cisco System > Engineer, and he/she will give more information to you. > > Hyun > > > Bill Stewart wrote: > > I've also heard a variety of comments about difficulties in getting > > Cisco MLPPP working in MPLS environments, mostly in the past year when > > our product development people weren't buried in more serious problems > > (:--) I've got the vague impression that it was more buggy for N>2 > > than N=2. There are a number of ways to bond NxT1 together, including > > MLFR and IMA, and we've generally used IMA for ATM and MPLS services > > and CEF for Internet. IMA has the annoyance of extra ATM overhead, > > but doesn't have problems with load-balancing or out-of-order > > delivery, and we've used it long enough to be good at dealing with its > > other problems.
Re: Cisco 3550 replacement
> And no hierarchical QoS, which was a requirement of the original poster;
> of course the 3550 offers no such thing either.

IIRC, the only switch to currently support HQF is the 3750 Metro Series:

http://www.cisco.com/en/US/products/hw/switches/ps5532/products_qanda_item09186a00801eb822.shtml

"""
Q. What is the difference between the Cisco Catalyst 3750 Metro Series and the Cisco Catalyst 3750 Series?

The Cisco Catalyst 3750 Metro Series is built for Metro Ethernet access in a customer location, enabling the delivery of more differentiated Metro Ethernet services. These switches feature bidirectional hierarchical QoS and Traffic Shaping; intelligent 802.1Q tunneling with class-of-service (CoS) mutation; VLAN translation; MPLS, EoMPLS, and Hierarchical Virtual Private LAN Service (H-VPLS) support; and redundant AC or DC power. They are ideal for service providers seeking to deliver profitable business services, such as Layer 2, Layer 3, and MPLS VPNs, in a variety of bandwidths and with different SLAs. With flexible software options, the Cisco Catalyst 3750 Metro Series offers a cost-effective path for meeting current and future service requirements from service providers.

The standard Cisco Catalyst 3750 Series is an innovative product line for midsize organizations and enterprise branch offices. Featuring Cisco Systems(r) StackWise™ technology, Cisco Catalyst 3750 Series products improve LAN operating efficiency by combining industry-leading ease of use and high resiliency for stackable switches.
"""

> 32Gbps backplane (counted packet-in, packet-out, each direction, with all
> packets the same size, multicast?) and 52 GE interfaces.
> Not exactly non-blocking.
> Gotsta do the CiscoMath.

The 1U with the best blocking ratio is the 4948:

http://www.cisco.com/en/US/products/ps6021/products_data_sheet0900aecd8017a72e.html

"96 Gbps nonblocking switch fabric"

However, I'm unsure of the details of its QoS support.

Regards,
~Aaron
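[Editor's aside: the "CiscoMath" above can be made concrete. Counting each gigabit port full duplex, 52 GE ports need 104 Gbps of fabric, so a quoted 32 Gbps backplane is oversubscribed; the same arithmetic shows why 48 GE ports match the 4948's "96 Gbps nonblocking" claim. A small sketch:]

```python
# "CiscoMath" sanity check: fabric bandwidth needed for a switch to be
# truly non-blocking, counting every port at line rate in both directions.

def required_fabric_gbps(ports, port_speed_gbps=1):
    # Full duplex: each port may send AND receive at line rate at once.
    return ports * port_speed_gbps * 2

# 52 GE ports against a quoted 32 Gbps backplane:
needed = required_fabric_gbps(52)   # 104 Gbps required
oversubscription = needed / 32      # 3.25:1 blocking ratio

# 48 GE ports need 96 Gbps -- consistent with the 4948's fabric claim.
nonblocking_4948 = required_fabric_gbps(48)
```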
The Domain Name Service as an IDS
"How DNS can be used for detecting and monitoring badware in a network" http://staff.science.uva.nl/~delaat/snb-2005-2006/p12/report.pdf This is a very interesting although preliminary work by obviously skilled people. I haven't learned much but I am extremely happy others work on this than the people I already know! They also weren't too shy with credit, mentioning Florian Weimer and his Passive DNS project already at the abstract (quoted below). They even mention me for some reason. Great paper guys! Moving past Passive DNS Replication and blacklisting, they discuss what so far has been done for years using dnstop, and help us take it to the next level of DNS monitoring. Someone should introduce them to Duane Wessels' (from ISC OARC) follow-up dnstop project, DSC. :) http://dns.measurement-factory.com/tools/dsc/ https://oarc.isc.org/faq-dsc.html http://www.caida.org/tools/utilities/dsc/ [Duane's lecture on the tool at the 1st DNS-OARC Workshop] http://www.caida.org/projects/oarc/200507/slides/oarc0507-Wessels-dsc.pdf There has been some other interesting work done in this area by our very own David Dagon from Georgia Tech: [Presentation from the 1st DNS-OARC Workshop] Botnet Detection and Response - The Network is the Infection: http://www.caida.org/projects/oarc/200507/slides/oarc0507-Dagon.pdf [Paper] Modeling Botnet Propagation Using Time Zones: http://www.cs.ucf.edu/~czou/research/botnet_tzmodel_NDSS06.pdf - Abstract SURFnet is looking for technologies to expand the ways they can detect network traffic anomalies like botnets. Since bots started using domain names for connection with their controller, tracking and removing them has become a hard task. This research is a first glance at the usability of DNS traffic and logs for detection of this malicious network activity. Detection of bots is possible by DNS information gathered from the network by placing counters and triggers on specific events in the data analysis. 
In combination with NetFlow information and the IP addresses of known infected systems, detection of bots or network anomalies can be made visible. Also, the behavior of a bot can be documented, and additional information can be gathered about the bot. Using DNS data as a supplement to the existing detection systems can give more insight into the suspicious network traffic. With some future research, this information can be used to compile a case against particular types of bot or spyware and help dismantle a remote-controlled infrastructure as a whole.

Note

We started this research project with the question of whether the Passive DNS Software of Florian Weimer was useful for bot detection. We immediately found out that the sensor of the Passive DNS Software strips the source address from the collected data for privacy reasons, making this software not useful at all for our purpose. We deviated from the Research Plan (Plan van Aanpak) and took a more general approach to the question: "Is gathered DNS traffic usable for badware detection?"

- Gadi.

--
http://blogs.securiteam.com/

"Out of the box is where I live".
	-- Cara "Starbuck" Thrace, Battlestar Galactica.
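[Editor's aside: the "counters and triggers" idea from the abstract might be sketched as below. The function name, log format, and threshold are hypothetical; it assumes per-query DNS logs that, unlike the Passive DNS sensor the paper mentions, retain client source addresses.]

```python
# Hypothetical sketch of a DNS counter/trigger: flag domains that many
# distinct internal hosts resolve, a pattern typical of bots looking up
# their controller's domain name.
from collections import defaultdict

def flag_suspect_domains(query_log, min_distinct_sources=3):
    """query_log: iterable of (source_ip, queried_name) pairs."""
    sources_per_domain = defaultdict(set)
    for src_ip, qname in query_log:
        # DNS names are case-insensitive, so normalize before counting.
        sources_per_domain[qname.lower()].add(src_ip)
    # Trigger: domains resolved by at least min_distinct_sources hosts.
    return {domain for domain, srcs in sources_per_domain.items()
            if len(srcs) >= min_distinct_sources}

log = [("10.0.0.1", "cnc.example.net"),
       ("10.0.0.2", "cnc.example.net"),
       ("10.0.0.3", "CNC.example.net"),
       ("10.0.0.1", "www.example.com")]
suspects = flag_suspect_domains(log)
```

Real deployments would add whitelisting of popular legitimate domains and correlate the flagged names with NetFlow data, as the abstract suggests.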