Re: botnets: web servers, end-systems and Vint Cerf
On Thu, 15 Feb 2007 21:54:00 CST, Gadi Evron said:
>> And the fact that web servers are getting botted is just the cycle of
>> reincarnation - it wasn't that long ago that .edu's had a reputation of
>> getting pwned for the exact same reasons that webservers are targets now:
>> easy to attack, and usually lots of bang-for-buck in pipe size and similar.
>
> You mean they aren't now? Do we have any EDU admins around who want to
> tell us how bad it still is, despite attempts at working on this?

OK, I'll bite. :)

We point them at info:
http://www.computing.vt.edu/help_and_tutorials/getting_started/students.html

and give them a free CD that does all the heavy lifting for them:
http://www.antivirus.vt.edu/proactive/vtnet2006.asp

(And if you live in the dorms, the CD is *sitting there* on the table when you
get there - and the network jack has a little tape cover that reminds them to
use the CD first...)

Oh, and they also get to attend our "Don't be an online victim" presentation
during orientation, and most (if not all) of the residence halls have their
own official resident tech geek (it's amazingly easy to find people who are
willing to help people on their floor in exchange for a single room rather
than a double ;)

And after all that, at any given instant, there's probably several dozen
botted boxes hiding in our 2 /16s - there's a limit to what you can do to stop
users from getting themselves botted when it's their box, not yours. And there
are political-expediency limits to what you can do to detect a botted box and
take action before it actually does anything.

What's changed over the past few years is that a number of years ago, the
end-user part of the Internet was /16s of .edu space with good bandwidth
interspersed with /18s of dial-up 56K modem pools, so .edu space was an
attractive target. Now the /18s of dial-ups are /12s of cablemodems and DSL,
and *everyplace* is the same attractive swamp that .edu's used to be.

And most ISPs don't provide in-house tech support and an orientation lecture
when you sign up - though some *do* provide the free A/V these days. :)

Bottom line - there's cleaner /16s than ours. There's swampier. What's changed
is that in addition to Joe Freshman being online, Joe's parents and kid sister
are online too. I have *some* control over Joe - the other 3 are Somebody
Else's Problem, and all I can do is hope they use an ISP that's learned that
you can actually get a positive ROI on up-front investment in security.

Unfortunately, Vint tells me that 140 million of them are all over at that
*other* ISP. ;)

> Dorms are basically large honey nets. :)

Are there any globally-routed /24s that *aren't*, these days? ;)
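The scale shift described above - campus /16s and dial-up /18s giving way to broadband /12s - is easy to make concrete with stdlib arithmetic. A quick sketch (the specific prefixes below are arbitrary examples; only the prefix lengths matter):

```python
import ipaddress

# Address counts for the prefix lengths discussed above. The networks
# themselves are placeholders; the point is the size ratio: a broadband
# /12 holds 16x the addresses of a campus /16, and 64x a dial-up /18.
for prefix in ("198.51.0.0/16", "198.51.64.0/18", "198.16.0.0/12"):
    net = ipaddress.ip_network(prefix)
    print(f"/{net.prefixlen}: {net.num_addresses:,} addresses")
```

That ratio is the whole point of the "attractive swamp" observation: the broadband pools now dwarf the .edu space that used to be the fat target.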
Re: The Root of The Problem [Was: Re: botnets: web servers, end-systems and Vint Cerf]
On Fri, 16 Feb 2007, Fergie wrote:
> Well, I'm going to add my $.02 here, too, and I don't care who
> likes it or not. :-)
>
> I know Vint, and I've known Vint for a long time.
>
> He's a smart guy. And he's right.
>
> Why is he right?
>
> Because he got in front of the folks who actually _can_ manage
> this problem, and that is the people (actually the NGOs) who
> have the monetary and fiduciary duty to begin looking at problems
> at the financial loss level.
>
> If you think that these problems are going to be solely resolved on
> a technical basis, you're delusional.
>
> Rock on, Vint.

I actually agree 100%; I made no run on Vint Cerf or on Google. I believe I
even said the same things as you. Only I also asked some questions "while we
are on the subject of".

Sorry for that misunderstanding. I should have stated that one better.

Gadi.
The Root of The Problem [Was: Re: botnets: web servers, end-systems and Vint Cerf]
Well, I'm going to add my $.02 here, too, and I don't care who likes it or
not. :-)

I know Vint, and I've known Vint for a long time.

He's a smart guy. And he's right.

Why is he right?

Because he got in front of the folks who actually _can_ manage this problem,
and that is the people (actually the NGOs) who have the monetary and
fiduciary duty to begin looking at problems at the financial loss level.

If you think that these problems are going to be solely resolved on a
technical basis, you're delusional.

Rock on, Vint.

- ferg

-- Gadi Evron <[EMAIL PROTECTED]> wrote:
[full quoted thread trimmed]

--
"Fergie", a.k.a. Paul Ferguson
Engineering Architecture for the Internet
fergdawg(at)netzero.net
ferg's tech blog: http://fergdawg.blogspot.com/
Re: botnets: web servers, end-systems and Vint Cerf
On Thu, 15 Feb 2007, Peter Moody wrote:
>> I kept quiet on this for a while, but honestly, I appreciate Vint Cerf
>> mentioning this where he did, and raising awareness among people who can
>> potentially help us solve the problem of the Internet.
>>
>> Still, although I kept quiet for a while, us so-called "botnet
>> experts" gotta ask: where does he get his numbers? I would appreciate some
>> backing up to these or I'd be forced to call him up on his statement.
>>
>> My belief is that it is much worse. I am capable of proving only somewhat
>> worse. His numbers are still staggering so.. where why when how what? (not
>> necessarily in that order).
>>
>> So, data please Vint/Google.
>
> Dr. Cerf wasn't speaking for Google when he said this, so I'm not sure why

Okay, thanks for clarifying that. :)

> you're looking that direction for answers. But since you ask, his data came
> from informal conversations with A/V companies and folks actually in the

Interesting.

> trenches of dealing with botnet ddos mitigation. The numbers weren't taken

Botnet trenches? Yes, I suppose the analogy to World War I is correct. I
should know, I was there (metaphorically speaking). My guess is, if we are to
follow this analogy, we are now just before the invention of the tank, now in
2007, but oh well.

> from any sort of scientific study, and they were in fact mis-quoted (he said
> more like 10%-20%).

Interesting.

> (my opinions != my employer's, etc. etc.)
>
> Many thanks,
>
> Cheers,
> .peter

Gadi.
Re: botnets: web servers, end-systems and Vint Cerf
>> systems were botted. Just a little while back, Vint Cerf guesstimated that
>> there's 140 million botted end user boxes. Unless 100% of Google's servers
>> are botted, there's no way there's that many botted servers. :)
>
> I kept quiet on this for a while, but honestly, I appreciate Vint Cerf
> mentioning this where he did, and raising awareness among people who can
> potentially help us solve the problem of the Internet.
>
> Still, although I kept quiet for a while, us so-called "botnet
> experts" gotta ask: where does he get his numbers? I would appreciate some
> backing up to these or I'd be forced to call him up on his statement.
>
> My belief is that it is much worse. I am capable of proving only somewhat
> worse. His numbers are still staggering so.. where why when how what? (not
> necessarily in that order).
>
> So, data please Vint/Google.

Dr. Cerf wasn't speaking for Google when he said this, so I'm not sure why
you're looking that direction for answers. But since you ask, his data came
from informal conversations with A/V companies and folks actually in the
trenches of dealing with botnet ddos mitigation. The numbers weren't taken
from any sort of scientific study, and they were in fact mis-quoted (he said
more like 10%-20%).

So you go ahead and call him on it, Gadi; you're a "botnet expert" after all.

>> And the fact that web servers are getting botted is just the cycle of
>> reincarnation - it wasn't that long ago that .edu's had a reputation of
>> getting pwned for the exact same reasons that webservers are targets now:
>> easy to attack, and usually lots of bang-for-buck in pipe size and similar.
>
> You mean they aren't now? Do we have any EDU admins around who want to
> tell us how bad it still is, despite attempts at working on this?
>
> Dorms are basically large honey nets. :)

Spoken like someone who's not actually spent time cleaning up a resnet.
Cleaning up a resnet must look downright impossible when you spend so much
time organizing conferences.

(my opinions != my employer's, etc. etc.)

Cheers,
.peter
botnets: web servers, end-systems and Vint Cerf
On Thu, 15 Feb 2007 [EMAIL PROTECTED] wrote:
> On Thu, 15 Feb 2007 19:02:12 CST, Gadi Evron said:
>> Many of them are SMTP-based only. IP reputation is very limited still.
>>
>> Now, all that said, back on "most are broadband users" - no longer
>> true. Many bots (especially in spam) are now web servers.
>
> I'm willing to bet that most are *still* broadband users. Quite likely,

Oh, safe bet. :)

> even if 100% (yes, *every single last one*) of the "web servers" out there
> were botted, that would likely still be fewer systems than if only 5% of
> end-user

But not less spam? :)

I seriously doubt more spam is sent from web servers than user systems, but
it's changing. Web servers now play a part which we can notice and measure.

> systems were botted. Just a little while back, Vint Cerf guesstimated that
> there's 140 million botted end user boxes. Unless 100% of Google's servers
> are botted, there's no way there's that many botted servers. :)

I kept quiet on this for a while, but honestly, I appreciate Vint Cerf
mentioning this where he did, and raising awareness among people who can
potentially help us solve the problem of the Internet.

Still, although I kept quiet for a while, us so-called "botnet experts" gotta
ask: where does he get his numbers? I would appreciate some backing up to
these or I'd be forced to call him up on his statement.

My belief is that it is much worse. I am capable of proving only somewhat
worse. His numbers are still staggering so.. where why when how what? (not
necessarily in that order).

So, data please Vint/Google.

> And the fact that web servers are getting botted is just the cycle of
> reincarnation - it wasn't that long ago that .edu's had a reputation of
> getting pwned for the exact same reasons that webservers are targets now:
> easy to attack, and usually lots of bang-for-buck in pipe size and similar.

You mean they aren't now?

Do we have any EDU admins around who want to tell us how bad it still is,
despite attempts at working on this?

Dorms are basically large honey nets. :)

Gadi.
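The figures being debated in this thread - 140 million botted boxes versus a "10%-20%" rate - are easy to sanity-check against each other. A back-of-the-envelope sketch (the ~1.1 billion Internet-user count for early 2007 is an assumed round number for illustration, not from the thread):

```python
# Cross-check of the numbers quoted above. Only 140M and 10%-20% come
# from the discussion; the user-population figure is an assumption.
internet_users_2007 = 1_100_000_000   # assumed ballpark, early 2007
botted_estimate = 140_000_000         # Vint Cerf's quoted guesstimate
share = botted_estimate / internet_users_2007
print(f"140M botted boxes is about {share:.0%} of ~1.1B users")

low, high = 0.10 * internet_users_2007, 0.20 * internet_users_2007
print(f"10%-20% of 1.1B users: {low/1e6:.0f}M - {high/1e6:.0f}M machines")
```

On those assumptions the two quotes are mutually consistent: 140M falls inside the 110M-220M band implied by 10%-20%.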
Re: RBL for bots?
On Thu, 15 Feb 2007 19:02:12 CST, Gadi Evron said:
> Many of them are SMTP-based only. IP reputation is very limited still.
>
> Now, all that said, back on "most are broadband users" - no longer
> true. Many bots (especially in spam) are now web servers.

I'm willing to bet that most are *still* broadband users. Quite likely, even
if 100% (yes, *every single last one*) of the "web servers" out there were
botted, that would likely still be fewer systems than if only 5% of end-user
systems were botted. Just a little while back, Vint Cerf guesstimated that
there's 140 million botted end user boxes. Unless 100% of Google's servers
are botted, there's no way there's that many botted servers. :)

And the fact that web servers are getting botted is just the cycle of
reincarnation - it wasn't that long ago that .edu's had a reputation of
getting pwned for the exact same reasons that webservers are targets now:
easy to attack, and usually lots of bang-for-buck in pipe size and similar.
Re: wifi for 600, alex
On Feb 15, 2007, at 4:22 PM, Anton Kapela wrote:

[..]

> Anyway, I don't mean to stray too far off topic, but indeed there are many
> 'good' things already designed (some decades ago) and understood within the
> wireless community which would be well to appear in .11 at some point.
> Hopefully my comment makes more sense now! :)

Yes, that is true. There are also mechanisms which have to be invented
completely from scratch because the architectural model is different
(decisions being made at the edge rather than by an "omniscient" controller).
Integration with other modes of mobile communication is one such example.
It's an interesting problem to have, but it also makes the standard very
challenging, as there is amendment after amendment, with lots of old
non-compliant devices around from before the time when a feature was
invented.

[..]

>> in WiFi is of limited availability in chipsets today, not to mention
>> incompatible with non-scheduled access.
>
> Check out EDCF. It's not changing any fundamental part other than the
> radio's behavior during CCA backoff, and any client can benefit from it.
> Also, I explain how it works briefly in the lightning talk video.

Maybe I really need to start thinking about creating a proposal for a talk at
NANOG on service provider issues in Wi-Fi, such as those we live every day in
my (mostly) day job. Hmm.

Best regards,
Christian
Re: RBL for bots?
On Thu, 15 Feb 2007 [EMAIL PROTECTED] wrote:
> On Thu, 15 Feb 2007 11:30:34 EST, Drew Weaver said:
>> Has anyone created an RBL, much like (possibly) the BOGON list which
>> includes the IP addresses of hosts which seem to be "infected" and are
>> attempting to brute-force SSH/HTTP, etc?

No BL for bots other than SMTP zombies quite yet. There is one for SSH brute
forcing, although home-made.. J. will respond on his own...

>> It would be fairly easy to set up a dozen or more honeypots and examine
>> the logs in order to create an initial list.
>
> A large percentage of those bots are in DHCP'ed cable/dsl blocks. As such,
> there's 2 questions:

Quite right, which is why ...

> 1) How important is it that you not false-positive an IP that's listed
> because some *previous* owner of the address was pwned?

As in, a dynamic-ranges BL.

> 2) How important is it that you even accept connections from *anywhere* in
> that DHCP block?

Or maybe the cool concept of white-listing known senders? :)

> (Note that there *are* fairly good RBL's of DHCP/dsl/cable blocks out there.
> So it really *is* a question of why those aren't suitable for use in your
> application...)

Many of them are SMTP-based only. IP reputation is very limited still.

Now, all that said, back on "most are broadband users" - no longer true. Many
bots (especially in spam) are now web servers.

Gadi.
Re: RBL for bots?
Drew Weaver wrote:
> Has anyone created an RBL, much like (possibly) the BOGON list which
> includes the IP addresses of hosts which seem to be "infected" and are
> attempting to brute-force SSH/HTTP, etc?
>
> It would be fairly easy to set up a dozen or more honeypots and examine the
> logs in order to create an initial list.
>
> Anyone know of anything like this?

web.dnsbl.sorbs.net has hosts that do this, as well as korgo-infected
machines and a whole host of other types of vulnerabilities, trojans and
bots. Do be careful about how you use the data; we don't distinguish between
the types, for very good reason.

Regards,
Mat
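For anyone who hasn't wired one of these up: a DNSBL zone like web.dnsbl.sorbs.net is consulted by reversing the IP's octets, prepending them to the zone, and doing an ordinary A lookup; an answer (conventionally in 127.0.0.0/8) means "listed", NXDOMAIN means "not listed". A minimal sketch of the query-name construction only - the actual lookup is left to whatever resolver you already use:

```python
def dnsbl_query_name(ip: str, zone: str = "web.dnsbl.sorbs.net") -> str:
    """Build the DNSBL lookup name for an IPv4 address: octets reversed,
    then the list's zone appended. E.g. 192.0.2.1 -> 1.2.0.192.<zone>."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError(f"not an IPv4 address: {ip!r}")
    return ".".join(reversed(octets)) + "." + zone

# An A-record answer for this name would mean the address is listed;
# NXDOMAIN would mean it is not.
print(dnsbl_query_name("192.0.2.1"))
# -> 1.2.0.192.web.dnsbl.sorbs.net
```

The same construction works for any of the DHCP/dsl/cable lists mentioned upthread; only the zone name changes.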
tracking fiber assets
What do people use to keep track of fiber-optic assets? We own fiber on electric transmission lines - a hundred spans or so, mostly 24-48 count, about 800-900 total route-miles. But we lack a tool to keep track of what is in use, which customers would be affected when we perform maintenance, and the like. Any suggestions for good tools to manage this would be most appreciated. Our spreadsheets, CAD drawings, and directories full of OTDR shots are just not cutting it. -- Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX Austin Energy http://www.austinenergy.com
RE: wifi for 600, alex
> There are things underway that can mitigate some of this,
> neighbor lists for example.

For the sake of the list's topic centrism, I was avoiding getting into points
like that. :)

Which brings me to the part about:

> Hmm. I think it would be good to frame which parts of a "CDMA
> system" (whatever that actually refers to ;-) you mean by that

Well, neighbor lists for one. That is, if a client device is continually
informing something like a "BSC" what it perceives is the 'hearable
topology,' we can then implement far more useful logic in the BSC to better
direct the underlying activities. Second is network-assisted handovers and
handoffs (even in the absence of policy knobs such as neighbor lists).
Perhaps third would be more related to the way the PCF shim can be used to
schedule uplink and downlink activity in each BTS by a rather "well informed"
BSC. Perhaps even more useful would be support like handup/handdown for
moving clients (when possible) from .11g to .11a, just like CDMA BSCs would
do to direct a mobile station between a classic IS-95 BTS and an IS-2000 BTS.

Anyway, I don't mean to stray too far off topic, but indeed there are many
'good' things already designed (some decades ago) and understood within the
wireless community which would be well to appear in .11 at some point.
Hopefully my comment makes more sense now! :)

> There's actually a lot more to clean hand-overs between AP.
> For starters, you need to know what's around, find them(!)
> (i.e., channel), and reestablish any security associations
> and take care of IP mobility (at least at scale).

Indeed. IAPP and things like it were designed to assist with carry-over of
authentication after all the layer-2 and layer-1 things are accounted for.
Who even interoperates with IAPP today?

> And which have similar scaling challenges with small cell
> sizes and mobility. In fact, you could argue the model is
> particularly challenged in that case.

Some aspects are improved even in small, dense environments. Some of the
interesting work that Meru does is to aggregate and schedule back-to-back .11
frames for things like RTP delivery. Meru, for example, also globally
schedules and coordinates delivery across all APs for specific management
messages. But even still, you cannot create capacity where there is none, so
if there's simply no free RF, we're hosed.

> So goes the theory at small scale, yes. And I would contend
> that "RF-ideal" is something you will only find inside of an RF tent.

I should have said 'comparatively equal' to whatever shade of grey is
available... :)

> I don't agree. Having QoS mechanisms in a cooperative,
> unlicensed frequency has its limitations, rather than
> anything amounting to scheduled access. And scheduled access

I see your point there. In the case of .11e and EDCF, significant improvement
can be had even if only one half of the path has the support. In our case,
yes, we only own control of the downlink to the mobile station. I'm not sure
I'd even want clients using "self-medicated" EDCF, so the uplink
prioritization/scheduling issue looms large without a great solution.

> in WiFi is of limited availability in chipsets today, not to
> mention incompatible with non-scheduled access.

Check out EDCF. It's not changing any fundamental part other than the radio's
behavior during CCA backoff, and any client can benefit from it. Also, I
explain how it works briefly in the lightning talk video.

-Tk
Re: Solaris telnet vuln solutions digest and network risks
On Tue, Feb 13, 2007 at 07:22:51PM -0600, Gadi Evron wrote:
...
> 2. If you haven't already, I strongly recommend checking your network for
> machines running telnet, and more specifically, vulnerable to this
> particular issue.

NO. The telnet DAEMON. NOT telnet. *sigh* Too many releases confusing the
two.

--
Joe Yao
---
This message is not an official statement of OSIS Center policies.
Re: wifi for 600, alex
On Feb 15, 2007, at 10:57 AM, Anton Kapela wrote:

> Speaking from experiences at Nanog and abroad, this has proven difficult
> (more like impossible) to achieve to the degree of success engineers would
> expect. In an ideal world, client hardware makers would all implement sane,
> rational, and scalable 'scanning' processes in their products. However, we
> find this to be one market where the hardware is far from ideal and there's
> little market pressure to change or improve it. On many occasions I've
> detected client hardware which simply picks the first 'good' response from
> an AP on a particular SSID to associate with, and doesn't consider anything
> it detects afterward! If the first "good" response came from an AP on
> channel 4, it went there!

That is exactly how nearly all devices today function; the exceptions are
few. There's a bit more needed to truly establish what is a good association
and what isn't, from performance characteristics to functionality. There are
things underway that can mitigate some of this, neighbor lists for example.

> Also incredibly annoying and troubling are cards that implement 'near
> continuous' scanning once or say twice per second, or cards that are
> programmed to do so whenever 'signal quality' falls below a static
> threshold. A mobile station would likely see very clean hand-over between
> AP's and I'm sure the resulting user experience would be great.

There's actually a lot more to clean hand-overs between APs. For starters,
you need to know what's around, find them(!) (i.e., channel), and reestablish
any security associations and take care of IP mobility (at least at scale).

> However, this behavior is horrible when there are 200 stations all within
> radio distance of each other... you've just created a storm of ~400
> frames/sec across _all_ channels, 1 on up! Remember, the scan sequence is
> fast - dwell time on each channel listening for a probe_response is on the
> order of a few milliseconds. If a card emits 22 frames per second across 11
> channels, that's 2 frames/sec per channel, which becomes a deafening roar
> of worthless frames. It's obvious that the CA part of CSMA/CA doesn't scale
> to 200 stations when we consider these sorts of issues.

High density and the relatively high rate of AP beacons can cause the same,
for example. There's a tradeoff between mobility and density of beacons, too:
you need to hear a sufficient number of them to make decisions in the current
model.

> In my selfish, ideal world, a "wifi" network would behave more like a CDMA
> system does. Unfortunately, wifi devices were not designed with these goals
> in mind. If they had been, the hardware would be horribly expensive, no
> critical mass of users would have adopted the technology, and it wouldn't
> be ubiquitous or cheap today. The good news is that because it's gotten
> ubiquitous and popular, companies have added in some of the missing
> niceties to aid in scaling the deployments.

Hmm. I think it would be good to frame which parts of a "CDMA system"
(whatever that actually refers to ;-) you mean by that.

> We now see 'controller based' systems from cisco and Meru which have
> implemented some of the core principles at work in larger mobile networks.

And which have similar scaling challenges with small cell sizes and mobility.
In fact, you could argue the model is particularly challenged in that case.

> One of the important features gained with this centralized controller
> concept is coordinated, directed association from AP to AP. The controller
> can know the short-scale and long-scale loading of each AP, the
> success/failure of delivering frames to each associated client, and a
> wealth of other useful tidbits. Armed with these clues, a centralized
> device would prove useful by directing specifically active stations to
> lesser-loaded (but still RF-ideal) APs.

So goes the theory at small scale, yes. And I would contend that "RF-ideal"
is something you will only find inside of an RF tent.

>> 3. Keep an eye on the conference network stats, netflow etc so that
>> "bandwidth hogs" get routed elsewhere, isolate infected laptops (happens
>> all the time, to people who routinely login to production routers with
>> 'enable' - telneting to them sometimes ..), block p2p ports anyway (yea,
>> at netops meetings too, you'll be surprised at how many people seem to
>> think free fat pipes are a great way to update their collection of pr0n
>> videos),
>
> I would add that DSCP & CoS maps on the AP's can be used to great effect
> here.

I don't agree. Having QoS mechanisms in a cooperative, unlicensed frequency
has its limitations, rather than anything amounting to scheduled access. And
scheduled access in WiFi is of limited availability in chipsets today, not to
mention incompatible with non-scheduled access.

Best regards,
Christian
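Anton's probe-storm arithmetic scales linearly with station count, so it's easy to play with. A sketch using only the figures from the message (two full scan sweeps per second per card, eleven 2.4 GHz channels, 200 stations in radio range of each other):

```python
# Back-of-the-envelope version of the scan-storm arithmetic above.
stations = 200
scans_per_second = 2      # "once or say twice per second" per card
channels = 11             # one probe request per channel per sweep

frames_per_card = scans_per_second * channels   # 22 frames/sec per card
per_channel = stations * scans_per_second       # every card probes each
                                                # channel twice a second
total = stations * frames_per_card

print(f"each card: {frames_per_card} probe frames/sec")
print(f"each channel: ~{per_channel} probe frames/sec from {stations} stations")
print(f"aggregate: {total} scan frames/sec across {channels} channels")
```

That recovers the "~400 frames/sec across _all_ channels" figure: 400 probe requests per second landing on every channel, 4,400 frames/sec in aggregate, before any of them carry user traffic.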
Re: RBL for bots?
On Thu, 15 Feb 2007 09:16:27 PST, Joel Jaeggli said:
> [EMAIL PROTECTED] wrote:
>> 2) How important is it that you even accept connections from *anywhere* in
>> that DHCP block?
>
> That depends...
>
> Do you sell "Internet service" to your customers or something else? If
> the former, then they're actually paying to receive connections from
> anywhere...

Then the RBL is irrelevant, as "anywhere" isn't the same as "anywhere that
isn't in an RBL". :)

(And anyhow, I'd *hope* that any use of an RBL to filter things on behalf of
a customer was spelled out in the contract, at least in the fine print that
most Joe Sixpacks never bother reading, specifically to cover that issue...)
Re: Paging ATT.com DNS master
David,

The issue appears specific to one of AT&T's five nameservers; in this case,
ns3.attdns.com only. Authoritative nameservers for att.com are:

;; QUESTION SECTION:
;att.com.                IN      NS

;; ANSWER SECTION:
att.com.        172800  IN      NS      ns1.attdns.com.
att.com.        172800  IN      NS      ns2.attdns.com.
att.com.        172800  IN      NS      ns3.attdns.com.
att.com.        172800  IN      NS      ns4.attdns.com.
att.com.        172800  IN      NS      ns5.attdns.com.

;; ADDITIONAL SECTION:
ns1.attdns.com. 172800  IN      A       144.160.112.22
ns2.attdns.com. 172800  IN      A       144.160.128.140
ns3.attdns.com. 172800  IN      A       144.160.20.47
ns4.attdns.com. 172800  IN      A       192.128.167.75
ns5.attdns.com. 172800  IN      A       192.128.133.75

I only get back NXDOMAIN from ns3.attdns.com. All the others work as
expected.

--
| Jeremy Chadwick                            jdc at parodius.com |
| Parodius Networking               http://www.parodius.com/     |
| UNIX Systems Administrator          Mountain View, CA, USA     |
| Making life hard for others since 1977.      PGP: 4BD6C0CB     |

On Thu, Feb 15, 2007 at 08:56:01AM -0800, David Ulevitch wrote:
>
> You broke the zone for ATT.com.
>
> That's probably not good.
>
> -david
>
> $ dig @ns3.attdns.com att.com
>
> ; <<>> DiG 9.2.2 <<>> @ns3.attdns.com att.com
> ;; global options:  printcmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 940
> ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
>
> ;; QUESTION SECTION:
> ;att.com.                       IN      A
>
> ;; Query time: 75 msec
> ;; SERVER: 144.160.20.47#53(ns3.attdns.com)
> ;; WHEN: Thu Feb 15 08:54:58 2007
> ;; MSG SIZE  rcvd: 25
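The comparison done by hand here - query every listed NS and diff the answers - generalizes into a one-liner worth of logic. A sketch in Python with the actual DNS query stubbed out as a plain dict, since the stdlib has no resolver for this (a real version would substitute per-server lookups, e.g. via dnspython or by shelling out to dig; the answers below are the ones reported in this thread):

```python
from collections import Counter

def find_disagreeing_servers(answers: dict) -> list:
    """Given per-nameserver answers (rcode strings here), return the
    servers whose answer differs from the majority answer."""
    majority, _ = Counter(answers.values()).most_common(1)[0]
    return sorted(ns for ns, a in answers.items() if a != majority)

# Stubbed results matching the thread: ns3 answers NXDOMAIN, the rest NOERROR.
answers = {
    "ns1.attdns.com": "NOERROR",
    "ns2.attdns.com": "NOERROR",
    "ns3.attdns.com": "NXDOMAIN",
    "ns4.attdns.com": "NOERROR",
    "ns5.attdns.com": "NOERROR",
}
print(find_disagreeing_servers(answers))
# -> ['ns3.attdns.com']
```

Running that logic against live queries of all five servers is exactly how a zone-transfer or provisioning failure on a single authority gets spotted before caches make it look intermittent.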
Re: RBL for bots?
[EMAIL PROTECTED] wrote:
> On Thu, 15 Feb 2007 11:30:34 EST, Drew Weaver said:
>
>> Has anyone created an RBL, much like (possibly) the BOGON list which
>> includes the IP addresses of hosts which seem to be "infected" and are
>> attempting to brute-force SSH/HTTP, etc?
>>
>> It would be fairly easy to set up a dozen or more honeypots and examine
>> the logs in order to create an initial list.
>
> A large percentage of those bots are in DHCP'ed cable/dsl blocks. As such,
> there's 2 questions:
>
> 1) How important is it that you not false-positive an IP that's listed
> because some *previous* owner of the address was pwned?
>
> 2) How important is it that you even accept connections from *anywhere* in
> that DHCP block?

That depends...

Do you sell "Internet service" to your customers or something else? If the
former, then they're actually paying to receive connections from anywhere...

> (Note that there *are* fairly good RBL's of DHCP/dsl/cable blocks out
> there. So it really *is* a question of why those aren't suitable for use
> in your application...)
Paging ATT.com DNS master
You broke the zone for ATT.com.

That's probably not good.

-david

$ dig @ns3.attdns.com att.com

; <<>> DiG 9.2.2 <<>> @ns3.attdns.com att.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 940
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;att.com.                       IN      A

;; Query time: 75 msec
;; SERVER: 144.160.20.47#53(ns3.attdns.com)
;; WHEN: Thu Feb 15 08:54:58 2007
;; MSG SIZE  rcvd: 25
Re: RBL for bots?
On Thu, 15 Feb 2007, Drew Weaver wrote:
> Has anyone created an RBL, much like (possibly) the BOGON list which
> includes the IP addresses of hosts which seem to be "infected" and are
> attempting to brute-force SSH/HTTP, etc?

Bots are rarely single-purpose engines. If they have been detected doing bad
things, they will probably appear in multiple RBLs for multiple reasons. If
something is in multiple RBLs, even if it hasn't done the particular badness
you are looking for, it's probably just a matter of time.

Perhaps not surprisingly, some of the porn site vendors appear to have the
most sophisticated systems for detecting brute force/password sharing
attacks.
Re: RBL for bots?
On Thu, 15 Feb 2007 11:30:34 EST, Drew Weaver said:
> Has anyone created an RBL, much like (possibly) the BOGON list which
> includes the IP addresses of hosts which seem to be "infected" and are
> attempting to brute-force SSH/HTTP, etc?

> It would be fairly easy to setup a dozen or more honeypots and examine
> the logs in order to create an initial list.

A large percentage of those bots are in DHCP'ed cable/dsl blocks. As such,
there's 2 questions:

1) How important is it that you not false-positive an IP that's listed
because some *previous* owner of the address was pwned?

2) How important is it that you even accept connections from *anywhere* in
that DHCP block?

(Note that there *are* fairly good RBL's of DHCP/dsl/cable blocks out
there. So it really *is* a question of why those aren't suitable for use
in your application...)
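For context, the RBL lookups being discussed are just reversed-octet DNS queries: reverse the IPv4 octets, append the list's zone, and check for an A record (conventionally in 127.0.0.0/8 if listed). A minimal sketch in Python — the zone name `dnsbl.example.org` is a placeholder, not a real blocklist:

```python
import socket

def dnsbl_name(ip, zone):
    """Build the DNSBL query name: reverse the IPv4 octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone):
    """Return True if the list answers with an A record for the query name."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        # NXDOMAIN (or resolution failure) means "not listed"
        return False

# Example: 192.0.2.99 checked against a hypothetical zone becomes
# the query name 99.2.0.192.dnsbl.example.org
print(dnsbl_name("192.0.2.99", "dnsbl.example.org"))
```

Note this answers only "is this IP listed right now" — it does nothing about question 1 above (a previous DHCP holder of the address having been the one that was pwned).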
Re: DNS: Definitely Not Safe?
Joe Abley <[EMAIL PROTECTED]> writes:
>> i thought it was actually covered on-list... during the event, no?
>
> I don't think it was especially covered on this list (you are no
> doubt thinking of other lists). There was a lightning talk about it
> in Toronto, for which slides can be found in the usual place.

I think between the list and the lightning talk, it got the level of
attention it deserved.

---rob
Re: wifi for 600, alex
Inasmuch as anyone with an ICBM (Intel-Chip-Based-Mac) has 802.11a
capability, and such devices have been gaining increasing traction among
geeks of late, I'm not surprised. The latest AirPort Extreme base station
from Apple is a/b/g/n (the Express is still b/g).

---rob

Marshall Eubanks <[EMAIL PROTECTED]> writes:
> The IETF experience is that enough people run 802.11a to take
> significant load off of the {b,g} network.
>
> Marshall
>
> On Feb 15, 2007, at 9:45 AM, Pickett, McLean (OCTO) wrote:
>> Works well if everyone has an 802.11a/g card. That's been my biggest
>> concern with deploying 802.11a recently.
>>
>> McLean
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Todd Vierling
>> Sent: Thursday, February 15, 2007 12:02 AM
>> To: Suresh Ramasubramanian
>> Cc: Marshall Eubanks; Carl Karsten; NANOG
>> Subject: Re: wifi for 600, alex
>>
>> On 2/14/07, Suresh Ramasubramanian <[EMAIL PROTECTED]> wrote:
>>> 4. Isolate the wireless network from the main conference network /
>>> backbone so that critical stuff (streaming content for workshop and
>>> other presentations, the rego system etc) gets bandwidth allocated to
>>> it just fine, without it being eaten up by hungry laptops.
>>
>> The oft-overlooked 802.11a is great for this purpose when there isn't
>> enough wiring infrastructure to drop an RJ45 in all the necessary
>> conference rooms. Whereas 802.11[bgn] has only three (or four,
>> depending on who you quote) mostly non-overlapping frequencies -- even
>> fewer when MIMO is in use -- 802.11a has eight *completely*
>> non-overlapping standard channels. In nice open conference hall space
>> with at most two walls in the way, the rated shorter range of 11a is
>> actually not so noticeable because of the lack of radio noise.
>>
>> 2.4GHz is soo last decade. ;)
>>
>> (The 802.11[bgn] density where I live is so high that I resorted to
>> installing 802.11a throughout my house. Zero contention for airwaves
>> and I can actually get close to rated speed for data transmission.)
>>
>> --
>> -- Todd Vierling <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
RBL for bots?
Has anyone created an RBL, much like (possibly) the BOGON list, which
includes the IP addresses of hosts which seem to be "infected" and are
attempting to brute-force SSH/HTTP, etc?

It would be fairly easy to set up a dozen or more honeypots and examine
the logs in order to create an initial list.

Anyone know of anything like this?

-Drew
RE: wifi for 600, alex
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Suresh Ramasubramanian
> Sent: Wednesday, February 14, 2007 6:25 PM
> To: Marshall Eubanks
> Cc: Carl Karsten; NANOG
> Subject: Re: wifi for 600, alex

[snip]

> 2. Plan the network, number of APs based on session capacity,
> signal coverage etc so that you dont have several dozen
> people associating to the same AP, at the same time, when
> they could easily find other APs ... I guess a laptop will
> latch onto the AP that has the strongest signal first.

Speaking from experiences at Nanog and abroad, this has proven difficult
(more like impossible) to achieve to the degree of success engineers would
expect. In an ideal world, client hardware makers would all implement
sane, rational, and scalable 'scanning' processes in their products.
However, we find this to be one market where the hardware is far from
ideal and there's little market pressure to change or improve it.

On many occasions I've detected client hardware which simply picks the
first 'good' response from an AP on a particular SSID to associate with,
and doesn't consider anything it detects afterward! If the first "good"
response came from an AP on channel 4, it went there!

Also incredibly annoying and troubling are cards that implement 'near
continuous' scanning, once or say twice per second, or cards that are
programmed to do so whenever 'signal quality' falls below a static
threshold. A mobile station would likely see very clean hand-over between
APs, and I'm sure the resulting user experience would be great. However,
this behavior is horrible when there are 200 stations all within radio
distance of each other... you've just created a storm of ~400 frames/sec
across _all_ channels, 1 on up! Remember, the scan sequence is fast -
dwell time on each channel listening for a probe_response is on the order
of a few milliseconds. If a card emits 22 frames per second across 11
channels, that 2 frames/sec per channel becomes a deafening roar of
worthless frames. It's obvious that the CA part of CSMA/CA doesn't scale
to 200 stations when we consider these sorts of issues.

I can think back to Nanogs years ago where folks tended to have junky
Prism II radios which did this (type of scanning). Nanog 29 in particular
was quite rife with junky Prism II user hardware. A lot of the laptops
were "sager" or something silvery-plastic-generic from far overseas.

In my selfish, ideal world, a "wifi" network would behave more like a CDMA
system does. Unfortunately, wifi devices were not designed with these
goals in mind. If they had been, the hardware would have been horribly
expensive, no critical mass of users would have adopted the technology,
and it wouldn't be ubiquitous or cheap today. The good news is that
because it's gotten ubiquitous and popular, companies have added in some
of the missing niceties to aid in scaling the deployments.

We now see 'controller based' systems from Cisco and Meru which have
implemented some of the core principles at work in larger mobile networks.
One of the important features gained with this centralized controller
concept is coordinated, directed association from AP to AP. The controller
can know the short-scale and long-scale loading of each AP, the
success/failure of delivering frames to each associated client, and a
wealth of other useful tidbits. Armed with these clues, a centralized
device can make itself useful by directing especially active stations to
lesser-loaded (but still RF-ideal) APs.

True, the CCX (cisco client extensions) support on some devices can permit
stuff like this to be shared with the clients (i.e. CCX exposes AP loading
data in the beacon frames, and can tell the client to limit its TX power)
in the hopes that this can be used in the 'hybrid' AP selection logic of
the station card. What stinks for us is that very few (generally fewer
than 10% at Nanog) of the clients *support* CCX. What's even more
maddening is that about 35 to 40% of the MAC addresses associated at the
last Nanog could support CCX, but it's simply not enabled for the SSID
profile! Here we have one potential solution to some of the troubles in
scaling wireless networks that depends entirely on the user doing the
right thing. Failure, all around.

This gets back to the point of #2 here, in that only "some" of the
better-logic'd client hardware will play by the rules (or even do the
right thing). In a lot of these cases it's better to expertly control
where a client _can_ associate with a centralized authority (i.e. a
controller with data from all active APs). We simply cannot depend on the
user doing the right thing, especially when the 'right thing' is buried
and obscured by software and/or hardware vendors.

> 3. Keep an eye on the conference network stats, netflow etc
> so that "bandwidth hogs" get routed elsewhere, isolate
> infected laptops (happens all the time, to people who
> routinely login to production routers with 'enable'
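The probe-storm arithmetic in the message above is easy to check with a back-of-the-envelope calculation. A small Python sketch, using the figures cited there (200 stations, roughly two full scans per second, one probe_request per channel per sweep — the scan model is a simplification, not a measurement):

```python
# Back-of-the-envelope probe-request load, using the figures from the
# message above: 200 stations, each sweeping all 2.4 GHz channels about
# twice per second, emitting one probe_request on each channel per sweep.
stations = 200
scans_per_sec = 2   # "once or say twice per second"
channels = 11       # 2.4 GHz channels 1-11 (US allocation)

# Every sweep touches every channel, so each channel hears one probe
# from every scanning station on every sweep.
frames_per_channel = stations * scans_per_sec   # per-channel frames/sec
total_frames = frames_per_channel * channels    # airtime wasted overall

print(frames_per_channel)  # 400 - matches the ~400 frames/sec cited above
print(total_frames)        # 4400 worthless frames/sec across the band
```

The point of the exercise is the same one the message makes: contention avoidance in CSMA/CA was not designed for hundreds of stations spraying management frames across every channel at once.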
Re: wifi for 600, alex
On 15-Feb-2007, at 10:39, Carl Karsten wrote:
> That is a really nice list. Is there a wiki somewhere I could post this to?

http://nanog.cluepon.net/ !
Re: wifi for 600, alex
That is a really nice list. Is there a wiki somewhere I could post this to?

Carl K

Suresh Ramasubramanian wrote:
There are a few fairly easy things to do.

1. Don't do what most hotel networks do and think that simply sticking
lots of $50 linksys routers into various rooms randomly does the trick.
Use good, commercial-grade APs that can handle 150+ simultaneous
associations, and don't roll over and die when they get traffic.

2. Plan the network, number of APs based on session capacity, signal
coverage etc so that you don't have several dozen people associating to
the same AP at the same time, when they could easily find other APs ...
I guess a laptop will latch onto the AP that has the strongest signal
first.

3. Keep an eye on the conference network stats, netflow etc so that
"bandwidth hogs" get routed elsewhere, isolate infected laptops (happens
all the time, to people who routinely login to production routers with
'enable' - telneting to them sometimes ..), block p2p ports anyway (yea,
at netops meetings too, you'll be surprised at how many people seem to
think free fat pipes are a great way to update their collection of pr0n
videos),

3a. Keep in mind that when you're in a hotel and have an open wireless
network, with the SSID displayed prominently all over the place on notice
boards, you'll get a lot of other guests mooching onto your network as
well. Budget for that too.

4. Isolate the wireless network from the main conference network /
backbone so that critical stuff (streaming content for workshop and other
presentations, the rego system etc) gets bandwidth allocated to it just
fine, without it being eaten up by hungry laptops.

5. Oh yes, get a fat enough pipe to start with. A lot of hotel wireless
is just a fast VDSL or maybe a T1, with random linksys boxes scattered
around the place.

--srs

On 2/15/07, Marshall Eubanks <[EMAIL PROTECTED]> wrote:
> Carl Karsten wrote:
>> Hi list,
>> I just read over: http://www.nanog.org/mtg-0302/ppt/joel.pdf
>> because I am on the PyCon ( http://us.pycon.org ) team and last
>> year the hotel supplied wifi for the 600 attendees was a disaster
>
> How was the wifi at the recent nanog meeting?

I thought it was quite good. I also think that the IETF wireless has
gotten its act together recently as well; I suspect that Joel Jaeggli has
had something to do with this.

> I have heard of some success stories 2nd hand. one 'trick' was to
> have "separate networks" which I think meant unique SSID's. but
> like I said, 2nd hand info, so about all I can say is supposedly
> 'something' was done.
Re: wifi for 600, alex
The IETF experience is that enough people run 802.11a to take significant
load off of the {b,g} network.

Marshall

On Feb 15, 2007, at 9:45 AM, Pickett, McLean (OCTO) wrote:

Works well if everyone has an 802.11a/g card. That's been my biggest
concern with deploying 802.11a recently.

McLean

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Todd Vierling
Sent: Thursday, February 15, 2007 12:02 AM
To: Suresh Ramasubramanian
Cc: Marshall Eubanks; Carl Karsten; NANOG
Subject: Re: wifi for 600, alex

On 2/14/07, Suresh Ramasubramanian <[EMAIL PROTECTED]> wrote:
> 4. Isolate the wireless network from the main conference network /
> backbone so that critical stuff (streaming content for workshop and
> other presentations, the rego system etc) gets bandwidth allocated to
> it just fine, without it being eaten up by hungry laptops.

The oft-overlooked 802.11a is great for this purpose when there isn't
enough wiring infrastructure to drop an RJ45 in all the necessary
conference rooms. Whereas 802.11[bgn] has only three (or four, depending
on who you quote) mostly non-overlapping frequencies -- even fewer when
MIMO is in use -- 802.11a has eight *completely* non-overlapping standard
channels. In nice open conference hall space with at most two walls in
the way, the rated shorter range of 11a is actually not so noticeable
because of the lack of radio noise.

2.4GHz is soo last decade. ;)

(The 802.11[bgn] density where I live is so high that I resorted to
installing 802.11a throughout my house. Zero contention for airwaves and
I can actually get close to rated speed for data transmission.)

--
-- Todd Vierling <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
RE: Wireless Network Question
If you forced your customers to use 802.1X for authentication, they
wouldn't get an IP address unless they were authorized.

If 802.1X is not in the mix, another solution is to give them a very short
lease (say 2 minutes) until they've completed web-based authentication,
and then give them the one-hour lease. Any portal-based product for
wireless hotspots can help you out here.

Frank

-----Original Message-----
From: Frank Bulk
Sent: Wednesday, February 14, 2007 5:40 PM
To: nanog@merit.edu
Subject: Wireless Network Question

Hello-

I'm looking for anyone that can send me some suggestions based on
experience with a wireless network.

My problem: It is possible with our current wireless network that a
situation could arise where the IP address pool for a specific service
location could be exhausted due to Windows clients acquiring an IP
address without being authenticated. Thus, if we have a large event
taking place in-market, the IP addresses would be assigned and reassigned
out (on a one-hour lease) to each Windows client connected to the
network, possibly quickly exhausting a small IP address pool if enough
clients were simultaneously up and connected.

Does anyone have a good suggestion on how to avoid this from happening
(aside from over-assigning and wasting IP addresses or ignoring the
problem)?

Thank you for your time

Marla Azinger
Frontier Communications
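The short-lease fix in the reply above can be put in numbers: at steady state, the addresses tied up in the pool are roughly the client arrival rate times the lease length (Little's law). A quick Python sketch — the arrival rate here is an illustrative assumption, not a measured figure:

```python
def addresses_in_use(arrivals_per_min, lease_minutes):
    """Steady-state addresses held in the pool (Little's law):
    arrival rate multiplied by how long each lease is held."""
    return arrivals_per_min * lease_minutes

# Illustrative: 5 new unauthenticated clients associating per minute.
rate = 5
print(addresses_in_use(rate, 60))  # one-hour lease: 300 addresses tied up
print(addresses_in_use(rate, 2))   # two-minute pre-auth lease: only 10
```

Same arrival rate, thirty times fewer addresses held, which is why handing out a 2-minute lease until the captive portal is cleared keeps a small pool alive during a big event.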