Re: rack power question
Dorn Hetzel wrote: I believe some of the calculations for hole/trench sizing per ton used for geothermal exchange heating/cooling applications rely on the seasonal nature of heating/cooling. I have heard that if you either heat or cool on a continuous permanent basis, year-round, then you need to allow for more hole or trench, since the cold/heat doesn't have an off-season to equalize with the surrounding earth. I don't have hard facts on hand, but it might be a factor worth verifying. That is definitely a factor. I do know that you can run such systems 24/7 for multiple months, but whether the number is 3, 6 or 8 with the regular sizing I don't know. Obviously it also depends on the target temperature for incoming air: if you shoot for 12-13°C, the warming of the hole cannot be more than a few degrees, but at 17-20°C you have double the margin to play with. It's also (depending on your kWh cost) economically feasible to combine geothermal pre-cooling with "traditional" chillers, taking the outside air first from 40°C to 25°C and then chilling it further, more expensively. This also works the other way around for us in the colder climates, where you actually need to heat up the inbound air; that way you also accelerate the cooling of the hole. I'm sure somebody on the list has the necessary math to work out how many joules one can push into a hole for one degree of temperature rise. Pete
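A back-of-the-envelope version of the math Pete asks for. Every figure here is an assumption for illustration, not a measurement: granite's volumetric heat capacity and, especially, the radius of rock the borehole thermally couples to are the big unknowns.

```python
import math

# Assumed figures (not from the thread):
VOL_HEAT_CAPACITY = 2.2e6   # J/(m^3 * K), rough value for granite
depth_m = 200               # borehole depth
radius_m = 3.0              # radius of rock assumed to absorb the heat

rock_volume = math.pi * radius_m ** 2 * depth_m        # m^3
joules_per_kelvin = rock_volume * VOL_HEAT_CAPACITY    # J per 1 K of rise

# How long could a 10 kW load run before warming that rock by one kelvin?
seconds = joules_per_kelvin / 10_000
print(f"{joules_per_kelvin:.3g} J/K, ~{seconds / 86400:.1f} days at 10 kW")
```

With these assumptions the answer is on the order of 10^10 joules per kelvin, i.e. about two weeks of continuous 10 kW rejection per degree of warming — which is why the off-season matters.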
Re: rack power question
Paul Vixie wrote: aside from the corrosive nature of the salt and other minerals, there is an unbelievable maze of permits from various layers of government since there's a protected marshland as well as habitat restoration within a few miles. i think it's safe to say that Sun Quentin could not be built under current rules. The ones I have are MDPE (Medium Density Polyethylene), and I don't see how the plastic would suffer from corrosion. Obviously it can come down to regulation depending on what you use as a cooling agent, but water is very effective if there is no fear of freezing (I use ethanol for that reason). The whole system is a closed circuit: I'm not pumping water out of the ground but circulating the ethanol in roughly 360 meters of vertical ground piping. The amount of slurry that came out of the hole was in the order of 5-6 cubic meters. I cannot remember exactly what the individual parts cost, but the total investment was less than $10k (drilling, piping, circulation, air chiller, fluids, etc.) for a system with somewhat over 4kW of cooling capacity. (I'm limited by the airflow, not by the ground hole, if the calculations prove correct.) Pete
Re: rack power question
Paul Vixie wrote: this is a strict business decision involving sustainability and TCO. if it takes one watt of mechanical to transfer heat away from every watt delivered, whereas ambient air with good-enough filtration will let one watt of roof fan transfer the heat away from five delivered watts, then it's a no-brainer. but as i said at the outset, i am vexed at the moment by the filtration costs. Have you made any calculations on whether geo-cooling makes sense in your region to cover the hottest summer months, or is drilling just too expensive for the return? Pete
Re: IPv6 on SOHO routers?
Michael K. Smith - Adhost wrote: It's not that bad. You can attach a v6 address to the 802.11 interface and the FastEthernet interface, but you can't put one on a BVI which means you need two /64's if you want v6 on wireless and wired. That workaround does not work on the models with the 4 port switch integrated. (running 12.4T) Pete
Re: IPv6 on SOHO routers?
Mohacsi Janos wrote: On Thu, 13 Mar 2008, Matthew Moyle-Croft wrote: Actually Cisco 850 series does not support IPv6, only 870 series. We tested earlier Cisco models also: the 830 series has IPv6 support. My colleague tested NetScreen routers: apart from the smallest devices, they have IPv6 support. However, I think these devices are not consumer equipment; I would call them SO (Small Office) devices. The HO (Home Office) devices are the ~50-100 USD devices, where you rarely see official IPv6 support. The IPv6 "support" on the 87x Cisco is nothing to write home about. It's not supported on most physical interfaces that exist on the devices. But it does work over tunnel interfaces if you have something on your LAN to tunnel to. Pete
Re: [funsec] The "Great IPv6 experiment" (fwd)
Gadi Evron wrote: I am unsure what to say. The idea is quite old and I'm happy to see that what started and continued as a joke is actually being tried out to see if it would really work. Hope they get it up and running soon. Pete -- Forwarded message -- Date: Tue, 04 Sep 2007 11:14:34 +0200 From: Lubomir Kundrak <[EMAIL PROTECTED]> To: funsec <[EMAIL PROTECTED]> Subject: [funsec] The "Great IPv6 experiment" This is kind of... interesting. [snip] We're taking 10 gigabytes of the most popular "adult entertainment" videos from one of the largest subscription websites on the internet, and giving away access to anyone who can connect to it via IPv6. No advertising, no subscriptions, no registration. If you access the site via IPv4, you get a primer on IPv6, instructions on how to set up IPv6 through your ISP, a list of ISPs that support IPv6 natively, and a discussion forum to share tips and troubleshooting. If you access the site via IPv6 you get instant access to "the goods". [snip] More on http://www.ipv6porn.com/
Re: An Internet IPv6 Transition Plan
Stephen Wilcox wrote: Now, if you suddenly charge $2.50/mo to have a public IP or $15/mo for a /28 it does become a consideration to the customer as to if they _REALLY_ need it Where would this money go to? Pete
Re: IPv6 Training?
[EMAIL PROTECTED] wrote: Alex Rubenstein writes: Does anyone know of any good IPv6 training resources (classroom, or self-guided)? If your router vendor supports IPv6 (surprisingly, many do!): Too bad the IPv6 support on the low-end Ciscos is mostly broken in many ways (does not work on WLAN, does not work across the local 4-port switch, etc.), which are also the routers most classrooms could afford. Pete

lab-router#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
lab-router(config)#ipv6 ?
  access-list        Configure access lists
  cef                Cisco Express Forwarding for IPv6
  dhcp               Configure IPv6 DHCP
  general-prefix     Configure a general IPv6 prefix
  hop-limit          Configure hop count limit
  host               Configure static hostnames
  icmp               Configure ICMP parameters
  local              Specify local options
  mfib               Multicast Forwarding
  mfib-mode          Multicast Forwarding mode
  mld                Global mld commands
  multicast-routing  Enable IPv6 multicast
  neighbor           Neighbor
  ospf               OSPF
  pim                Configure Protocol Independent Multicast
  prefix-list        Build a prefix list
  route              Configure static routes
  router             Enable an IPV6 routing process
  unicast-routing    Enable unicast routing
lab-router(config)#ipv6 unicast-routing
lab-router(config)#interface tengigabitEthernet 1/1
lab-router(config-if)#ipv6 ?
IPv6 interface subcommands:
  address         Configure IPv6 address on interface
  cef             Cisco Express Forwarding for IPv6
  dhcp            IPv6 DHCP interface subcommands
  enable          Enable IPv6 on interface
  mfib            Interface Specific MFIB Control
  mld             interface commands
  mtu             Set IPv6 Maximum Transmission Unit
  nd              IPv6 interface Neighbor Discovery subcommands
  ospf            OSPF interface commands
  pim             PIM interface commands
  policy          Enable IPv6 policy routing
  redirects       Enable sending of ICMP Redirect messages
  rip             Configure RIP routing protocol
  router          IPv6 Router interface commands
  traffic-filter  Access control list for packets
  unnumbered      Preferred interface for source address selection
  verify          Enable per packet validation
lab-router(config-if)#ipv6 enable
[...]
And then chances are good that you find useful training material on their Web sites, often not just command descriptions, but actual deployment guides.
Re: NANOG 40 agenda posted
Paul Vixie wrote: i wish that the community had the means to do revenue sharing with such folks. carrying someone else's TE routes is a global cost for a point benefit. There are lessons to be learned from the CO2 emissions trade industry. I don't think it's really any different since the economics work exactly the same. Pete
Re: 1500 does not work: Thoughts on increasing MTUs on the internet
Marshall Eubanks wrote: Dear Pete; The streaming servers that I have dealt with (such as Darwin Streaming Server) do the fragmentation at the application layer. They thus send out lots of packets at or near (in this case) 1450 bytes, but they are not UDP fragments. That's the whole point - many networks will not deliver fragments at all, to say nothing of the increased risk of loss when they do. (I use Cox Cable at home, and this network apparently does not forward fragments and also has an apparent MTU of 1480 bytes.) Just looked at a YouTube dump, btw, and almost all of the packets are 1448 bytes. I'm referring to Windows Media and Real. They are the worst offenders, though not too many use them without HTTP. Pete
Re: 1500 does not work: Thoughts on increasing MTUs on the internet
Marshall Eubanks wrote: I advise people doing streaming to not use MTU's larger than ~1450 for these sorts of reasons. The unfortunate side-effect of that is that most prominent streaming apps (don't know about Youtube though) then send fragmented UDP packets, which leads to reassembly overhead and, in case of lost packets, significantly more lost data than necessary. Pete
Re: On-going Internet Emergency and Domain Names
Kradorex Xeron wrote: Sadly, if blocking ports and protocols becomes the only method to control things like this from occurring, I sadly will have to agree with Pete's post, as soon we're going to have all 65535 ports on all protocols (TCP, UDP, etc) blocked. 65536 ports for UDP... Pete
Re: On-going Internet Emergency and Domain Names (kill this thread)
Jeff Shultz wrote: We're looking at the alligators surrounding us. Gadi is trying to convince us to help him in draining the swamp (which may indeed be a positive thing in the long run). Does that sound about right? If you drain the swamp, the hippos will be very angry and run at you. The problem argued here depends heavily on how long it would take for the bad guys to adapt, and I would assume that's less time than it would take to deploy a global system for DNS abuse mitigation. So "fixing" a single protocol would not take us any significant distance, because the next thing would be:
- XML-RPC
- SOAP
- a proprietary name-lookup system
- p2p botnet control
- etc. (yes, blocking port 80 would be a good start)
I have also yet to observe a measurable reduction of spam since more port 25 blocking has supposedly been taken into use. This is a problem in the policy / edge. It's not something that should be solved in the core. It's immensely easier to blame somebody else (in the case of this thread, registries/registrars) for somebody else's problem (Windows users). It's significantly harder to fix the real issue. But I hope at least part of the loudmouths are up for that. Pete
Re: On-going Internet Emergency and Domain Names
Gadi Evron wrote: Thing is, the problem IS in the core. DNS is no longer just being abused, it is pretty much an abuse infrastructure. That needs to be fixed if security operations on the Internet at their current effectiveness (which is low as it is) are to be maintained past Q4 2007-Q2 2008. Imminent death of the Internet predicted. News at 11. This fearmongering is getting to the scale of democrazy exports. Pete
Re: On-going Internet Emergency and Domain Names
Mattias Ahnberg wrote: They will adapt to any change like this we would try to do. The only real way to attempt to stop this is lobbying for legislation, nailing people for what we see around us and the damage they cause us and to make it risky business rather than the piece of cake it is today. Anything else is just a minor setback for them, and a HUGE deal of investment and money for "us" on top of what we already spend handling what we're exposed to. I second this motion: I think the only way to make a step change for the better is to seek and implement measures that make it more expensive and challenging to be in the badware/phishing/spam business. These measures should also hold their ground and push the problem into the backyards of those who choose to ignore the crap they allow into the public network. Unfortunately nothing that addresses this seriously exists today, and I have yet to identify a serious effort to get it done. I'd be happy to be part of such an endeavour if one is founded someday. But I do believe it could be done, even without "clean slate" daydreaming. Pete
Re: botnets: web servers, end-systems and Vint Cerf
J. Oquendo wrote: After all these years, I'm still surprised a consortium of ISP's haven't figured out a way to do something a-la Packet Fence for their clients where - whenever an infected machine is detected after logging in, that machine is thrown into say a VLAN with instructions on how to clean their machines before they're allowed to go further and stay online. This has been commercially available for quite some time so it would be only up to the providers to implement it. Pete
Re: Network end users to pull down 2 gigabytes a day, continuously?
Joe Abley wrote: If anybody has tried this, I'd be interested to hear whether on-net clients actually take advantage of the local monster seed, or whether they persist in pulling data from elsewhere. The local seed would serve the bulk of the data, because as soon as a piece is served from it the client issues a new request, and if the latency and bandwidth are there, as is the case for ADSL/cable clients, usually >80% of a file is served "locally". I don't think additional optimization is done in the client, nor needed. Pete
Re: Network end users to pull down 2 gigabytes a day, continuously?
Gian Constantine wrote: I agree with you. From a consumer standpoint, a trickle or off-peak download model is the ideal low-impact solution to content delivery. And absolutely, a 500GB drive would almost be overkill on space for disposable content encoded in H.264. Excellent SD (480i) content can be achieved at ~1200 to 1500kbps, resulting in about a 1GB file for a 90 minute title. HD is almost out of the question for internet download, given good 720p at ~5500kbps, resulting in a 30GB file for a 90 minute title. Kilobits, not bytes. So it's 3.7GB for 90 minutes of 720p at 5.5Mbps, which is regularly transferred over the internet. Popular content in the 2-4GB size category sees tens of thousands, and in some cases hundreds of thousands, of downloads from a single tracker. Saying it's "out of the question" does not make it go away. But denial is usually the first phase anyway. Pete
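The arithmetic behind Pete's correction, as a small sketch:

```python
def stream_size_gb(bitrate_kbps: float, minutes: float) -> float:
    """File size in gigabytes for a constant-bitrate stream."""
    bits = bitrate_kbps * 1000 * minutes * 60
    return bits / 8 / 1e9

# 90 minutes of 720p at 5500 kbps -- kilobits, not kilobytes:
print(f"{stream_size_gb(5500, 90):.1f} GB")   # ~3.7 GB, not 30 GB

# The SD figure in the quoted text does check out at roughly 1 GB:
print(f"{stream_size_gb(1350, 90):.1f} GB")
```

The 30GB figure only appears if the kilobits are silently read as kilobytes, an 8x error.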
Re: Google wants to be your Internet
Lucy Lynch wrote: sensor nets anyone? On that subject, the current IP protocols are quite bad on delivering asynchronous notifications to large audiences. Is anyone aware of developments or research toward making this work better? (overlays, multicast, etc.) Pete research http://research.cens.ucla.edu/portal/page?_pageid=59,43783&_dad=portal&_schema=PORTAL business http://www.campbellsci.com/bridge-monitoring investment http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339 global alerts? disaster management? physical world traffic engineering?
Re: Network end users to pull down 2 gigabytes a day, continuously?
Marshall Eubanks wrote: Actually, this is true with unicast as well. This can (I think) largely be handled by a fairly moderate amount of Forward Error Correction. Regards Marshall Before "streaming" meant HTTP-like protocols over port 80, when UDP was actually used, we did some experiments with FEC and discovered that reasonable interleaving (so that two consecutive packets lost could be recovered) and 1:10 FEC resulted in a zero-loss environment in all cases we tested. Pete
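A minimal sketch of the scheme Pete describes: XOR parity at a 1:10 ratio plus interleaving, so two consecutive lost packets fall into different parity groups and each can be rebuilt. The interleave depth of 2 and the packet layout here are illustrative assumptions, not the parameters of the original experiment.

```python
def xor_bytes(blocks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

DEPTH = 2  # interleave depth: consecutive packets land in different groups

def group_indices(n, g):
    return [i for i in range(n) if i % DEPTH == g]

def protect(packets):
    # One XOR parity packet per interleaved group; with 20 packets and
    # depth 2, each group holds 10 data packets (1:10 overhead).
    return [xor_bytes([packets[i] for i in group_indices(len(packets), g)])
            for g in range(DEPTH)]

def recover(received, parity):
    out = list(received)  # lost packets are marked as None
    for g, par in enumerate(parity):
        idxs = group_indices(len(out), g)
        missing = [i for i in idxs if out[i] is None]
        if len(missing) == 1:  # XOR parity can rebuild one loss per group
            present = [out[i] for i in idxs if out[i] is not None]
            out[missing[0]] = xor_bytes(present + [par])
    return out
```

Losing packets 6 and 7 back to back still recovers fully, because 6 sits in the even-index group and 7 in the odd one; without interleaving, both losses would fall in the same group and defeat single-parity FEC.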
Re: Network end users to pull down 2 gigabytes a day, continuously?
Sean Donelan wrote: 1/2, 1/3, etc the bandwidth for each additional viewer of the same stream? The worst case for a multicast stream is the same as the unicast stream, but the unicast stream is always the worst case. However, a unicast stream does not require state in the intermediate boxes (unless they are intentionally keeping some), while even a single-receiver multicast stream generates state all along the path. This will quite quickly limit the number of feasible groups. Pete
Re: Security of National Infrastructure
Jerry Pasker wrote: It is the way it is, because the internet works when it's open by default, and closed off carefully (blacklists, and such). Would email have ever taken off if it were based on white lists of approved domains and/or senders? Sure, it might make email better NOW (maybe?) but in the beginning? There was an experiment on this. It's called X.400. Pete
Re: DNS - connection limit (without any extra hardware)
Aaron Glenn wrote: On 12/8/06, Petri Helenius <[EMAIL PROTECTED]> wrote: Has anyone figured out a remote but lawful way to repair zombie machines? sure, null route the customer until they clean their hosts up My question was specifically directed towards zombies that are not local to the ISP. Pete
Re: DNS - connection limit (without any extra hardware)
Geo. wrote: I know this is kind of a crazy idea but how about making cleaning up all these infected machines the priority as a solution instead of defending your dns from your infected clients. They not only affect you, they affect the rest of us so why should we give you a solution to your problem when you don't appear to care about causing problems for the rest of us? Has anyone figured out a remote but lawful way to repair zombie machines? Pete George Roettger -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Luke Sent: Friday, December 08, 2006 9:41 AM To: [EMAIL PROTECTED] Subject: DNS - connection limit (without any extra hardware) Hi, as a consequence of a virus spreading in my customer base, I often receive big bursts of traffic on my DNS servers. Unluckily, a lot of clients start to bomb my DNSs at a certain hour, so I have a distributed attempt at denial of service. I can't blacklist them on my DNSs, because the infected clients are too many. For this reason, I would like a DNS to respond to a maximum of 10 queries per second from every single IP address. Does anybody know a solution using just iptables/netfilter/kernel tuning/BIND tuning, without any hardware traffic shaper? Thanks Best Regards Luke
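One way to approximate the per-source limit Luke asks for with netfilter alone is the hashlimit match. This is a hedged sketch, not a tested ruleset: option spellings vary between iptables releases (older ones use --hashlimit rather than --hashlimit-upto), and the burst value is an illustrative choice.

```shell
# Accept at most ~10 DNS queries per second per source IP (burst 20),
# tracked per srcip by the netfilter hashlimit match; drop the excess.
iptables -A INPUT -p udp --dport 53 \
  -m hashlimit --hashlimit-name dnslimit \
  --hashlimit-upto 10/second --hashlimit-burst 20 \
  --hashlimit-mode srcip \
  -j ACCEPT
iptables -A INPUT -p udp --dport 53 -j DROP
```

Dropping rather than rejecting means legitimate resolvers behind an infected NAT will retry and eventually get through, while the flood is shed cheaply in the kernel.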
Re: The IESG Approved the Expansion of the AS Number Registry
Etaoin Shrdlu wrote: This is an excellent idea, but please do not select the first block after 16 bit numbers are up (can you say buffer overflow?). Something random, in the middle, would be better. 2752512-2818047 ? Pete
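The suggested range is simply 16-bit "AS 42" shifted into the 4-byte space, i.e. asdot 42.0 through 42.65535:

```python
# asdot 42.0 .. 42.65535 expressed as plain 32-bit AS numbers
start = 42 * 65536
end = 43 * 65536 - 1
print(start, end)   # 2752512 2818047
```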
Re: Boeing's Connexion announcement
Robert E.Seastrom wrote: Fascinating... of course, you can see where the confusion came from, particularly given the source of some of the components and the fact that they're not actually committed until they get the orders (hence, no satellite capacity online _today_). Thanks for the additional data; I'm sure everyone here will be watching this one closely; the "email/web/irc/AIM from the skies" imperative runs quite high in our community ;-) Early on I was also able to use Skype (with video) successfully but lately the capacity has just not been there. Not sure if they've reduced it towards the end or if the usage has picked up enough for there not to be spare capacity for ~200kbps video. Pete
Re: Why is RFC1918 space in public DNS evil?
Matthew Palmer wrote: I've been directed to put all of the internal hosts and such into the public DNS zone for a client. My typical policy is to have a subdomain of the zone served internally, and leave only the publicly reachable hosts in the public zone. But this client, having a large number of hosts on RFC1918 space and a VPN for external people to get to it, is pushing against this In many scenarios the VPN'd hosts will ask for the names from the public DNS anyway, so I feel your client is right and it would be better for you to go with their wishes. Pete
Re: fingerprinting and spam ID
Ken Simpson wrote: The problem is that I already see enough legit mail hit the quarantine due to being HTML/multipart, suspected of being sent "direct-to-MX" due to Exchange's bizarre habit of not providing an audit trail via Received headers, etc. Of course by the time you can inspect the body of a message, it's already sucked down a large chunk of your resources. Host type is useful in pre-filtering even before you go so far as to send the banner -- to get rid of or at least slow down the crap that you almost certainly know is on its way. The most precious resource for email is in most cases the time spent reading it. For spam this might not be many seconds, but it still bothers the recipient unnecessarily. Pete
Re: mitigating botnet C&Cs has become useless
Arjan Hulsebos wrote: The ones who've been mugged don't start mugging other people, infected PCs will infect other PCs. That's the difference, and that's why an ISP should do something about that. Although it may be out of fashion, I'd like to see good netizenship. Spam, like other types of abuse, is easiest to control closest to the source, which in most cases means the consumer ISP providing the local access for the user. Pete
Re: WSJ: Big tech firms seeking power
David Lesher wrote: I don't know the area; but gather it's hydro territory? How about water-source heat pumps? It's lots easier to cool 25C air into say 10-15C water than into 30C outside air. Open loop water source systems do have their issues [algae, etc] but can save a lot of power If you drill a vertical hole in the order of 50-200 meters deep, the cooling effect of water pumped through a pipe in that hole is in the order of 50W/m. So you can lose 10kW of heat into a 200 meter hole. I'm not sure what the separation between holes needs to be for this to be sustainable. Pretty good return on investment, considering drilling a hole is $3k-$6k. Pete
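As a sketch of the sizing rule above (the 50W/m figure is the post's rule of thumb, not a measured constant, and the cost-per-watt line uses the post's $3k-$6k drilling range):

```python
W_PER_METER = 50  # assumed heat rejection per meter of borehole

def borehole_depth_m(load_watts: float) -> float:
    """Meters of borehole needed to reject a given heat load."""
    return load_watts / W_PER_METER

print(borehole_depth_m(10_000))   # one 200 m hole handles a 10 kW load
# At $3k-$6k per drilled hole, that works out to $0.30-$0.60
# of one-time drilling cost per watt of cooling capacity:
for drill_cost in (3000, 6000):
    print(f"${drill_cost / 10_000:.2f}/W")
```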
Re: Geo location to IP mapping
Edward B. DREGER wrote: Since when does the NSA patent things, anyhow? I'd think they would keep secret anything that's actually effective. They are handing out "technology transfer program" leaflets in tradeshows now. Pete
Re: is this like a peering war somehow?
[EMAIL PROTECTED] wrote: And if you are spending the extra money to implement preferential treatment, can you be sure that there is a market willing to pay extra for this? And the real question is if the money is better spent on implementing preferential treatment or upgrading the infrastructure as a whole. Pete
Re: trollage (Re: Akamai server reliability)
Chris Owen wrote: It isn't just that they are wasting my time. They are also wasting their own time. It's the overall lack of efficiency that bothers me ;-] Don't worry, it won't take long until Google parks their datacenter-in-a-container outside at the fiber junction and the content distribution guys are obsoleted overnight. Pete
Re: [Misc][Rant] Internet router (straying slightly OT)
Per Gregers Bilse wrote: Life begins with ARP. I would have to argue that for the majority of things connected to IP networks, life begins with DHCPDISCOVER. Pete
Re: Weird DNS issues for domains
John Dupuy wrote: If you are talking about strictly http, then you are probably right. If you are hosting any email, then this isn't the case. A live DNS but dead mail server will cause your mail to queue up for a later resend on the originating mail servers. A dead DNS will cause the mail to bounce as undeliverable. (Oh, and if any of your subs are on mailing lists, they will be unsubscribed en masse. A nice way to challenge your call center...) An MTA bouncing mail on a temporary DNS failure would be horribly out of spec. Pete
Re: Turkey has switched Root-Servers
Christopher L. Morrow wrote: So, I think I'm off the crazy-pills recently... Why is it again that folks want to balkanize the Internet like this? Why would you intentionally put your customer base into this situation? If you are going to do this, why not just drop random packets to 'bad' destinations instead? There are actually quite a few parties advocating dropping packets to 'bad' destinations. Each of them usually has a different set of criteria to define the 'bad'. Pete
Re: Tools classifying network traffic to applications
Joe Shen wrote: It seems to focus on P2P applications. Is there a tool that supports as many applications as possible (including p2p, voip, web, ftp, network games, etc.)? The emphasis on p2p is mainly due to the usual questions focusing on it. Obviously the more "traditional" protocols like RTP, HTTP, FTP, etc. are supported as well. (RTP with loss/jitter analysis has quite a few uses.) Pete
Re: Tools classifying network traffic to applications
Christopher L. Morrow wrote: which can't really tell bittorrent (or ssh or aim or...) over tcp/80 from http over tcp/80... I think Joe's looking for something that knows what protocols look like below the port number and can spit out numbers for that... these, it would seem to me, would all require in-line traffic capture or mirrored port (mirrored traffic, not necessarily an ethernet port mirror) to be effective. We can do that up to 2Gbps; http://www.rommon.com/ , BitTorrent, KaZaa, eDonkey, HTTP, etc. supported. Pete
Re: commonly blocked ISP ports
Kim Onnel wrote: 80 deny udp any any eq 1026 (3481591 matches) This will cause one out of ~4000 of your UDP "sessions" to fail with older stacks, which allocate high ports from 1024 to ~5000. Pete
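The 1-in-4000 figure follows directly from the old ephemeral port range:

```python
# Older IP stacks allocate ephemeral source ports from 1024 up to ~5000,
# so blocking a single port (1026) in that range hits roughly 1 flow in 4000.
low, high = 1024, 4999
candidates = high - low + 1
print(candidates)                                  # 3976
print(f"~1 in {candidates} UDP flows loses its replies")
```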
Re: 12/8 problems?
Drew Linsalata wrote: Richard A Steenbergen wrote: $10 says someone forgot "ip classless". Is there a valid argument for making "ip classless" the default in the IOS? Seems to me that it would only solve problems, but I don't profess to be a routing guru, especially in comparison to folks in this forum. It has been that way for a while now? Pete
Re: Replacing PSTN with VoIP wise? Was Re: Phone networks struggle in Hurricane Katrina's wake
[EMAIL PROTECTED] wrote: A similar problem would be created if a web server relied on DNS that was only hosted on servers in New Orleans. Do you (or somebody) have recent numbers on what percentage of domains have all their DNS servers in a) the same subnet, b) the same AS, or c) the same geographical "zone" (faultline, floodplain, coastline, etc.)? Obviously, if everything served by the DNS is co-located in the same failed facility, it does not matter much whether the names resolve to IP addresses that are unreachable anyway. Pete
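Criterion (a) is easy to check mechanically once you have a domain's nameserver addresses. A toy sketch using only the standard library; the addresses below are hypothetical documentation-range examples, and AS or geography checks would need external data (whois, geolocation):

```python
import ipaddress

def all_same_subnet(ns_ips, prefix=24):
    """True if every nameserver address falls inside one IPv4 /prefix."""
    nets = {ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
            for ip in ns_ips}
    return len(nets) == 1

# Hypothetical nameserver sets:
print(all_same_subnet(["192.0.2.10", "192.0.2.20"]))     # True: no diversity
print(all_same_subnet(["192.0.2.10", "198.51.100.7"]))   # False
```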
Re: P2P Darknets to eclipse bandwidth management?
Fergie (Paul Ferguson) wrote: Overlooking the point that this kind of smells like a pitch for Staselog, I'd be curious to hear if this is an issue on the ISP bandwidth management radar... or already is... I've been asked this question repeatedly almost as long as we've had the traffic engineering / classification capabilities in our product. The great change towards encrypted p2p protocols has always been "just moments away" for the last three years. In this time we've seen the predominant p2p protocol change from Kazaa to eDonkey, from eDonkey to DirectConnect and from there to BitTorrent. The fraction of traffic classified as "other" has been 2-4% of the total since we shipped. Obviously, the fact that the world has not changed in the past is no proof that it will not in the future. If it does change towards increased privacy and encryption, I'm all for it. Pete
Re: Replacing PSTN with VoIP wise? Was Re: Phone networks struggle in Hurricane Katrina's wake
[EMAIL PROTECTED] wrote: It's clearly possible to find telco engineers with 5/10/15 years experience in running PSTN (might even find somebody with 40-50 years? :). It's possible to find network engineers with lots of BGP experience. Where do you find a senior engineer with 5+ years experience in enterprise-scale VoIP deployment? Deployable enterprise VoIP products existed in 1998. So it would be somebody who was there doing it back then? Goes 5+ with a margin. Pete
Re: Question about propagation and queuing delays
Tony Finch wrote: TCP performs much better if queueing delays are short, because that means it gets feedback from packet drops more promptly, and its RTT measurements are more accurate so the retransmission timeout doesn't get artificially inflated. Sure, but sending speculative duplicate ACKs works even better if you're competing seriously for transit bandwidth... Not sure how to set the evil bit on those packets though... Pete
Re: Question about propagation and queuing delays
David Hagel wrote: This is interesting. This may sound like a naive question. But if queuing delays are so insignificant in comparison to other fixed delay components then what does it say about the usefulness of all the extensive techniques for queue management and congestion control (including TCP congestion control, RED and so forth) in the context of today's backbone networks? Any thoughts? What do the people out there in the field observe? Are all the congestion control researchers out of touch with reality? Co-operative congestion control is like many other things where you're better off without it if most of "somebody else" is using it. TCP does not give you optimal performance but tries to make sure everybody gets along. Pete
Re: zotob - blocking tcp/445
Daniel Senie wrote: One of the dangers is more and more stuff is being shoved over a limited set of ports. There are VPNs being built over SSL and HTTP to help bypass firewall rule restrictions. At some point we end up with another protocol demux layer, and a non-standard one at that if we push more and more restrictive filters out there. This in the long run is going to cause many problems. Isn't SSL VPN exactly another protocol demux layer, though it might be a standard one? Pete
Re: zotob - blocking tcp/445
Joe Maimon wrote: This is network self preservation. Otherwise the garbage will eventually suffocate us all. It's like cancer: initially it was treated with drugs and equipment that did serious damage to the whole body, killing many in the process, while today's methods are much more targeted at the actual bad tissue, minimizing collateral damage. Port blocking is like cancer treatment from the 1980's. Pete
Re: FCC Issues Rule Allowing FBI to Dictate Wiretap-Friendly Design for In ternet Services
[EMAIL PROTECTED] wrote: Then you'll have to conclude that a lot of managed switches are insecure since they include some form of packet mirroring capability. Not to mention most of the routers. They usually can make the copies to an IP tunnel also. Pete
Re: /8 end user assignment?
Christopher L. Morrow wrote: This argument we (mci/uunet) used/use as well: "not enough demand to do any v6, put at bottom of list"... (until recently at least it still flew as an answer) How would you know if you had demand? How would you know if people who had dual-stack systems were trying to get it and failing? Seriously, I'm just curious here... this is akin to the 'if a tree fell in the forest would it make noise' problem. Run statistics off some selected recursive resolvers? Filter out spammers and other abuse first to make them more accurate. Pete
Re: /8 end user assignment?
Daniel Roesen wrote: I would guesstimate about 8 Terabyte per day, judging from the traffic I saw towards a virgin /21 (1 GByte per day). /18 attracts 19kbps on average, with day averages between 5 and 37 kilobits per second. That would translate to only 50 to 400 megabytes a day. So your /21 must be from a bad neighborhood. Pete
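The conversion behind those numbers:

```python
def mb_per_day(kbps: float) -> float:
    """Convert an average rate in kilobits/s to megabytes transferred per day."""
    return kbps * 1000 / 8 * 86400 / 1e6

print(round(mb_per_day(19)))                        # ~205 MB/day for the /18
print(round(mb_per_day(5)), round(mb_per_day(37)))  # the 50-400 MB/day range
```

For comparison, the 1 GByte/day the /21 attracted corresponds to roughly 93 kbps sustained, several times the /18's average despite covering an eighth of the address space.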
Re: Traffic to our customer's address(126.0.0.0/8) seems blocked by packet filter
Randy Bush wrote: You can ping to 126.66.0.30/8. and how does one ping a /8? Most trojans for zombie networks provide this functionality. Connect to your favourite C&C server and issue; .advscan ping 42 2 64 126.X.X.X (this will ping the address space with 42 threads, using two second intervals for packets, the X's work as wildcards) After the scan has completed, issue .scanstats to view your results. If you need to stop the pinging in the interim, issue .scanstop to cease. Pete
Re: "Cisco gate" - Payload Versus Vector
Randy Bush wrote: very helpful analysis. some questions: mrai stiffle that? could it be used to cascade to a neighbor? i suppose that diverting just the right 15-30 seconds of traffic could be profitable. More recent hardware allows you to take copies of packets and push them down an IP tunnel. Pushing something like this into the configuration would make much more sense. Pete
Re: as numbers
[EMAIL PROTECTED] wrote: nice... so one or more of the RIRs should ask the IANA for a delegation in the 4byte space and let a few brave souls run such a trap. The IETF has a process for running such experiments that could be applied here. should I write it up and get the ball rolling? Could the root-servers be moved to the 4 byte space? Pete
Re: Cisco and the tobacco industry
C. Jon Larsen wrote: It was supposed to be a complete ground up re-write in an OO language and it would have the ability to link new modules or shared objects in at run time, and it would unify the existing router (25xx / 4[57]xx / 75xx) family with the Grand Junction acquisition - the CAT5K switch family into one code tree and one IOS to run them all. CatOS and the cat5k came from the Crescendo acquisition. Pete
Re: Cisco IOS Exploit Cover Up
Stephen Fulton wrote: That assumes that the worm must "discover" exploitable hosts. What if those hosts have already been identified through other means previously? A nation, terrorist or criminal with the means could very well compile a relatively accurate database and use such a worm to attack specific targets, and those attacks need not be destructive/disruptive. Sure, most of the people on this list would make very smart and skilled criminals if they chose to pursue that path. Pete
Re: Cisco IOS Exploit Cover Up
Buhrmaster, Gary wrote: The *best* exploit is the one alluded to in the presentation. Overwrite the nvram/firmware to prevent booting (or, perhaps, adjust the voltages to damaging levels and do a "smoke test"). If you could do it to all GSR linecards, think of the RMA costs to Cisco (not to mention the fact that Cisco could not possibly replace all the cards in all the GSRs across the internet in any reasonable timeframe). *THAT* is what I suspect worries Cisco. But of course I am just conjecturing... One of the more effective (software) ways is to mess up the cookies on the cards which tell IOS what kinds of cards they are and then reload the box. Fortunately destructive worms don't usually get too wide a distribution because they don't survive long. Pete Gary -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Janet Sullivan Sent: Friday, July 29, 2005 12:44 PM To: [EMAIL PROTECTED]; nanog@merit.edu Subject: Re: Cisco IOS Exploit Cover Up Scott Morris wrote: And quite honestly, we can probably be pretty safe in assuming they will not be running IPv6 (current exploit) or SNMP (older exploits) or BGP (other exploits) or SSH (even other exploits) on that box. :) (the 1601 or the 2500's) If a worm writer wanted to cause chaos, they wouldn't target 2500s, but 7200s, 7600s, GSRs, etc. The way I see it, all that's needed is two major exploits, one known by Cisco, one not. Exploit #1 will be made public. Cisco will release fixed code. Good service providers will upgrade. The upgraded code version will be the one targeted by the second, unknown, exploit. A two-part worm can infect Windows boxen via any common method, and then use them to try the exploit against routers. A windows box can find routers to attack easily enough by doing traceroutes to various sites. Then, the windows boxen can try a limited set of exploit variants on each router. Not all routers will be affected, but some will.
As for what the worm could do - well, it could report home to the worm creators that "Hey, you 0wn X number of routers", or it could do something fun like erasing configs and locking out console ports. ;-) Honestly, I've been expecting something like that to happen for years now.
Re: Provider-based DDoS Protection Services
Suresh Ramasubramanian wrote: Not allowing your users to run eggdrop or other irc bots on the shells you give them, and generally not hosting irc stuff would definitely help there. Filtering everything other than port 80 and maybe 53 would allow them to experience the Internet in a safe and controlled manner! Pete
Re: London incidents
Francesco Usseglio Gaudi wrote: My little experience is that cell phone networks are in most cases near congestion: a simple crowd of people all calling at once can shut down or delay every call and SMS. GSM networks running TFR or EFR audio codecs have 8 timeslots on a cell. Usual 900MHz frequency allocation plans allow for 4-5 usable cells but most handsets try only the two with best reception to get an available timeslot. If you happen to be in a neighborhood with 850/1900 or 900/1800 service, the odds of having more capacity available are better. This means that 16 people on the same network dialing simultaneously can congest the two local cells. Almost all GSM networks implement emergency priority, where a call with the bit set will pre-empt capacity in the primary cell. Some handset firmware can be modified to set the necessary bit on demand. Not sure how long one would get away with it or if the BTS firmware would check the number dialed before granting pre-emption. Pete
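The timeslot arithmetic above fits in a couple of lines. This sketch assumes every slot carries a full-rate traffic channel and ignores slots reserved for signalling, so 16 is an upper bound rather than a field-accurate figure:

```python
# Back-of-the-envelope check of the congestion claim. In practice some
# timeslots carry signalling channels, so real capacity is a bit lower.
TIMESLOTS_PER_CELL = 8   # TFR/EFR full-rate: 8 slots per carrier
CELLS_TRIED = 2          # handsets only try the two best-received cells

simultaneous_calls = TIMESLOTS_PER_CELL * CELLS_TRIED
print(simultaneous_calls)   # 16 callers saturate both local cells
```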
Re: OMB: IPv6 by June 2008
Randy Bush wrote: Is it a problem keeping 500,000 routes in core routers? Of course, it is not (it was in 1996, but it is not in 2005) really? we have not seen this so how do you know? and it will be fine with churn and pushing 300k forwarding entries into the fibs on a well-known vendor's line cards? Soon the fib can be replaced by just an array with an index for every IPv4 address. With <255 interfaces, 4G is sufficient :-) Pete
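A minimal sketch of that "FIB as a flat array" quip: one byte per address holding the egress interface index, so a lookup is a plain array read with no longest-prefix search at runtime. A full IPv4 table would need 2**32 entries (the 4 GB mentioned above); the toy below uses a 16-bit address space to stay memory-friendly, and all names and prefixes are illustrative:

```python
ADDRESS_BITS = 16                    # a real IPv4 FIB would use 32
fib = bytearray(2 ** ADDRESS_BITS)   # one byte per address; 0 = discard

def install_route(prefix: int, prefix_len: int, ifindex: int) -> None:
    """Expand prefix/prefix_len into per-address entries.

    Routes must be installed shortest-prefix first so that more
    specific routes overwrite the addresses they cover."""
    span = 2 ** (ADDRESS_BITS - prefix_len)
    fib[prefix:prefix + span] = bytes([ifindex]) * span

def lookup(addr: int) -> int:
    return fib[addr]                 # O(1): longest-prefix match is free

install_route(0x0A00, 8, 1)          # "10.0/8" of the toy space -> if 1
install_route(0x0A80, 9, 2)          # more specific overlap -> if 2
print(lookup(0x0A10), lookup(0x0A90))  # 1 2
```

The trade-off is the classic one: constant-time lookup paid for with memory and with route-install cost proportional to the prefix span.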
Re: mh (RE: OMB: IPv6 by June 2008)
Crist Clark wrote: And the counter point to that argument is that the sparse population of IPv6 space will make systematic scanning by worms an ineffective means of propagation. And by connecting to one of the p2p overlay networks you'll have a few million in-use addresses momentarily. Pete
Re: OMB: IPv6 by June 2008
Jay R. Ashworth wrote: Well, with all due respect, of *course* there isn't any 'killer site' that is v6 only yet: the only motivation to do so at the moment, given the proportion of v4 to v6 end-users, is *specifically* to drive v4 to v6 conversion at the end-user level. We need either one efficient v6 p2p application or sites providing free p0rn only over ipv6 connections. The same would work for multicast. Pete
Re: OMB: IPv6 by June 2008
Peter Dambier wrote: David Conrad wrote: The good thing with IPv6 is autoconfiguration. There is no need to renumber. With the radvd daemon running your box builds its own ip as soon as you plug it in. If your box is allowed then give it a global address from the radvd. Your box does not care about the changed address. It will happily use it. Unfortunately autoconfiguration did not fix the combined identifier and network address issue both ipv4 and ipv6 have. Had it done that, multihoming would not be an issue with ipv6 today. (and probably not with ipv4 either) Pete
Re: ATM (with the answer!!!)
Mikael Abrahamsson wrote: On Sat, 2 Jul 2005, John L Lee wrote: With routers you will need to turn buffering off and you will still have propagation in the double to triple milli-seconds range with jitter in the multi milli-seconds range. Please elaborate why a router would have multi-millisecond propagation delay. For a 2meg link yes, for highspeed interfaces, no. My ping is showing that! (couldn't resist) Pete
Re: OMB: IPv6 by June 2008
Stephen Sprunk wrote: What this really does is change the detection method. Instead of scanning randomly, you sit and watch what other IP addresses the local host communicates with (on- and off-subnet), and attack each of them. How many degrees of separation are there really between any two unrelated computers on the Internet? You could probably collect half of all addresses in use just by infecting Google... Or just send email with IMG SRC tag pointing to a server you control and harvest the addresses from there? Pete
Re: Fundamental changes to Internet architecture
Fergie (Paul Ferguson) wrote: Yeah, I saw that... With all respect to Dave, and not to sound too skeptical, but we're pretty far along in our current architecture to "fundamentally" change, don't you think (emphasis on fundamentally)? Most of the routing and security issues on today's IPv4/IPv6 internet could be solved by deploying HIP or derivatives thereof without requiring fundamental changes to the infrastructure, since the major "flaw" of the current generation Internet is tying the network identity and host/application identity into one, which is then worked around with a whole spectrum of solutions along the lines of anycast, load balancers, NAT, etc. Pete - ferg -- [EMAIL PROTECTED] wrote: I guess I'm not the only one who thinks that we could benefit from some fundamental changes to Internet architecture. http://www.wired.com/news/infostructure/0,1377,68004,00.html?tw=wn_6techhead Dave Clark is proposing that the NSF should fund a new demonstration network that implements a fundamentally new architecture at many levels. -- "Fergie", a.k.a. Paul Ferguson Engineering Architecture for the Internet [EMAIL PROTECTED] or [EMAIL PROTECTED] ferg's tech blog: http://fergdawg.blogspot.com/
Re: ATM
Philip Lavine wrote: I plan to design a hub and spoke WAN using ATM. The data traversing the WAN is US equities market data. Market data can be in two flavors, multicast and TCP client/server. Another facet of market data is that it is bursty in nature and is very sensitive to packet loss and latency (like voice). What type of ATM AAL format would be best for this topology? Are there any other concerns I should be aware of? Maybe the small fact that ATM is fading away, and building new networks with a technology on its way out is going to explode your operational cost in a few years' time. Business grade IP networks will provide you with equal if not better performance than a "dedicated" ATM WAN. (in addition to the fact that you probably posted your question in the wrong forum) Pete
Re: Email peering
Rich Kulawiec wrote: "The best place to stop abuse is as near its source as possible." Meaning: it's far easier for network X to stop abuse from leaving its network than it is for 100,000 other networks to defend themselves from it. Especially since techniques for doing so (for instance, controlling outbound SMTP spam) are well-known, heavily documented, and easily put into service. The problem is that countermeasures which would hurt the source of junk heavily enough would also have to hurt "legitimate" traffic, making you an immediate lawsuit magnet. If that were not the case, or some larger parties felt they could stand despite this fact, the problem would be fairly straightforward to reduce to a fraction in a few months' time. Pete
Re: Email peering (Was: Economics of SPAM [Was: Micorsoft's Sender IDAuthentication......?]
[EMAIL PROTECTED] wrote: Today, if Joe Business gets lots of spam, it is not his ISP's responsibility. He has no-one to take responsibility for this problem off his hands. But if he only accepts incoming email through an operator who is part of the email peering network, he knows that somewhere there is someone who will take responsibility for the problem. That is something that businesses will pay for. Just look at the Postini numbers... Pete But first, ISPs have to put their hands up and take collective responsibility for Internet email as a service that has value and not just as some kind of loss leader for Internet access services. --Michael Dillon
Re: Outage queries and notices (was Re: GBLX congestion in Dallas area)
Jay R. Ashworth wrote: On Wed, Jun 08, 2005 at 09:22:02PM +0300, Petri Helenius wrote: Jay R. Ashworth wrote: The Internet needs a PA system. There is this sparsely deployed technology called multicast which would work for this application. Well, that's fine, at the transport layer, but I think more an application layer solution is called for. IMS / PoC(PTT) would allow this. Pete
Re: Outage queries and notices (was Re: GBLX congestion in Dallas area)
Jay R. Ashworth wrote: The Internet needs a PA system. There is this sparsely deployed technology called multicast which would work for this application. Pete
Re: Google DNS problems?!?
Suresh Ramasubramanian wrote: On 5/8/05, aljuhani <[EMAIL PROTECTED]> wrote: Well I am not a DNS expert but why does Google have the primary gmail MX record without load balancing while all secondaries share the same priority level? Has it occurred to you that there are other ways of load balancing mailserver clusters than just setting MX records? And actually it's quite common to have only one MX record today if the secondaries have no clue of the valid recipients. (to avoid queueing bounces) Pete
Re: Acceptable DSL Speeds (ms based)
[EMAIL PROTECTED] wrote: Well... the *original* question was "What's an acceptable speed for DSL?", and the only *really* correct answer is "The one that maximizes your profit margin", balancing how much you need to build out to improve things against whatever perceived sluggishness ends up making your customers go elsewhere. Just host a google box and push-install http://webaccelerator.google.com/ Probably makes >95% of people happy and one or two content delivery companies wonder where they should go next. Pete
Re: Schneier: ISPs should bear security burden
Adi Linden wrote: It's not up to the ISP to determine outbound malicious traffic, but it's up to the ISP to respond in a timely manner to complaints. Many (most?) do not. If they did, their support costs would explode. It is: block the customer, educate the customer why they were blocked, exterminate the customer's PC, unblock the customer. No doubt there'll be a repeat of the same in short time. This is actually the opposite. (though I'm biased) But the support costs will decrease because you'll get fewer complaints inbound and fewer customers complaining about slow connections because their PCs are filling them with junk. Pete
Re: Schneier: ISPs should bear security burden
Fergie (Paul Ferguson) wrote: Of course there are. What I'm saying is that too many providers do nothing, regardless of whether it is a managed (read: paid) service, or not. So why doesn't the market economy work and solve the problem? Because there is no "tax" on pollution? Pete - ferg -- Petri Helenius <[EMAIL PROTECTED]> wrote: We owe it to our customers, and we owe it to ourselves, so let's just stop finding excuses to side-step the issue. So are you saying that managed security services are not available for paying consumers in the USA? Pete -- "Fergie", a.k.a. Paul Ferguson Engineering Architecture for the Internet [EMAIL PROTECTED] or [EMAIL PROTECTED] ferg's tech blog: http://fergdawg.blogspot.com/
Re: Schneier: ISPs should bear security burden
Daniel Roesen wrote: I hope to find the time to do some capturing and analysis of this traffic. If anyone here has experience with that I'd be happy to hear from them... don't want to waste time doing something others already did... :-) Sure, what would you like to know? Pete
Re: Schneier: ISPs should bear security burden
Fergie (Paul Ferguson) wrote: We owe it to our customers, and we owe it to ourselves, so let's just stop finding excuses to side-step the issue. So are you saying that managed security services are not available for paying consumers in the USA? Pete
Re: Detecting VoIP traffic in ISP network
Suresh Ramasubramanian wrote: Local telco concerned about voip eating into their revenues, and wants to push through legislation or something? :) Or somebody who would like to provision adequate bandwidth to accommodate services on the rise? Not everybody is installed with the evil bit enabled by default :-) Pete On 4/27/05, Joe Shen <[EMAIL PROTECTED]> wrote: we want to collect statistics in our backbone networks. Is there any good method for this? Is there any product for this? Joe
gigabit residential
http://www.convergedigest.com/Bandwidth/newnetworksarticle.asp?ID=14545 Pete
Re: New Outage Hits Comcast Subscribers
Daniel Golding wrote: If you take a look at the dslreports.com forums, there are numerous complaints about DNS performance from various DSL and cable modem users. I'm not sure how reasonable these complaints are. The usual solution from other users is to install a piece of Windows software called "Treewalk" which will magically cure their problems. Consumer ISPs who don't proactively take care of security/abuse usually end up with harvesting-bots which consume a significant amount of DNS resources, typically doing anything from a few dozen to a thousand queries _a_second_. A few hundred of these will seriously hamper a typically provisioned recursive server. Pete
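To put rough numbers on that claim, taking the post's ranges at face value (the bot count and per-bot query rates below are illustrative, not measurements):

```python
# Rough aggregate DNS load from harvesting bots, using the ranges above.
bots = 300                       # "a few hundred" infected hosts
low_qps, high_qps = 36, 1000     # "a few dozen to a thousand" queries/s

print(bots * low_qps, "to", bots * high_qps, "queries/s aggregate")
```

Even the low end of that range, around ten thousand queries per second, is more recursive load than many ISP resolvers of the era were provisioned for.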
Re: clued/interested LEO list
Gadi Evron wrote: Petri Helenius wrote: joe mcguckin wrote: Isn't there already one 'secret handshake' club in existence already? Yes, but unlike there is a need for multiple instances of different governments, there is a need for multiple 'closed communities'. It will allow them to become corrupt in different ways. Obviously I meant to say "...but _not_ unlike...". Want my snail mail address for sending checks? I'd rather get cash. This will be a resource, not a mailing list. Depending on the S/N ratio, a mailing list can be a useful resource. Pete
Re: clued/interested LEO list
joe mcguckin wrote: Isn't there already one 'secret handshake' club in existence already? Yes, but unlike there is a need for multiple instances of different governments, there is a need for multiple 'closed communities'. It will allow them to become corrupt in different ways. Pete On 4/10/05 3:45 AM, "Gadi Evron" <[EMAIL PROTECTED]> wrote: I'm creating a list of clued and/or interested LEO's, who would like to be part of CLOSED/PRIVATE/SECRET online communities such as anti-botnets, anti-spam, anti-phishing, etc. mailing lists, and/or get information in their area of responsibility. I feel such a resource has been needed for a long time, so I just decided to sit down and get it done. If anyone has someone to add, I'll be working on it in the next couple of weeks. Gadi.
Re: The power of default configurations
Paul Vixie wrote: IMO, RFC1918 went off the track when both ISP's and registries started asking their customers if they have "seriously considered using 1918 space instead of applying for addresses". This caused many kinds of renumbering nightmares, overlapping addresses, near death of ipv6, etc. just checking... does that mean you favour the one-prefix-per-asn implicit allocation model, or the ipv6 version of 1918 which intentionally doesn't overlap in order to serve inter-enterprise links, or what exactly? I'm saying that running out of IPv4 addresses would not be such a bad thing and because of this should not be unnecessarily delayed. Pete
Re: The power of default configurations
Paul Vixie wrote: no to 1) prolong the pain, 2) beat a horsey.. BUT, why are 1918 ips 'special' to any application? why are non-1918 ips 'special' in a different way? i know this is hard to believe, but i was asked to review 1918 before it went to press, since i'd been vociferous in my comments about 1597. IMO, RFC1918 went off the track when both ISP's and registries started asking their customers if they have "seriously considered using 1918 space instead of applying for addresses". This caused many kinds of renumbering nightmares, overlapping addresses, near death of ipv6, etc. Pete
Re: botted hosts
Florian Weimer wrote: * Suresh Ramasubramanian: Find them, isolate them into what some providers call a "walled garden" - vlan them into their own segment from where all they can access are antivirus / service pack downloads Service pack downloads? Do you expect ISPs to pirate Windows (or large parts thereof)? Or has Microsoft finally seen the light? Walled garden is a term to describe selective external availability. This does not violate the usual download license conditions because no copy is made or stored at any time. The ISP can choose which external services are made available to the infected parties. Pete
Re: botted hosts
Sean Donelan wrote: Locating bots is relatively easy. If you think that is the hard part, you don't understand the problem. It's easy to some extent; databases of a few hundred thousand are easy to collect, but getting to the millions is harder. So how do you encourage people to fix their computers, without the press writing lots of stories about "evil" ISPs cutting off service to grandmothers on social security looking at pictures of their grandchildren? Experience tells that telling (obviously automatically) the users that their computer is too unsafe to be on the public internet and it'll stay that way until they either fix it or change to a less clueful provider works wonders. There are at least 20 million and probably more compromised computers on the Internet. Who has a plan to fix them? If the nanog readership is a few thousand, that's only ~5-10k for each of us. Piece of cake. And I still don't buy the number. I might buy 2M. Pete
Re: botted hosts
Peter Corlett wrote: A side-effect of the greylisting and other mail checks is that I've got a lovely list of compromised hosts. Is there any way I can usefully share these with the community? Set up a website where one can input a route and see the hosts covered by it? Pete
Re: so, how would you justify giving users security? [was: Re: botted hosts]
Gadi Evron wrote: Between spam, spyware and worms, not to mention scans and attacks, I suppose that a large percentage of the Internet already is pay-for-junk? No. Most of the Internet is p2p file sharing, which does not fall into the categories mentioned. (at least mostly it doesn't) Pete
Re: botted hosts
Stephen J. Wilcox wrote: On Sun, 3 Apr 2005, Petri Helenius wrote: I run some summaries about spam-sources by country, AS and containing BGP route. These are from a smallish set of servers, whole March aggregated. Percentage indicates incidents out of total. Conclusion is that blocking 25 inbound from a handful of prefixes would stop >10% of spam. and your second highest is 4.0.0.0/8 your advice is blocking it would help your email? The abuse from 4/8 seems to be coming from the first quarter of the address space. To be fair, 24.0.0.0/8 should get equal treatment to 4.0.0.0/8, whichever the reader feels appropriate. There are worse populations on other /8's but none of them are controlled by a single entity. Pete Steve
botted hosts
I run some summaries about spam-sources by country, AS and containing BGP route. These are from a smallish set of servers, whole March aggregated. Percentage indicates incidents out of total. Conclusion is that blocking 25 inbound from a handful of prefixes would stop >10% of spam.

+---------+----+
| 26.8013 | US |
| 25.6489 | KR |
| 11.2896 | CN |
|  4.3139 | FR |
|  2.8045 | BR |
+---------+----+

+---------+-------+
| 11.3916 |  4766 |
|  6.3791 |  9318 |
|  5.1094 |  4134 |
|  3.3910 |  7132 |
|  3.1717 | 29963 |
+---------+-------+

+--------+------------------+
| 2.0754 | 207.182.144.0/20 |
| 1.7184 | 4.0.0.0/8        |
| 1.3054 | 82.224.0.0/11    |
| 1.1116 | 221.144.0.0/12   |
| 1.0963 | 207.182.136.0/21 |
| 0.9943 | 61.78.37.0/24    |
| 0.9586 | 218.144.0.0/12   |
| 0.9484 | 222.96.0.0/12    |
| 0.7394 | 222.65.0.0/16    |
| 0.7343 | 211.200.0.0/13   |
+--------+------------------+

Pete
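A sketch of how such a summary could be produced from a log of spam-source IPs, using Python's ipaddress module. For simplicity a fixed prefix length stands in for the real lookup of the containing BGP route, and the sample addresses are illustrative:

```python
from collections import Counter
from ipaddress import ip_network

def summarize(sources, prefix_len=8):
    """Percentage of incidents per covering prefix. A fixed prefix
    length stands in here for the real containing-BGP-route lookup."""
    counts = Counter(
        ip_network(f"{ip}/{prefix_len}", strict=False)  # mask host bits
        for ip in sources
    )
    total = sum(counts.values())
    return {str(net): 100.0 * n / total for net, n in counts.most_common()}

sample = ["4.1.2.3", "4.9.8.7", "24.0.0.5", "61.78.37.9"]
print(summarize(sample))   # {'4.0.0.0/8': 50.0, '24.0.0.0/8': 25.0, ...}
```

A production version would substitute a longest-prefix match against a BGP table dump for the fixed-length grouping, and a GeoIP or whois lookup for the per-country and per-AS columns.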
Re: 72/8 friendly reminder
Randy Bush wrote: i do not understand what you are proposing. ahhh. you mean o each asn register a pingable address within its normal space, maybe in their irr route object o the rirs set up a routing island with only the new prefix in it o from a box with that new prefix, the rir pings all asn's registered pingable addresses from the first step o whine about any which are not pingable interesting modulo issues of reachability at any one time. and places more of a routing policing burden on the rirs. though at least one rir is just dying to become net police, so it might sell. We can set this up and provide the results for public consumption given the IP's and a minimum allocation from each one of the new blocks. (for the necessary duration, unless permanent allocation for darkspace duty is acceptable) Pete
Re: 72/8 friendly reminder
Randy Bush wrote: a bit more coffee made me realize that what might best occur would be for the rir, some weeks BEFORE assigning from a new block issued by the iana, put up a pingable for that space and announce it on the lists so we can all test BEFORE someone uses space from that block. Or maybe people should actually have systems to look at what hits their filters and from where and look at the summaries once a month or so? Pete
Re: Utah governor signs Net-porn bill
Rich Kulawiec wrote: Oh...and then we get into P2P distribution mechanisms. How is any ISP supposed to block content which is everywhere and nowhere? This would only be possible by whitelisting content, which is not what most would accept. (although there are countries where this is the norm, but their citizens are not exactly happy with it either) With technologies that do pseudonymous random routing over a tunnel broker service, delivered as an applet much like a Flash or Shockwave plugin, intrusive blocking becomes even harder to implement reliably. And it's probably the older kids who adopt this technology before the ISP or the parents do. The numbers are still in the thousands, but in the P2P world, going from minority to majority takes 12 to 18 months. Pete
Re: Utah governor signs Net-porn bill
Simon Lyall wrote: The world has been waiting for a list of Florida IPs for a while so we can block them for a few years, no such luck however. ip2location.com would be happy to sell you just such a list. Pete On a more practical note, one possible solution to a similar bill I heard of was to ensure that their blocking service (offered at no extra cost) just gave people an rfc1918 address that could *only* access a page explaining how all the nasty sites were now blocked. It can be called the "do nothing account" or similar.
Re: public accessible snmp devices?
Jim Popovitch wrote: I think this could be relevant. a LOT of devices drop snmp requests when they get busy or when too many incoming requests occur. Are you sure that you were the only one polling that device? Perhaps someone else put it into a "busy" state. Too often with SNMP devices and tools a '0' can mean things other than zero. So you are saying that it's ok for a Cisco or Juniper router to return zero for a counter when they feel "busy" ? My RFC collection tells a different story. Pete
Re: public accessible snmp devices?
Jim Popovitch wrote: Was the device restarted? Was the polled interface so overloaded that UDP was dropped and your tool/application just happened to show a zero instead? That would be no on both counts. All packets got replies, and while debugging, the polling interval was fairly short (on the order of seconds), so a restart would be out of the question, and it repeated frequently enough not to be a failover either. Pete
Re: public accessible snmp devices?
Alexei Roudnev wrote: Hmm, good idea. I add my voice to this question. But, btw, SNMP implementations are extremely buggy. Last 2 examples from my experience (with snmpstat system): - I found a Cisco which had packet counters (on an interface) _decreased_ instead of _increased_ (but octet counters _increased_); And lately, for reasons undetermined so far, there have been instances on both vendor C and J where counters suddenly go to zero either temporarily (like 1,2,3,4,0,6,7,8,0,10,etc.) or reset altogether without any reason. Pete
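One plausible way a poller can separate the spurious zeroes described above from genuine 32-bit counter wraps is a sanity bound on the modulo delta; a sketch, where the function name and threshold are illustrative:

```python
COUNTER_MOD = 2 ** 32   # standard 32-bit SNMP counter range

def counter_delta(prev, curr, max_plausible):
    """Return the traffic delta between two polls, or None on an
    apparent counter reset.

    A genuine wrap still yields a small delta modulo 2**32; a reset
    (counter jumping back toward zero with no plausible wrap) shows
    up as a modulo delta far larger than the interface could have
    carried in one polling interval."""
    d = (curr - prev) % COUNTER_MOD
    return None if d > max_plausible else d

print(counter_delta(COUNTER_MOD - 100, 50, 10_000))   # wrap: 150
print(counter_delta(1_000_000, 0, 10_000))            # reset: None
```

The max_plausible bound would normally be derived from the interface speed and the polling interval; 64-bit counters (where available) sidestep the wrap ambiguity but not the reset-to-zero bug.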
Re: IRC Bot list (cross posting)
Stephen J. Wilcox wrote: Hi, you probably didnt think of this but it might not be a good idea to publish a list of 3000 computers that can be infected/taken over for further nastiness. Collecting that kind of list on any machine on the public internet takes only a day or so, so I don't think posting a list, where some of the IP's change anyway, should be considered a security threat. if you can privately send me a list of IP addresses (no need to sort) i can assist you to distribute this information securely? Pete
Re: Time to check the rate limits on your mail servers
Nils Ketelsen wrote: Only thing that puzzles me is, why it took spammers so long to go in this direction. It didn't. It took the media long to notice. Pete
Re: beware of the unknown packets
Sabri Berisha wrote: On Wed, Jan 26, 2005 at 11:12:19PM +0200, Petri Helenius wrote: Hi, http://www.kb.cert.org/vuls/id/409555 Did anyone hear of any exploits being in the wild? How would one tell if the actual issue is not published? (without violating possible NDA's) Pete
beware of the unknown packets
http://www.kb.cert.org/vuls/id/409555 Pete