RE: Calling TeliaSonera - time to implement prefix filtering

2008-04-15 Thread Fred Reimer
But isn't this what nanog is for?  It appears to be more on-topic than the
email threads.  More E than S.

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 [EMAIL PROTECTED]
 Sent: Tuesday, April 15, 2008 9:51 AM
 To: nanog@merit.edu
 Subject: RE: Calling TeliaSonera - time to implement prefix filtering
 
 
 
   aut-num:AS29049
   and *of course* they don't own 62.0.0.0/8.
  
   Own!?
 
  I think he was saying that Delta Telecom don't *own*
  62.0.0.0/8 and therefore shouldn't be advertising it.
  Following that Telia shouldn't be accepting the route and
  then re-announcing it to peers ...
 
 Of course! ... /8? ... Azerbaijan? ... What was I thinking?...
 
 Still, it would be better to contact the upstream directly
 and work back through the peering chain because this kind
 of thing is usually a result of education deficit, not malice.
 
 --Michael Dillon
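
For readers unfamiliar with the mechanics: the inbound filter being asked
for is only a few lines of router config. A sketch in IOS style, using
documentation prefixes and private AS numbers rather than anyone's real
allocation (192.0.2.0/24 stands in for the customer's registered space):

```
! Hypothetical inbound prefix filter on a customer BGP session.
! Accept only the customer's registered prefix; drop everything else.
ip prefix-list CUSTOMER-IN seq 5 permit 192.0.2.0/24
ip prefix-list CUSTOMER-IN seq 100 deny 0.0.0.0/0 le 32
!
router bgp 64500
 neighbor 198.51.100.1 remote-as 64501
 neighbor 198.51.100.1 prefix-list CUSTOMER-IN in
```

With a filter like this in place, a customer leaking something like
62.0.0.0/8 would simply have the route dropped at ingress instead of
propagated to peers.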


smime.p7s
Description: S/MIME cryptographic signature


RE: enterprise change/configuration management and compliance software?

2008-04-15 Thread Fred Reimer
There are tons of products out there.  You could try looking at Cisco
Network Compliance Manager.  It supposedly has built-in compliance rules for
financial institutions (GLB, SOX, etc).  If you want to pay, people will
gladly take your money.

 

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697

 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
jamie
Sent: Tuesday, April 15, 2008 9:35 AM
To: Phil Regnauld
Cc: nanog@merit.edu
Subject: Re: enterprise change/configuration management and compliance
software?

 

 

On Tue, Apr 15, 2008 at 2:31 AM, Phil Regnauld [EMAIL PROTECTED] wrote:

jamie (j) writes:

 device, and by 'device' i mean router and/or switch) configuration
 management (and (ideally) compliance-auditing_and_assurance) software.

   We currently use Voyence (now EMC) and are looking into other options
for
 various reasons, support being in the top-3 ...

   So I guess using something tried, tested and free like Rancid + ISC's
audit
   scripts are not within scope ?


That was my first thought, but in the industry I'm currently in
(financial), open sourceware for things like this is a definite [fail].
 


   So, I pose:  To you operators of multi-hundred-device networks : what do
 you use for such purposes(*) ?

   Rancid :) (+ and now some home developed stuff)


fail
 

 


   This topic seemed to spark lively debate on efnet,

   The current weather would spark lively debate on most IRC channels.

   Phil



haha.  depends on the day and what other scandals were ao





RE: latency (was: RE: cooling door)

2008-03-30 Thread Fred Reimer

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 Paul Vixie
 Sent: Sunday, March 30, 2008 10:35 AM
 To: nanog@merit.edu
 Subject: Re: latency (was: RE: cooling door)
 
 
 [EMAIL PROTECTED] (Mikael Abrahamsson) writes:
 
  Programmers who do client/server applications are starting to notice
 this
  and I know of companies that put latency-inducing applications in the
  development servers so that the programmer is exposed to the same
  conditions in the development environment as in the real world.  This
  means for some that they have to write more advanced SQL queries to
 get
  everything done in a single query instead of asking multiple and
 changing
  the queries depending on what the first query result was.
 
 while i agree that turning one's SQL into transactions that are more
 like
 applets (such that, for example, you're sending over the content for a
 potential INSERT that may not happen depending on some SELECT, because
 the
 end-to-end delay of getting back the SELECT result is so much higher
 than
 the cost of the lost bandwidth from occasionally sending a useless
 INSERT)
 will take better advantage of modern hardware and software architecture
 (which means in this case, streaming), it's also necessary to teach our
 SQL servers that ZFS recordsize=128k means what it says, for file
 system
 reads and writes.  a lot of SQL users who have moved to a streaming
 model
 using a lot of transactions have merely seen their bottleneck move from
 the
 network into the SQL server.

I have seen first hand (worked for a company and diagnosed issues with their
applications from a network perspective, prompting a major re-write of the
software), where developers work with their SQL servers, application
servers, and clients all on the same L2 switch.  They often do not duplicate
the environment they are going to be deploying the application into, and
therefore assume that the network is going to perform the same.  So, when
there are problems they blame the network.  Often the root problem is the
architecture of the application itself and not the network.  All the
servers and client workstations have Gigabit connections to the same L2
switch, and they are honestly astonished when there are issues running the
same application over a typical enterprise network with clients of different
speeds (10/100/1000, full and/or half duplex).  Surprisingly, to me, they
even expect the same performance out of a WAN.

Application developers today need a network guy on their team.  One who
can help them understand how their proposed application architecture would
perform over various customer networks, and that can make suggestions as to
how the architecture can be modified to allow the performance of the
application to take advantage of the networks' capabilities.   Mikael (seems
to) complain that developers have to put latency inducing applications into
the development environment.  I'd say that those developers are some of the
few who actually have a clue, and are doing the right thing.

  Also, protocols such as SMB and NFS that use message blocks over TCP
 have
 to be abandoned and replaced with real streaming protocols and large
  window sizes. Xmodem wasn't a good idea back then, it's not a good
 idea
  now (even though the blocks now are larger than the 128 bytes of 20-
 30
  years ago).
 
 i think xmodem and kermit moved enough total data volume (expressed as
 a
 factor of transmission speed) back in their day to deserve an
 honourable
 retirement.  but i'd agree, if an application is moved to a new
 environment
 where everything (DRAM timing, CPU clock, I/O bandwidth, network
 bandwidth,
 etc) is 10X faster, but the application only runs 2X faster, then it's
 time
 to rethink more.  but the culprit will usually not be new network
 latency.
 --
 Paul Vixie

It may be difficult to switch to a streaming protocol if the underlying data
sets are block-oriented.
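
Vixie's point about shipping the conditional into the server can be made
concrete. A minimal sketch using SQLite (table and column names are
invented for illustration): the naive pattern pays two round trips per
record, a SELECT followed by a client-side decision to INSERT; the
applet-style version ships the condition with the INSERT and pays one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")

def insert_if_absent(conn, name):
    # One round trip: the server evaluates the NOT EXISTS guard itself,
    # so no SELECT result has to cross the (possibly high-latency) wire
    # before the client can decide whether to INSERT.
    conn.execute(
        "INSERT INTO users (name) SELECT ? "
        "WHERE NOT EXISTS (SELECT 1 FROM users WHERE name = ?)",
        (name, name),
    )

insert_if_absent(conn, "alice")
insert_if_absent(conn, "alice")  # second call is a no-op
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 1
```

The occasional "useless" INSERT costs a little bandwidth; the two-step
pattern costs a full RTT on every record, which is exactly the trade
described above.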

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697






RE: latency (was: RE: cooling door)

2008-03-30 Thread Fred Reimer
Thanks for the clarification; that's why I put the "seems to" in the reply.

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 Mikael Abrahamsson
 Sent: Sunday, March 30, 2008 12:30 PM
 To: nanog@merit.edu
 Subject: RE: latency (was: RE: cooling door)
 
 
 On Sun, 30 Mar 2008, Fred Reimer wrote:
 
  application to take advantage of the networks' capabilities.   Mikael
 (seems
  to) complain that developers have to put latency inducing
 applications into
  the development environment.  I'd say that those developers are some
 of the
  few who actually have a clue, and are doing the right thing.
 
 I was definitely not complaining; I brought it up as an example where
 developers have clue and where they're doing the right thing.
 
 I've too often been involved in customer complaints which ended up
 being
 the fault of Microsoft SMB and the customers having the firm idea that
 it
 must be a network problem since MS is a world standard and that can't
 be
 changed. Even proposing to change TCP Window settings to get FTP
 transfers
 quicker is met with the same scepticism.
 
 Even after describing to them about the propagation delay of light in
 fiber and the physical limitations, they're still very suspicious about
 it
 all.
 
 --
 Mikael Abrahamsson    email: [EMAIL PROTECTED]
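
The window-size arithmetic behind Mikael's point is simple enough to show
directly. A sketch (link speed and RTT values are illustrative):

```python
# Bandwidth-delay product: how much data must be "in flight" to fill a
# path. A TCP window smaller than this caps throughput no matter how
# fast the link is.
def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> int:
    return int(bandwidth_bps * rtt_s / 8)

def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    # With a fixed window, TCP moves at most one window per round trip.
    return window_bytes * 8 / rtt_s

# 100 Mbit/s path across the Atlantic, ~90 ms RTT:
print(required_window_bytes(100_000_000, 0.090))   # 1125000 bytes
# The classic 64 KB window on that same path:
print(int(max_throughput_bps(65_535, 0.090)))      # ~5.8 Mbit/s
```

No amount of extra bandwidth fixes the second number; only a larger
window (or more parallel streams) does, which is why the propagation
delay of light in fiber keeps coming up in these arguments.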




RE: Creating a crystal clear and pure Internet

2007-11-27 Thread Fred Reimer
No offense, but I think this is an overly political topic, and we
just saw that politics are not supposed to be discussed.  There
is a huge political debate on what ISP's should and should not be
doing to traffic that flows through their systems.  There are
other groups, like NNsquad, where these types of conversations
are welcome, but even there on the forums, not the mailing list.

But, if it's not viewed as political then...

Your analogy is flawed, because the Internet is not a pipe system
and ISP's are not your local water utility.  And, there are many
different ways that water utilities are handled in different
parts of the world.  In the US, most if not all water utilities
are handled by the government, usually the county government
where I'm from.  ISP's are not government run, and can't be
compared to a water utility for that simple reason.  They don't
have the same legal (again, an issue that is not supposed to be
discussed, according to the AUP) requirements nor the legal
protections available to governments (you can't sue most
governments).

And my personal opinion is that ISP's should not do anything to
the traffic that passes through their network as far as
filtering.  The only discriminatory behavior that should be
allowed is for QoS, to treat specific types or traffic in a
different manner to give preferential treatment to specific
classifications of traffic.  My definition of QoS for the
purposes of this discussion, if it is allowed to continue, would
not include shaping or policing.  If an ISP says you have a 5Mb
downstream and a 512K upstream, you should actually be allowed to
send 512K upstream all the time.  However, that's not to say that
an ISP should not be able to classify traffic as scavenger over a
particular threshold, and preferentially drop that traffic at
their oversubscribed uplink if that is a bottleneck.  The end
user should also be allowed to specify their own QoS markings,
and they should be honored as long as they don't go over specific
thresholds as imposed, and documented, by the ISP.  For example,
the customer should be able to self-classify certain traffic as
high priority (VoIP) and certain as low (P2P), but if the
customer classified all traffic as high priority the ISP is free
to remark anything over a set threshold (say 128K) as a lower
priority, but NOT police it.
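
The remark-don't-police behavior described above maps onto a single-rate
policer whose exceed action rewrites the DSCP instead of dropping. An
IOS-style MQC sketch (class and policy names and the DSCP values are
illustrative; the 128K threshold comes from the example):

```
! Customer self-marks traffic; anything marked high-priority beyond
! 128 kbit/s is re-marked down rather than dropped.
class-map match-any CUST-HIGH
 match dscp ef
!
policy-map CUSTOMER-UPSTREAM
 class CUST-HIGH
  police cir 128000
   conform-action transmit
   exceed-action set-dscp-transmit cs1
```

Note that every packet is still forwarded; the exceed action only changes
how it competes for the queue downstream.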

If you want to use an analogy, ISP's are more like private road
systems and owners, using public lands that have been given a
right to use said public lands for private profits with
specific restrictions.  Some restrictions may be that you can't
discriminate on the payload (any kind of identifying category for
passengers, such as race, ethnicity, gender, etc, which in the
network world would map to type of protocol or payload content,
such as P2P traffic or email), but that you can create an HOV
lane for high occupancy vehicles (QoS).  Of course, ISP's are
allowed to make sure the vehicles are in proper working condition
(checking that various layer headers are in compliance).  Much
like with the self-marking of traffic with QoS tags, the customer
should also be able to make their own decision and pack two other
people in the car in order to get into that HOV lane.  However,
if the users of the road try and pack everything into the HOV
lane, they can be reclassified (buses may have to pay a higher
fee to use the road).

However, in this world of religious warfare (another banned
topic, I'm sure!) it is recognized that a certain level of
profiling is acceptable.  So, it may be O.K. for ISP's to profile
and deny traffic depending on the payload only for specific types
of traffic that have been shown to cause issues, and/or only be
present for nefarious reasons.  Examples may be known signatures
for virus attacks, worms, or Trojans.  Other examples may be
identifying characteristics for SPAM (I'm reluctant to say
"excessive email traffic" because I don't believe that is a
proper identifying characteristic; I should be able to run my own
SMTP server and send out as much legitimate email as I want).

I realize that my views probably won't be shared by the vast
majority of ISP's, and hence are overly political for this group.
That's why I think any discussion is not necessarily on-topic.

Thanks,

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On Behalf Of Sean Donelan
 Sent: Tuesday, November 27, 2007 9:39 AM
 To: nanog@merit.edu
 Subject: Creating a crystal clear and pure Internet
 
 
 
 Some people have compared unwanted Internet traffic to water
 pollution,
 and proposed that ISPs should be required to be like water
 utilities and
 be responsible for keeping the Internet water crystal clear
 and pure.
 
 Several new projects have started around the world to
 achieve those goals.
 
 ITU anti-botnet initiative
 
  http://www.itu.int/ITU-D/cyb/cybersecurity

RE: Creating a crystal clear and pure Internet

2007-11-27 Thread Fred Reimer
You're not familiar with the incident where VeriSign issued two
certificates in Microsoft Corporation's name to hackers?

http://www.news.com/2100-1001-254586.html



Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697




 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On Behalf Of John Payne
 Sent: Tuesday, November 27, 2007 4:32 PM
 To: Florian Weimer
 Cc: Jared Mauch; Sean Donelan; nanog@merit.edu
 Subject: Re: Creating a crystal clear and pure Internet
 
 
 
 On Nov 27, 2007, at 4:04 PM, Florian Weimer wrote:
 
 
 One would hope that the CA's wouldn't be connected to an
 attack path...
 
 The revocation stuff should be distributable if it's not
 already.




RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Fred Reimer
That and the fact that an ISP would be aiding and abetting
illegal activities, in the eyes of the RIAA and MPAA.  That's not
to say that technically it would not be better, but that it will
never happen due to political and legal issues, IMO.


Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Stefan Bethke
Sent: Monday, October 29, 2007 8:37 AM
To: [EMAIL PROTECTED]
Cc: nanog@merit.edu
Subject: Re: Can P2P applications learn to play fair on networks?


[EMAIL PROTECTED] wrote:
 If P2P software relied on an ISP middlebox to mediate the
transfers,
 then each middlebox could optimize the local situation by using
a whole
 smorgasbord of tools.

Are there any examples of middleware being adopted by the market?
To me, it 
looks like the clear trend is away from using ISP-provided
applications and 
services, towards pure packet pushing (cf. HTTP proxies,
proprietary 
information services).  I'm highly sceptical that users would
want to adopt 
any software that ties them more to their ISP, not less.


Stefan






RE: Can P2P applications learn to play fair on networks?

2007-10-29 Thread Fred Reimer
The RIAA is specifically going after P2P networks.  As far as I
know, they are not going after Squid users/hosts.  Although they
may have at one point, it has never made the popular media as
their effort against the P2P networks has.  I'm not talking about
caching at all anyway.  I'm talking about what was suggested,
that ISP's play an active role in helping their users locate
local hosts to grab files from, rather than just anywhere out
on the Internet.  I think that is quite different than
configuring a transparent proxy.  Don't ask me why, it's not a
technical or even necessarily a legal question (and IANAL
anyway).  It's more of a perception issue with the RIAA.  If you
work at an ISP ask your legal counsel if this would be a good
idea.  I doubt they would say yes.

Fred Reimer, CISSP
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Sean Donelan
Sent: Monday, October 29, 2007 12:34 PM
To: nanog@merit.edu
Subject: RE: Can P2P applications learn to play fair on networks?


On Mon, 29 Oct 2007, Fred Reimer wrote:
 That and the fact that an ISP would be aiding and abetting
 illegal activities, in the eyes of the RIAA and MPAA.  That's
not
 to say that technically it would not be better, but that it
will
 never happen due to political and legal issues, IMO.

As always consult your own legal advisor, however in the USA
DMCA 512(b) probably makes caching by ISPs legal.  ISPs have not
been shy about using the CDA and DMCA to protect themselves from
liability.

Although caching has been very popular outside the USA, in
particular in 
countries with very expensive trans-oceanic circuits, in the USA
caching
is mostly a niche service for ISPs.  The issue there is more likely
that the cost of operating and maintaining the caching systems
exceeds the operational cost of the bandwidth.

Despite some claims from people that ISPs should just shovel
packets,
some US ISPs have used various caching systems for a decade.

It would be a shame to make Squid illegal for ISPs to use.
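
For context, the caching Sean describes is typically just a stock Squid
box; a minimal squid.conf fragment of that era might look like this
(port, paths, and sizes are illustrative):

```
# Illustrative squid.conf for an ISP-side web cache (Squid 2.x era)
http_port 3128 transparent
cache_mem 512 MB
cache_dir ufs /var/spool/squid 10240 16 256
```

The point stands either way: the same interception machinery serves both
the benign caching case and the content-discrimination case the RIAA
question is really about.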

