Tim Churches wrote:
>
> BTW, Ross Anderson (see http://www.cl.cam.ac.uk/~rja14/ but not
> http://www.rossanderson.org for some of his excellent papers on security
> and confidentiality issues, including confidentiality of medical
> records) contends that most security breaches of health information
> systems come from the inside. I have not seen much evidence cited to
> support this assertion, but perhaps there isn't much evidence because so
> few institutions admit that they have suffered security breaches, let
> alone discuss the nature of them.
>
You have since seen URLs to some published work, but it is
true that most institutions keep their security breach
information private. I know that our hospital keeps that
information private and confidential.
However, the best-known breaches that have occurred
(which are a matter of public record, so I can talk about
them) were internal. One was a misguided attempt to
provide an external vendor with a debugging file; the file
ended up on the web and was available via our web site
indexer! It's not all malicious intent; more often than
not, it's misconfiguration or misunderstanding.
That's why security is a people process and not a product or
a set of technologies.
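(As a side note, a periodic sweep for that kind of stray file is
cheap to automate. The following is a minimal sketch only; the
document root and the filename patterns are hypothetical examples,
not anything from our system.)

"""Minimal sketch: sweep a web document root for stray debug/dump
files before the site indexer can pick them up."""

import fnmatch
import os

DOC_ROOT = "/var/www/htdocs"   # hypothetical web document root
SUSPECT = ["*.dump", "*.dbg", "*.log", "*.bak", "core*"]  # files that should never be web-visible


def find_suspect_files(root, patterns):
    """Walk the document root and return paths matching any suspect pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name, pat) for pat in patterns):
                hits.append(os.path.join(dirpath, name))
    return hits


if __name__ == "__main__":
    for path in find_suspect_files(DOC_ROOT, SUSPECT):
        print("WARNING: possible debug/dump file in web root:", path)

It doesn't replace the people process, of course; it just catches
one known way a misconfiguration turns into an exposure.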
Re: other comments about getting users to maintain
high-quality passwords and keep them confidential. I too
have seen the sticky notes, as has just about everyone
involved in systems management. This is proof positive
that if we took away all user IDs and passwords and
allowed no access, security would be dramatically improved :)
Seriously, this is a classic example of what the literature
refers to as a secondary effect of technical security
systems. There is some evidence that these secondary effects
can have very large organizational costs (see my previous
postings about this). The hacker literature also takes
advantage of the fact that people are involved in real
systems and uses the term 'social engineering'. I have read
several high-profile tiger team reports (one done at a major
financial institution) where the primary means of compromise
was via social engineering, i.e. someone just gave their
password out over the phone! Forget all that Mission
Impossible stuff; it's really far easier to compromise
systems than one thinks, and it takes little more than
tried-and-true con-man and sting tactics. And of course, the kind
of people who have the knowledge to do this are also those
external folks most likely to be a threat. So, I argue that
these external threats are not really reduced much by highly
complex and sophisticated technical security measures.
The trade press and the security vendors talk about the
sophisticated technical attack and the ever more
sophisticated means of defense, yet these attacks do not
account for the bulk of the risk. The industry wants its
technical solutions to make up the bulk of your security budget.
Let's take a look at the most recent external event I know
of: If you will remember, last month the FBI announced that
it had located a group of folks (Ukrainian, I believe) who
were blackmailing some 20 or so e-commerce sites after
having stolen their credit card lists. The means by which
they were able to penetrate the systems was a very old and
pretty simplistic flaw in IIS. No sophisticated technical
security solutions would have prevented this attack. Beyond
the seemingly simple systems practice of keeping up to date
on patches (but see Schneier for why this is not an adequate
response), what was really needed was a basic re-design of
on-line commerce, i.e. not storing a credit card number
on the on-line system beyond the time it takes to clear the
transaction! But of course, storing such numbers makes for
a more convenient shopping experience!
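To make that concrete, here is a minimal sketch of the sort of
design I mean; the charge_card() gateway call and the in-memory
list standing in for a database are hypothetical placeholders,
not any real payment API.

"""Minimal sketch: the card number is sent to the payment
processor to clear the transaction, and only an opaque
transaction reference is stored afterwards."""

import uuid
from dataclasses import dataclass


@dataclass
class ClearedTransaction:
    reference: str      # opaque token returned by the processor
    amount_cents: int   # note: no card number field at all


def charge_card(card_number, amount_cents):
    """Placeholder for the call to a real payment gateway.

    A real system would submit the card number for authorization
    and settlement; here it just fabricates a transaction reference."""
    return "txn-" + uuid.uuid4().hex


def checkout(card_number, amount_cents, database):
    """Clear the transaction, then persist only the reference.

    The card number exists only in this function's local scope for
    the time it takes to clear the transaction; it is never stored."""
    reference = charge_card(card_number, amount_cents)
    txn = ClearedTransaction(reference=reference, amount_cents=amount_cents)
    database.append(txn)   # only the reference and amount are kept
    return txn


if __name__ == "__main__":
    db = []
    print(checkout("4111111111111111", 2599, db))

The convenience trade-off then becomes explicit: without a stored
number, repeat customers re-enter their card, or the merchant
relies on whatever reference scheme the processor provides.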