Data Security was: Basic EHR functionality

2004-03-12 Thread Nathan Lea
After discussion with Dr Dipak Kalra, we felt that the following would 
be of interest:

As part of the EHR developments at UCL we have been looking at 
appropriate ways of auditing user interactions with individual EHRs, as 
part of an overall security approach. For over a year our record server 
has kept an audit trail of each user access to or addition of data to 
any EHR. Through a helpful student project last summer (thanks to Asif 
Ali) we now also have a first prototype client and query service that 
permits an administrator to examine which users have accessed parts of 
an individual patient's record, which records a given user has 
accessed, or the general accesses that have occurred for any given 
archetype, within any date-time period. What we next need to do is to 
extend the client to support richer interrogation, and to examine again 
if we are retaining the most appropriate data items within the audit 
log. A further challenge is for us to explore the level of granularity 
at which to retain the audit information.
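For illustration, the kinds of interrogation described above could be sketched roughly as follows. The schema (user, patient, archetype, action, timestamp) is an assumption made purely for the example, not the actual UCL record-server design:

```python
# Hypothetical sketch of an audit-trail store supporting the three
# queries described above: accesses by patient record, by user, and
# by archetype within a date-time period.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class AuditEntry:
    user: str          # who accessed the EHR
    patient: str       # whose record was touched
    archetype: str     # which part of the record (by archetype)
    action: str        # e.g. "read" or "write"
    at: datetime       # when the access occurred

class AuditLog:
    def __init__(self) -> None:
        self.entries: List[AuditEntry] = []

    def record(self, e: AuditEntry) -> None:
        self.entries.append(e)

    def by_patient(self, patient: str) -> List[AuditEntry]:
        """Which users have accessed parts of this patient's record."""
        return [e for e in self.entries if e.patient == patient]

    def by_user(self, user: str) -> List[AuditEntry]:
        """Which records a given user has accessed."""
        return [e for e in self.entries if e.user == user]

    def by_archetype(self, archetype: str,
                     start: datetime, end: datetime) -> List[AuditEntry]:
        """General accesses for an archetype in a date-time period."""
        return [e for e in self.entries
                if e.archetype == archetype and start <= e.at <= end]
```

The open granularity question is what an `AuditEntry` should identify: the whole EHR, a composition, or an individual record component.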

The biggest question in Dipak's mind is how best to audit the result 
of running a query in which many record components are extracted and 
examined (perhaps by an application) to determine if they fulfil the 
query criteria, but only a few are actually returned to the end user 
initiating the request. The record server might not know of the 
filtration taking place, since its interactions would only be with the 
application, and not the end-user.
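One possible arrangement, sketched for illustration only (the function names and the division of responsibility are assumptions): the record server audits every component it examines for a query, while the application separately audits which components it actually returns to the end user, so the two logs together capture the filtration.

```python
# Hypothetical two-sided audit: the record server cannot see the
# application's final filtration, so each side logs its own part.
examined_log = []   # written by the record server
returned_log = []   # written by the querying application

def server_run_query(components, predicate, query_id):
    """Record server: extract and examine candidate components,
    auditing every component it looks at."""
    candidates = []
    for c in components:
        examined_log.append((query_id, c["id"]))   # server-side audit
        if predicate(c):
            candidates.append(c)
    return candidates

def app_filter_and_return(candidates, limit, query_id, user):
    """Application: performs the final filtration the server never
    sees, so it must audit what actually reaches the end user."""
    shown = candidates[:limit]
    for c in shown:
        returned_log.append((query_id, user, c["id"]))  # app-side audit
    return shown
```

Correlating the two logs by query identifier would then reveal both what was examined and what was disclosed.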

Following the recent discussion regarding the Harvard University 
experiments to display warning messages on the screens of clinicians: we 
have this facility to log user access to EHRs (whether the users are 
clinicians or patients); the work-in-progress project to develop a 
browser screen to access and display this data is described above - 
please see:

  http://www.ehr.chime.ucl.ac.uk/docs/Ali,%20Asif%20(MSc%202003).pdf

The persisted data and the GUI are geared towards logging user access, 
primarily to ensure that patient episode information and treatment 
records are exported in a way which promotes efficient patient care and 
clinician support, with the added value that record access is logged for 
scrutiny should that be necessary.

Best wishes,

Nathan

-
If you have any questions about using this list,
please send a message to d.lloyd at openehr.org



Data Security was: Basic EHR functionality

2004-03-11 Thread Tim Churches
On Wed, 2004-03-10 at 23:26, Thomas Beale wrote:
 Gavin Brelstaff wrote:
 
  Thomas Beale wrote:
 
 
  A well known study in Harvard medical school (I think) showed that 
  putting the message "Do not inappropriately access patient data - all 
  your accesses are being logged" on clinician screens a few times a 
  day resulted in a drop to near 0 of inappropriate access. No other 
  technology was used.
  - thomas
 
 
  There must be a downside to doing that too - discouraging access by those
  who have an urgent need while being undertrained on the system - it would
  sure scare me off - and that would effectively reduce medic-to-medic
  communication rather than promote it. Sure, security is important,
  but don't forget it is always a compromise [Bruce Schneier].
 
 I actually suspect the key to this is: whatever the security measures, 
 we must assume that someday, one day, health data of you or me, or a 
 politician or an actress will be hacked and sent to the Mirror or Sun 
 (gutter tabloid press in Britain, for US readers!), or simply posted on 
 a website. 

I think it is very important for all proponents of community-wide EHRs
to sit down and contemplate a series of worst case scenarios. Sure, they
can be dismissed as too unlikely to worry about - but they do need to be
contemplated.

How about this one: a sysadmin of an EHR facility copies not just the
one medical record, but 100,000 medical records, and passes them
secretly to a data terrorist who threatens to publish them on the
Internet via BitTorrent or one of the other distributed, anonymous
peer-to-peer networks - so that rounding up and destroying the
information is nigh on impossible - in order to extort money from a
government, or just to politically embarrass it (to death, probably).
The sysadmin bypasses the audit logs, or simply doctors them to cover
his/her tracks, or just skips the country.

Then consider the slightly more probable scenario that the above happens
not to your community EHR, but to some other community EHR, somewhere
else in the world, with the whole event widely reported in the media.

Either way, you need to be able to articulate a) why this scenario could
never happen, or b) why the costs of protecting against it are too
great, given the low but unknown probability of it occurring, and c)
what measures are justifiable to protect against it.

As a member of the public, I hope and trust that the people responsible
for,  say, a nuclear power plant, have workshopped all kinds of
extremely unlikely but extremely devastating scenarios. Failure to do so
results in Three Mile Islands and Chernobyls.

Community-wide EHRs are the nuclear power plants of the health
informatics world.


-- 

Tim C

PGP/GnuPG Key 1024D/EAF993D0 available from keyservers everywhere
or at http://members.optushome.com.au/tchur/pubkey.asc
Key fingerprint = 8C22 BF76 33BA B3B5 1D5B  EB37 7891 46A9 EAF9 93D0


-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 189 bytes
Desc: This is a digitally signed message part
URL: 
http://lists.openehr.org/mailman/private/openehr-technical_lists.openehr.org/attachments/20040311/1dcecf77/attachment.asc


Data Security was: Basic EHR functionality

2004-03-11 Thread Tim Churches
On Thu, 2004-03-11 at 04:52, lakewood at copper.net wrote:
 Hi Tim,
 
 One I failed to include is:
 
 RFC 3586 - IP Security Policy (IPSP) Requirements
 http://www.faqs.org/rfcs/rfc3586.html
 
 Some of the included links support searches; e.g., the CMU link 
 returned over 2200 hits on a search for 'security policy'. Lots of 
 policy-related information that is appropriate.

OK, I suppose I was just trying to distinguish things like the BMA
security policy, which are very high-level, from nitty-gritty policies
at the operating system or network layer levels. Both are essential, but
the former are articulated far more rarely than the latter.

-- 

Tim C





Data Security was: Basic EHR functionality

2004-03-11 Thread Vincent McCauley
I would strongly support the concept that logging
of access (hand in hand with significant penalties)
should be the underlying principle to deter
inappropriate access to the EHR.

This is already used in other high security domains (police etc).

Regards
Vince

Dr Vincent McCauley MB BS, Ph.D
CEO McCauley Software Pty Ltd
Vice President Medical Software Industry Association

- Original Message - 
From: Thomas Beale tho...@deepthought.com.au
To: Gavin Brelstaff gjb at crs4.it
Cc: Openehr-Technical openehr-technical at openehr.org
Sent: Wednesday, March 10, 2004 23:26
Subject: Re: Data Security was: Basic EHR functionality


 Gavin Brelstaff wrote:

  Thomas Beale wrote:
 
 
  A well known study in Harvard medical school (I think) showed that
  putting the message "Do not inappropriately access patient data - all
  your accesses are being logged" on clinician screens a few times a
  day resulted in a drop to near 0 of inappropriate access. No other
  technology was used.
  - thomas
 
 
  There must be a downside to doing that too - discouraging access by those
  who have an urgent need while being undertrained on the system - it would
  sure scare me off - and that would effectively reduce medic-to-medic
  communication rather than promote it. Sure, security is important,
  but don't forget it is always a compromise [Bruce Schneier].

 I actually suspect the key to this is: whatever the security measures,
 we must assume that someday, one day, health data of you or me, or a
 politician or an actress will be hacked and sent to the Mirror or Sun
 (gutter tabloid press in Britain, for US readers!), or simply posted on
 a website. What will be the acceptable costs of preventative measures
 against this? When it happens, what will be the acceptable outcome? For
 the latter: it has to be at least that perpetrators' accesses were
 logged (assuming they weren't so smart that they bypassed logging and
 all other access detection systems); it has to be that the victim is
 informed of the information theft; and there have to be legislative
 measures which are severe enough that stories do not get published in
 the Mirror or on the web. Stopping a story in a newspaper is possible in
 most countries; stopping the posting of information on the web is going
 to be much harder, but if the identities of the information thieves can
 be logged, then something can be done. Perhaps publishing another's
 health record can be made so severe a crime that it just isn't worth it
 for some would-be hackers? That leaves us with hackers with personal or
 particular motives, e.g. insurance companies, private investigators,
 family members, political parties... again, it seems to me that the
 greatest deterrent to actually using stolen health information is the
 sure knowledge that your illegal accesses were logged somehow, not that
 you were prevented from getting in in the first place; then you know that
 any use you make of the information, once detected, will lead to severe action.

 - thomas beale









Data Security was: Basic EHR functionality

2004-03-10 Thread Thomas Clark
Hi Tim,

Might want to add:

Computer Security Basics
http://www.oreilly.de/catalog/csb/toc.html

IEEE; Compartmented Mode Workstation: Prototype Highlights
http://csdl.computer.org/comp/trans/ts/1990/06/e0608abs.htm

CMU; Trusted Operating Systems
http://www.sei.cmu.edu/str/descriptions/trusted_body.html

Operating System Security
http://www.cs.ucd.ie/staff/tahar/home/courses/4thyear/chapter4/ppframe.htm

 From Security protocols to System Security
http://www.hpl.hp.com/techreports/2003/HPL-2003-147.html

Trusted Computing Platforms
http://www.hpl.hp.com/techreports/2002/HPL-2002-221.html

ASPECT - a tool for checking protocol security
http://www.hpl.hp.com/techreports/2002/HPL-2002-246.html

Resilient Infrastructure for Network Security
http://www.hpl.hp.com/techreports/2002/HPL-2002-273.html

Security Infrastructure for A Web Service Based Resource Management  System
http://www.hpl.hp.com/techreports/2002/HPL-2002-297.html

Trusted Solaris Developers Guide
http://docs.sun.com/db/doc/805-8060?q=compartmented+mode+workstation

Trusted Network Environment
http://www.tinfosol.com/lab/lab.html

RFC 1825 - Security Architecture for the Internet Protocol
http://www.faqs.org/rfcs/rfc1825.html

RFC 1827 - IP Encapsulating Security Payload (ESP)
http://www.faqs.org/rfcs/rfc1827.html

Secure Trusted Operating System (STOS) Consortium
http://www.stosdarwin.org/

The Blue Book
http://secinf.net/info/rainbow/tg29.txt

UK Security Citations Bibliography
http://chacs.nrl.navy.mil/xtp1/uksecbib.html

Regards!

-Thomas Clark


Tim Churches wrote:

On Tue, 2004-03-09 at 23:20, Thompson, Ken wrote:
  

2) A mechanism on the patient record itself that displays a list of all
users that have accessed the record (with date and time). This will probably
be made available to the patient at some point, so they will actually
provide a critical part of the checks and balances in the system.



This is similar to the mechanisms envisaged under the Consent and
notification section of the now-famous BMA Security Policy, developed by
Ross Anderson - see
http://www.cl.cam.ac.uk/users/rja14/policy11/policy11.html

This is still the gold standard for EHR security policies, IMHO, yet
most people I have met who are involved in EHR work and who know of it
(curiously many seem ignorant of it) tend to dismiss it, not because the
policies are unsound (although they do need minor tweaking here and
there), but because implementing them is very difficult in practice - 
particularly the multilateral as opposed to multilevel access control
policy. In fact you need both, but of the two, the former is more
important. In other words, role-based access control, where the roles
are specific to each patient, as well as to each health professional.
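A minimal sketch of that patient-specific-role idea, for illustration only (the role names and the rules attached to them are assumptions, not the BMA policy itself): each patient's record carries its own access-control list mapping individual professionals to roles, rather than one global role hierarchy.

```python
# Hypothetical multilateral access control: roles are granted per
# patient record, so a professional's rights on one record say
# nothing about their rights on any other.
class PatientRecordACL:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.roles = {}        # professional id -> role for THIS patient only

    def grant(self, professional, role):
        self.roles[professional] = role

    def may_read(self, professional, section):
        role = self.roles.get(professional)
        if role is None:
            return False       # no clinical relationship with this patient
        if role == "treating_clinician":
            return True        # full record for this patient
        if role == "practice_nurse":
            # illustrative partial access rule
            return section != "psychiatric_notes"
        return False
```

The contrast with multilevel control is that there is no clearance which opens every record; access is compartmented patient by patient.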


  






Data Security was: Basic EHR functionality

2004-03-10 Thread Tim Churches
On Wed, 2004-03-10 at 21:31, b.cohen wrote:
 I produced a formal definition of most of Anderson's Security 'Principles' in
 1996 (see http://www.soi.city.ac.uk/~bernie/hsp.pdf) 

Nice paper! Haven't read it in detail but on a quick scan I see the
value in the formalisation.

 and circulated it within
 TC251 and ASTMS 31.1 in an attempt to promote a more formal approach to the
 definition of EHR standards in general, and their security in particular.
 This approach was met with almost universal hostility from both supply and
 demand communities.
 The problem is not only that formalisation is intellectually difficult but 
 that
it exposes logical inconsistencies in the composition of what practitioners
 and users believe to be in their best interests (and promote as 'common
 sense').

Yup, but these tensions need to be worked out before you invest in
building vastly expensive community-wide EHRs - because they will surely
come to light once the thing is in operation, and by then they will be
even more expensive, or impossible, to fix.

 The exposure of such inconsistencies, and their repair by altering models on
 both the supply and demand sides, is the essence of strategic  development.
 Unfortunately, it is also the bane of standardisation.

Apart from altering models, it is also possible to press new
technologies into service. For example (wearing my epidemiologist hat
now), many of the new privacy-preserving data mining techniques, based
on secure multi-party computation, can now be used to solve many of the
researcher vs individual privacy issues. The emergency access problem
is tricky, but reasonable compromise schemes can be worked out -
although they may be expensive. I think that is the bottom line: no-one
likes the fact that doing EHRs properly costs a lot more than
maintaining all the redundant copies of the partial paper records which
we currently maintain. Do the benefits outweigh the costs? The benefits
can't really be assessed until someone has built and operated a
comprehensive community-wide EHR for half a decade or more.
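As a toy illustration of the privacy-preserving, secure multi-party computation techniques mentioned above: several data holders can compute a combined total without any of them revealing their individual counts, using additive secret sharing. This is a simplified sketch under stated assumptions (a real protocol also needs secure channels between parties and an analysis of collusion):

```python
# Additive secret sharing over a prime field: each party splits its
# private value into random shares that sum to the value mod P, so
# only the final total is ever revealed.
import random

P = 2**61 - 1   # field modulus (illustrative choice)

def share(value, n_parties):
    """Split value into n random shares summing to value mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_values):
    n = len(private_values)
    # each party splits its value and sends one share to every party
    all_shares = [share(v, n) for v in private_values]
    # each party sums the shares it received (column sums);
    # an individual partial sum reveals nothing about any one input
    partial = [sum(col) % P for col in zip(*all_shares)]
    # the published partial sums combine to give only the total
    return sum(partial) % P
```

For example, three sites holding case counts 12, 30 and 5 can jointly learn the total 47 while each site's count stays private.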

Tim C

 
 Quoting Tim Churches tchur at optushome.com.au:
 
  On Tue, 2004-03-09 at 23:20, Thompson, Ken wrote:
   2) A mechanism on the patient record itself that displays a list of all
   users that have accessed the record (with date and time). This will
  probably
   be made available to the patient at some point, so they will actually
   provide a critical part of the checks and balances in the system.
  
  This is similar to the mechanisms envisaged under the Consent and
  notification section of the now-famous BMA Security Policy, developed by
  Ross Anderson - see
  http://www.cl.cam.ac.uk/users/rja14/policy11/policy11.html
  
  This is still the gold standard for EHR security policies, IMHO, yet
  most people I have met who are involved in EHR work and who know of it
  (curiously many seem ignorant of it) tend to dismiss it, not because the
  policies are unsound (although they do need minor tweaking here and
  there), but because implementing them is very difficult in practice - 
  particularly the multilateral as opposed to multilevel access control
  policy. In fact you need both, but of the two, the former is more
  important. In other words, role-based access control, where the roles
  are specific to each patient, as well as to each health professional.
  
  
  -- 
  
  Tim C
  
  
  
  
-- 

Tim C





Data Security was: Basic EHR functionality

2004-03-10 Thread Thomas Beale
Gavin Brelstaff wrote:

 Thomas Beale wrote:


 A well known study in Harvard medical school (I think) showed that 
 putting the message "Do not inappropriately access patient data - all 
 your accesses are being logged" on clinician screens a few times a 
 day resulted in a drop to near 0 of inappropriate access. No other 
 technology was used.
 - thomas


 There must be a downside to doing that too - discouraging access by those
 who have an urgent need while being undertrained on the system - it would
 sure scare me off - and that would effectively reduce medic-to-medic
 communication rather than promote it. Sure, security is important,
 but don't forget it is always a compromise [Bruce Schneier].

I actually suspect the key to this is: whatever the security measures, 
we must assume that someday, one day, health data of you or me, or a 
politician or an actress will be hacked and sent to the Mirror or Sun 
(gutter tabloid press in Britain, for US readers!), or simply posted on 
a website. What will be the acceptable costs of preventative measures 
against this? When it happens, what will be the acceptable outcome? For 
the latter: it has to be at least that perpetrators' accesses were 
logged (assuming they weren't so smart that they bypassed logging and 
all other access detection systems); it has to be that the victim is 
informed of the information theft; and there have to be legislative 
measures which are severe enough that stories do not get published in 
the Mirror or on the web. Stopping a story in a newspaper is possible in 
most countries; stopping the posting of information on the web is going 
to be much harder, but if the identities of the information thieves can 
be logged, then something can be done. Perhaps publishing another's 
health record can be made so severe a crime that it just isn't worth it 
for some would-be hackers? That leaves us with hackers with personal or 
particular motives, e.g. insurance companies, private investigators, 
family members, political parties... again, it seems to me that the 
greatest deterrent to actually using stolen health information is the 
sure knowledge that your illegal accesses were logged somehow, not that 
you were prevented from getting in in the first place; then you know that 
any use you make of the information, once detected, will lead to severe action.

- thomas beale





Data Security was: Basic EHR functionality

2004-03-10 Thread lakew...@copper.net
Hi Tim,

Security policies are included, as are implementation approaches.

Regards!

-Thomas Clark

Tim Churches wrote:

On Wed, 2004-03-10 at 19:10, Thomas Clark wrote:
  

Hi Tim,

Might want to add:

Computer Security Basics
http://www.oreilly.de/catalog/csb/toc.html

IEEE; Compartmented Mode Workstation: Prototype Highlights
http://csdl.computer.org/comp/trans/ts/1990/06/e0608abs.htm

CMU; Trusted Operating Systems
http://www.sei.cmu.edu/str/descriptions/trusted_body.html

Operating System Security
http://www.cs.ucd.ie/staff/tahar/home/courses/4thyear/chapter4/ppframe.htm

 From Security protocols to System Security
http://www.hpl.hp.com/techreports/2003/HPL-2003-147.html

Trusted Computing Platforms
http://www.hpl.hp.com/techreports/2002/HPL-2002-221.html

ASPECT - a tool for checking protocol security
http://www.hpl.hp.com/techreports/2002/HPL-2002-246.html

Resilient Infrastructure for Network Security
http://www.hpl.hp.com/techreports/2002/HPL-2002-273.html

Security Infrastructure for A Web Service Based Resource Management  System
http://www.hpl.hp.com/techreports/2002/HPL-2002-297.html

Trusted Solaris Developers Guide
http://docs.sun.com/db/doc/805-8060?q=compartmented+mode+workstation

Trusted Network Environment
http://www.tinfosol.com/lab/lab.html

RFC 1825 - Security Architecture for the Internet Protocol
http://www.faqs.org/rfcs/rfc1825.html

RFC 1827 - IP Encapsulating Security Payload (ESP)
http://www.faqs.org/rfcs/rfc1827.html

Secure Trusted Operating System (STOS) Consortium
http://www.stosdarwin.org/

The Blue Book
http://secinf.net/info/rainbow/tg29.txt

UK Security Citations Bibliography
http://chacs.nrl.navy.mil/xtp1/uksecbib.html



All of those deal with security implementation issues, i.e. how you
achieve certain objectives. The BMA security policy sets out what those
objectives ought to be. Defining the security objectives, which in turn
ought to be informed by specific threat models, needs to be done before
you can consider which security technologies are appropriate. But yes,
most of those are appropriate.

Tim c

  

Regards!

-Thomas Clark


Tim Churches wrote:



On Tue, 2004-03-09 at 23:20, Thompson, Ken wrote:
 

  

2) A mechanism on the patient record itself that displays a list of all
users that have accessed the record (with date and time). This will probably
be made available to the patient at some point, so they will actually
provide a critical part of the checks and balances in the system.
   



This is similar to the mechanisms envisaged under the Consent and
notification section of the now-famous BMA Security Policy, developed by
Ross Anderson - see
http://www.cl.cam.ac.uk/users/rja14/policy11/policy11.html

This is still the gold standard for EHR security policies, IMHO, yet
most people I have met who are involved in EHR work and who know of it
(curiously many seem ignorant of it) tend to dismiss it, not because the
policies are unsound (although they do need minor tweaking here and
there), but because implementing them is very difficult in practice - 
particularly the multilateral as opposed to multilevel access control
policy. In fact you need both, but of the two, the former is more
important. In other words, role-based access control, where the roles
are specific to each patient, as well as to each health professional.


 

  








Data Security was: Basic EHR functionality

2004-03-10 Thread lakew...@copper.net
Hi Tim,

One I failed to include is:

RFC 3586 - IP Security Policy (IPSP) Requirements
http://www.faqs.org/rfcs/rfc3586.html

Some of the included links support searches; e.g., the CMU link returned 
over 2200 hits on a search for 'security policy'. Lots of policy-related 
information that is appropriate.

Regards!

-Thomas Clark

Tim Churches wrote:

On Wed, 2004-03-10 at 19:10, Thomas Clark wrote:
  

Hi Tim,

Might want to add:

Computer Security Basics
http://www.oreilly.de/catalog/csb/toc.html

IEEE; Compartmented Mode Workstation: Prototype Highlights
http://csdl.computer.org/comp/trans/ts/1990/06/e0608abs.htm

CMU; Trusted Operating Systems
http://www.sei.cmu.edu/str/descriptions/trusted_body.html

Operating System Security
http://www.cs.ucd.ie/staff/tahar/home/courses/4thyear/chapter4/ppframe.htm

 From Security protocols to System Security
http://www.hpl.hp.com/techreports/2003/HPL-2003-147.html

Trusted Computing Platforms
http://www.hpl.hp.com/techreports/2002/HPL-2002-221.html

ASPECT - a tool for checking protocol security
http://www.hpl.hp.com/techreports/2002/HPL-2002-246.html

Resilient Infrastructure for Network Security
http://www.hpl.hp.com/techreports/2002/HPL-2002-273.html

Security Infrastructure for A Web Service Based Resource Management  System
http://www.hpl.hp.com/techreports/2002/HPL-2002-297.html

Trusted Solaris Developers Guide
http://docs.sun.com/db/doc/805-8060?q=compartmented+mode+workstation

Trusted Network Environment
http://www.tinfosol.com/lab/lab.html

RFC 1825 - Security Architecture for the Internet Protocol
http://www.faqs.org/rfcs/rfc1825.html

RFC 1827 - IP Encapsulating Security Payload (ESP)
http://www.faqs.org/rfcs/rfc1827.html

Secure Trusted Operating System (STOS) Consortium
http://www.stosdarwin.org/

The Blue Book
http://secinf.net/info/rainbow/tg29.txt

UK Security Citations Bibliography
http://chacs.nrl.navy.mil/xtp1/uksecbib.html



All of those deal with security implementation issues, i.e. how you
achieve certain objectives. The BMA security policy sets out what those
objectives ought to be. Defining the security objectives, which in turn
ought to be informed by specific threat models, needs to be done before
you can consider which security technologies are appropriate. But yes,
most of those are appropriate.

Tim c

  

Regards!

-Thomas Clark


Tim Churches wrote:



On Tue, 2004-03-09 at 23:20, Thompson, Ken wrote:
 

  

2) A mechanism on the patient record itself that displays a list of all
users that have accessed the record (with date and time). This will probably
be made available to the patient at some point, so they will actually
provide a critical part of the checks and balances in the system.
   



This is similar to the mechanisms envisaged under the Consent and
notification section of the now-famous BMA Security Policy, developed by
Ross Anderson - see
http://www.cl.cam.ac.uk/users/rja14/policy11/policy11.html

This is still the gold standard for EHR security policies, IMHO, yet
most people I have met who are involved in EHR work and who know of it
(curiously many seem ignorant of it) tend to dismiss it, not because the
policies are unsound (although they do need minor tweaking here and
there), but because implementing them is very difficult in practice - 
particularly the multilateral as opposed to multilevel access control
policy. In fact you need both, but of the two, the former is more
important. In other words, role-based access control, where the roles
are specific to each patient, as well as to each health professional.


 

  








Data Security was: Basic EHR functionality

2004-03-09 Thread Thomas Beale
Tim Cook wrote:

On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
  

In general, caches should be
held on encrypted filesystems, either on-disc or in-memory, with the
keys (or a key to the keys) to the encryption/decryption managed by a
daemon which purges the keys from memory when asked (eg locking the
device) or automatically after a short period of disuse.



Well, now that would certainly be a secure way to handle caching.  If I
were worrying about national secrets.  

Do you go to this extreme now (as a manager) when doing your risk
assessments?  I am wondering what the total (additional) costs of system
design and hardware resources is when these facilities are implemented. 

I think that in most cases we can reliably depend on locked doors and
holding people responsible for protecting data they are entrusted with. 
I will agree that security training needs to include this awareness so
that users know how to properly store each of these devices when not in
use.
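For what it's worth, the key-purging scheme quoted above can be sketched as follows. XOR stands in for real encryption purely to keep the example short and self-contained; an actual system would use an encrypted filesystem or an authenticated cipher, and the timeout value is an arbitrary assumption.

```python
# Hypothetical key daemon: holds the cache encryption key in memory,
# releases it on demand, and purges it either on explicit request
# (e.g. locking the device) or automatically after a period of disuse.
import time

class KeyDaemon:
    def __init__(self, key: bytes, idle_timeout: float):
        self._key = key
        self._timeout = idle_timeout
        self._last_used = time.monotonic()

    def get_key(self):
        # automatic purge after the idle period elapses
        if self._key is not None and \
           time.monotonic() - self._last_used > self._timeout:
            self.purge()
        self._last_used = time.monotonic()
        return self._key            # None once purged: cache unreadable

    def purge(self):                # explicit purge, e.g. on device lock
        self._key = None

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Once the daemon purges the key, the cached ciphertext on disc is useless to whoever walks off with the device, which is the point of the scheme.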

  

A well known study in Harvard medical school (I think) showed that 
putting the message "Do not inappropriately access patient data - all 
your accesses are being logged" on clinician screens a few times a day 
resulted in a drop to near 0 of inappropriate access. No other 
technology was used.


- thomas





Data Security was: Basic EHR functionality

2004-03-09 Thread Gavin Brelstaff
Thomas Beale wrote:

 Tim Cook wrote:
 
 On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
  

 In general, caches should be
 held on encrypted filesystems, either on-disc or in-memory, with the
 keys (or a key to the keys) to the encryption/decryption managed by a
 daemon which purges the keys from memory when asked (eg locking the
 device) or automatically after a short period of disuse.
   


 Well, now that would certainly be a secure way to handle caching.  If I
 were worrying about national secrets. 
 Do you go to this extreme now (as a manager) when doing your risk
 assessments?  I am wondering what the total (additional) costs of system
 design and hardware resources is when these facilities are implemented.
 I think that in most cases we can reliably depend on locked doors and
 holding people responsible for protecting data they are entrusted 
 with. I will agree that security training needs to include this 
 awareness so
 that users know how to properly store each of these devices when not in
 use.

  

 A well known study in Harvard medical school (I think) showed that 
 putting the message "Do not inappropriately access patient data - all 
 your accesses are being logged" on clinician screens a few times a day 
 resulted in a drop to near 0 of inappropriate access. No other 
 technology was used.
 - thomas

There must be a downside to doing that too - discouraging access by those
who have an urgent need while being undertrained on the system - it would
sure scare me off - and that would effectively reduce medic-to-medic
communication rather than promote it. Sure, security is important,
but don't forget it is always a compromise [Bruce Schneier].



Data Security was: Basic EHR functionality

2004-03-09 Thread Thomas Clark
Hi Nathan,

The real estate codes in most jurisdictions require landowners to post 
notices warning of everything from dangerous conditions to prohibitions. 
Interpretations extend well beyond this, imposing duties upon landowners 
to take active measures to protect even persons who may be classified as 
trespassers against dangers.

In short, nailing up a sign on a tree or a fence is just step #1, and it 
doesn't stop there.

Another analogy is a financial services firm that takes your certificates 
and holds them for safekeeping. They may post a sign, but they had better 
do a whole lot more than that.

The topic of patient record security is a tough one. Some jurisdictions 
have already established code. The real question is whether the security 
as a whole complies with the code and available case law. If so, will it 
next month?

Security is an ongoing requirement, responsibility and duty. Consult an 
experienced attorney.

Regards!

-Thomas Clark


Nathan Lea wrote:

 On 9 Mar 2004, at 06:51, Thomas Beale wrote:

  A well known study in Harvard medical school (I think) showed that
  putting the message "Do not inappropriately access patient data -
  all your accesses are being logged" on clinician screens a few
  times a day resulted in a drop to near 0 of inappropriate access.
  No other technology was used.


 Indeed - but the (perhaps) disingenuous claim which is flashed across 
 clinicians' screens will only work for a finite period before people 
 stop believing it and revert to their old habits. Security is a 
 process, and it requires constant amendment and updating. If someone 
 wants to attack a system (in this case by inappropriately accessing 
 records), they will. To use a phrase which is undoubtedly well known 
 to everyone, there is no silver bullet - especially where security 
 is concerned...

 A good book to look at on the subject of insecure data is /The Art of 
 Deception/ by Kevin Mitnick.

 Never say die.

 Best,

 Nathan






Data Security was: Basic EHR functionality

2004-03-09 Thread Thomas Clark
Hi Ken,

Software agents and data mining: summaries can be helpful but may, and
probably will, be insufficient to satisfy demands for information. Data
mining can satisfy many of the anticipated requests for information, but
not all. Software agents can be developed to address the majority of
requests for information.

The EHR system must be structured to respond to both types of
interrogation.

...

periodic report ... summary of their activities ...
...

could be helpful but possibly not very useful. A patient or provider tool
with an audit trail and a 'grep' tool would probably be better.
Additionally, the user is probably not interested in getting a summary
report of what they did, and is likely not interested in their own
security.
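As a sketch of what such an audit-trail 'grep' tool might look like - the log format, field names and entries below are entirely hypothetical, not any existing EHR's format - a patient or provider could filter an access log with ordinary regular expressions:

```python
# Minimal audit-trail "grep" sketch for an EHR access log.
# The pipe-delimited format (timestamp|user|patient_id|action) is invented
# for illustration only.
import re

LOG_LINES = [
    "2004-03-08T09:14|jsmith|patient-042|read",
    "2004-03-08T09:20|akhan|patient-007|update",
    "2004-03-08T11:02|jsmith|patient-007|read",
]

def grep_audit(lines, pattern):
    """Return the log entries matching a regular expression."""
    rx = re.compile(pattern)
    return [line for line in lines if rx.search(line)]

# All accesses to one patient's record, regardless of user:
hits = grep_audit(LOG_LINES, r"patient-007")
```

The same function answers "which records did user X touch" or "what happened between 09:00 and 10:00" just by changing the pattern, which is the appeal of a grep-style tool over fixed summary reports.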

Record locks and loggers could be valuable at specific points in the
global system. This would not, however, provide record-based security at
other points where no security or protections exist.

Alternative methods could provide additional security, but they would
likely involve translation and mutation: translation of information to
mask identities, and mutation to mask record formats. These would be
accompanied by notifications so that the prior security mechanisms would
be able to track record handling.
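One possible reading of "translation to mask identities" is keyed pseudonymisation: a secret key turns a real patient identifier into a stable pseudonym, so downstream systems can still link a patient's records without learning who the patient is. The sketch below is illustrative only - the key, identifier scheme and prefix are invented, not a description of any existing EHR mechanism:

```python
# Sketch of identity masking via keyed pseudonyms (HMAC-SHA256).
# SITE_KEY and the "pseu-" naming scheme are hypothetical.
import hmac
import hashlib

SITE_KEY = b"replace-with-a-real-secret"

def pseudonymise(patient_id: str) -> str:
    """Return a stable, non-reversible pseudonym for patient_id.

    The same input always yields the same pseudonym (so records remain
    linkable), but without the key the mapping cannot be reversed.
    """
    digest = hmac.new(SITE_KEY, patient_id.encode(), hashlib.sha256)
    return "pseu-" + digest.hexdigest()[:16]
```

A notification channel, as suggested above, would then tell the prior security mechanisms which pseudonym corresponds to which transfer, without exposing the identity itself.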

This is a deep topic and, as always, it requires constant attention and
modification.

Regards!

-Thomas Clark

Thompson, Ken wrote:

Has anyone got any experience with the effect of providing users a periodic
summary of their activities on an EHR system? We are looking at a couple of
different options.

1) A periodic report to our users' inboxes outlining their use of the system.
This has an added benefit of giving the user a concrete sense of the
benefits they receive from the system as well as confirming that their
actions are, indeed, being monitored.

2) A mechanism on the patient record itself that displays a list of all
users that have accessed the record (with date and time). This will probably
be made available to the patient at some point, so they will actually
provide a critical part of the checks and balances in the system.
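Both options could be prototyped over a single audit table; the schema, table name and sample rows below are hypothetical, chosen only to show the shape of the two queries:

```python
# Sketch of Ken's two options over a simple audit table.
# Schema and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (user TEXT, record TEXT, at TEXT)")
conn.executemany("INSERT INTO audit VALUES (?, ?, ?)", [
    ("jsmith", "rec-1", "2004-03-01"),
    ("jsmith", "rec-2", "2004-03-02"),
    ("akhan",  "rec-1", "2004-03-02"),
])

def user_summary(conn):
    """Option 1: accesses per user, the body of a periodic report."""
    return dict(conn.execute(
        "SELECT user, COUNT(*) FROM audit GROUP BY user"))

def record_readers(conn, record):
    """Option 2: who accessed a given record, with dates."""
    return conn.execute(
        "SELECT user, at FROM audit WHERE record = ? ORDER BY at",
        (record,)).fetchall()
```

The second query is the one a patient-facing view would expose, since it answers "who has looked at my record, and when".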

Any other thoughts on this?

Best Regards,

Ken Thompson

 

-Original Message-
From: Nathan Lea
To: Thomas Beale
Cc: Openehr-Technical
Sent: 3/9/2004 4:46 AM
Subject: Re: Data Security was: Basic EHR functionality

On 9 Mar 2004, at 06:51, Thomas Beale wrote: 

  

A well-known study at Harvard Medical School (I think) showed that
displaying the message "Do not inappropriately access patient data - all
your accesses are being logged" on clinician screens a few times a day
resulted in a drop to near zero in inappropriate access. No other
technology was used.




Indeed - but the (perhaps) disingenuous claim which is flashed across
clinicians' screens will only work for a finite period before people
stop believing it and revert to their old habits.  Security is a
process, and it requires constant amendment and updating.  If someone
wants to attack a system (in this case by inappropriately accessing
records), they will.  To use a phrase which is undoubtedly well known to
everyone, there is no silver bullet - especially where security is
concerned... 

A good book to look at on the subject of insecure data is The Art of
Deception by Kevin Mitnick.

Never say die. 

Best, 

Nathan 

  






Data Security was: Basic EHR functionality

2004-03-07 Thread Tim Churches
On Sun, 2004-03-07 at 10:18, Tim Cook wrote:
 On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
  In general, caches should be
  held on encrypted filesystems, either on-disc or in-memory, with the
  keys (or a key to the keys) to the encryption/decryption managed by a
  daemon which purges the keys from memory when asked (eg locking the
  device) or automatically after a short period of disuse.
 
 Well, now that would certainly be a secure way to handle caching.  If I
 were worrying about national secrets.  

Personal health information is more important than national secrets to
the individuals concerned. Furthermore, it only takes the compromise of
a handful of individuals' confidential information, and publication of 
this fact, before public confidence in your EHR evaporates. So I don't
think that is overkill. Note, however, the use of the subjunctive
"should". That's the way it ought to be done, and it is technically
achievable. Unfortunately, browser and OS vendors/writers don't choose to
do that by default. But certainly it can be done - on Linux systems, it
is quite easy to set up encrypted filesystems and to store the browser
cache on these. Likewise on Windows - individual directories can be
encrypted (although there are distinct flaws in the way the encryption
keys are handled in Windows - still, better than not encrypted).

 Do you go to this extreme now (as a manager) when doing your risk
 assessments?  I am wondering what the total (additional) costs of system
 design and hardware resources is when these facilities are implemented. 

Risk assessment: client workstations are often shared between users and
located in insecure locations, laptops are stolen or lost all the time.
Thus confidential information which is captured in a cache on these
systems needs to be secured. Note that if the EHR user is, say, a
physician, then there may be details of hundreds of patients in their
workstation/laptop cache.

Does this represent a challenge to applications, especially Web browser
applications? Yup.

Are technical solutions possible? Yup - see above.

Is all of this costly? Well, my view is that additional hardware
security devices are probably unnecessary (and almost all are
unnecessarily proprietary anyway), and the software required to
implement what I describe above is free (at least for Linux - on
Windows, file system encryption is only available on server versions, I
think - at least that is the case with Windows 2000 - not sure with
Windows XP/XP Pro). Does the administration and training involved cost
money? Definitely, security doesn't come free. Is the expense worth it?
See above - only takes a handful of confidentiality breaches before you
can kiss confidence in your EHR goodbye for several years.

 
 I think that in most cases we can reliably depend on locked doors and
 holding people responsible for protecting data they are entrusted with. 

Surely you jest? Client workstations, even in large hospitals (or
especially in large hospitals), have to be considered insecure; likewise
desktop PCs in doctors' offices - common targets for drug-related
burglary - and especially laptops and handheld devices, which are pinched
or misplaced with monotonous regularity.

The same applies to EHR/EMR servers, especially servers which are not
housed in dedicated, secured data centres, although even the latter are
far from invulnerable - see for example
http://www.smh.com.au/articles/2003/09/04/1062548967124.html - and then
there is the off-site back-up media etc to consider.

 I will agree that security training needs to include this awareness so
 that users know how to properly store each of these devices when not in
 use.

Security engineering is all about building systems which fail
gracefully. Certainly training users is vital, but relying entirely on
users, or system administrators, or anyone, to always do the right thing
is a recipe for inevitable security failure. It is always better to
build additional protection into the fabric of information systems, as
long as the cost is justified - and that comes back to risk assessment
as you note. 
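The key-managing daemon described earlier in this thread - one that holds decryption keys only in memory, purges them on demand (e.g. when the device is locked) and forgets them after a period of disuse - could be sketched as follows. The API names and timings are illustrative assumptions, not a description of any existing system:

```python
# Sketch of an in-memory key holder with purge-on-demand and
# purge-after-disuse semantics. Names and timings are hypothetical.
import time

PURGE_AFTER = 300.0  # seconds of disuse before a key is forgotten

class KeyHolder:
    def __init__(self):
        self._keys = {}  # name -> (key bytes, last-used timestamp)

    def store(self, name, key):
        self._keys[name] = (key, time.monotonic())

    def fetch(self, name):
        """Return a key if it has not gone stale, refreshing its timer."""
        self.purge_stale()
        entry = self._keys.get(name)
        if entry is None:
            return None
        key, _ = entry
        self._keys[name] = (key, time.monotonic())  # reset disuse timer
        return key

    def purge_all(self):
        """Forget every key at once, e.g. when the user locks the device."""
        self._keys.clear()

    def purge_stale(self, now=None):
        """Drop any key unused for longer than PURGE_AFTER seconds."""
        now = time.monotonic() if now is None else now
        self._keys = {n: (k, t) for n, (k, t) in self._keys.items()
                      if now - t < PURGE_AFTER}
```

A real daemon would run this in a separate hardened process and zero the key memory on purge; the point of the sketch is only that the cache's decryption keys never need to touch the disc.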
-- 

Tim C

PGP/GnuPG Key 1024D/EAF993D0 available from keyservers everywhere
or at http://members.optushome.com.au/tchur/pubkey.asc
Key fingerprint = 8C22 BF76 33BA B3B5 1D5B  EB37 7891 46A9 EAF9 93D0




Data Security was: Basic EHR functionality

2004-03-07 Thread Tim Churches
On Sun, 2004-03-07 at 11:56, Thompson, Ken wrote:
 The argument that we could lose our collective behinds in the event of a
 significant security failure seems consistently incapable of generating
 interest or resource allocations from senior management. Basically, the
 issue comes down to a legal department deciding whether or not they can
 convince a jury that actions were reasonable and prudent given the resources
 and situation.  They have absolutely no interest or concern that a forensic
 technical investigation would find design or implementation lacking, thereby
 ruining the careers of the dedicated engineers that have worked on the
 project.

Agreed. Part of the problem is that it is viewed from the point of view
of legal exposure only, ignoring the fact that regardless of whether a
case is proved in court, the fact that the security of an EHR
implementation is seriously in question will be incredibly damaging to
that EHR (and probably to some degree to all other EHRs, by
association). This is made worse by the proponents of community-wide
EHRs stating things like "it will be bullet-proof", etc. I can understand
the motivation behind such statements, but generally they are made
without the slightest appreciation of what is actually involved in
achieving the promised, or hinted-at, level of security.


 
 To a large extent, this is a result of the paucity of statistical data regarding
 security breaches because organizations with something to lose are quick to
 hush things up when they go wrong. It is a basic part of the damage control
 strategy of any large organization.

Absolutely. There is a need for mandated reporting of information system
security breaches, in the same way that many countries have mandated
financial reporting requirements for public companies etc.

 In most cases, it is impossible to keep small scale security breaches under
 control. Copy and paste are still effective means of extracting data and
 putting it into a document, mail message, database, etc that is outside of
 the EHR system security. Anyone with reasonable access can do this, and one
 statistic I am familiar with is that 60% of all information security
 breaches involve disgruntled employees.

I absolutely agree, but it is nevertheless important to close
inadvertent security holes, such as invisible-to-the-user browser
caching. Certainly the biggest threat is from within - from people who
are already authorised users. Many security models focus entirely on
keeping out unauthorised users, thus missing the majority of the
threat. It is also necessary to think clearly about what is meant by an
authorised user - in particular, do you mean the actual person, or do
you really mean anyone with access to that person's credentials/login
password. The two are not necessarily the same.

 We deal with these issues as a matter of policy, auditing, and lawyers, not
 expensive technical means. Until someone is able to convince senior
 management that their careers, reputations, etc. are going to suffer from a
 security breach, I suspect this will continue to be our strategy. 

Yes, that is what I have observed also. But that is not the way it ought
to be. And because centralised EHRs significantly increase the size of
the hazard associated with security breaches, I don't think that the
current methods of addressing security issues, as you describe, are
sufficient - they need to be supplemented by architectural and technical
safeguards as well.


 The engineering staff, of course, keeps detailed documentation regarding our
 recommendations and the eventual decisions that were made in all of these
 matters...;-)

Yes, maintenance of 'I told you so...' files is vital.


Tim C

 
 Best Regards,
 
 Ken
 
 
 -Original Message-
 From: Tim Churches
 To: Tim Cook
 Cc: OpenEHR Technical
 Sent: 3/6/2004 7:14 PM
 Subject: Re: Data Security was: Basic EHR functionality
 
 On Sun, 2004-03-07 at 10:18, Tim Cook wrote:
  On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
   In general, caches should be
   held on encrypted filesystems, either on-disc or in-memory, with the
   keys (or a key to the keys) to the encryption/decryption managed by
 a
   daemon which purges the keys from memory when asked (eg locking the
   device) or automatically after a short period of disuse.
  
  Well, now that would certainly be a secure way to handle caching.  If
 I
  were worrying about national secrets.  
 
 Personal health information is more important than national secrets to
 the individuals concerned. Furthermore, it only takes the compromise of
 a handful of individuals' confidential information, and publication of 
 this fact, before public confidence in your EHR evaporates. So I don't
 think that is overkill. Note, however, the use of the subjunctive
 should. That's the way it ought to be done, and it is technically
 achievable. Unfortunately, browser and OS vendors/writers don't choose to
 do that by default. But certainly it can be done - on Linux systems

Data Security was: Basic EHR functionality

2004-03-06 Thread Tim Cook
On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
 In general, caches should be
 held on encrypted filesystems, either on-disc or in-memory, with the
 keys (or a key to the keys) to the encryption/decryption managed by a
 daemon which purges the keys from memory when asked (eg locking the
 device) or automatically after a short period of disuse.

Well, now that would certainly be a secure way to handle caching.  If I
were worrying about national secrets.  

Do you go to this extreme now (as a manager) when doing your risk
assessments?  I am wondering what the total (additional) costs of system
design and hardware resources is when these facilities are implemented. 

I think that in most cases we can reliably depend on locked doors and
holding people responsible for protecting data they are entrusted with. 
I will agree that security training needs to include this awareness so
that users know how to properly store each of these devices when not in
use.

Later,
Tim




Data Security was: Basic EHR functionality

2004-03-06 Thread Thompson, Ken
The argument that we could lose our collective behinds in the event of a
significant security failure seems consistently incapable of generating
interest or resource allocations from senior management. Basically, the
issue comes down to a legal department deciding whether or not they can
convince a jury that actions were reasonable and prudent given the resources
and situation.  They have absolutely no interest or concern that a forensic
technical investigation would find design or implementation lacking, thereby
ruining the careers of the dedicated engineers that have worked on the
project.

To a large extent, this is a result of the paucity of statistical data regarding
security breaches because organizations with something to lose are quick to
hush things up when they go wrong. It is a basic part of the damage control
strategy of any large organization.

In most cases, it is impossible to keep small scale security breaches under
control. Copy and paste are still effective means of extracting data and
putting it into a document, mail message, database, etc that is outside of
the EHR system security. Anyone with reasonable access can do this, and one
statistic I am familiar with is that 60% of all information security
breaches involve disgruntled employees.

We deal with these issues as a matter of policy, auditing, and lawyers, not
expensive technical means. Until someone is able to convince senior
management that their careers, reputations, etc. are going to suffer from a
security breach, I suspect this will continue to be our strategy. 

The engineering staff, of course, keeps detailed documentation regarding our
recommendations and the eventual decisions that were made in all of these
matters...;-)

Best Regards,

Ken


-Original Message-
From: Tim Churches
To: Tim Cook
Cc: OpenEHR Technical
Sent: 3/6/2004 7:14 PM
Subject: Re: Data Security was: Basic EHR functionality

On Sun, 2004-03-07 at 10:18, Tim Cook wrote:
 On Sat, 2004-03-06 at 14:17, Tim Churches wrote:
  In general, caches should be
  held on encrypted filesystems, either on-disc or in-memory, with the
  keys (or a key to the keys) to the encryption/decryption managed by
a
  daemon which purges the keys from memory when asked (eg locking the
  device) or automatically after a short period of disuse.
 
 Well, now that would certainly be a secure way to handle caching.  If
I
 were worrying about national secrets.  

Personal health information is more important than national secrets to
the individuals concerned. Furthermore, it only takes the compromise of
a handful of individuals' confidential information, and publication of 
this fact, before public confidence in your EHR evaporates. So I don't
think that is overkill. Note, however, the use of the subjunctive
"should". That's the way it ought to be done, and it is technically
achievable. Unfortunately, browser and OS vendors/writers don't choose to
do that by default. But certainly it can be done - on Linux systems, it
is quite easy to set up encrypted filesystems and to store the browser
cache on these. Likewise on Windows - individual directories can be
encrypted (although there are distinct flaws in the way the encryption
keys are handled in Windows - still, better than not encrypted).

 Do you go to this extreme now (as a manager) when doing your risk
 assessments?  I am wondering what the total (additional) costs of
system
 design and hardware resources is when these facilities are
implemented. 

Risk assessment: client workstations are often shared between users and
located in insecure locations, laptops are stolen or lost all the time.
Thus confidential information which is captured in a cache on these
systems needs to be secured. Note that if the EHR user is, say, a
physician, then there may be details of hundreds of patients in their
workstation/laptop cache.

Does this represent a challenge to applications, especially Web browser
applications? Yup.

Are technical solutions possible? Yup - see above.

Is all of this costly? Well, my view is that additional hardware
security devices are probably unnecessary (and almost all are
unnecessarily proprietary anyway), and the software required to
implement what I describe above is free (at least for Linux - on
Windows, file system encryption is only available on server versions, I
think - at least that is the case with Windows 2000 - not sure with
Windows XP/XP Pro). Does the administration and training involved cost
money? Definitely, security doesn't come free. Is the expense worth it?
See above - only takes a handful of confidentiality breaches before you
can kiss confidence in your EHR goodbye for several years.

 
 I think that in most cases we can reliably depend on locked doors and
 holding people responsible for protecting data they are entrusted
with. 

Surely you jest? Client workstations, even in large hospitals (or
especially in large hospitals) have to be considered insecure, likewise
desktop PCs in doctor's offices - common

Data Security was: Basic EHR functionality

2004-03-06 Thread Thomas Clark
Hi All,

If one resorts to hardware to support security, be sure to post guards.
WWII proved that reliance upon hardware devices to provide adequate
security is misplaced.

Hardware components in a security system are acceptable if the software 
can re-configure
them and alter even their basic functionality, e.g., re-configurable 
computer systems and
networks.

One significant advantage of ATM networking was the use of fixed-size
cells to transmit data (essentially a switching technology). Try
reconstructing cell-based data on the fly, or storing data in cellular
format with separately stored reconstruction algorithms. The problem was
solved long ago - by cellular systems.

Any static security system is vulnerable to determined efforts. Absent or
inadequate security mechanisms are invitations. A single breach will
likely destroy confidence. The ability to re-configure and re-deploy is
essential.

Regards!

-Thomas Clark


Tim Churches wrote:

On Sun, 2004-03-07 at 10:18, Tim Cook wrote:
  

On Sat, 2004-03-06 at 14:17, Tim Churches wrote:


In general, caches should be
held on encrypted filesystems, either on-disc or in-memory, with the
keys (or a key to the keys) to the encryption/decryption managed by a
daemon which purges the keys from memory when asked (eg locking the
device) or automatically after a short period of disuse.
  

Well, now that would certainly be a secure way to handle caching.  If I
were worrying about national secrets.  



Personal health information is more important than national secrets to
the individuals concerned. Furthermore, it only takes the compromise of
a handful of individuals' confidential information, and publication of 
this fact, before public confidence in your EHR evaporates. So I don't
think that is overkill. Note, however, the use of the subjunctive
"should". That's the way it ought to be done, and it is technically
achievable. Unfortunately, browser and OS vendors/writers don't choose to
do that by default. But certainly it can be done - on Linux systems, it
is quite easy to set up encrypted filesystems and to store the browser
cache on these. Likewise on Windows - individual directories can be
encrypted (although there are distinct flaws in the way the encryption
keys are handled in Windows - still, better than not encrypted).

  

Do you go to this extreme now (as a manager) when doing your risk
assessments?  I am wondering what the total (additional) costs of system
design and hardware resources is when these facilities are implemented. 



Risk assessment: client workstations are often shared between users and
located in insecure locations, laptops are stolen or lost all the time.
Thus confidential information which is captured in a cache on these
systems needs to be secured. Note that if the EHR user is, say, a
physician, then there may be details of hundreds of patients in their
workstation/laptop cache.

Does this represent a challenge to applications, especially Web browser
applications? Yup.

Are technical solutions possible? Yup - see above.

Is all of this costly? Well, my view is that additional hardware
security devices are probably unnecessary (and almost all are
unnecessarily proprietary anyway), and the software required to
implement what I describe above is free (at least for Linux - on
Windows, file system encryption is only available on server versions, I
think - at least that is the case with Windows 2000 - not sure with
Windows XP/XP Pro). Does the administration and training involved cost
money? Definitely, security doesn't come free. Is the expense worth it?
See above - only takes a handful of confidentiality breaches before you
can kiss confidence in your EHR goodbye for several years.

  

I think that in most cases we can reliably depend on locked doors and
holding people responsible for protecting data they are entrusted with. 



Surely you jest? Client workstations, even in large hospitals (or
especially in large hospitals) have to be considered insecure, likewise
desktop PCs in doctor's offices - common targets for drug-related
burglary, and especially laptops and handheld devices which are pinched
or misplaced with monotonous regularity.

The same applies to EHR/EMR servers, especially servers which are not
housed in dedicated, secured data centres, although even the latter are
far from invulnerable - see for example
http://www.smh.com.au/articles/2003/09/04/1062548967124.html - and then
there is the off-site back-up media etc to consider.

  

I will agree that security training needs to include this awareness so
that users know how to properly store each of these devices when not in
use.



Security engineering is all about building systems which fail
gracefully. Certainly training users is vital, but relying entirely on
users, or system administrators, or anyone, to always do the right thing
is a recipe for inevitable security failure. It is always better to
build additional protection into the fabric of