I am hard put to find an example of a language feature which makes a system more secure but less safe or vice versa, in any context. Can anyone else think of one?

Someone said -" There are two ways to design software: make it so simple there are obviously no defects or, make it so complex there are no obvious defects". Your choice of language and operating system has a strong bearing on this.

The more critical the application, the more you want a programming language that makes your verification task easier and allows it to be more rigorous. Thus a restricted set of features in a programming language and operating system, chosen so that the system behaves deterministically and abuse of the language is limited (e.g., by excluding automatic garbage collection, recursion, etc.), will help to mitigate undesired consequences in both safety-critical and high-security applications.
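As a minimal sketch (my own illustration, not taken from any particular coding standard) of the style such a restricted subset enforces, consider a fixed-size queue in C: no dynamic allocation, no recursion, and statically bounded storage, so worst-case time and memory can be established during verification.

#include <stdint.h>
#include <stdbool.h>

#define QUEUE_SIZE 16u              /* fixed at build time; no malloc() */

typedef struct {
    uint32_t items[QUEUE_SIZE];     /* statically sized storage */
    uint32_t head;
    uint32_t count;
} bounded_queue_t;

/* Enqueue fails explicitly instead of growing, so behaviour under
   overload is deterministic and the caller must handle the full case. */
static bool queue_put(bounded_queue_t *q, uint32_t item)
{
    if (q->count >= QUEUE_SIZE) {
        return false;
    }
    q->items[(q->head + q->count) % QUEUE_SIZE] = item;
    q->count++;
    return true;
}

/* Dequeue with an explicit empty check; any loop that drains the queue
   has a statically known bound of QUEUE_SIZE iterations. */
static bool queue_get(bounded_queue_t *q, uint32_t *out)
{
    if (q->count == 0u) {
        return false;
    }
    *out = q->items[q->head];
    q->head = (q->head + 1u) % QUEUE_SIZE;
    q->count--;
    return true;
}

Nothing here can fail in a way the verifier has not already enumerated: the queue never grows, never recurses, and every failure is an explicit return value.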

A secure system can be unsafe (the system is secure from unauthorized access, but it has design weaknesses which make it hazardous in some situations). Likewise, a safe system can be insecure (that is, unauthorized people may gain access to it, but they cannot make it behave unsafely). Similarly, a system can be reliable and not safe (it seldom fails, but when it does fail a major catastrophe can occur), or it can be safe but not reliable (it fails often, but it always fails safe). Safety, security and reliability are independent attributes of a system, and one should not confuse one with the other.

Whether a given feature in a programming language makes a system more secure, safer or more reliable depends on the context in which that feature is used. Thus, in a given context, a language feature that might make the system more reliable (or quicker to repair) could also make the system less secure or less safe. For example, a remote debugging facility, in effect a backdoor, may be allowed to co-exist within the system when it is in operational use, allowing one to make updates on the fly. One could argue, perversely, that a language which makes such a backdoor easier to implement should not be used.

In safety-critical systems, you want to prevent bad things from happening. For example, in air traffic control systems you do not want the controller to be misled, causing a loss of separation between aircraft, or to be prevented from resorting to manual procedures if the system should fail. In a nuclear power station's safety shutdown systems you want to ensure that an emergency shutdown is not prevented when needed, as well as ensuring that inadvertent shutdowns do not occur and cause a system outage. I think similar things can be said for high-security systems - you want to prevent bad things from happening: unauthorized access leading to intrusions, corruption of data, denial of service, identity theft, etc.

The malicious human adversary may appear to be an additional concern for a security system. This would require addtional testing of the "social engineering" aspects of the system. But from a safety perspective, we also have to deal with "social/ human factors engineering" aspects related to naive and error prone humans as well as pay attention to "malicious" events in nature. Cosmic rays which can cause undetectable single event upsets in memory or registers may suffice to break a security system or contribute to a hazardous event occuring. Thus to mitigate the impact of single bit errors in very critical systems, boolean flags may need to be implemented in a double rail fashion: "01" is true, "10" is false and "00" and "11" are undefined and not expected.
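To make the double-rail idea concrete, here is a small C sketch (my illustration; the encoding is exactly as described above, the names are hypothetical). A single bit flip turns either valid code into "00" or "11", which decodes as a fault rather than as the opposite value.

#include <stdint.h>

#define RAIL_TRUE   0x1u   /* binary 01 */
#define RAIL_FALSE  0x2u   /* binary 10 */

typedef enum { FLAG_TRUE, FLAG_FALSE, FLAG_FAULT } flag_state_t;

/* Decode a double-rail flag; "00" and "11" can only arise from a
   fault (e.g., a single event upset), so they map to FLAG_FAULT and
   the system can drop to its safe state. */
static flag_state_t rail_decode(uint8_t f)
{
    switch (f & 0x3u) {
    case RAIL_TRUE:  return FLAG_TRUE;
    case RAIL_FALSE: return FLAG_FALSE;
    default:         return FLAG_FAULT;  /* corrupted: 00 or 11 */
    }
}

The design point is that no single bit error can silently convert true into false or vice versa; at least two independent upsets would be needed to do that.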

In both the safety and security cases it is not sufficient to do requirements-based, integration and unit testing. For safety systems you also need to do hazard-based testing, to satisfy the safety case that due diligence has been done to ensure that sufficient mitigation has been provided to reduce each hazard to an acceptable level of risk. The frequency and severity of the undesirable consequences of the bad things happening will determine the level of rigour of the development process that needs to be applied. Similar logic should apply to the development of high-security systems.

With hazard-based testing you deliberately and cunningly try to make known hazardous situations occur during your tests (if at all possible). The testing of security modules must likewise include attempts to break the system using methods which try to compromise it. The security testers should do this with knowledge of how the system is constructed, as opposed to just doing black-box testing against requirements and white-box testing to determine coverage of branches and elementary loops and paths in the code. Note: the requirements may be incomplete, ambiguous, contradictory or inadequate.
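As a toy illustration of the difference (all names hypothetical, building on the double-rail flag above), a hazard-based test does not just confirm the normal cases; it injects the feared fault and checks that the failure is to the safe side.

#include <assert.h>
#include <stdint.h>

typedef enum { SHUTDOWN, KEEP_RUNNING } action_t;

/* System under test: decides an action from a double-rail
   "temperature OK" flag; anything but the valid "true" code (01)
   must result in a shutdown. */
static action_t decide(uint8_t temp_ok_flag)
{
    return ((temp_ok_flag & 0x3u) == 0x1u) ? KEEP_RUNNING : SHUTDOWN;
}

int main(void)
{
    /* Requirements-based cases: the two valid codes. */
    assert(decide(0x1u) == KEEP_RUNNING);
    assert(decide(0x2u) == SHUTDOWN);

    /* Hazard-based cases: inject single-bit upsets of the "true"
       code and check that the system fails safe. */
    assert(decide(0x0u) == SHUTDOWN);   /* 01 -> 00 */
    assert(decide(0x3u) == SHUTDOWN);   /* 01 -> 11 */
    return 0;
}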

In either case, you want to pick a language which makes this additional but necessary verification task easier. For example, if the system design can be made to behave more deterministically by the choice of language, then the scope and effort of testing will be dramatically reduced. But often the language and the operating system are dictated early in development, with little or no consideration of their impact on the verification of safety or security.

For hazard-based testing see: http://www.ece.ubc.ca/~kcwong/safety/safetypublications.html

Gary McGraw wrote:

Yes, see my book "Software Fault Injection" for some of the key differences. The main one is obvious... a rational, malicious adversary bent on making you lose.

gem

-----Original Message-----
From:   ljknews [mailto:[EMAIL PROTECTED]
Sent:   Thu Apr 22 19:31:52 2004
To:     [EMAIL PROTECTED]
Subject:        Re: [SC-L] Anyone looked at security features of D programming language compared to Spark?

At 11:56 AM -0700 4/22/04, Jim & Mary Ronback wrote:

Safety critical software has a lot of overlap with the requirements for high security software.

Can anyone think of any _differences_ between those domains (process- and code-wise, not regulatory-wise)?

For Spark see http://www.praxis-cs.co.uk/sparkada/

They also have an interesting list of security- and integrity-related cases where Spark has been used, e.g., the security modules for a SmartCard system (one using a credit card with an embedded chip):

http://www.praxis-cs.co.uk/sparkada/publications.asp

Greenarrow 1 wrote:


There is a chart comparing features of D with those of other languages at this site:

http://www.digitalmars.com/d/comparison.html


Regards,


George

Greenarrow1

InNetInvestigations-Forensics
