Software Security

"By security I mean that systems remain depend-able in the face of malice, error or mischance." -- Ross Anderson in "System Security for Cyborgs"

Introduction and Terms

Software Security is one aspect of Security Engineering. It is sometimes viewed as a concept separate from software reliability. (Note that the distinction between dependability and reliability is not drawn consistently in the literature; the two terms are sometimes used interchangeably. We use the term reliability in the sense defined below.) However, it might be more appropriate to consider software security an aspect of software dependability. After all, the reliability of a system can be defined as "the quality of the delivered service such that reliance can justifiably be placed on this service" (see [1], slide 14, although Malek uses the term dependability there).

Related to that notion, software security is sometimes viewed as software fault tolerance under very harsh conditions: the software needs to tolerate not only accidental and random faults but also maliciously injected faults and tampering, that is, faults provoked with malicious intent. But even if you prefer to view software security and software reliability as separate issues, they remain related: in the majority of cases, improving a system's security will also improve its reliability.

There are two main approaches to software security:

  • Design for Security
  • Testing for Security

These two should always be used together. Design for Security includes activities such as design audits. Experience has shown that there is a third important aspect of Security Engineering that should not be underestimated: implementation security. It is largely independent of Design for Security, but related to Testing for Security in that the latter aims at uncovering security issues that arise from a system's implementation.

Design for Security

Implementation Security

It has been demonstrated quite a few times that subtle implementation mistakes can seriously undermine even the best security design. The best known of these bugs is probably the infamous Buffer Overflow. Buffer overflows are by now relatively well studied; that is, it is relatively well known how to find, exploit, or avoid them. (Which does not mean they are extinct. Far from it.) But there are other, more subtle errors; see the Taxonomy of Vulnerabilities. Some of these subtler errors are not even due to careless or untrained developers, but due to side effects that occur when layers of abstraction are crossed during implementation.

Although training developers in secure coding goes a long way towards secure software, we believe it is not possible for a developer to write completely vulnerability-free software. And even if that should, against our belief, be possible today, it will most likely not be true in two, five, or ten years: the complexity of computer and software systems keeps growing, and growing complexity makes it ever harder to foresee all potential side effects in software and all interactions between software components. It is along boundaries that security vulnerabilities occur: component boundaries, abstraction boundaries, architectural boundaries, organizational boundaries.

Although Software Security has been around for quite some time, no good solution to this problem exists as yet. The methods and procedures used to ensure software security are still rather archaic and involve a lot of manual work, most of which can only be done by highly skilled (and thus rather expensive) experts.
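
To make the buffer overflow pattern concrete, here is a minimal sketch in C (the function names and the buffer size are illustrative choices of ours, not taken from any particular real-world bug): an unchecked strcpy() into a fixed-size stack buffer, next to a bounds-checked variant using snprintf().

  #include <stdio.h>
  #include <string.h>

  /* Vulnerable: strcpy() performs no bounds checking. Any input longer
     than 15 characters (plus the terminating NUL) writes past the end
     of buf and corrupts adjacent stack memory. */
  void greet_unsafe(const char *name) {
      char buf[16];
      strcpy(buf, name);               /* overflow if strlen(name) >= 16 */
      printf("Hello, %s\n", buf);
  }

  /* Safer: snprintf() never writes more than sizeof buf bytes and always
     NUL-terminates, so oversized input is truncated instead of
     overwriting the stack. */
  void greet_safe(const char *name) {
      char buf[16];
      snprintf(buf, sizeof buf, "%s", name);
      printf("Hello, %s\n", buf);
  }

  int main(int argc, char **argv) {
      if (argc > 1)
          greet_safe(argv[1]);         /* greet_unsafe(argv[1]) would be exploitable */
      return 0;
  }

Note that the fix is purely local: snprintf() silently truncates over-long input, so a production program would additionally have to decide how such input should be handled.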

Testing for Security