Learn the steps necessary in securing yourself and your company. This sample chapter looks at the various forms of security breaches, such as programming errors and viruses, and it gives their sources and the processes that can overcome them.
In the first two chapters, we learned about the need for computer security and we studied encryption, a fundamental tool in implementing many kinds of security controls. In this chapter, we begin to study how to apply security in computing. We start with why we need security at the program level and how we can achieve it. In one form or another, protecting programs is at the heart of computer security.
How do we keep programs free from flaws? How do we protect computing resources against programs that contain flaws? In this chapter, we address more general themes, most of which carry forward to the special-purpose systems covered in later chapters. Thus, this chapter not only lays the groundwork for future chapters but also is significant on its own. This chapter deals with the writing of programs. It defers to a later chapter what may be a much larger issue in program security: trust.
The trust problem can be framed as follows: Presented with a finished program, for example, a commercial software package, how can you tell how secure it is or how to use it in its most secure way?

SECURE PROGRAMS

Consider what we mean when we say that a program is “secure.” We saw in Chapter 1 that security implies some degree of trust that the program enforces expected confidentiality, integrity, and availability. From the point of view of a program or a programmer, how can we look at a software component or code fragment and assess its security?
This question is, of course, similar to the problem of assessing software quality in general. An assessment of security can also be influenced by someone’s general perspective on software quality. For example, if your manager’s idea of quality is conformance to specifications, then she might consider the code secure if it meets security requirements, whether or not the requirements are complete or correct. This security view played a role when a major computer manufacturer delivered all its machines with keyed locks, since a keyed lock was written in the requirements. Another common way to assess quality is by counting faults: for example, developers track the number of faults found in requirements, design, and code inspections and use them as indicators of the likely quality of the final product.
You might argue that a module in which 100 faults were discovered and fixed is better than another in which only 20 faults were discovered and fixed, suggesting that more rigorous analysis and testing had led to the finding of the larger number of faults. Early work in computer security was based on the paradigm of “penetrate and patch,” in which analysts searched for and repaired faults. Often, a top-quality “tiger team” would be convened to test a system’s security by attempting to cause it to fail; a system that withstood the attacks was taken as proven secure. Unfortunately, far too often the proof became a counterexample, in which not just one but several serious security problems were uncovered. The problem discovery in turn led to a rapid effort to “patch” the system to repair or restore the security. The pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context.
In particular, the analysts paid attention to the immediate cause of the failure and not to the underlying design or requirements faults. Moreover, the fault often had nonobvious side effects in places other than the immediate area of the fault, and sometimes it could not be fixed properly because system functionality or performance would suffer as a consequence. To understand program security, then, we can examine programs to see whether they behave as their designers intended or users expected. One way to do that is to compare the requirements with the observed behavior.
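Comparing requirements with behavior can be made concrete by expressing a requirement as an executable check. The sketch below is hypothetical (the requirement, the function name `accept_password`, and the test cases are all invented for illustration): a simple security requirement is written down as a set of expected behaviors, and the program is run against each one.

```python
# Hypothetical sketch: a security requirement expressed as an
# executable check, so program behavior can be compared against it.
# Assumed requirement (invented for illustration): reject any password
# shorter than 8 characters or containing no digit.

def accept_password(pw: str) -> bool:
    # The program whose behavior we want to assess.
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

# Each case pairs an input with the behavior the requirement demands.
requirement_cases = [
    ("s3cret", False),      # too short: must be rejected
    ("longenough", False),  # no digit: must be rejected
    ("l0ngenough", True),   # meets both conditions: must be accepted
]

for pw, expected in requirement_cases:
    # A mismatch here means behavior departs from the requirement.
    assert accept_password(pw) == expected
```

A check like this can only be as good as the requirement itself; as noted above, if the requirements are incomplete or incorrect, the program can pass every such comparison and still be insecure.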
Program security flaws can derive from any kind of software fault. That is, they cover everything from a misunderstanding of program requirements to a one-character error in coding or even typing. The flaws can result from problems in a single code component or from the failure of several programs or program pieces to interact compatibly through a shared interface. Frequently, we talk about “bugs” in software, a term that can mean many different things, depending on context. When a human makes a mistake, called an error, in performing some software activity, the error may lead to a fault, or an incorrect step, command, process, or data definition in a computer program.
For example, a designer may misunderstand a requirement and create a design that does not match the actual intent of the requirements analyst and the user. A failure is a departure from the system’s required behavior. It can be discovered before or after system delivery, during testing, or during operation and maintenance. Since the requirements documents can contain faults, a failure indicates that the system is not performing as required, even though it may be performing as specified.
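The error–fault–failure chain above can be illustrated with a deliberately tiny example (the functions and inputs are invented for illustration, not taken from the chapter): a programmer's mistake (error) of typing one extra character leaves a fault in the code, and the fault surfaces as a failure only when the program runs on actual input.

```python
# Illustration (hypothetical code) of the error -> fault -> failure chain.

def sum_items_buggy(items):
    """Contains a fault: the stray "+ 1" (the programmer's error)
    makes the loop index run one step past the end of the list."""
    total = 0
    for i in range(len(items) + 1):  # fault: should be range(len(items))
        total += items[i]            # failure: IndexError raised here
    return total

def sum_items_fixed(items):
    """Same program with the fault removed."""
    total = 0
    for i in range(len(items)):
        total += items[i]
    return total

# The fault exists in the code whether or not it is ever executed;
# the failure is observed only when the program actually runs.
try:
    sum_items_buggy([1, 2, 3])
except IndexError:
    print("failure observed")
print(sum_items_fixed([1, 2, 3]))
```

Note that the fault is present in `sum_items_buggy` even before anything goes wrong at run time, which is exactly why a failure can be discovered before or after delivery, during testing, or during operation.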