Security Architecture Models

Security architecture models illustrate InfoSec implementations and can help organizations quickly make improvements through adaptation. Formal models do not usually find their way directly into useable implementations; instead, they form the basic approach that an implementation uses. These formal models are discussed here so that the reader can become familiar with them and see how they are used in various security architectures.

Trusted Computing Base

The Trusted Computer System Evaluation Criteria (TCSEC) is an older DoD standard that defines the criteria for assessing the access controls in a computer system. This standard is part of a larger series of standards collectively referred to as the “Rainbow Series” because of the color-coding used to uniquely identify each document. TCSEC is also known as the “Orange Book” and is considered the cornerstone of the series. As described later in this article, the series was replaced in 2005 by a set of standards known as the “Common Criteria,” but InfoSec professionals should still be familiar with the terminology and concepts of this legacy approach.

TCSEC defines a trusted computing base (TCB) as the combination of all hardware, firmware, and software responsible for enforcing the security policy. In this context, “security policy” refers to the rules of configuration for a system rather than a managerial guidance document. A TCB is only as effective as its internal control mechanisms and the administration of the systems being configured. In practice, the TCB consists of the hardware and software implemented to provide security for a particular information system, usually the operating system kernel and a specified set of security utilities, such as the user login subsystem.

The term “trusted” can be misleading—in this context, it means that a component is part of the TCB’s security system, not that it is necessarily trustworthy. The frequent discovery of flaws and the delivery of patches by software vendors to remedy security vulnerabilities attest to the relative level of trust you can place in current generations of software.

Within TCB is a conceptual object known as the reference monitor, which is the piece of the system that manages access controls—in other words, it mediates all access to objects by subjects. Systems administrators must be able to audit or periodically review the reference monitor to ensure it is functioning effectively, without unauthorized modification.
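
To make the idea concrete, here is a minimal, purely illustrative Python sketch of a reference monitor; the class and attribute names are hypothetical and not taken from any standard. It mediates every access by a subject to an object and records each decision so the result can be audited later.

```python
# Minimal reference monitor sketch (illustrative only): every access request
# is mediated against a permission table and logged for later audit.

class ReferenceMonitor:
    def __init__(self, permissions):
        # permissions maps (subject, object) pairs to a set of allowed operations
        self.permissions = permissions
        self.audit_log = []

    def check_access(self, subject, obj, operation):
        allowed = operation in self.permissions.get((subject, obj), set())
        # Every decision is recorded, which supports the auditability requirement.
        self.audit_log.append((subject, obj, operation, "granted" if allowed else "denied"))
        return allowed


monitor = ReferenceMonitor({("alice", "payroll.db"): {"read"}})
print(monitor.check_access("alice", "payroll.db", "read"))   # True
print(monitor.check_access("alice", "payroll.db", "write"))  # False
print(monitor.audit_log)
```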

One of the biggest challenges in the TCB is the existence of covert channels. Covert channels are unauthorized or unintended methods of communication hidden inside a computer system. For example, researchers discovered that the indicator lights on the face of some network routers blinked in sync with the data bits being transmitted, unintentionally disclosing the contents of the traffic.

TCSEC defines two kinds of covert channels:

- Covert storage channels, which communicate by writing to a shared storage location, such as a file or resource attribute, that another process can read.
- Covert timing channels, which convey information by modulating the use of system resources, such as CPU time or response delay, in a pattern another process can observe.
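
As a purely illustrative sketch, not part of the original standard and with arbitrary delay values, the following shows the essence of a covert timing channel: the “sender” never transmits an explicit message, yet a “receiver” can recover bits simply by measuring how long each operation takes.

```python
import time

# Illustrative covert timing channel: each bit is encoded as a short or long
# pause, and the receiver recovers the bits purely by measuring elapsed time.
SHORT, LONG = 0.01, 0.05  # seconds representing bit 0 and bit 1 (arbitrary)

def send(bits):
    timings = []
    for bit in bits:
        start = time.time()
        time.sleep(LONG if bit else SHORT)   # the only "signal" is the delay itself
        timings.append(time.time() - start)
    return timings

def receive(timings, threshold=(SHORT + LONG) / 2):
    return [1 if t > threshold else 0 for t in timings]

observed = send([1, 0, 1, 1, 0])
print(receive(observed))  # [1, 0, 1, 1, 0]
```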

Information Technology Security Evaluation Criteria

The Information Technology Security Evaluation Criteria (ITSEC), an international set of criteria for evaluating computer systems, is very similar to TCSEC. Under ITSEC, Targets of Evaluation (ToE) are compared to detailed security function specifications, resulting in an assessment of the system’s functionality and comprehensive penetration testing. Like TCSEC, ITSEC was, for the most part, functionally replaced by the Common Criteria (described in the following section). ITSEC rates products on a scale of E1 (lowest level) to E6 (highest level), in much the same way that TCSEC and the Common Criteria grade systems, with E1 roughly equivalent to the EAL2 evaluation of the Common Criteria and E6 roughly equivalent to EAL7.

The Common Criteria

The Common Criteria for Information Technology Security Evaluation (often called the “Common Criteria” or “CC”) is an international standard (ISO/IEC 15408) for computer security certification. It is widely considered the successor to both TCSEC and ITSEC in that it reconciles some of the differences between the various other standards, and most governments have discontinued their use of those standards. CC is a combined effort of contributors from Australia, New Zealand, Canada, France, Germany, Japan, the Netherlands, Spain, the United Kingdom, and the United States. In the United States, the National Security Agency (NSA) and the National Institute of Standards and Technology (NIST) were the primary contributors. CC and its companion, the Common Methodology for Information Technology Security Evaluation (CEM), are the technical basis for an international agreement, the Common Criteria Recognition Arrangement (CCRA), which ensures that products can be evaluated to determine their particular security properties. CC seeks the widest possible mutual recognition of secure IT products, and the CC process assures that the specification, implementation, and evaluation of computer security products are performed in a rigorous and standard manner.

CC terminology includes:

- Target of Evaluation (ToE)—the system being evaluated
- Protection Profile (PP)—a user-generated specification for security requirements
- Security Target (ST)—the document describing the ToE’s security properties
- Security Functional Requirements (SFRs)—the catalog of a product’s security functions
- Evaluation Assurance Level (EAL)—the rating or grading of a ToE after evaluation

Bell-LaPadula Confidentiality Model

The Bell-LaPadula (BLP) confidentiality model is a “state machine reference model”—in other words, a model of an automated system that is able to manipulate its state or status over time. BLP ensures the confidentiality of the modeled system by using mandatory access controls (MACs), data classification, and security clearances. The intent of any state machine model is to devise a conceptual approach in which the system being modeled is always in a known secure condition; in other words, this kind of model is provably secure. A system that serves as a reference monitor compares the classification level of the data with the clearance of the entity requesting access; it allows access only if the clearance is equal to or higher than the classification. BLP security rules prevent information from being moved from a level of higher security to a level of lower security. Access modes can be one of two types: simple security and the * (star) property.

Simple security (also called the “read property”) prohibits a subject of lower clearance from reading an object of higher clearance but allows a subject with a higher clearance level to read an object at a lower level (read down).

The * property (the “write property”), on the other hand, prohibits a high-level subject from sending messages to a lower-level object. In short, subjects can read down and write (or append) up. BLP uses access permission matrices and a security lattice for access control.

This model can be explained by imagining a fictional interaction between General Bell, whose thoughts and actions are classified at the highest possible level, and Private LaPadula, who has the lowest security clearance in the military. It is prohibited for Private LaPadula to read anything written by General Bell and for General Bell to write in any document that Private LaPadula could read. In short, the principle is “no read up, no write down.”
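
To make the two rules concrete, here is a small illustrative Python sketch, not part of the model’s formal definition, with hypothetical level names and values, that encodes “no read up, no write down” as comparisons between a subject’s clearance and an object’s classification.

```python
# Illustrative Bell-LaPadula checks. Levels are ordered integers:
# higher numbers mean higher classification/clearance.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance, object_classification):
    # Simple security property: no read up (reading down or at the same level is allowed).
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance, object_classification):
    # * (star) property: no write down (writing up or at the same level is allowed).
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

# Private LaPadula (lowest clearance) cannot read General Bell's documents,
# and General Bell (highest clearance) cannot write where LaPadula could read.
print(can_read("unclassified", "top_secret"))   # False: no read up
print(can_write("top_secret", "unclassified"))  # False: no write down
```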

Biba Integrity Model

The Biba integrity model is similar to BLP. It is based on the premise that higher levels of integrity are more worthy of trust than lower ones. The intent is to provide access controls to ensure that objects or subjects cannot have less integrity as a result of read/write operations. The Biba model assigns integrity levels to subjects and objects using two properties: the simple integrity (read) property and the integrity * property (write).

The simple integrity property permits a subject to have read access to an object only if the integrity level of the subject is lower than or equal to the level of the object. The integrity * property permits a subject to have write access to an object only if the integrity level of the subject is equal to or higher than that of the object.

The Biba model ensures that no information from a subject can be passed on to an object at a higher integrity level. This prevents data of higher integrity from being contaminated by data of lower integrity.

This model can be illustrated by imagining fictional interactions among some priests, a monk named Biba, and some parishioners of the Middle Ages. Priests are considered holier (i.e., to have greater integrity) than monks, who are holier (i.e., have greater integrity) than parishioners. A priest cannot read (or offer) Masses or prayers written by Biba the Monk, who in turn cannot read items written by his parishioners. This is to prevent the lower integrity of the lower level from corrupting the holiness of the upper level. On the other hand, higher-level entities could share their writings with the lower levels without compromising the integrity of the information. This illustrates the “no write up, no read down” principle behind the Biba model.
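
For direct comparison with the BLP sketch above, here is an equally illustrative Python sketch, with hypothetical names and levels that are not part of the formal model, encoding Biba’s “no read down, no write up” rules.

```python
# Illustrative Biba integrity checks. Higher numbers mean higher integrity.
INTEGRITY = {"parishioner": 0, "monk": 1, "priest": 2}

def can_read(subject_level, object_level):
    # Simple integrity property: a subject may read an object only if the
    # object's integrity is at least as high as the subject's (no read down).
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def can_write(subject_level, object_level):
    # Integrity * property: a subject may write an object only if the
    # subject's integrity is at least as high as the object's (no write up).
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(can_read("priest", "monk"))   # False: a priest cannot read down
print(can_write("monk", "priest"))  # False: a monk cannot write up
```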

Clark-Wilson Integrity Model

The Clark-Wilson integrity model, which is built upon principles of change control rather than integrity levels, was designed for the commercial environment. The change control principles upon which it operates are:

- No changes by unauthorized subjects
- No unauthorized changes by authorized subjects
- The maintenance of internal and external consistency

Internal consistency means that the system does what it is expected to do every time, without exception. External consistency means that the data in the system is consistent with similar data in the outside world. This model establishes a system of subject-program-object relationships such that the subject has no direct access to the object. Instead, the subject is required to access the object using a well-formed transaction via a validated program. The intent is to provide an environment where security can be proven through the use of separated activities, each of which is provably secure.

The following controls are part of the Clark-Wilson model:

- Subject authentication and identification
- Access to objects only by means of well-formed transactions
- Access to objects by subjects only through authorized programs

The elements of the Clark-Wilson model are:

- Constrained data item (CDI): a data item whose integrity is protected
- Unconstrained data item (UDI): data not controlled by the model, such as unvalidated input or output
- Integrity verification procedure (IVP): a procedure that scans data items and confirms their integrity
- Transformation procedure (TP): a procedure that makes only authorized, well-formed changes to a CDI

All subjects and objects are labeled with TPs. The TPs operate as the intermediate layer between subjects and objects. Each data item has a set of access operations that can be performed on it. Each subject is assigned a set of access operations that it can perform. The system then compares these two parameters and either permits or denies access by the subject to the object.
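
The subject-program-object idea can be sketched in a few lines of Python. The example below is illustrative only, with hypothetical subject, TP, and CDI names: subjects never modify a constrained data item directly; they may only invoke a transformation procedure they are authorized to run against it.

```python
# Illustrative Clark-Wilson style access triples: (subject, TP, CDI).
# A subject may change a constrained data item (CDI) only by invoking an
# authorized transformation procedure (TP), never by writing to it directly.

ACCESS_TRIPLES = {
    ("teller", "post_transaction", "account_ledger"),
    ("auditor", "run_integrity_check", "account_ledger"),
}

def invoke(subject, tp, cdi):
    if (subject, tp, cdi) not in ACCESS_TRIPLES:
        raise PermissionError(f"{subject} may not run {tp} on {cdi}")
    # In a full system, the TP itself would perform a well-formed transaction
    # that leaves the CDI in a valid state.
    return f"{subject} executed {tp} on {cdi}"

print(invoke("teller", "post_transaction", "account_ledger"))
# invoke("teller", "run_integrity_check", "account_ledger") would raise PermissionError
```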

Graham-Denning Access Control Model

The Graham-Denning access control model has three parts: a set of objects, a set of subjects, and a set of rights. The subjects are composed of two things: a process and a domain. The domain is the set of constraints controlling how subjects may access objects. The set of rights governs how subjects may manipulate the passive objects. This model describes eight primitive protection rights, called commands, that subjects can execute to have an effect on other subjects or objects. Note that these are similar to the rights a user can assign to an entity in modern operating systems.

The eight primitive protection rights are:

  1. Create object
  2. Create subject
  3. Delete object
  4. Delete subject
  5. Read access right
  6. Grant access right
  7. Delete access right
  8. Transfer access right
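
These rights can be pictured as operations on an access matrix. The following simplified Python sketch, with hypothetical names and deliberately reduced semantics (for example, only the owner may grant or delete rights), implements a few of the eight commands purely for illustration.

```python
# Simplified Graham-Denning style sketch: the protection state is an access
# matrix mapping (subject, object) to a set of rights, and a few of the
# eight primitive commands are implemented as operations on that matrix.

matrix = {}  # (subject, object) -> set of rights

def create_object(subject, obj):
    # The creator becomes the owner of the new object.
    matrix[(subject, obj)] = {"owner"}

def grant_access_right(granter, right, subject, obj):
    # In this simplified sketch, only the owner of an object may grant rights on it.
    if "owner" not in matrix.get((granter, obj), set()):
        raise PermissionError(f"{granter} does not own {obj}")
    matrix.setdefault((subject, obj), set()).add(right)

def delete_access_right(deleter, right, subject, obj):
    if "owner" not in matrix.get((deleter, obj), set()):
        raise PermissionError(f"{deleter} does not own {obj}")
    matrix.get((subject, obj), set()).discard(right)

create_object("alice", "report.txt")
grant_access_right("alice", "read", "bob", "report.txt")
print(matrix[("bob", "report.txt")])  # {'read'}
```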

Harrison-Ruzzo-Ullman Model

The Harrison-Ruzzo-Ullman (HRU) model defines a method to allow changes to access rights and the addition and removal of subjects and objects, something the BLP model does not address. Because systems change over time, their protective states need to be able to change as well. HRU is built on an access control matrix and includes a set of generic rights and a specific set of commands.

These include:

- Create subject and create object
- Enter a right into the access control matrix
- Delete a right from the access control matrix
- Destroy subject and destroy object

By implementing this set of rights and commands and restricting the commands to a single operation each, it is possible to determine if and when a specific subject can obtain a particular right to an object.
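
The following minimal Python sketch, illustrative only and using hypothetical names, shows HRU-style primitive operations on an access control matrix, with each command restricted to a single operation in the spirit described above.

```python
# Illustrative HRU-style access control matrix with mono-operational commands:
# each command performs exactly one primitive operation on the protection state.

subjects, objects = set(), set()
matrix = {}  # (subject, object) -> set of generic rights

def create_subject(s):
    subjects.add(s)

def create_object(o):
    objects.add(o)

def enter_right(r, s, o):
    matrix.setdefault((s, o), set()).add(r)

def delete_right(r, s, o):
    matrix.get((s, o), set()).discard(r)

def destroy_object(o):
    objects.discard(o)
    for key in [k for k in matrix if k[1] == o]:
        del matrix[key]

create_subject("alice")
create_object("file1")
enter_right("read", "alice", "file1")
print(matrix)  # {('alice', 'file1'): {'read'}}
```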

Brewer-Nash Model (Chinese Wall)

The Brewer-Nash model, commonly known as a “Chinese Wall,” is designed to prevent a conflict of interest between two parties. Imagine that a law firm represents two individuals who are involved in a car accident, and one sues the other, leaving the firm representing both sides. To prevent a conflict of interest, individual attorneys should not be able to access the private information of both litigants. The Brewer-Nash model requires users to select one of two conflicting sets of data, after which they cannot access the conflicting data.
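
Here is a small illustrative Python sketch of the rule, using hypothetical dataset and class names that are not from the original text: once a user has accessed one dataset in a conflict-of-interest class, requests for competing datasets in the same class are denied.

```python
# Illustrative Brewer-Nash (Chinese Wall) check: datasets belong to
# conflict-of-interest classes, and a user who has touched one dataset in a
# class is walled off from the other datasets in that same class.

CONFLICT_CLASSES = {
    "litigant_a_files": "accident_case",
    "litigant_b_files": "accident_case",
    "unrelated_client": "other_matter",
}

history = {}  # user -> set of datasets already accessed

def request_access(user, dataset):
    wanted_class = CONFLICT_CLASSES[dataset]
    for seen in history.get(user, set()):
        if CONFLICT_CLASSES[seen] == wanted_class and seen != dataset:
            return False  # a conflicting dataset in the same class was already accessed
    history.setdefault(user, set()).add(dataset)
    return True

print(request_access("attorney", "litigant_a_files"))  # True
print(request_access("attorney", "litigant_b_files"))  # False: conflict of interest
print(request_access("attorney", "unrelated_client"))  # True: different class
```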


Published on Mon 02 January 2012 by Anthony Norton in Security with tag(s): architecture models