Threat Modeling

Threat Modeling is the practice of building an abstract description of how an attack may proceed and cause damage in order to define strategies and means to defend against it. It is an important part of working toward protecting an Information System (like your computer, or a network you are responsible for), especially if there are many points of potential vulnerability.

Introduction

Before we begin, it's important to define what a threat is in an Information System (IS) context. A succinct description from the National Information Assurance Glossary defines a threat as:

Any circumstance or event with the potential to adversely impact an IS through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.

Their more comprehensive (read: absurdly long) definition is:

1. The means through which the ability or intent of a threat agent to adversely affect an automated system, facility, or operation can be manifest. Categorize and classify threats as follows:

  • Category: Human – Classes: Intentional, Unintentional
  • Category: Environmental – Classes: Natural, Fabricated

2. Any circumstance or event with the potential to cause harm to a system in the form of destruction, disclosure, modification of data, and/or denial of service.

3. Any circumstance or event with the potential to cause harm to the ADP system or activity in the form of destruction, disclosure, and modification of data, or denial of service. A threat is a potential for harm. The presence of a threat does not mean that it will necessarily cause actual harm. Threats exist because of the very existence of the system or activity and not because of any specific weakness. For example, the threat of fire exists at all facilities regardless of the amount of fire protection available.

4. Types of computer-system-related adverse events (i.e., perils) that may result in losses. Examples are flooding, sabotage, and fraud.

5. An assertion primarily concerning entities of the external environment (agents); we say that an agent (or class of agents) poses a threat to one or more assets; we write: T(e;i) where: e is an external entity; i is an internal entity or an empty set.

6. An undesirable occurrence that might be anticipated but is not the result of a conscious act or decision. In threat analysis, a threat is defined as an ordered pair, <…>, suggesting the nature of these occurrences but not the details (details are specific to events).

7. A potential violation of security.

8. A set of properties of a specific external entity (which may be either an individual or class of entities) that, in union with a set of properties of a specific internal entity, implies a risk (according to some body of knowledge).

Broadly speaking, a threat implies a vulnerability. It can come in the form of a person directly spying on your communications, or in the form of an automated system (malware, or software on the network). It can also come from environmental phenomena, such as fire. In the context of this handbook we'll consider threats as coming from an agent attacking with a motivation, rather than from environmental phenomena or 'Acts of God'.

An attack is active when it attempts to manipulate or affect a system's resources or its general operation; in this way it can be said to compromise the Integrity or Availability of the system. A passive attack, by contrast, is commonly part of an attempt to learn about a system or access information on it, but does not affect the general running or availability of that system (text adapted from RFC 2828).

Successful attacks can be known or unknown. All successful attacks exploit a weakness. While neither outcome is desirable, knowledge of a successful attack exposes one or more weaknesses, providing an opportunity to harden the defense strategy. Attacks that manipulate a system often take control of functions on the system.

Not all attacks are strictly related to the system itself. A highly sophisticated attack may deploy Social Engineering strategies to manipulate a person to give the attacker access to a system or to important information. Some of these attacks can themselves be automated. Phishing and Pretexting are two such examples.

Modeling Threats

Here are three often-cited frames for the modeling of threats:

Asset-centric: Here the model begins with looking at the assets (personal information or bank account data, for instance) that are stored on a trusted system. From there, particular strategies for accessing those assets can be determined. For instance, a strong passphrase used for logging into an online banking service is of little use if the user of that account is vulnerable to Phishing attacks. Similarly, valued data assets stored on an Encrypted File System on a laptop might be safe from prying eyes but are still vulnerable to attacks whose goal is to destroy that data (from malware to an axe or a cup of spilled coffee), unless secured backups are in place.

Attacker-centric: This model starts with trying to determine the motivations and goals of an attacker in order to isolate their method and means of attack. For example, an attacker may want to listen in on an important phone call in order to determine the time and place of a special meeting. From there we would look at how they might do this, based on the medium used (landline, cellular phone, or Voice over IP) and the context. Does the attacker have access to the service being used (wiretapping)? What hardware is the caller using? What software (if any) is the caller using, and what are its known vulnerabilities?

System-centric: System-centric modeling considers threats from the perspective of the system itself, most typically the design of the software in use. Each element of the system is studied for vulnerabilities, resulting in a determination of the overall attack surface.
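The asset-centric frame above can be sketched as plain data: for each asset, list the threats to it and note which of those threats are already mitigated, so the remaining gaps stand out. This is a minimal illustrative sketch, not a standard schema; the asset and threat names are hypothetical examples.

```python
# Minimal asset-centric sketch: assets carry their threats and the
# subset of threats already covered by a defense. Hypothetical names.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)    # ways the asset can be compromised
    mitigated: list = field(default_factory=list)  # threats already covered by a defense

def unmitigated(asset):
    """Return the threats for which no mitigation is recorded."""
    return [t for t in asset.threats if t not in asset.mitigated]

laptop = Asset(
    name="bank account data on a laptop",
    threats=["theft of device", "disk failure", "phishing"],
    mitigated=["theft of device"],  # e.g. an Encrypted File System covers a stolen device
)

print(unmitigated(laptop))  # the gaps the model exposes
```

As in the Encrypted File System example above, encryption covers the stolen-device threat but leaves data destruction and phishing unaddressed, which is exactly what the remaining list shows.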

Threat Agents

Attacks have a motivation. Typically the goal is to spy, steal identities, acquire or destroy data (assets), control a system, or render a system dysfunctional. There are many different means to achieve these outcomes, each of which relates directly to the conditions the attacker is working with at the time. These conditions might comprise social, psychological, network, operating system, and application security factors all at the same time.

Risk Management Insight LLC published a paper in 2006 (http://www.riskmanagementinsight.com/media/docs/FAIR_introduction.pdf) that defines five kinds of action a threat agent can take against an asset:

  • Access – simple unauthorized access
  • Misuse – unauthorized use of assets (e.g., identity theft, setting up a porn distribution service on a compromised server, etc.)
  • Disclose – the threat agent illicitly discloses sensitive information
  • Modify – unauthorized changes to an asset
  • Deny access – includes destruction, theft of a non-data asset, etc.
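The five action types above can be used to tag incidents, so a defense strategy can be reviewed per kind of action. The following is a hypothetical sketch (the incident descriptions are invented examples, not from the FAIR paper):

```python
# Hypothetical sketch: the five FAIR threat-agent actions as an enum,
# used to group invented example incidents by kind of action.
from enum import Enum

class ThreatAction(Enum):
    ACCESS = "simple unauthorized access"
    MISUSE = "unauthorized use of assets"
    DISCLOSE = "illicit disclosure of sensitive information"
    MODIFY = "unauthorized changes to an asset"
    DENY_ACCESS = "destruction or theft of a non-data asset"

incidents = [
    ("server repurposed to distribute illicit content", ThreatAction.MISUSE),
    ("customer records posted publicly", ThreatAction.DISCLOSE),
    ("login attempted with stolen credentials", ThreatAction.ACCESS),
]

by_action = {}
for description, action in incidents:
    by_action.setdefault(action, []).append(description)

print(len(by_action[ThreatAction.DISCLOSE]))  # prints 1
```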

Threat Consequences

It's often very useful to have the particular consequences of a successful attack in mind when modeling threats. This can help us to get closer to isolating the particular strategies an attacker might deploy in search of their goal.

The Internet Engineering Task Force defines Threat Consequences in RFC2828 (http://tools.ietf.org/html/rfc2828). Here is an adaptation of that definition, with references to the (extensive) glossary and natural disaster removed:

A security violation that results from a threat action. Includes disclosure, deception, disruption, and usurpation. The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence. Threat actions that are accidental events are marked with an asterisk (*).

1. "(Unauthorized) Disclosure": A circumstance or event whereby an entity gains access to data for which the entity is not authorized. (See: data confidentiality.) The following threat actions can cause unauthorized disclosure:

a. Exposure: A threat action whereby sensitive data is directly released to an unauthorized entity. This includes:

  • Deliberate Exposure: Intentional release of sensitive data to an unauthorized entity.
  • Scavenging: Searching through data residue in a system to gain unauthorized knowledge of sensitive data.
  • *Human error: Human action or inaction that unintentionally results in an entity gaining unauthorized knowledge of sensitive data.
  • *Hardware/software error: System failure that results in an entity gaining unauthorized knowledge of sensitive data.

b. Interception: A threat action whereby an unauthorized entity directly accesses sensitive data traveling between authorized sources and destinations. This includes:

  • Theft: Gaining access to sensitive data by stealing a shipment of a physical medium, such as a magnetic tape or disk, that holds the data.
  • Wiretapping (passive): Monitoring and recording data that is flowing between two points in a communication system. (See: wiretapping.)
  • Emanations analysis: Gaining direct knowledge of communicated data by monitoring and resolving a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: emanation.)

c. Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the data contained in the communication) by reasoning from characteristics or byproducts of communications. This includes:

  • Traffic analysis: Gaining knowledge of data by observing the characteristics of communications that carry the data.
  • Signals analysis: Gaining indirect knowledge of communicated data by monitoring and analyzing a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: emanation.)

d. Intrusion: A threat action whereby an unauthorized entity gains access to sensitive data by circumventing a system's security protections. This includes:

  • Trespass: Gaining unauthorized physical access to sensitive data by circumventing a system's protections.
  • Penetration: Gaining unauthorized logical access to sensitive data by circumventing a system's protections.
  • Reverse engineering: Acquiring sensitive data by disassembling and analyzing the design of a system component.
  • Cryptanalysis: Transforming encrypted data into plaintext without having prior knowledge of encryption parameters or processes.

2. "Deception": A circumstance or event that may result in an authorized entity receiving false data and believing it to be true. The following threat actions can cause deception:

a. Masquerade: A threat action whereby an unauthorized entity gains access to a system or performs a malicious act by posing as an authorized entity.

  • Spoof: Attempt by an unauthorized entity to gain access to a system by posing as an authorized user.
  • Malicious logic: In context of masquerade, any hardware, firmware, or software (e.g., Trojan horse) that appears to perform a useful or desirable function, but actually gains unauthorized access to system resources or tricks a user into executing other malicious logic.

b. Falsification: A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)

  • Substitution: Altering or replacing valid data with false data that serves to deceive an authorized entity.
  • Insertion: Introducing false data that serves to deceive an authorized entity.

c. Repudiation: A threat action whereby an entity deceives another by falsely denying responsibility for an act. (See: non-repudiation service).

  • False denial of origin: Action whereby the originator of data denies responsibility for its generation.
  • False denial of receipt: Action whereby the recipient of data denies receiving and possessing the data.

3. "Disruption": A circumstance or event that interrupts or prevents the correct operation of system services and functions. (See: denial of service.) The following threat actions can cause disruption:

a. Incapacitation: A threat action that prevents or interrupts system operation by disabling a system component.

  • Malicious logic: In context of incapacitation, any hardware, firmware, or software (e.g., logic bomb) intentionally introduced into a system to destroy system functions or resources.
  • Physical destruction: Deliberate destruction of a system component to interrupt or prevent system operation.
  • *Human error: Action or inaction that unintentionally disables a system component.
  • *Hardware or software error: Error that causes failure of a system component and leads to disruption of system operation.

b. Corruption: A threat action that undesirably alters system operation by adversely modifying system functions or data.

  • Tamper: In context of corruption, deliberate alteration of a system's logic, data, or control information to interrupt or prevent correct operation of system functions.
  • Malicious logic: In context of corruption, any hardware, firmware, or software (e.g., a computer virus) intentionally introduced into a system to modify system functions or data.
  • *Human error: Human action or inaction that unintentionally results in the alteration of system functions or data.
  • *Hardware or software error: Error that results in the alteration of system functions or data.

c. Obstruction: A threat action that interrupts delivery of system services by hindering system operations.

  • Interference: Disruption of system operations by blocking communications or user data or control information.
  • Overload: Hindrance of system operation by placing excess burden on the performance capabilities of a system component. (See: flooding.)

4. "Usurpation": A circumstance or event that results in control of system services or functions by an unauthorized entity. The following threat actions can cause usurpation:

a. Misappropriation: A threat action whereby an entity assumes unauthorized logical or physical control of a system resource.

  • Theft of service: Unauthorized use of service by an entity.
  • Theft of functionality: Unauthorized acquisition of actual hardware, software, or firmware of a system component.
  • Theft of data: Unauthorized acquisition and use of data.

b. Misuse: A threat action that causes a system component to perform a function or service that is detrimental to system security.

  • Tamper: In context of misuse, deliberate alteration of a system's logic, data, or control information to cause the system to perform unauthorized functions or services.
  • Malicious logic: In context of misuse, any hardware, software, or firmware intentionally introduced into a system to perform or control execution of an unauthorized function or service.
  • Violation of permissions: Action by an entity that exceeds the entity's system privileges by executing an unauthorized function.
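The four consequence classes and their threat actions from the adapted RFC 2828 definition above can be arranged as a nested structure that drives a simple review checklist. This is only a sketch of one way to operationalize the taxonomy; it lists just the categories named in the text.

```python
# The four RFC 2828 threat consequences and their threat actions, as
# listed in the adapted definition above, arranged as a nested dict.
CONSEQUENCES = {
    "unauthorized disclosure": ["exposure", "interception", "inference", "intrusion"],
    "deception": ["masquerade", "falsification", "repudiation"],
    "disruption": ["incapacitation", "corruption", "obstruction"],
    "usurpation": ["misappropriation", "misuse"],
}

def checklist():
    """Yield (consequence, threat action) pairs to walk through in a review."""
    for consequence, actions in CONSEQUENCES.items():
        for action in actions:
            yield consequence, action

for consequence, action in checklist():
    print(f"{consequence}: {action}")  # 12 pairs in total
```

Walking the pairs one by one ("could an attacker cause deception via falsification here?") is a quick way to apply the taxonomy to a concrete system.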
encs/cph/threat-modeling.txt · Last modified: 2013/03/13 22:27
Except where otherwise noted, content on this wiki is licensed under: CC Attribution-Share Alike 3.0 Unported