Threat Modeling

Threat Modeling is the practice of building an abstract description of how an attack may proceed and cause damage, in order to define strategies and means to defend against it. It is an important part of working toward protecting an Information System (such as your computer, or a network you are responsible for), especially if there are many points of potential vulnerability.
Introduction

Before we begin, it's important to define what a threat is in an Information System (IS) context. A succinct description from the National Information Assurance Glossary defines a threat as:
Their more comprehensive (read "absurdly long") definition is:
Broadly speaking, a threat implies vulnerability. It can come in the form of a person directly spying on your communications, or in the form of an automated system (malware, or software on the network). It can also come from environmental phenomena, such as fire. In the context of this handbook we'll consider threats as coming from an agent attacking with a motivation, rather than from environmental phenomena or 'Acts of God'.
An attack is active when it attempts to manipulate and/or affect a system's resources and/or its general operation. In this way it can be said to compromise the Integrity or Availability of a system. A passive attack, by contrast, attempts to learn about a system and/or access information on it without actually affecting the general running or availability of that system (text adapted from RFC2828).
Successful attacks can be known or unknown. All successful attacks exploit a weakness. While neither is desirable, knowledge of a successful attack exposes one or more weaknesses, providing an opportunity to harden the defense strategy. Attacks that manipulate a system often take control of functions on the system.
Not all attacks are strictly related to the system itself. A highly sophisticated attack may deploy Social Engineering strategies to manipulate a person into giving the attacker access to a system or to important information. Some of these attacks can themselves be automated; Phishing and Pretexting are two such examples.
Modeling Threats

Here are three often-cited frames for the modeling of threats:
Asset-centric: Here the model begins with looking at the assets (personal information or bank account data, for instance) that are stored on a trusted system. From there, particular strategies for accessing those assets might be determined. For instance, a strong passphrase used for logging into an online banking service is of little use if the user of that account is vulnerable to Phishing attacks. Similarly, valued data assets stored on an Encrypted File System on a laptop might be safe from prying eyes but are still vulnerable to attacks whose goal is to destroy that data (from malware to an axe or a cup of spilled coffee), unless secured backups are in place.
Attacker-centric: This model starts with trying to determine the motivations and goals of an attacker in order to isolate their method and means of attack. For example, an attacker might want to listen in on an important phone call in order to determine the time and place of a special meeting. From there we would look at how they might do this, based on the medium (landline, cellular phone or Voice over IP) and the context. Does the attacker have access to the service being used (wiretapping)? What hardware is the caller using? What software (if any) is the caller using, and what are its known vulnerabilities?
System-centric: System-centric modeling considers threat from the perspective of the system itself, most typically the design of the software in use. Each element of the system is studied for vulnerabilities, resulting in a determination of the overall attack surface.
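The asset-centric frame above can be sketched as a simple inventory that pairs each threat against an asset with its mitigation. This is only an illustrative sketch; the asset, threats and mitigations named here are hypothetical examples, not drawn from any real system.

```python
# Minimal asset-centric threat-model sketch (hypothetical example data).
from dataclasses import dataclass, field


@dataclass
class Asset:
    name: str
    # Maps each identified threat to its mitigation ("" = unmitigated).
    threats: dict = field(default_factory=dict)


model = Asset("online banking credentials")
model.threats["phishing"] = "user awareness; bookmark the real site"
model.threats["keylogger malware"] = "keep OS and antivirus up to date"
model.threats["shoulder surfing"] = "privacy screen; situational awareness"

# A threat with no mitigation recorded is a gap in the model.
unmitigated = [t for t, m in model.threats.items() if not m]
print(f"{model.name}: {len(model.threats)} threats, "
      f"{len(unmitigated)} unmitigated")
```

Walking the inventory for empty mitigations is the point of the exercise: the model makes gaps visible before an attacker finds them.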
Threat Agents

Attacks have a motivation. Typically the motivation is to spy, steal identities, acquire or destroy data (assets), control a system, or render a system dysfunctional. There are many different means to achieve these outcomes, each of which relates directly to the conditions the attacker is working with at the time. These conditions might comprise social, psychological, network, operating system and application security factors all at the same time.
Risk Management Insight LLC published a paper in 2006 (http://www.riskmanagementinsight.com/media/docs/FAIR_introduction.pdf) that defines five kinds of Threat Agents:
Threat Consequences

It's often very useful to have the particular consequences of a successful attack in mind when modeling threats. This can help us get closer to isolating the particular strategies an attacker might deploy in pursuit of their goal.
The Internet Engineering Task Force defines Threat Consequences in RFC2828 (http://tools.ietf.org/html/rfc2828). Here is an adaptation of that definition, with references to the (extensive) glossary and natural disaster removed:
A security violation that results from a threat action. Includes disclosure, deception, disruption, and usurpation. The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence. Threat actions that are accidental events are marked by "*".

1. "(Unauthorized) Disclosure": A circumstance or event whereby an entity gains access to data for which the entity is not authorized. (See: data confidentiality.) The following threat actions can cause unauthorized disclosure:

a. Exposure: A threat action whereby sensitive data is directly released to an unauthorized entity. This includes:

b. Interception: A threat action whereby an unauthorized entity directly accesses sensitive data traveling between authorized sources and destinations. This includes:

c. Inference: A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the data contained in the communication) by reasoning from characteristics or byproducts of communications. This includes:

d. Intrusion: A threat action whereby an unauthorized entity gains access to sensitive data by circumventing a system's security protections. This includes:

2. "Deception": A circumstance or event that may result in an authorized entity receiving false data and believing it to be true. The following threat actions can cause deception:

a. Masquerade: A threat action whereby an unauthorized entity gains access to a system or performs a malicious act by posing as an authorized entity.

b. Falsification: A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)

c. Repudiation: A threat action whereby an entity deceives another by falsely denying responsibility for an act. (See: non-repudiation service.)

3. "Disruption": A circumstance or event that interrupts or prevents the correct operation of system services and functions. (See: denial of service.) The following threat actions can cause disruption:

a. Incapacitation: A threat action that prevents or interrupts system operation by disabling a system component.

b. Corruption: A threat action that undesirably alters system operation by adversely modifying system functions or data.

c. Obstruction: A threat action that interrupts delivery of system services by hindering system operations.

4. "Usurpation": A circumstance or event that results in control of system services or functions by an unauthorized entity. The following threat actions can cause usurpation:

a. Misappropriation: A threat action whereby an entity assumes unauthorized logical or physical control of a system resource.

b. Misuse: A threat action that causes a system component to perform a function or service that is detrimental to system security.
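The taxonomy above maps each threat action to exactly one consequence, so it can be encoded as a simple lookup table, which is handy when classifying observed incidents. This sketch is illustrative only; the table is a direct transcription of the RFC2828 categories listed above, but the function name and structure are our own, not part of the RFC.

```python
# RFC2828 threat-consequence taxonomy as a lookup table:
# threat action -> the consequence it causes.
THREAT_ACTIONS = {
    "exposure": "disclosure",
    "interception": "disclosure",
    "inference": "disclosure",
    "intrusion": "disclosure",
    "masquerade": "deception",
    "falsification": "deception",
    "repudiation": "deception",
    "incapacitation": "disruption",
    "corruption": "disruption",
    "obstruction": "disruption",
    "misappropriation": "usurpation",
    "misuse": "usurpation",
}


def consequence(action: str) -> str:
    """Return the threat consequence caused by a given threat action."""
    return THREAT_ACTIONS[action.lower()]


print(consequence("Masquerade"))  # prints: deception
```

For example, an incident report noting that an attacker posed as an administrator (a Masquerade) would be filed under Deception, while one noting service outage from a disabled component (Incapacitation) would be filed under Disruption.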