I like to think about computer security as the science of how things can go wrong with computers.
There is so much that can go wrong! In so many different ways! To avoid becoming overwhelmed, computer engineers need a systematic way to think about security, talk about it with each other, and organize the work of building secure systems.
For this purpose, I like to use and recommend the STRIDE model, which I learned from my friend @noopwafel.
STRIDE provides mental tools to think systematically about computer security and communicate those thoughts effectively with others.
STRIDE in short
STRIDE separates “things that can go wrong” into six categories:
Threat category | The threat, in layman’s terms | Property that blocks the threat | The defense, in layman’s terms |
---|---|---|---|
Spoofing | Mallory sends a letter to Ben and signs “Alice wrote this.” Ben believes it. | Authenticity | Ben sees Mallory’s letter and knows Alice did not write it. |
Tampering | Mallory borrows Alice’s letter from the postman and erases every occurrence of “the”. Ben receives the letter and thinks Alice writes poorly. | Integrity | Ben notices that the letter was modified. |
Repudiation | Mallory calls Ben in the middle of the night. The following day, Mallory says he never called and Ben thinks he must have dreamed. | Non-repudiability | Ben can see Mallory’s phone records and confirm that Mallory indeed called. |
Information disclosure | Mallory reads Alice’s secret love letter to Ben. | Confidentiality | Alice’s letter is hidden from Mallory. |
Denial of service | Mallory locks Ben in his house and breaks the key in the lock. Ben cannot get outside. | Availability | Ben has a second door at the back of his house. |
Elevation of privilege | Mallory opens an account at Alice’s bank in her name. | Authorization | The bank refuses to open the account without Alice’s consent. |
The STRIDE model was invented by two engineers at Microsoft in 1999. See “further reading” below for a link.
General remediation methods and techniques
In computer systems, especially networked applications, some consensus has emerged on how to systematically prevent each category of threat:
Spoofing
How to provide authentication:
- Validation of certificates issued by recognized Certification Authorities (CAs): the authentication part of SSL/TLS (see the sketch below).
- Multi-factor authentication.
- Shared secrets (e.g. passwords) — as long as they are complex and only used for one purpose.
Pitfalls:
- Secret-based authentication breaks without strong confidentiality of the authentication tokens.
- Authentication does not work without integrity (see below). Without integrity, it is possible to modify data in transit after authentication takes place and effectively “own” the communication: this is a man-in-the-middle attack.
- However, the converse is not true: it is possible (and sometimes desirable!) to build tamper-proof systems without authenticity. See off-the-record messaging for an example.
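To make the first item concrete, here is a minimal sketch of CA-based certificate validation using Python’s standard ssl module; the hostname is a placeholder.

```python
# A minimal sketch, using only the Python standard library, of the
# authentication part of TLS. "example.com" is a placeholder hostname.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # loads the system's trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    # The handshake verifies the server's certificate chain against the
    # trusted CAs and checks that it matches the hostname; it raises
    # ssl.SSLCertVerificationError if Mallory presents a forged one.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Server authenticated as:", tls.getpeercert()["subject"])
```

The same handshake also negotiates the keys used for integrity and confidentiality, which is why SSL/TLS reappears in several categories below.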
Tampering
How to provide integrity:
- Double-entry accounting (finance) and data replication (storage), with regular comparison of the replicas.
- Data fingerprints and cryptographic hashes.
- Merkle trees.
- Message authentication codes (MACs): the integrity part of SSL/TLS (see the sketch below).
Pitfalls:
- Checksum-based integrity, such as implemented in TCP or inside credit card numbers, is meant to protect against random faults and communication errors. Checksums are not a good basis to provide protection against tampering in secure systems.
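As an illustration of the MAC approach, here is a minimal sketch using Python’s standard hmac module; the shared key is a placeholder and would normally come from a key exchange, as in TLS.

```python
# A minimal sketch of MAC-based integrity checking with Python's
# standard hmac module. The shared key is a placeholder; in a real
# system it would come from a key exchange.
import hashlib
import hmac

key = b"placeholder-shared-key"

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

message = b"Dear Ben, meet me at noon. -- Alice"
mac = tag(message)

# The receiver recomputes the tag over what actually arrived;
# compare_digest runs in constant time to avoid timing leaks.
received, received_mac = message, mac
if not hmac.compare_digest(tag(received), received_mac):
    raise ValueError("message was modified in transit")
```

Unlike a plain checksum, Mallory cannot recompute a valid tag after erasing words from the letter, because she does not know the key.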
Repudiation
How to provide non-repudiation:
- Access logs: who was accessing the system at which times.
- Audit logs: who was viewing or modifying which data at which times (see the sketch below).
- Append-only databases: all the intermediate states of data are preserved, information is not modified in-place.
- Historical backups (for audit purposes).
Notes:
- SSL/TLS does not have much to say about non-repudiation.
- For some applications, the option to repudiate is actually desirable. To achieve this, deniable authentication can be combined with forward secrecy, as in off-the-record messaging.
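Here is a minimal sketch of the audit-log and append-only ideas combined; the file path and field names are illustrative.

```python
# A minimal sketch of an append-only audit log. The path and the field
# names are illustrative; a real system would also protect the log
# against tampering (see the previous section).
import json
import time

AUDIT_LOG = "audit.log"  # placeholder path

def audit(actor: str, action: str, resource: str) -> None:
    entry = {
        "time": time.time(),
        "actor": actor,
        "action": action,
        "resource": resource,
    }
    # Append mode ("a") only ever adds entries; nothing is modified
    # in place, so all intermediate states are preserved.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

audit("mallory", "called", "ben")  # Mallory can no longer deny it
```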
Information disclosure
How to provide confidentiality:
- Encryption of communications: the confidentiality part of SSL/TLS.
- Encryption of storage: filesystem encryption, disk encryption, RAM encryption (see the sketch below).
- Firewalling.
- Preventing data connections to networks altogether.
- Isolation of resources (CPU, memory, I/O) between separate applications to prevent side-channel attacks.
Pitfalls:
In storage systems, confidentiality exists at three levels:
- the data values (“I don’t want Mallory to read the body of my e-mail”).
- the inner metadata (“I don’t want Mallory to read the subject of my e-mail”).
- the outer metadata (“I don’t want Mallory to know I have sent an e-mail to Ben”).
Systems are usually designed with confidentiality of values and inner metadata in mind from the start, but outer metadata typically requires more effort and is thus easily overlooked.
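As a sketch of encryption at rest, here is an example built on the well-known Fernet recipe from the third-party cryptography package; key management, the hard part in practice, is deliberately elided.

```python
# A minimal sketch of encryption at rest, using the third-party
# "cryptography" package (pip install cryptography). The key is
# generated in place instead of being stored in a key manager.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
vault = Fernet(key)

ciphertext = vault.encrypt(b"Alice's secret love letter to Ben")
print(ciphertext)                 # all Mallory sees is opaque bytes
print(vault.decrypt(ciphertext))  # only the key holder reads the text
```

Note that this hides the data values only: the size of the ciphertext and the pattern of accesses, i.e. the outer metadata discussed above, still leak.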
Denial of service
How to provide availability:
- Isolation of resources (CPU, memory, I/O) between separate applications to prevent DoS attacks:
- Privilege domains in operating systems.
- Containers.
- Virtual machines.
- Separate physical servers.
- In networks: quality of service (QoS), connection throttling and data throttling (see the sketch below).
- Automatic scale-out (adding more servers) under load, with load balancing.
- Replication with automatic fail-over for storage systems.
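The throttling item can be sketched with a classic token bucket; the class name and the rates below are illustrative, not taken from any particular system.

```python
# A minimal sketch of connection throttling with a token bucket: each
# client may make at most `rate` requests per second, with short
# bursts of up to `capacity`.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: reject rather than fall over

bucket = TokenBucket(rate=5, capacity=10)  # one bucket per client
if not bucket.allow():
    print("429 Too Many Requests")
```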
Elevation of privilege
How to provide authorization:
- Resource access rules:
- Access control lists (a.k.a. “permissions”), which are OK-ish (see the sketch below).
- Capability-based security (better).
- Design systems according to the principle of least privilege.
- A well-functioning legal system.
Pitfalls:
- Effective authorization is not possible without strong authentication.
- Authorization is not possible without strong integrity of the authorization rules.
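Finally, here is a minimal sketch of an access control list with a default-deny policy, which is the principle of least privilege in miniature; the actors and actions are illustrative.

```python
# A minimal sketch of ACL-based authorization with a default-deny
# policy, in line with the principle of least privilege.
ACL = {
    "alice": {"open_account", "close_account"},
    "ben":   {"open_account"},
}

def authorize(actor: str, action: str) -> bool:
    # Default deny: unknown actors and unlisted actions are refused.
    # As the pitfalls above note, this is only meaningful if `actor`
    # was strongly authenticated first.
    return action in ACL.get(actor, set())

assert authorize("alice", "open_account")
assert not authorize("mallory", "open_account")  # no consent, no account
```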
Further reading
- Loren Kohnfelder and Praerit Garg. The threats to our products (the original definition of STRIDE). Microsoft internal memo, 1999.
- Nancy G. Leveson. Engineering a safer world. MIT Press, 2016. ISBN 9780262533690.
- Transport Layer Security. Wikipedia.