Getting someone who takes their security advice from Forbes to evaluate and select tools for secure private communications is probably non-trivial, but worthwhile.
This is, hopefully, the start of a practical general checklist. Comments and suggestions would be appreciated. Happy holidays!

It may be daunting to evaluate and choose a tool for your needs. As individual circumstances differ, you should choose a tool that best fits your situation, taking into account responsibility and risk. For a suitable tool, the answers to the checklist questions should all be `yes' or not applicable. Do your own research, but do reach out to the community (for example, liberationtech) for assistance and to confirm your conclusions.

*Accountability*

Is the installer or program (binary) digitally signed?
Why: A binary should carry a valid and trusted digital signature to ensure that the correct file is received, whether it is downloaded from the Internet or obtained from a friend in the field. The digital signature provides a link between the signer (ideally the creator of the file) and the file's checksum. If a file with a valid signature turns out to be compromised or to contain a backdoor, the signer can be asked to account for the discrepancy, whether their computer was compromised (more likely) or an adversary is capable of forging the digital signature (noteworthy, but less likely). A good digital signature should use a current digest (SHA-256 or SHA-512, not MD5 or CRC32, which are broken, and preferably not SHA-1, which is being phased out) and a sufficiently long public key (for example, 2048 bits or more for RSA). In the case of a certificate-authority-based digital signature, the chain of certificates should be plausible. For example, some obscure entity located in a distant land should not be asserting that the signer is Mozilla. (A basic verification sketch appears after the *Open source* items below.)

*Open source*

Is the source code available?
Why: Having the source code available for anyone to examine gives the security community and other interested observers the ability to inspect the implementation for bugs, flaws, and vulnerabilities. While adversaries may use the opportunity to find exploitable vulnerabilities, the security community has the same chance to do so. For closed-source tools, it is possible that only the authors and adversaries have access to the source.

Is the source code signed?
Why: Code that does not carry a valid digital signature can be altered in transit, so the version of the code that you receive may not be the version used to build the program. (Also see the next item.) Digital signatures on commits help with accountability and integrity.

Can you use the source code to produce a functionally identical version of the tool?
Why: Although the source code might be available, that code may not actually be used to produce the tool that you obtained. For example, a compromised binary might be built from a combination of the clean public source code and a backdoor. If a functionally identical version of the tool cannot be produced from the provided source code, this situation, among others, is harder to detect and prevent, because you cannot simply build your own version to use. NB: the current standard is a functionally identical version of the program; making a complex program bit-for-bit identical (reproducible builds) remains an active area of work because of differences in, for example, compile times and embedded dates.

Does the project welcome code contributions?
Why: If bug fixes cannot be contributed, the incentive for individuals in the security community to examine the source is reduced. In addition, this type of open-source project may rely solely on the developer to fix bugs and be more dependent on that developer's continued good behavior.
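As a rough illustration of the checksum and signature checks above, here is a minimal Python sketch that recomputes a download's SHA-256 digest and asks GnuPG to verify a detached signature. The filenames and the published checksum are placeholders for whatever the project you are evaluating actually distributes, and the signer's public key must already be imported and its fingerprint confirmed out of band.

    import hashlib
    import subprocess

    # Hypothetical filenames and checksum -- substitute the files the
    # project you are evaluating actually distributes.
    BINARY = "tool-1.0.tar.gz"
    SIGNATURE = "tool-1.0.tar.gz.asc"   # detached OpenPGP signature
    PUBLISHED_SHA256 = "paste the checksum published by the project here"

    # 1. Recompute the digest locally and compare it with the published
    #    value, ideally obtained over a separate channel from the download.
    sha256 = hashlib.sha256()
    with open(BINARY, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    digest = sha256.hexdigest()
    print("computed sha256:", digest)
    print("checksum match: ", digest == PUBLISHED_SHA256.strip().lower())

    # 2. Ask GnuPG to verify the detached signature over the binary.
    result = subprocess.run(["gpg", "--verify", SIGNATURE, BINARY])
    print("signature ok:   ", result.returncode == 0)

Passing both checks does not make the binary trustworthy by itself, but a mismatch is an immediate red flag.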
*Threat model*

Does the creator define the assumptions that underlie the tool's operation?
Why: You need this to evaluate whether the tool is right for your use case and to make an informed decision about its shortfalls and benefits. The threat model, risks, benefits, warnings, and implementation details should be available and complete in both plain and technical language. In particular, what does the tool not protect against (for example, a smartphone that already has malicious apps)?

Are you advised (realistically) of the risks and benefits?
Why: It is necessary to know how the tool may fail. Does the tool creator take into account likely, but not yet documented, capabilities of adversaries (for example, a partial view of the network in the case of Tor)? Although some risks and failure cases may not be immediately apparent (even to the tool creators), known issues (or issues apparent by inspection) should be made obvious in plain language and presented alongside the expected benefits.

Is the tool secure by design?
Why: The tool creator may be compelled, by legal means or otherwise, to provide user data or other assistance to an adversary. Is the tool designed in such a way as to minimize the threat arising from this situation (for example, through end-to-end encryption, or by never revealing sensitive data to the tool creator or their infrastructure)? If the tool does not work for some reason, does it fail in a safe manner (for example, a VPN that blocks network access rather than allowing unencrypted connections)?

Are you sufficiently protected from the future?
Why: Suppose your counterparty's computer is compromised or is seized sometime in the future by an adversary who has also captured an encrypted copy of your communications. If your tool does not have perfect forward secrecy, your past communications might be decrypted using information from your counterparty's system. Likewise, if the cryptographic components used in the chosen tool are not strong enough to resist expected advances in computational power, the communications might be decrypted at some point in the future as a matter of course. (A quick check for forward secrecy on a TLS service is sketched at the end of this post.)

Are the algorithms and components used plausible?
Why: While it may be necessary to consult the security community (for example, liberationtech) on this point, improper choices of parameters, cryptographic components, and system design can result in an unsafe tool.

Is this the best tool for your use case, despite any drawbacks?
Why: It is essential to examine the tool landscape and make an informed decision for your use case. In the end, the decision might be to modify the use case itself. For example, one may decide to use a known insecure tool (for example, the telephone) in conjunction with a one-time pad rather than accept a false sense of security.

*Sustainability*

Is there an active developer base to address bugs and changes in the security landscape?
Why: Critical bugs may be present despite code review and good design decisions. Bugs may also be introduced by the passage of time, for example, through an increase in computational power. These bugs need to be addressed in a timely fashion. In addition, changes in an adversary's capabilities may require cryptographic upgrades (longer key lengths, obfuscation). Without active maintenance, tools that were once safe may become unsafe over time. (One rough proxy for developer activity is sketched below.)
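One very rough proxy for developer activity, assuming the project happens to be hosted on GitHub (the repository name below is a placeholder), is the date of the most recent commit, which the public GitHub API will report:

    import json
    import urllib.request

    # Hypothetical repository -- substitute the project you are evaluating.
    REPO = "someorg/sometool"

    req = urllib.request.Request(
        f"https://api.github.com/repos/{REPO}/commits?per_page=1",
        headers={"User-Agent": "tool-evaluation-checklist"},
    )
    with urllib.request.urlopen(req) as resp:
        latest = json.load(resp)[0]

    print("last commit:", latest["commit"]["committer"]["date"])

Commit dates say nothing about code quality, of course; they only tell you whether anyone is still around to fix the next critical bug.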
Are the project's finances sustainable?
Why: If the tool relies on infrastructure (for example, RTP proxies for VoIP), the cost needs to be covered somehow: donations, user payments (which may not be anonymous), selling user data, and so on. In addition, finding and switching to a different tool in the field can be difficult.
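To illustrate the forward-secrecy question from the *Threat model* section: for a tool whose server component speaks TLS, one quick (and far from conclusive) check is whether the connection negotiates an ephemeral key exchange. The hostname below is a placeholder; this Python sketch connects and inspects the negotiated cipher suite.

    import socket
    import ssl

    # Hypothetical service to check -- substitute the tool's server.
    HOST, PORT = "service.example.org", 443

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cipher_name, protocol, bits = tls.cipher()
            # TLS 1.3 suites always use an ephemeral (forward-secret) key
            # exchange; for older protocol versions, look for ECDHE or DHE.
            has_pfs = (protocol == "TLSv1.3"
                       or cipher_name.startswith(("ECDHE", "DHE")))
            print(protocol, cipher_name, bits)
            print("forward-secret key exchange:", has_pfs)

A forward-secret TLS link only protects the transport; it says nothing about whether the application-layer protocol itself provides forward secrecy between you and your counterparty.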