Building Security In Maturity Model (BSIMM)
Framework
The BSIMM activities are the individual controls used to construct or improve an SSI. They range through people, process, technology, and culture. You can use this information to choose which controls to apply within your initiative, then align your implementation strategy and metrics with your desired outcomes.
The BSIMM framework comprises four domains—Governance, Intelligence, SSDL Touchpoints, and Deployment—and these domains contain 12 practices, such as Strategy & Metrics, Attack Models, and Code Review, which themselves contain activities. These activities are the BSIMM building blocks, the smallest unit of software security granularity implemented to build SSIs. Rather than prescriptively dictating a set of best practices, the BSIMM descriptively observes, quantifies, and documents the actual activities carried out in SSIs across diverse organizations.
Requirements
[SM1.1: 90] PUBLISH PROCESS AND EVOLVE AS NECESSARY.
The process for addressing software security is defined, published internally, and broadcast to all stakeholders so that everyone knows the plan. Goals, roles, responsibilities, and activities are explicitly defined. Most organizations examine existing methodologies, such as the NIST SSDF, Microsoft SDL, or Black Duck Touchpoints, then tailor them to meet their needs. Security activities will be adapted to software lifecycle processes (e.g., waterfall, Agile, CI/CD, DevOps), so activities will evolve with both the organization and the security landscape. The process doesn’t need to be publicly promoted outside the firm to have the desired impact (see [SM3.2]). In addition to publishing the written process, some firms also automate parts (e.g., a testing strategy) as governance-as-code (see [SM3.4]).
[SM1.4: 109] IMPLEMENT SECURITY CHECKPOINTS AND ASSOCIATED GOVERNANCE.
The software security process includes checkpoints (such as gates, release conditions, guardrails, milestones, etc.) at one or more points in a software lifecycle. The first two steps toward establishing security-specific checkpoint conditions are to identify process locations that are compatible with existing development practices and to then begin gathering the information necessary, such as risk-ranking thresholds or defect data, to make a go/no-go decision. Importantly, the conditions need not be enforced at this stage—e.g., the SSG can collect security testing results for each project prior to release, then provide an informed opinion on what constitutes sufficient testing or acceptable test results without trying to stop a project from moving forward (see [SM1.7]). Shorter release cycles might require creative approaches to collecting the right evidence and rely heavily on automation. Socializing the conditions and then enforcing them once most project teams already know how to succeed is a gradual approach that motivates good behavior without introducing unnecessary friction.
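BSIMM is descriptive rather than prescriptive, so no particular tooling is implied here, but a minimal sketch can make the idea concrete. The Python fragment below models an advisory checkpoint that gathers defect data and reports a would-pass/would-fail opinion without blocking anything; the thresholds, fields, and project name are hypothetical:

    # Hypothetical advisory checkpoint: gathers defect evidence and reports a
    # go/no-go opinion without enforcing it (enforcement comes later, see SM1.7).
    from dataclasses import dataclass

    @dataclass
    class SecurityEvidence:
        project: str
        risk_rank: str              # e.g., "high", "medium", "low"
        open_critical_defects: int

    # Assumed, organization-specific thresholds for forming an informed opinion.
    ADVISORY_LIMITS = {"high": 0, "medium": 2, "low": 5}

    def advisory_opinion(ev: SecurityEvidence) -> str:
        limit = ADVISORY_LIMITS[ev.risk_rank]
        verdict = "would pass" if ev.open_critical_defects <= limit else "would fail"
        # Report only -- the pipeline proceeds regardless of the verdict.
        return (f"[advisory] {ev.project}: {ev.open_critical_defects} critical "
                f"defect(s) open, limit {limit} for {ev.risk_rank} risk -> {verdict}")

    print(advisory_opinion(SecurityEvidence("payments-api", "high", 1)))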
[SM1.7: 72] ENFORCE SECURITY CHECKPOINTS AND TRACK EXCEPTIONS.
Enforce security release conditions at each checkpoint (gate, guardrail, milestone, etc.) for every project, so that each project must either meet an established measure or follow a defined process for obtaining an exception to move forward. Use internal policies and standards, regulations, contractual agreements, and other obligations to define release conditions, then track all exceptions. Verifying conditions yields data that informs the KRIs and any other metrics used to govern the process. Automatically giving software a passing grade or granting exceptions without due consideration defeats the purpose of verifying conditions. Even seemingly innocuous software projects (e.g., small code changes, infrastructure access control changes, deployment blueprints) must successfully satisfy the prescribed security conditions as they progress through the software lifecycle. Similarly, APIs, frameworks, libraries, bespoke code, microservices, container configurations, etc., are all software that must satisfy security release conditions. It’s possible, and often very useful, to have verified the conditions both before and after the development process itself. In modern development environments, the verification process will increasingly become automated (see [SM3.4]).
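As a rough illustration of enforcement with tracked exceptions, the sketch below blocks a release unless the measure is met or a documented exception exists; the exception register, condition ID, and ticket number are invented for the example:

    # Hypothetical enforced checkpoint: release proceeds only if the measure is
    # met or a documented, tracked exception exists for this exact condition.
    import sys

    # Assumed exception register; in practice this would live in a GRC or
    # ticketing system rather than in source code.
    EXCEPTION_REGISTER = {("payments-api", "critical-defects-zero"): "RISK-4711"}

    def enforce_gate(project: str, condition_id: str, measure_met: bool) -> bool:
        if measure_met:
            print(f"{project}: condition {condition_id} met")
            return True
        ticket = EXCEPTION_REGISTER.get((project, condition_id))
        if ticket:
            print(f"{project}: condition {condition_id} NOT met, proceeding "
                  f"under tracked exception {ticket}")
            return True
        print(f"{project}: condition {condition_id} NOT met, no exception -> blocked")
        return False

    if __name__ == "__main__":
        ok = enforce_gate("payments-api", "critical-defects-zero", measure_met=False)
        sys.exit(0 if ok else 1)    # a nonzero exit stops the pipeline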
[SM2.3: 67] CREATE OR GROW A SECURITY CHAMPIONS PROGRAM.
Form a collection of people scattered across the organization—often called security champions—who show an above-average level of security interest or skill and who contribute software security expertise to development, QA, and operations teams. Forming this social network of advocates is a good step toward scaling security into software engineering. One way to build the initial group is to track the people who stand out during introductory training courses (see [T3.6]). Another way is to ask for volunteers. In a more top-down approach, initial champions membership is assigned to ensure good coverage of development groups, but ongoing membership is based on actual performance. The champions can act as a sounding board for new projects and, in new or fast-moving technology areas, can help combine software security skills with domain knowledge that might be under-represented in the SSG or engineering teams. Agile coaches, scrum masters, and DevOps engineers can make particularly useful champions members, especially for detecting and removing process friction. In some environments, champions-led efforts are delivered via automation (e.g., as-code).
[SM2.6: 69] REQUIRE SECURITY SIGN-OFF PRIOR TO SOFTWARE RELEASE.
The organization has an initiative-wide process for documenting accountability and accepting security risk by having a risk owner use SSG-approved criteria to sign off on the state of all software prior to release. The sign-off policy might also require the accountable person to, e.g., acknowledge critical vulnerabilities that have not been mitigated or SSDL steps that have been skipped. Informal or uninformed risk acceptance alone isn’t a security sign-off because the act of accepting risk is more effective when it’s formalized (e.g., with a signature, a form submission, or something similar) and captured for future reference. Similarly, simply stating that certain projects don’t need sign-off at all won’t achieve the desired risk management results. In some cases, however, the risk owner can provide the sign-off on a particular set of software project acceptance criteria, which are then implemented in automation to provide governance-as-code (see [SM3.4]), but there must be an ongoing verification that the criteria remain accurate and the automation is working.
[SM2.7: 52] CREATE EVANGELISM ROLE AND PERFORM INTERNAL MARKETING.
Build support for software security throughout the organization via ongoing evangelism and ensure that everyone aligns on security objectives. This internal marketing function, often performed by a variety of stakeholder roles, keeps executives and others up to date on the magnitude of the software security problem and the elements of its solution. A champion or a scrum master familiar with security, for example, could help teams adopt better software security practices as they transform to Agile and DevOps methods. Similarly, a cloud expert could demonstrate the changes needed in security architecture and testing for serverless applications. Evangelists can increase understanding and build credibility by giving talks to internal groups (including executives), publishing roadmaps, authoring technical papers for internal consumption, or creating a collection of papers, books, and other resources on an internal website (see [SR1.2]) and promoting its use. In turn, organizational feedback becomes a useful source of improvement ideas.
[SM3.4: 11] INTEGRATE SOFTWARE-DEFINED LIFECYCLE GOVERNANCE.
Organizations begin replacing traditional document-, presentation-, and spreadsheet-based lifecycle management with software-based delivery platforms. For some software lifecycle phases, humans are no longer the primary drivers of progression from one phase to the next. Instead, organizations rely on automation to drive the management and delivery process with software such as Spinnaker or GitHub, and humans participate asynchronously (and often optionally). Automation often extends beyond the scope of CI/CD to include functional and nonfunctional aspects of delivery, such as health checks, cut-over on failure, rollback to known-good state, defect discovery and management, compliance verification, and a way to ensure adherence to policies and standards. Some organizations are also evolving their lifecycle management approach by integrating their compliance and defect discovery data, perhaps augmented by intelligence feeds and other external data, to begin moving from a series of point-in-time go/no-go decisions (e.g., release conditions) to a future state of continuous accumulation of assurance data (see [CMVM3.6]).
[SM3.5: 1] INTEGRATE SOFTWARE SUPPLY CHAIN RISK MANAGEMENT.
Organizational risk management processes ensure that important software created by and entering the organization is managed through policy-driven access and usage controls, maintenance standards (see [SE3.9]), and captured software provenance data (see [SE2.4]). Apply these processes to external (see [SR2.7]), bespoke, and internally developed software (see [SE3.9]) to help ensure that deployed code has the expected components (see [SE3.8]). The lifecycle management for all software, from creation or importation through secure deployment, ensures that all access, usage, and modifications are done in accordance with policy. This assurance is easier to implement at scale using automation in software lifecycle processes (see [SM3.4]).
[CP1.1: 98] UNIFY REGULATORY PRESSURES.
Have a cross-functional team that understands the constraints imposed on software security by regulatory or compliance drivers that are applicable to the organization and its customers. The team takes a common approach that removes redundancy and conflicts to unify compliance requirements, such as from PCI security standards; GLBA, SOX, and HIPAA in the US; or GDPR in the EU. A formal approach will map applicable portions of regulations to controls (see [CP2.3]) applied to software to explain how the organization complies. Existing business processes run by legal, product management, or other risk and compliance groups outside the SSG could serve as the regulatory focal point, with the SSG providing software security knowledge. A unified set of software security guidance for meeting regulatory pressures ensures that compliance work is completed as efficiently as possible.
[CP1.2: 105] IDENTIFY PRIVACY OBLIGATIONS.
The SSG identifies privacy obligations stemming from regulation and customer expectations, then translates these obligations into both software requirements and privacy best practices. The way software handles PII might be explicitly regulated, but even if it isn’t, privacy is an important topic. For example, if the organization processes credit card transactions, the SSG will help in identifying the privacy constraints that the PCI DSS places on the handling of cardholder data and will inform all stakeholders (see [SR1.3]). Note that outsourcing to hosted environments (e.g., the cloud) doesn’t relax privacy obligations and can even increase the difficulty of recognizing and meeting all associated needs. Also, note that firms creating software products that process PII when deployed in customer environments might meet this need by providing privacy controls and guidance for their customers. Evolving consumer privacy expectations, the proliferation of “software is in everything,” and data scraping and correlation (e.g., social media) add additional expectations and complexities for PII protection.
[CP1.3: 94] CREATE POLICY.
The SSG guides the organization by creating or contributing to software security policies that satisfy internal, regulatory, and customer-driven security requirements. This policy is what is permitted and denied at the initiative level—if it’s not mandatory and enforced, it’s not policy. The policies include a unified approach for satisfying the (potentially lengthy) list of security drivers at the governance level so that project teams can avoid keeping up with the details involved in complying with all applicable regulations or other mandates. Likewise, project teams won’t need to relearn customer security requirements on their own. Architecture standards and coding guidelines aren’t examples of policy, but policy that prescribes and mandates their use for certain software categories falls under this umbrella. In many cases, policy statements are translated into automation to provide governance-as-code. Even if not enforced by humans, policy that’s been automated must still be mandatory. In some cases, policy will be documented exclusively as governance-as-code (see [SM3.4]), often as tool configuration, but it must still be readily readable, auditable, and editable by humans.
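To show one possible shape of policy that is both machine-consumable and human-readable, here is a small governance-as-code sketch; the policy fields, tags, and the TLS rule are illustrative assumptions, not BSIMM requirements:

    # Hypothetical governance-as-code fragment: the policy is plain data that
    # humans can read, audit, and edit, while pipelines evaluate it.
    POLICY = {
        "id": "POL-CRYPTO-001",
        "statement": "Internet-facing services must terminate TLS 1.2 or later.",
        "applies_to": ["internet-facing"],
        "mandatory": True,    # if it's not mandatory and enforced, it's not policy
    }

    def evaluate(policy: dict, service: dict) -> bool:
        if not set(policy["applies_to"]) & set(service["tags"]):
            return True                      # policy does not apply to this service
        return service["min_tls"] >= 1.2     # the codified rule itself

    service = {"name": "checkout", "tags": ["internet-facing"], "min_tls": 1.3}
    assert evaluate(POLICY, service)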
[CP2.1: 49] BUILD A PII INVENTORY.
The organization identifies and tracks the kinds of PII processed or stored by each of its systems, along with their associated data repositories. In general, simply noting which applications process PII isn’t enough—the type of PII (e.g., PHI, PFI, PI) and where it’s stored are necessary so that the inventory can be easily referenced in critical situations. This usually includes making a list of databases that would require customer notification if breached or a list to use in crisis simulations (see [CMVM3.3]). Build the PII inventory by starting with each individual application and noting its PII use or by starting with PII types and noting the applications that touch each one. System architectures have evolved such that PII will often flow into cloud-based service and endpoint device ecosystems, then come to rest there (e.g., content delivery networks, workflow systems, mobile devices, IoT devices), making it tricky to keep an accurate PII inventory.
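One simple way to make such an inventory queryable from both directions (by application and by PII type) is sketched below; the applications, PII categories, and data stores are hypothetical:

    # Hypothetical PII inventory: each entry maps an application to the PII
    # types it handles and the repositories where that data comes to rest.
    PII_INVENTORY = [
        {"application": "billing", "pii_types": ["PFI"], "stores": ["billing-db"]},
        {"application": "patient-portal", "pii_types": ["PHI", "PI"],
         "stores": ["ehr-db", "cdn-cache"]},
    ]

    def apps_handling(pii_type: str) -> list[str]:
        """Start from a PII type and find every application touching it."""
        return [e["application"] for e in PII_INVENTORY if pii_type in e["pii_types"]]

    def breach_notification_stores() -> set[str]:
        """Data stores that would require customer notification if breached."""
        return {s for e in PII_INVENTORY for s in e["stores"]}

    print(apps_handling("PHI"))                  # ['patient-portal']
    print(sorted(breach_notification_stores()))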
[CP2.2: 58] REQUIRE SECURITY SIGN-OFF FOR COMPLIANCE-RELATED RISK.
The organization has a formal compliance risk acceptance sign-off and accountability process that addresses all software development projects. In this process, the SSG acts as an advisor while the risk owner signs off on the software’s compliance state prior to release based on its adherence to documented criteria. The sign-off policy might also require the head of the business unit to, e.g., acknowledge compliance issues that haven’t been mitigated or compliance-related SSDL steps that have been skipped, but sign-off is required even when no compliance-related risk is present. Sign-off is explicit and captured for future reference, with any exceptions tracked, even in automated application lifecycle methodologies. Note that an application without security defects might still be noncompliant, so clean security testing results are not a substitute for a compliance sign-off. Even in DevOps organizations where engineers have the technical ability to release software, there is still a need for a deliberate risk acceptance step even if the compliance criteria are embedded in automation (see [SM3.4]). In cases where the risk owner signs off on a particular set of compliance acceptance criteria that are then implemented in automation to provide governance-as-code, there must be ongoing verification that the criteria remain accurate and the automation is actually working.
[CP2.3: 69] IMPLEMENT AND TRACK CONTROLS FOR COMPLIANCE.
The organization can demonstrate compliance with applicable requirements because its SSDL is aligned with the control statements that were developed by the SSG in collaboration with compliance stakeholders (see [CP1.1]). The SSG collaborates with stakeholders to track controls, navigate problem areas, and ensure that auditors and regulators are satisfied. The SSG can then remain in the background when the act of following the SSDL automatically generates the desired compliance evidence predictably and reliably. Increasingly, the DevOps approach embeds compliance controls in automation, such as in software-defined infrastructure and networks, rather than in human process and manual intervention. A firm doing this properly can explicitly associate satisfying its compliance concerns with following its SSDL.
[CP2.4: 60] INCLUDE SOFTWARE SECURITY SLAS IN ALL VENDOR CONTRACTS.
Software vendor contracts include an SLA to ensure that the vendor’s security efforts align with the organization’s security and compliance story. Each new or renewed contract contains provisions requiring the vendor to address software security and deliver a product or service compatible with the organization’s security policy. In some cases, open source licensing concerns initiate the vendor management process, which can open the door for additional software security language in the SLA (see [SR2.5]). Typical provisions set requirements for policy conformance, incident management, training, defect management, and response times for addressing software security issues. Traditional IT security requirements and a simple agreement to allow penetration testing or another defect discovery method aren’t sufficient here.
[CP3.3: 13] DRIVE FEEDBACK FROM SOFTWARE LIFECYCLE DATA BACK TO POLICY.
Feed information from the software lifecycle into the policy creation and maintenance process to drive improvements, such as in defect prevention and strengthening governance-as-code practices (see [SM3.4]). With this feedback as a routine process, blind spots can be eliminated by mapping them to trends in SSDL failures. Events such as the regular appearance of inadequate architecture analysis, recurring vulnerabilities, ignored security release conditions, or the wrong vendor choice for carrying out a penetration test can expose policy weakness (see [CP1.3]). As an example, lifecycle data including KPIs, OKRs, KRIs, SLIs, SLOs, or other organizational metrics can indicate where policies impose too much bureaucracy by introducing friction that prevents engineering from meeting the expected delivery cadence. Rapid technology evolution might also create policy gaps that must be addressed. Over time, policies become more practical and easier to carry out (see [SM1.1]). Ultimately, policies are refined with SSDL data to enhance and improve effectiveness.
[T1.8: 50] INCLUDE SECURITY RESOURCES IN ONBOARDING.
The process for bringing new hires into a software engineering organization requires timely completion of a training module about software security. While the generic new hire process usually covers topics like picking a good password and avoiding phishing, this orientation period is enhanced to cover topics such as how to create, deploy, and operate secure code, the SSDL, security standards (see [SR1.1]), and internal security resources (see [SR1.2]). The objective is to ensure that new hires contribute to the security culture as soon as possible. Although a generic onboarding module is useful, it doesn’t take the place of a timely and more complete introductory software security course.
[T2.5: 41] ENHANCE SECURITY CHAMPIONS THROUGH TRAINING AND EVENTS.
Strengthen the security champions network (see [SM2.3]) by inviting guest speakers or holding special events about advanced software security topics. This effort is about providing the champions with customized training (e.g., the latest software security techniques for DevOps or serverless technologies, or on the implications of new policies and standards) so that the network can fulfill its assigned responsibilities—it’s not about inviting champions members to routine brown bags or signing them up for standard computer-based training. Similarly, a standing conference call with voluntary attendance won’t get the desired results, which are as much about building camaraderie as they are about sharing knowledge and organizational efficiency. Regular events build community and facilitate collaboration and collective problem-solving. Face-to-face meetings are by far the most effective, even if they happen only once or twice a year and even if some participants must attend by videoconferencing. In teams with many geographically dispersed and work-from-home members, simply turning on cameras and ensuring that everyone gets a chance to speak makes a substantial difference.
[T2.9: 31] DELIVER ROLE-SPECIFIC ADVANCED CURRICULUM.
Software security training goes beyond building awareness (see [T1.1]) to enabling students to incorporate security practices into their work. This training is tailored to cover the tools, technology stacks, development methodologies, and issues that are most relevant to the students. An organization could offer tracks for its engineers, for example, supplying one each for architects, developers, operations, DevOps, site reliability engineers, and testers. Tool-specific training is also commonly needed in such a curriculum. While it might be more concise than engineering training, role-specific training is also necessary for many other stakeholders within an organization, including product management, executives, and others. In any case, the training must be taken by a broad enough audience to build the collective skillsets required.
[T2.10: 25] HOST SOFTWARE SECURITY EVENTS.
The organization hosts security events featuring external speakers and content in order to strengthen its security culture. Good examples of such events are Intel iSecCon and AWS re:Inforce, which invite all employees, feature external presenters, and focus on helping engineering create, deploy, and operate better code. Employees benefit from hearing outside perspectives, especially those related to fast-moving technology areas with software security ramifications, and the organization benefits from putting its security credentials on display (see [SM3.2]). Events open only to small, select groups, or simply putting recordings on an internal portal, won’t result in the desired culture change across the organization.
[T2.11: 29] REQUIRE AN ANNUAL REFRESHER.
Everyone involved in the SSDL is required to take an annual software security refresher course. This course keeps the staff up to date on the organization’s security approach and ensures that the organization doesn’t lose focus due to turnover, evolving methodologies, or changing deployment models. The SSG might give an update on the security landscape and explain changes to policies and standards. A refresher could also be rolled out as part of a firmwide security day or in concert with an internal security conference. While one refresher module can be used for multiple roles (see [T2.9]), coverage of new topics and changes to the previous year’s content should result in a significant amount of fresh content.
[T3.6: 9] IDENTIFY NEW SECURITY CHAMPIONS THROUGH OBSERVATION.
Future security champions are recruited by noting people who stand out during opportunities that show skill and enthusiasm, such as training courses, office hours, capture-the-flag exercises, hack-a-thons, etc., and then encouraging them to join the champions. Pay particular attention to practitioners who are contributing things such as code, security configurations, or defect discovery rules. The champions program often begins as an assigned collection of people scattered across the organization who show an above-average level of security interest or advanced knowledge of new technology stacks and development methodologies (see [SM2.3]). Identifying future members proactively is a step toward creating a social network that speeds the adoption of security into software development and operations. A group of enthusiastic and skilled volunteers will be easier to lead than a group that is drafted.
[AM1.2: 64] USE A DATA CLASSIFICATION SCHEME FOR SOFTWARE INVENTORY.
Security stakeholders in an organization agree on a data classification scheme and use it to inventory software, delivery artifacts (e.g., containers), and associated persistent data stores according to the kinds of data processed or services called, regardless of deployment model (e.g., on- or off-premises). Many classification schemes are possible—one approach is to focus on PII, for example. Depending on the scheme and the software involved, it could be easiest to first classify data repositories (see [CP2.1]), then derive classifications for applications according to the repositories they use. Other approaches include data classification according to protection of intellectual property, impact of disclosure, exposure to attack, relevance to GDPR, and geographic boundaries.
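A minimal sketch of the repository-first approach described above, assuming an invented four-level scheme and hypothetical systems; each application inherits the highest classification among the repositories it uses:

    # Classify data repositories first, then derive each application's
    # classification from the repositories it touches (all names invented).
    LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

    REPO_CLASS = {"orders-db": "confidential", "assets-cdn": "public",
                  "hr-db": "restricted"}

    APP_REPOS = {"storefront": ["orders-db", "assets-cdn"],
                 "people-portal": ["hr-db"]}

    def classify(app: str) -> str:
        return max((REPO_CLASS[r] for r in APP_REPOS[app]),
                   key=LEVELS.__getitem__)

    assert classify("storefront") == "confidential"
    assert classify("people-portal") == "restricted"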
[AM1.3: 49] IDENTIFY POTENTIAL ATTACKERS.
The SSG identifies potential attackers in order to understand and begin documenting their motivations and abilities. The outcome of this periodic exercise could be a set of attacker profiles that includes outlines for categories of attackers, and more detailed descriptions for noteworthy individuals, that are used in end-to-end design review (see [AA1.2]). In some cases, a third-party vendor might be contracted to provide this information. Specific and contextual attacker information is almost always more useful than generic information copied from someone else’s list. Moreover, a list that simply divides the world into insiders and outsiders won’t drive useful results. Identification of attackers should also consider the organization’s evolving software supply chain, attack surface, theoretical internal attackers, and contract staff.
[AM2.1: 18] BUILD ATTACK PATTERNS AND ABUSE CASES TIED TO POTENTIAL ATTACKERS.
The SSG works with stakeholders to build attack patterns and abuse cases tied to potential attackers (see [AM1.3]). Attack patterns frequently contain details of the targeted asset, attackers, goals, and the techniques used. These resources can be built from scratch or from standard sets, such as the MITRE ATT&CK framework, with the SSG adding to the pile based on its own attack stories to prepare the organization for SSDL activities such as design review and penetration testing. For example, a story about an attack against a poorly designed cloud-native application could lead to a containerization attack pattern that drives a new type of testing (see [ST3.5]). If a firm tracks the fraud and monetary costs associated with specific attacks, this information can in turn be used to prioritize the process of building attack patterns and abuse cases. Organizations will likely need to evolve both their attack pattern and abuse case creation prioritization and their content over time due to changing software architectures (e.g., zero trust, cloud native, serverless), attackers, and technologies.
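As one illustrative data shape (not a format BSIMM mandates), an attack pattern record might tie an attacker profile to a target, the techniques used, and the SSDL activities it should drive; everything below is invented except the ATT&CK technique ID:

    # Hypothetical attack pattern record linking an attacker profile (AM1.3)
    # to the SSDL activities the pattern should inform.
    from dataclasses import dataclass, field

    @dataclass
    class AttackPattern:
        name: str
        attacker_profile: str           # from the attacker profiles in AM1.3
        target_asset: str
        techniques: list[str]           # e.g., MITRE ATT&CK technique IDs
        drives: list[str] = field(default_factory=list)    # SSDL activities

    pattern = AttackPattern(
        name="Container escape via exposed orchestration API",
        attacker_profile="external-organized-crime",
        target_asset="cloud-native order service",
        techniques=["T1190"],           # Exploit Public-Facing Application
        drives=["design review", "container security testing (ST3.5)"],
    )
    print(pattern.name, "->", ", ".join(pattern.drives))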
[AM2.6: 12] COLLECT AND PUBLISH ATTACK STORIES.
To maximize the benefit from lessons that don’t always come cheap, the SSG collects and publishes stories about attacks against the organization’s software. Both successful and unsuccessful attacks can be noteworthy, and discussing historical information about software attacks has the added effect of grounding software security in a firm’s reality. This is particularly useful in training classes (see [T2.8]) to help counter a generic approach that might be overly focused on other organizations’ most common bug lists or outdated platform attacks. Hiding or overly sanitizing information about attacks from people building new systems fails to garner any positive benefits from a negative event.
[AM2.7: 16] BUILD AN INTERNAL FORUM TO DISCUSS ATTACKS.
The organization has an internal, interactive forum where the SSG, champions, incident response, and others discuss attacks and attack methods. The discussion serves to communicate the attacker perspective to everyone, so it’s useful to include all successful attacks here, regardless of attack source, such as supply chain, internal, consultants, or bug bounty contributors. The SSG augments the forum with an internal communication channel (see [T2.12]) that encourages subscribers to discuss the latest information on publicly known incidents. Dissections of attacks and exploits that are relevant to a firm are particularly helpful when they spur discussion of software, infrastructure, and other mitigations. Simply republishing items from public mailing lists doesn’t achieve the same benefits as active and ongoing discussions, nor does a closed discussion hidden from those creating code and configurations. Everyone should feel free to ask questions and learn about vulnerabilities and exploits.
[AM2.8: 24] HAVE A RESEARCH GROUP THAT DEVELOPS NEW ATTACK METHODS.
A research group works to identify and mitigate the impact of new classes of attacks and shares their knowledge with stakeholders. Identification does not always require original research—the group might expand on an idea discovered by others. Doing this research in-house is especially important for early adopters of new technologies and configurations so that they can discover potential weaknesses before attackers do. One approach is to create new attack methods that simulate persistent attackers during goal-oriented red team exercises (see [PT3.1]). This isn’t a penetration testing team finding new instances of known types of weaknesses, it’s a research group that innovates attack methods and mitigation approaches. Example mitigation approaches include test cases, static analysis rules, attack patterns, standards, and policy changes. Some firms provide researchers time to follow through on their discoveries by using bug bounty programs or other means of coordinated disclosure (see [CMVM2.4]). Others allow researchers to publish their findings at conferences like DEF CON to benefit everyone.
[AM3.2: 8] CREATE AND USE AUTOMATION TO MIMIC ATTACKERS.
The SSG arms engineers, testers, and incident response with automation to mimic what attackers are going to do. For example, a new attack method identified by an internal research group (see [AM2.8]) or a disclosing third party could require a new tool, so the SSG, perhaps through the security champions, could package the tool and distribute it to testers. The idea here is to push attack capability past what typical commercial tools and offerings encompass, then make that knowledge and technology easy for others to use. Mimicking attackers, especially attack chains, almost always requires tailoring tools to a firm’s particular technology stacks, infrastructure, and configurations. When technology stacks and coding languages evolve faster than vendors can innovate, creating tools and automation in-house might be the best way forward. In the DevOps world, these tools might be created by engineering and embedded directly into toolchains and automation (see [ST3.6]).
[SFD2.1: 42] LEVERAGE SECURE-BY-DESIGN COMPONENTS AND SERVICES.
Build or provide approved secure-by-design software components and services for use by engineering teams. Prior to approving and publishing secure-by-design software components and services, including open source and cloud services, the SSG must carefully assess them for security. This assessment process to declare a component secure-by-design is usually more rigorous and in-depth than that for typical projects. In addition to teaching by example, these resilient and reusable building blocks aid important efforts such as architecture analysis and code review by making it easier to avoid mistakes. These components and services also often have features (e.g., application identity, RBAC) that enable uniform usage across disparate environments. Similarly, the SSG might further take advantage of this defined list by tailoring static analysis rules specifically for the components it offers (see [CR2.6]).
[SFD2.2: 68] CREATE CAPABILITY TO SOLVE DIFFICULT DESIGN PROBLEMS.
Contribute to building resilient architectures by solving design problems unaddressed by organizational security components or services, or by cloud service providers, thus minimizing the negative impact that security has on other constraints, such as feature velocity. Involving the SSG and secure design experts in application refactoring or in the design of a new protocol, microservice, or architecture feature (e.g., containerization) enables timely analysis of the security implications of existing defenses and identifies elements to be improved. Designing for security early in the new project process is more efficient than analyzing an existing design for security and then refactoring when flaws are uncovered (see [AA1.1], [AA1.2], [AA2.1]). The SSG could also get involved in what would have historically been purely engineering discussions, as even rudimentary use of cloud-native technologies (e.g., “Hello, world!”) requires proper use of configurations and other capabilities that have direct implications on security posture.
[SFD3.1: 18] FORM A REVIEW BOARD TO APPROVE AND MAINTAIN SECURE DESIGN PATTERNS.
A review board formalizes the process of reaching and maintaining consensus on security tradeoffs in design needs. Unlike a typical architecture committee focused on functions, this group focuses on providing security guidance, preferably in the form of patterns, standards, features, or frameworks. It also periodically reviews already published design guidance (especially around authentication, authorization, and cryptography) to ensure that design decisions don’t become stale or out of date. This review board helps control the chaos associated with adoption of new technologies when development groups might otherwise make decisions on their own without engaging the SSG or champions. Review board security guidance can also serve to inform outsourced software providers about security expectations (see [CP3.2]).
[SFD3.2: 21] REQUIRE USE OF APPROVED SECURITY FEATURES AND FRAMEWORKS.
Implementers must take their security features and frameworks from an approved list or repository (see [SFD1.1], [SFD2.1], [SFD3.1]). There are two benefits to this activity—developers don’t spend time reinventing existing capabilities, and review teams don’t have to contend with finding the same old defects in new projects or when new platforms are adopted. Reusing proven components eases testing, code review, and threat modeling (see [AA1.1]). Reuse is a major advantage of consistent software architecture and is particularly helpful for Agile development and velocity maintenance in CI/CD pipelines. Packaging and applying required components, such as via containerization (see [SE2.5]), makes it especially easy to reuse approved features and frameworks.
[SFD3.3: 12] FIND AND PUBLISH SECURE DESIGN PATTERNS FROM THE ORGANIZATION.
Foster centralized design reuse by collecting secure design patterns (sometimes referred to as security blueprints) from across the organization and publishing them for everyone to use. A section of the SSG website (see [SR1.2]) could promote positive elements identified during threat modeling or architecture analysis so that good ideas spread widely. This process is formalized—an ad hoc, accidental noticing isn’t sufficient. Common design patterns accelerate development, so it’s important to use secure design patterns, and not just for applications but for all software assets (e.g., microservices, APIs, containers, infrastructure, and automation).
[SR1.1: 84] CREATE SECURITY STANDARDS.
The organization meets the demand for security guidance by creating standards that explain the required way to adhere to policy and carry out security-centric design, development, and operations. A standard might mandate how to perform identity-based application authentication or how to implement transport-level security, perhaps with the SSG ensuring the availability of a reference implementation. Standards often apply to software beyond the scope of an application’s code, including container construction, orchestration, infrastructure-as-code, and cloud security configuration. Standards can be deployed in a variety of ways to keep them actionable and relevant. For example, they can be automated into development environments (such as an IDE or toolchain) or explicitly linked to code examples and deployment artifacts (e.g., containers). In any case, to be considered standards, they must be adopted and enforced. Standards for technology stacks [SR3.4] and standards for incorporating new technologies [SR3.5] can be expected to aid in the creation of these standards but are not required.
[SR1.2: 96] CREATE A SECURITY PORTAL.
The organization has a well-known central location for information about software security. Typically, this is an internal website maintained by the SSG and security champions that people refer to for current information on security policies, standards, and requirements, as well as for other resources (such as training). An interactive portal is better than a static portal with guideline documents that rarely change. Organizations often supplement these materials with mailing lists, chat channels (see [T2.12]), and face-to-face meetings. Development teams are increasingly putting software security knowledge directly into toolchains and automation that are outside the organization (e.g., GitHub), but that does not remove the need for SSG-led knowledge management.
[SR1.3: 86] TRANSLATE COMPLIANCE CONSTRAINTS TO REQUIREMENTS.
Compliance constraints are translated into security requirements for individual projects and communicated to the engineering teams. This is a linchpin in the organization’s compliance strategy—by representing compliance constraints explicitly with requirements and informing stakeholders, the organization demonstrates that compliance is a manageable task. For example, if the organization builds software that processes credit card transactions, PCI DSS compliance plays a role during the security requirements phase. In other cases, technology standards built for international interoperability can include security guidance on compliance needs. Representing these standards as requirements also helps with traceability and visibility in the event of an audit. It’s particularly useful to codify the requirements into reusable code (see [SFD2.1]) or artifact deployment specifications (see [SE1.4]).
[SR1.5: 96] IDENTIFY OPEN SOURCE.
Identify open source components and dependencies included in the organization’s code repositories and built software, then review them to understand their security posture. Organizations use a variety of tools and metadata provided by delivery pipelines to discover old versions of open source components with known vulnerabilities, as well as cases where their software relies on multiple versions of the same component. Scale efforts by using automated tools to find open source, whether whole components or perhaps large chunks of borrowed code. Some software development pipeline platforms, container registries, and middleware platforms have begun to provide this visibility as metadata (e.g., SBOMs [SE3.6]) resulting from behind-the-scenes artifact scanning. Some organizations combine composition analysis results from multiple phases of the software lifecycle to get a more complete and accurate list of the open source being included in production software.
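A minimal sketch of one such check, assuming a simplified CycloneDX-style component list with illustrative names and versions; it flags any component that appears in one build at more than one version:

    # Walk a (simplified) SBOM-style component list and flag components that
    # appear in more than one version within the same build.
    from collections import defaultdict

    sbom = {"components": [
        {"name": "log4j-core", "version": "2.17.1"},
        {"name": "jackson-databind", "version": "2.13.0"},
        {"name": "jackson-databind", "version": "2.9.8"},
    ]}

    versions = defaultdict(set)
    for comp in sbom["components"]:
        versions[comp["name"]].add(comp["version"])

    for name, vers in sorted(versions.items()):
        if len(vers) > 1:
            print(f"{name}: multiple versions in one build -> {sorted(vers)}")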
[SR2.5: 62] CREATE SLA BOILERPLATE.
The SSG works with the legal department to create standard SLA boilerplate for use in contracts with vendors and outsource providers, including cloud providers, to require software security efforts on their part. The legal department might also leverage the boilerplate to help prevent compliance and privacy problems. Under the agreement, vendors and outsource providers must meet company-mandated software security SLAs (see [CP2.4]). Boilerplate language might call for objective third-party insight into software security efforts, such as SSDF gap analysis (https://csrc.nist.gov/Projects/ssdf), BSIMMsc measurements, or BSIMM scores.
[SR2.7: 55] CONTROL OPEN SOURCE RISK.
The organization has control over its exposure to the risks that come along with using open source components and all the involved dependencies, including dependencies integrated at runtime. Controlling exposure usually includes multiple efforts, with one example being responding to known vulnerabilities in identified open source (see [SR1.5]). The use of open source could also be restricted to predefined projects or to a short list of versions that have been through an approved security screening process, have had unacceptable vulnerabilities remediated, and are made available only through approved internal repositories and containers. For some use cases, policy might preclude any use of open source. The legal department often spearheads additional open source controls due to license compliance objectives and the viral license problem associated with GPL code. SSGs that partner with and educate the legal department can help move an organization to improve its open source risk management practices, which must be applied across the software portfolio to be effective.
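One way such a restriction might be checked in practice is sketched below; the approved short list and the dependency versions are invented for the example:

    # Hypothetical control: every dependency must match an approved short list
    # of screened versions served only from internal repositories.
    APPROVED = {"requests": {"2.31.0", "2.32.3"}, "lxml": {"5.2.2"}}

    def check(dependencies: dict[str, str]) -> list[str]:
        violations = []
        for name, version in dependencies.items():
            allowed = APPROVED.get(name)
            if allowed is None:
                violations.append(f"{name}: not an approved component")
            elif version not in allowed:
                violations.append(f"{name}=={version}: version not screened")
        return violations

    for v in check({"requests": "2.19.0", "leftpad": "1.0.0"}):
        print("[open source policy]", v)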
[SR3.2: 19] COMMUNICATE STANDARDS TO VENDORS.
Work with vendors to educate them and promote the organization’s security standards. A healthy relationship with a vendor often starts with contract language (see [CP2.4]), but the SSG should engage with vendors, discuss vendor security practices, and explain in simple terms (rather than legalese) what the organization expects. Any time a vendor adopts the organization’s security standards, it’s a clear sign of progress. Note that standards implemented as security features or infrastructure configuration could be a requirement for service integration with a vendor (see [SFD1.1], [SE1.4]). When the firm’s SSDL is publicly available, communication regarding software security expectations is easier. Likewise, sharing internal practices and measures can make expectations clear.
[SR3.3: 17] USE SECURE CODING STANDARDS.
Developers use secure coding standards to avoid the most obvious bugs and as ground rules for code review. These standards are necessarily specific to a programming language, and they can address the use of popular frameworks, APIs, libraries, and infrastructure automation. Secure coding standards can also be for low- or no-code platforms (e.g., Microsoft Power Apps, Salesforce Lightning). While enforcement isn’t the point at this stage (see [CR3.5]), violation of standards is a teachable moment for all stakeholders. Other useful coding standards topics include proper use of cloud APIs, use of approved cryptography, memory sanitization, banned functions, open source use, and many others. If the organization already has coding standards for other purposes (e.g., style), its secure coding standards should build upon them. A clear set of secure coding standards is a good way to guide both manual and automated code review, as well as to provide relevant examples for security training. Some groups might choose to integrate their secure coding standards directly into automation. Socializing the benefits of following standards is also a good first step to gaining widespread acceptance (see [SM2.7]).
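Because enforcement isn’t the point at this stage, a reviewer aid might simply flag banned constructs and point the author at the approved alternative. The banned list and the guidance strings below are assumptions made for illustration:

    # Teachable-moment scan: flag constructs the secure coding standard bans
    # and suggest the approved pattern (list and advice are hypothetical).
    import re

    BANNED = {
        r"\bstrcpy\s*\(": "use a bounded copy per the C coding standard",
        r"\bMD5\b": "use an approved hash from the cryptography standard",
    }

    def review(code: str) -> list[str]:
        notes = []
        for lineno, line in enumerate(code.splitlines(), start=1):
            for pattern, advice in BANNED.items():
                if re.search(pattern, line):
                    notes.append(f"line {lineno}: banned construct; {advice}")
        return notes

    sample = "strcpy(dst, src); /* legacy copy */\n"
    for note in review(sample):
        print("[teachable moment]", note)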
[SR3.4: 20] CREATE STANDARDS FOR TECHNOLOGY STACKS.
The organization standardizes on the use of specific technology stacks, which translates into a reduced workload because teams don’t have to explore new technology risks for every new project. The organization might create a secure base configuration (commonly in the form of golden images, Terraform definitions, etc.) for each technology stack, further reducing the amount of work required to use the stack safely. In cloud environments, hardened configurations likely include up-to-date security patches, configurations, and services, such as logging and monitoring. In traditional on-premises IT deployments, a stack might include an operating system, a database, an application server, and a runtime environment (e.g., a MEAN stack). Standards for secure use of reusable technologies, such as containers, microservices, or orchestration code, mean that getting security right in one place positively impacts the security posture of all downstream efforts (see [SE2.5]).
[SR3.5: 0] CREATE STANDARDS CONTROLLING AND GUIDING THE ADOPTION OF NEW TECHNOLOGIES.
The SSG is involved in efforts to provide internal practices for technologies so new that industry best practices have not yet been codified. Involving the SSG in exploration efforts to understand and plan for new technology minimizes the negative impacts that insecure implementations will have by proactively accounting for potential security pitfalls. The SSG’s involvement can result in updates to policies and standards [SR1.1], new security requirements for technology stacks [SR3.4], secure-by-design components and services [SFD2.1, SFD3.2], or coding guidelines [SR3.3]. The SSG must be involved in proactive efforts surrounding the adoption of new technologies rather than merely retroactively securing existing integrations [SFD2.2] or updating policy and standards in response to changing regulations [CP1.1] or emerging threat intelligence [AM1.5].
This effort helps control the chaos associated with adoption of new technologies (such as the rise of AI and LLMs) when development groups might otherwise make decisions on their own without engaging the SSG or champions. It is all about ensuring that security is considered from the beginning instead of having to be bolted on after the fact.
[AA1.1: 99] PERFORM SECURITY FEATURE REVIEW.
Security-aware reviewers identify application security features, review these features against application security requirements and runtime parameters, and determine if each feature can adequately perform its intended function—usually collectively referred to as threat modeling. The goal is to quickly identify missing security features and requirements, or bad deployment configuration (authentication, access control, use of cryptography, etc.), and address them. For example, threat modeling would identify both a system that was subject to escalation of privilege attacks because of broken access control and a mobile application that incorrectly puts PII in local storage. Use of the firm’s secure-by-design components often streamlines this process (see [SFD2.1]). Many modern applications are no longer simply “3-tier” but instead involve components architected to interact across a variety of tiers—browser/endpoint, embedded, web, microservices, orchestration engines, deployment pipelines, third-party SaaS, etc. Some of these environments might provide robust security feature sets, whereas others might have key capability gaps that require careful analysis, so organizations should consider the applicability and correct use of security features across all tiers that constitute the architecture and operational environment.
[AA1.2: 56] PERFORM DESIGN REVIEW FOR HIGH-RISK APPLICATIONS.
Perform a design review to determine whether the security features and deployment configuration are resistant to attack in an attempt to break the design. The goal is to extend the more formulaic approach of a security feature review (see [AA1.1]) to model application behavior in the context of real-world attackers and attacks. Reviewers must have some experience beyond simple threat modeling to include performing detailed design reviews and breaking the design under consideration. Rather than security feature guidance, a design review should produce a set of flaws and a plan to mitigate them. An organization can use consultants to do this work, but it should participate actively. A review focused only on whether a software project has performed the right process steps won’t generate useful results about flaws. Note that a sufficiently robust design review process can’t be executed at CI/CD speed, so organizations should focus on a few high-risk applications to start (see [AA1.4]).
[AA2.1: 37] PERFORM ARCHITECTURE ANALYSIS USING A DEFINED PROCESS.
Define and use a process for AA that extends the design review (see [AA1.2]) to also document business risk in addition to technical flaws. The goal is to identify application design flaws as well as the associated risk (e.g., impact of exploitation), such as through frequency or probability analysis, to more completely inform stakeholder risk management efforts. The AA process includes a standardized approach for thinking about attacks, vulnerabilities, and various security properties. The process is defined well enough that people outside the SSG can carry it out. It’s important to document both the architecture under review and any security flaws uncovered, as well as risk information that people can understand and use. Microsoft Threat Modeling, Versprite PASTA, and Black Duck ARA are examples of such a process, although these will likely need to be tailored to a given environment. In some cases, performing AA and documenting business risk is done by different teams working together in a single process. Uncalibrated or ad hoc AA approaches don’t count as a defined process.
[AA2.4: 40] HAVE SSG LEAD DESIGN REVIEW EFFORTS.
The SSG takes a lead role in performing design review (see [AA1.2]) to uncover flaws. Breaking down an architecture is enough of an art that the SSG, or other reviewers outside the application team, must be proficient, and proficiency requires practice. This practice might then enable, e.g., champions to take the day-to-day lead while the SSG maintains leadership around knowledge and process. The SSG can’t be successful on its own either—it will likely need help from architects or implementers to understand the design. With a clear design in hand, the SSG might be able to carry out a detailed review with a minimum of interaction with the project team. Approaches to design review evolve over time, so don’t expect to set a process and use it forever. Outsourcing design review might be necessary, but it’s also an opportunity to participate and learn.
[AA3.1: 20] HAVE ENGINEERING TEAMS LEAD AA PROCESS.
Engineering teams lead AA to uncover technical flaws and document business risk. This effort requires a well-understood and well-documented process (see [AA2.1]). But even with a good process, consistency is difficult to attain because breaking architecture requires experience, so provide architects with SSG or outside expertise in an advisory capacity. Engineering teams performing AA might normally have responsibilities such as development, DevOps, cloud security, operations security, security architecture, or a variety of similar roles. The process is more useful if the AA team is different from the design team.
[AA3.2: 8] DRIVE ANALYSIS RESULTS INTO STANDARD DESIGN PATTERNS.
Failures identified during threat modeling, design review, or AA are fed back to security and engineering teams so that similar mistakes can be prevented in the future through improved design patterns, whether local to a team or formally approved for everyone (see [SFD3.1]). This typically requires a root-cause analysis process that determines the origin of security flaws, searches for what should have prevented the flaw, and makes the necessary improvements in documented security design patterns. Note that security design patterns can interact in surprising ways that break security, so apply analysis processes even when vetted design patterns are in standard use. For cloud services, providers have learned a lot about how their platforms and services fail to resist attack and have codified this experience into patterns for secure use. Organizations that heavily rely on these services might base their application-layer patterns on those building blocks provided by the cloud service provider (for example, AWS CloudFormation and Azure Blueprints) when making their own.
[AA3.3: 18] MAKE THE SSG AVAILABLE AS AN AA RESOURCE OR MENTOR.
To build organizational AA capability, the SSG advertises experts as resources or mentors for teams using the AA process (see [AA2.1]). This effort might enable, e.g., security champions, site reliability engineers, DevSecOps engineers, and others to take the lead while the SSG offers advice. As one example, mentors help tailor AA process inputs (such as design or attack patterns) to make them more actionable for specific technology stacks. This reusable guidance helps protect the team’s time so they can focus on the problems that require creative solutions rather than enumerating known bad habits. While the SSG might answer AA questions during office hours (see [T2.12]), they will often assign a mentor to work with a team, perhaps comprising both security-aware engineers and risk analysts, for the duration of the analysis. In the case of high-risk software, the SSG should play a more active mentorship role in applying the AA process.
[CR1.2: 80] PERFORM OPPORTUNISTIC CODE REVIEW.
Perform code review for high-risk applications in an opportunistic fashion. For example, organizations can follow up a design review with a code review looking for security issues in source code and dependencies and perhaps also in deployment artifact configuration (e.g., containers) and automation metadata (e.g., infrastructure-as-code). This informal targeting often evolves into a systematic approach (see [CR1.4]). Manual code review could be augmented with the use of specific tools and services, but it has to be part of a proactive process. When new technologies pop up, new approaches to code review might become necessary.
[CR1.5: 75] MAKE CODE REVIEW MANDATORY FOR ALL PROJECTS.
A security-focused code review is mandatory for all software projects, with a lack of code review or unacceptable results stopping a release, slowing it down, or causing it to be recalled. While all projects must undergo code review, the process might be different for different kinds of projects. The review for low-risk projects might rely more heavily on automation (see [CR1.4]), for example, whereas high-risk projects might have no upper bound on the amount of time spent by reviewers. Having a minimum acceptable standard forces projects that don’t pass to be fixed and reevaluated. A code review tool with nearly all the rules turned off (so it can run at CI/CD automation speeds, for example) won’t provide sufficient defect coverage. Similarly, peer code review or tools focused on quality and style won’t provide useful security results.
[CR2.6: 24] USE CUSTOM RULES WITH AUTOMATED CODE REVIEW TOOLS.
Create and use custom rules in code review tools to help uncover security defects specific to the organization’s coding standards or to the framework-based or cloud-provided middleware the organization uses. The same group that provides tool mentoring (see [CR1.7]) will likely spearhead this customization. Custom rules are often explicitly tied to proper usage of technology stacks in a positive sense and avoidance of errors commonly encountered in a firm’s codebase in a negative sense. Custom rules are also an easy way to check for adherence to coding standards (see [CR3.5]). To reduce the workload for everyone, many organizations also create rules to remove repeated false positives and to turn off checks that aren’t relevant.
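As a small illustration of an organization-specific rule (the rule ID and the policy itself are invented), Python’s standard ast module can flag a pattern such as disabling TLS certificate verification:

    # Custom-rule sketch: use Python's ast module to flag any call that passes
    # verify=False, a pattern a firm's coding standard might ban outright.
    import ast

    SOURCE = 'requests.get("https://internal.example", verify=False)\n'

    class VerifyDisabled(ast.NodeVisitor):
        def __init__(self):
            self.findings = []

        def visit_Call(self, node):
            for kw in node.keywords:
                if (kw.arg == "verify" and isinstance(kw.value, ast.Constant)
                        and kw.value.value is False):
                    self.findings.append(node.lineno)
            self.generic_visit(node)

    checker = VerifyDisabled()
    checker.visit(ast.parse(SOURCE))
    for line in checker.findings:
        print(f"line {line}: TLS verification disabled (custom rule NET-003)")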
[CR2.7: 19] USE A TOP N BUGS LIST (REAL DATA PREFERRED).
Maintain a living list of the most important kinds of bugs the organization wants to eliminate from its code and use it to drive change. Many organizations start with a generic list pulled from public sources, but broad-based lists such as the OWASP Top 10 rarely reflect an organization’s bug priorities. Build a valuable list by using real data gathered from code review (see [CR2.8]), testing (see [PT1.2]), software composition analysis (see [SE3.8]), and actual incidents (see [CMVM1.1]), then prioritize it for prevention efforts. Simply sorting the day’s bug data by number of occurrences won’t produce a satisfactory list because the data changes so often. To increase interest, the SSG can periodically publish a “most wanted” report after updating the list. One potential pitfall with a top N list is that it tends to include only known problems. Of course, just building the list won’t accomplish anything—everyone has to use it to find and fix bugs.
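A sketch of the aggregation idea, weighting each bug class by severity and recency across several defect sources instead of sorting one day’s counts; the findings, weights, and decay rule are hypothetical:

    # Build a data-driven top N: score each bug class (here keyed by CWE)
    # by severity, discounting older findings (all numbers are assumptions).
    from collections import Counter

    findings = [   # normalized from code review, testing, and incident data
        {"cwe": "CWE-79", "severity": 3, "age_days": 12},
        {"cwe": "CWE-89", "severity": 5, "age_days": 200},
        {"cwe": "CWE-79", "severity": 3, "age_days": 45},
        {"cwe": "CWE-287", "severity": 4, "age_days": 30},
    ]

    scores = Counter()
    for f in findings:
        recency = 1.0 if f["age_days"] <= 90 else 0.5   # assumed decay
        scores[f["cwe"]] += f["severity"] * recency

    TOP_N = 3
    for cwe, score in scores.most_common(TOP_N):
        print(f"{cwe}: priority score {score:.1f}")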
[CR2.8: 27] USE CENTRALIZED DEFECT REPORTING TO CLOSE THE KNOWLEDGE LOOP.
The defects found during code review are tracked in a centralized repository that makes it possible to do both summary and trend reporting for the organization. Reported defects drive engineering improvements such as enhancing processes, updating standards, adopting reusable frameworks, etc. For example, code review information is usually incorporated into a CISO-level dashboard that can include feeds from other security testing efforts (e.g., penetration testing, composition analysis, threat modeling). Given the historical code review data, the SSG can also use the reports to demonstrate progress (see [SM3.3]) or drive the training curriculum. Individual bugs make excellent training examples (see [T2.8]). Some organizations have moved toward analyzing this data and using the results to drive automation (see [ST3.6]). -
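Once defects land in one repository, summary and trend reporting becomes a simple roll-up. A minimal sketch, using fabricated export records, groups defects by month and status, the kind of aggregation that might feed a CISO-level dashboard:

    # trends.py -- illustrative trend roll-up from a central defect store (data fabricated)
    from collections import defaultdict

    defects = [  # (month found, source, status) -- hypothetical export
        ("2024-01", "code review", "fixed"),
        ("2024-01", "pen test", "open"),
        ("2024-02", "code review", "open"),
        ("2024-02", "code review", "fixed"),
    ]

    by_month = defaultdict(lambda: {"open": 0, "fixed": 0})
    for month, source, status in defects:
        by_month[month][status] += 1

    for month in sorted(by_month):
        counts = by_month[month]
        print(f"{month}: {counts['open']} open / {counts['fixed']} fixed")

 -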
[CR3.3: 6] CREATE CAPABILITY TO ERADICATE BUGS.
When a security bug is found during code review (see [CR1.2], [CR1.4]), the organization searches for and then fixes all occurrences of the bug, not just the instance originally discovered. Searching with custom rules (see [CR2.6]) makes it possible to eradicate the specific bug entirely without waiting for every project to reach the code review portion of its lifecycle. This doesn’t mean finding every instance of every kind of cross-site scripting bug when a specific example is found—it means going after that specific example everywhere. A firm with only a handful of software applications built on a single technology stack will have an easier time with this activity than firms with many large applications built on a diverse set of technology stacks. A new development framework or library, rules in RASP or a next-generation firewall, or cloud configuration tools that provide guardrails can often help in (but not replace) eradication efforts. -
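Assuming a custom rule like the one sketched under [CR2.6], a repository-wide eradication sweep can be as small as the following illustrative script (the checkout location is a placeholder):

    # eradicate.py -- walk every checkout and apply the custom rule (illustrative)
    import os

    from custom_rule import check_file  # the hypothetical checker sketched under CR2.6

    total = 0
    for root, _dirs, files in os.walk("/srv/checkouts"):  # assumed location of all repos
        for name in files:
            if name.endswith(".py"):
                total += check_file(os.path.join(root, name))
    print(f"{total} occurrences left to eradicate")

 -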
[CR3.5: 6] ENFORCE SECURE CODING STANDARDS.
A violation of secure coding standards is sufficient grounds for rejecting a piece of code. This rejection can take one or more forms, such as denying a pull request, breaking a build, failing quality assurance, removing from production, or moving the code into a different development workstream where repairs or exceptions can be worked out. The enforced portions of an organization’s secure coding standards (see [SR3.3]) often start out as a simple list of banned functions or required frameworks. Code review against standards must be objective—it shouldn’t become a debate about whether the noncompliant code is exploitable. In some cases, coding standards are specific to language constructs and enforced with tools (e.g., codified into SAST rules). In other cases, published coding standards are specific to technology stacks and enforced during the code review process or by using automation. Standards can be positive (“do it this way”) or negative (“do not use this API”), but they must be enforced. -
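As an illustration of enforcement with teeth, the sketch below, which assumes the hypothetical custom_rule checker from the [CR2.6] example and a git-based workflow, rejects a change set that violates the banned-function list by exiting nonzero, which a CI system can translate into a denied pull request or broken build:

    # gate.py -- illustrative pre-merge gate that fails the build on standards violations
    import subprocess
    import sys

    from custom_rule import check_file  # the hypothetical checker sketched under CR2.6

    # Files changed relative to the main branch (assumes a git checkout).
    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    violations = sum(check_file(f) for f in changed if f.endswith(".py"))
    if violations:
        print(f"BLOCKED: {violations} secure coding standard violation(s)")
        sys.exit(1)  # nonzero exit denies the pull request / breaks the build

 -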
[ST2.5: 34] INCLUDE SECURITY TESTS IN QA AUTOMATION.
Security tests are included in an automation framework and run alongside functional, performance, and other QA test suites. Executing this automation framework can be triggered manually or through additional automation (e.g., as part of pipeline tooling). When test creators who understand the software create security tests, they can uncover more specialized or more relevant defects than commercial tools might (see [ST1.4]). Security tests might be derived from typical failures of security features (see [SFD1.1]), from creative tweaks of functional and developer tests, or even from guidance provided by penetration testers on how to reproduce an issue. Tests that are performed manually or out-of-band likely will not provide timely feedback. -
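As a small illustration of such a test living alongside the QA suite, the sketch below bundles a hypothetical redirect validator with pytest-style cases that probe its abuse paths, the kind of specialized negative testing a generic commercial tool is unlikely to generate:

    # test_redirects.py -- illustrative security tests run by the normal QA suite
    from urllib.parse import urlparse

    def is_safe_redirect(url: str, allowed_host: str = "example.com") -> bool:
        """Hypothetical helper: allow only same-host, http(s) redirect targets."""
        parts = urlparse(url)
        if parts.scheme not in ("", "http", "https"):
            return False
        return parts.netloc in ("", allowed_host)

    def test_rejects_external_host():
        assert not is_safe_redirect("https://evil.example.net/phish")

    def test_rejects_scheme_tricks():
        assert not is_safe_redirect("javascript:alert(1)")

    def test_rejects_protocol_relative_url():
        assert not is_safe_redirect("//evil.example.net/phish")

    def test_allows_local_path():
        assert is_safe_redirect("/account/home")

 -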
[ST3.3: 16] DRIVE TESTS WITH DESIGN REVIEW RESULTS.
Use design review or architecture analysis results to direct QA test creation. For example, if the results of attempting to break a design determine that “the security of the system hinges on the transactions being atomic and not being interrupted partway through,” then torn transactions will become a primary target in adversarial testing. Adversarial tests like these can be developed according to a risk profile, with high-risk flaws at the top of the list. Security defect data shared with QA (see [ST2.4]) can help focus test creation on areas of potential vulnerability that can, in turn, help prove the existence of identified high-risk flaws. -
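For the torn-transaction example specifically, an adversarial test can inject a failure partway through a transaction and assert that no partial state survives. A self-contained sketch using Python’s sqlite3 module (the schema and transfer function are illustrative):

    # test_torn_transaction.py -- adversarial test targeting transaction atomicity
    import sqlite3
    import pytest

    def transfer(conn, src, dst, amount, fail_midway=False):
        """Move funds inside one transaction; fail_midway simulates an interruption."""
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            if fail_midway:
                raise RuntimeError("interrupted between debit and credit")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))

    def test_interrupted_transfer_leaves_no_torn_state():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
        conn.commit()
        with pytest.raises(RuntimeError):
            transfer(conn, 1, 2, 40, fail_midway=True)
        balances = dict(conn.execute("SELECT id, balance FROM accounts"))
        assert balances == {1: 100, 2: 0}  # the debit must have been rolled back

 -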
[ST3.4: 5] LEVERAGE CODE COVERAGE ANALYSIS.
Testers measure the code coverage of their application security testing to identify code that isn’t being exercised and then adjust test cases to incrementally improve coverage. AST can include automated testing (see [ST2.5], [ST2.6]) and manual testing (see [ST1.1], [ST1.3]). In turn, code coverage analysis drives increased security testing depth. Coverage analysis is easier when using standard measurements, such as function coverage, line coverage, or multiple condition coverage. The point is to measure how broadly the test cases cover the security requirements, which is not the same as measuring how broadly the test cases exercise the code. -
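That distinction, requirement coverage versus code coverage, can itself be measured. A toy sketch, which assumes the team tags each test with the security requirement IDs it verifies (all identifiers fabricated):

    # requirement_coverage.py -- measure security requirement coverage, not line coverage
    SECURITY_REQUIREMENTS = {"AUTHN-1", "AUTHZ-3", "CRYPTO-2", "INPUT-5"}  # hypothetical IDs

    # Mapping produced by tagging tests (e.g., via pytest markers) -- fabricated here.
    tests_to_requirements = {
        "test_rejects_external_host": {"INPUT-5"},
        "test_expired_token_denied": {"AUTHN-1"},
    }

    covered = set().union(*tests_to_requirements.values())
    missing = SECURITY_REQUIREMENTS - covered
    print(f"requirement coverage: {len(covered)}/{len(SECURITY_REQUIREMENTS)}")
    print(f"untested requirements: {sorted(missing)}")

 -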
[ST3.5: 6] BEGIN TO BUILD AND APPLY ADVERSARIAL SECURITY TESTS (ABUSE CASES).
QA teams incorporate test cases based on abuse cases (see [AM2.1]) as testers move beyond verifying functionality and take on the attacker’s perspective. One way to do this is to systematically attempt to replicate incidents from the organization’s history. Abuse and misuse cases based on the attacker’s perspective can also be derived from security policies, attack intelligence, standards, and the organization’s top N attacks list (see [AM3.5]). This effort turns the corner in QA from testing features to attempting to break the software under test. -
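One concrete pattern is to encode a past incident as a permanent abuse-case test. The sketch below imagines a historical outage caused by an oversized upload and pins the resulting limit with a regression test; the incident, limit, and handler are all hypothetical:

    # test_abuse_cases.py -- abuse case derived from a (hypothetical) past incident
    import pytest

    MAX_UPLOAD_BYTES = 1_000_000  # assumed limit adopted after the incident

    def accept_upload(payload: bytes) -> bool:
        """Hypothetical handler: reject anything over the size limit."""
        if len(payload) > MAX_UPLOAD_BYTES:
            raise ValueError("payload too large")
        return True

    def test_replays_oversized_upload_incident():
        # INC-1234 (illustrative): a 50 MB upload exhausted worker memory.
        with pytest.raises(ValueError):
            accept_upload(b"x" * (50 * 1_000_000))

 -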
[PT3.2: 21] CUSTOMIZE PENETRATION TESTING TOOLS.
Build a capability to create penetration testing tools, or to adapt publicly available ones, to attack the organization’s software more efficiently and comprehensively. Creating penetration testing tools requires a deep understanding of attacks (see [AM2.1], [AM2.8]) and technology stacks (see [AM3.4]). Customizing existing tools goes beyond configuration changes and extends tool functionality to find new issues. Tools will improve the efficiency of the penetration testing process without sacrificing the depth of problems that the SSG can identify. Automation can be particularly valuable in organizations using Agile methodologies because it helps teams go faster. Tools that can be tailored are always preferable to generic tools. Success here is often dependent on both the depth and scope of tests enabled through customized tools. -
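Customization often starts small, for example a generator that adapts public payload lists to the firm’s own technology stack. The following sketch, intended for authorized testing only and with the target URL, parameter name, and payloads all invented for illustration, wraps org-specific path traversal variants around the third-party requests library:

    # probe.py -- illustrative org-specific payload generator; authorized testing only
    import requests

    BASE = "https://staging.internal.example"  # hypothetical in-scope target
    PARAM = "report"                           # parameter our stack exposes

    def payloads():
        # Adapt generic traversal strings to the firm's framework conventions.
        for depth in range(1, 5):
            yield "../" * depth + "etc/app/secrets.yaml"    # assumed sensitive path
            yield "..%2f" * depth + "etc/app/secrets.yaml"  # URL-encoded variant

    for p in payloads():
        r = requests.get(f"{BASE}/export", params={PARAM: p}, timeout=5)
        if r.status_code == 200 and "password" in r.text.lower():
            print(f"possible exposure with payload: {p}")

 -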
[SE1.2: 102] ENSURE HOST AND NETWORK SECURITY BASICS ARE IN PLACE.
The organization provides a solid foundation for its software in operation by ensuring that host (whether bare metal or virtual machine) and network security basics are in place across its data centers and networks and that these basics remain in place during new releases. Host and network security basics must account for evolving network perimeters, increased connectivity and data sharing, software-defined networking, and increasing dependence on vendors (e.g., content delivery, load balancing, and content inspection services). In addition to securing the production environment, the organization should consider securing its development endpoints (see [SE3.10]) and toolchains (see [SE3.9]). Doing software security before getting host and network security in place is like putting on shoes before putting on socks. -
[SE2.4: 51] PROTECT CODE INTEGRITY.
Use code protection mechanisms (e.g., code signing) that allow the organization to attest to the provenance, integrity, and authorization of important code. While legacy and mobile platforms accomplished this with point-in-time code signing and permissions activity, protecting modern containerized software demands actions in various lifecycle phases. Organizations can use build systems to verify sources and manifests of dependencies, creating their own cryptographic attestation of both. Packaging and deployment systems can sign and verify binary packages, including code, configuration, metadata, code identity, and authorization to release material. In some cases, organizations allow only code from their own registries to execute in certain environments. Protecting code integrity can also include securing development infrastructure, using permissions and peer review to govern code contributions, and limiting code access to help protect integrity (see [SE3.9]). -
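At its core, such attestation is a signature over an artifact digest. A minimal sketch using the third-party cryptography package follows; key management, distribution, and the surrounding policy are deliberately omitted, and the artifact name is a placeholder:

    # attest.py -- minimal artifact signing/verification sketch, not a full attestation system
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def digest(path: str) -> bytes:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # Build side: sign the artifact digest at release time.
    signing_key = Ed25519PrivateKey.generate()  # in practice, loaded from an HSM/KMS
    artifact_digest = digest("app-1.4.2.tar.gz")  # hypothetical artifact name
    signature = signing_key.sign(artifact_digest)

    # Deploy side: verify provenance before allowing the artifact to run.
    public_key = signing_key.public_key()
    public_key.verify(signature, artifact_digest)  # raises InvalidSignature on tampering
    print("artifact integrity and provenance verified")

 -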
[SE2.5: 64] USE APPLICATION CONTAINERS TO SUPPORT SECURITY GOALS.
The organization uses application containers to support its software security goals. Simply deploying containers isn’t sufficient to gain security benefits, but their planned use can support a tighter coupling of applications with their dependencies, immutability, integrity (see [SE2.4]), and some isolation benefits without the overhead of deploying a full operating system on a virtual machine. Containers are a convenient place for security controls to be applied and updated consistently (see [SFD3.2]), and while they are useful in development and test environments, it is their use in production that provides the intended security benefits. -
[SE2.7: 42] USE ORCHESTRATION FOR CONTAINERS AND VIRTUALIZED ENVIRONMENTS.
The organization uses automation to scale service, container, and virtualized environments in a disciplined way. Orchestration processes take advantage of built-in and add-on security features (see [SFD2.1]), such as hardening against drift, secrets management, RBAC, and rollbacks, to ensure that each deployed workload meets predetermined security requirements. Setting security behaviors in aggregate allows for rapid change when the need arises. Orchestration platforms are themselves software that becomes part of your production environment, which in turn requires hardening and security patching and configuration—in other words, if you use Kubernetes, make sure you patch Kubernetes. -
[SE3.6: 25] CREATE BILLS OF MATERIALS FOR DEPLOYED SOFTWARE.
Create a BOM detailing the components, dependencies, and other metadata for important production software. Use this BOM to help the organization tighten its security posture, i.e., to react with agility as attackers and attacks evolve, compliance requirements change, and the number of items to patch grows quite large. Knowing where all the components live in running software—and whether they’re in private data centers, in clouds, or sold as box products (see [CMVM2.3])—allows for timely response when unfortunate events occur. -
[SE3.8: 3] PERFORM APPLICATION COMPOSITION ANALYSIS ON CODE REPOSITORIES.
Use composition analysis results to augment software asset inventory information with data on all components comprising important applications. Beyond open source (see [SR1.5]), inventory information (see [SM3.1]) includes component and dependency information for internally developed (first-party), commissioned code (second-party), and external (third-party) software, whether that software exists as source code or binary. One common way of documenting this information is to build SBOMs. Doing this manually is probably not an option—keeping up with software changes likely requires toolchain integration rather than carrying this out as a point-in-time activity. This information is extremely useful in supply chain security efforts (see [SM3.5]). -
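As one illustration of putting SBOM data to work, the sketch below reads a CycloneDX-style JSON document (the file name is a placeholder) and flattens its component list into rows an inventory system could ingest:

    # sbom_inventory.py -- flatten a CycloneDX-style SBOM into inventory rows (illustrative)
    import json

    with open("app-1.4.2.cdx.json", encoding="utf-8") as f:  # hypothetical SBOM file
        sbom = json.load(f)

    rows = []
    for component in sbom.get("components", []):
        rows.append({
            "name": component.get("name"),
            "version": component.get("version"),
            "purl": component.get("purl"),  # package URL, useful for matching advisories
        })

    print(f"{len(rows)} components recorded for the asset inventory")

 -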
[SE3.10: 0] PROTECT THE INTEGRITY OF DEVELOPMENT ENDPOINTS.
The organization maintains the integrity of the software it builds by applying security basics to the workstations used by development stakeholders who interact with the development toolchain. Development endpoints are the workstations used for writing source code, configuring the development toolchain, testing the software’s functionality, or modifying data in the code or artifact repositories. Organizations can protect development endpoints by limiting or monitoring privileged actions, ensuring that the operating system and antivirus definitions are up to date, vetting installed software, or by providing a separate, secured workstation for development that is not used for administrative tasks. Establishing and applying a development endpoint security baseline allows stakeholders to perform the technical tasks required by software development while also providing another layer of defense for the development toolchain (see [SE3.9]). -
[CMVM1.2: 85] IDENTIFY SOFTWARE DEFECTS FOUND IN OPERATIONS MONITORING AND FEED THEM BACK TO ENGINEERING.
Defects identified in production through operations monitoring are fed back to development and used to change engineering behavior. Useful sources of production defects include incidents, bug bounty (see [CMVM3.4]), responsible disclosure (see [CMVM2.4]), SIEMs, production logs, customer feedback, and telemetry from cloud security posture monitoring, container configuration monitoring, RASP, and similar technologies. Entering production defect data into an existing bug-tracking system (perhaps by making use of a special security flag) can close the information loop and make sure that security issues get fixed. In addition, it’s important to capture lessons learned from production defects and use these lessons to change the organization’s behavior. In the best of cases, processes in the SSDL can be improved based on operations data (see [CMVM3.2]). -
[CMVM1.3: 89] TRACK SOFTWARE DEFECTS FOUND IN OPERATIONS THROUGH THE FIX PROCESS.
Defects found in operations (see [CMVM1.2]) are entered into established defect management systems and tracked through the fix process. This tracking ability could come in the form of a two-way bridge between defect finders and defect fixers or possibly through intermediaries (e.g., the vulnerability management team), but make sure the loop is closed completely. Defects can appear in all types of deployable artifacts, deployment automation, and infrastructure configuration. Setting a security flag in the defect tracking system can help facilitate tracking. -
[CMVM2.4: 41] STREAMLINE INCOMING RESPONSIBLE VULNERABILITY DISCLOSURE.
Provide external bug reporters with a line of communication to internal security experts through a low-friction, public entry point. These experts work with bug reporters to invoke any necessary organizational responses and to coordinate with external entities throughout the defect management lifecycle. Successful disclosure processes require insight from internal stakeholders, such as legal, marketing, and public relations roles, to simplify and expedite decision-making during software security crises (see [CMVM3.3]). Although bug bounties might be important to motivate some researchers (see [CMVM3.4]), proper public attribution and a low-friction reporting process are often sufficient motivation for researchers to participate in a coordinated disclosure. Most organizations will use a combination of easy-to-find landing pages, common email addresses (security@), and embedded product documentation when appropriate (security.txt) as an entry point for external reporters to invoke this process. -
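For the security.txt entry point specifically, RFC 9116 defines the file format. A minimal example, with placeholder addresses and URLs, served at /.well-known/security.txt might look like this:

    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Policy: https://example.com/security/disclosure-policy
    Acknowledgments: https://example.com/security/hall-of-fame
    Preferred-Languages: en

 -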
[CMVM3.1: 13] FIX ALL OCCURRENCES OF SOFTWARE DEFECTS FOUND IN OPERATIONS.
When a security defect is found in operations (see [CMVM1.2]), the organization searches for and fixes all occurrences of the defect in operations, not just the one originally reported. Doing this proactively requires the ability to reexamine the entire operations software inventory (see [CMVM2.3]) when new kinds of defects come to light. One way to approach reexamination is to create a ruleset that generalizes deployed defects into something that can be scanned for via automated code review. In some environments, addressing a defect might involve removing it from production immediately and making the actual fix in some priority order before redeployment. Use of orchestration can greatly simplify deploying the fix for all occurrences of a software defect (see [SE2.7]). -
[CMVM3.2: 24] ENHANCE THE SSDL TO PREVENT SOFTWARE DEFECTS FOUND IN OPERATIONS.
Experience from operations leads to changes in the SSDL (see [SM1.1]), which can in turn be strengthened to prevent the reintroduction of defects. To make this process systematic, the incident response postmortem includes a feedback-to-SSDL step. The outcomes of the postmortem might include changes to tool-based policy rulesets in a CI/CD pipeline or adjustments to automated deployment configuration (see [SE1.4]). This works best when root-cause analysis pinpoints where in the software lifecycle an error could have been introduced or slipped by uncaught (e.g., a defect escape). DevOps engineers might have an easier time with this because all the players are likely involved in the discussion and the solution. An ad hoc approach to SSDL improvement isn’t sufficient for prevention. -
[CMVM3.5: 17] AUTOMATE VERIFICATION OF OPERATIONAL INFRASTRUCTURE SECURITY.
The SSG works with engineering teams to verify with automation the security properties (e.g., adherence to agreed-upon security hardening) of infrastructure generated from controlled self-service processes. Engineers use self-service processes to create networks, storage, containers, and machine instances, to orchestrate deployments, and to perform other tasks that were once IT’s sole responsibility. In facilitating verification, the organization uses machine-readable policies and configuration standards (see [SE1.4]) to automatically detect issues and report on infrastructure that does not meet expectations. In some cases, the automation makes changes to running environments to bring them into compliance, but in many cases, organizations use a single policy to manage automation in different environments, such as in multi- and hybrid-cloud environments. -
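Machine-readable policy can be as simple as assertions over exported configuration. The sketch below, in which the policy, field names, and sample data are all fabricated for illustration, flags security-group rules that expose administrative ports to the internet:

    # verify_infra.py -- illustrative policy-as-code check over exported infra config
    ADMIN_PORTS = {22, 3389}  # assumed hardening standard: never open to the world

    security_groups = [  # hypothetical export from a cloud inventory API
        {"name": "web", "rules": [{"port": 443, "cidr": "0.0.0.0/0"}]},
        {"name": "bastion", "rules": [{"port": 22, "cidr": "0.0.0.0/0"}]},
    ]

    violations = [
        (group["name"], rule["port"])
        for group in security_groups
        for rule in group["rules"]
        if rule["port"] in ADMIN_PORTS and rule["cidr"] == "0.0.0.0/0"
    ]

    for name, port in violations:
        print(f"POLICY VIOLATION: group '{name}' exposes port {port} to 0.0.0.0/0")

 -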
[CMVM3.6: 4] PUBLISH RISK DATA FOR DEPLOYABLE ARTIFACTS.
The organization collects and publishes risk information about the applications, services, APIs, containers, and other software it deploys. Whether captured through manual processes or telemetry automation, published information extends beyond basic software security (see [SM2.1]) and inventory data (see [CMVM2.3]) to include risk information. This information usually includes the constituency of the software (e.g., BOMs; see [SE3.6]), provenance data about what group created it and how, and the risks associated with known vulnerabilities, deployment models, security controls, or other security characteristics intrinsic to each artifact. This approach stimulates cross-functional coordination and helps stakeholders take informed risk management action. Making a list of risks that isn’t used for decision support won’t achieve useful results. -
[CMVM3.8: 1] DO ATTACK SURFACE MANAGEMENT FOR DEPLOYED APPLICATIONS.
Operations standards and procedures proactively minimize application attack surfaces by using attack intelligence and application weakness data to limit vulnerable conditions. Finding and fixing software defects in operations is important (see [CMVM1.2]) but so is finding and fixing errors in cloud security models, VPNs, segmentation, security configurations for networks, hosts, and applications, etc., to limit the ability to successfully attack deployed applications. Combining attack intelligence (see [AM1.5]) with information about software assets (see [AM2.9]) and a continuous view of application weaknesses helps ensure that attack surface management keeps pace with attacker methods. SBOMs (see [SE3.6]) are also an important information source when doing attack surface management in a crisis.