Building Security In Maturity Model (BSIMM)
Code Review (CR)
List of requirements
[CR1.2: 80] PERFORM OPPORTUNISTIC CODE REVIEW.
Perform code review for high-risk applications in an opportunistic fashion. For example, organizations can follow up a design review with a code review looking for security issues in source code and dependencies and perhaps also in deployment artifact configuration (e.g., containers) and automation metadata (e.g., infrastructure-as-code). This informal targeting often evolves into a systematic approach (see [CR1.4]). Manual code review could be augmented with the use of specific tools and services, but it has to be part of a proactive process. When new technologies pop up, new approaches to code review might become necessary.
[CR1.5: 75] MAKE CODE REVIEW MANDATORY FOR ALL PROJECTS.
A security-focused code review is mandatory for all software projects, with a lack of code review or unacceptable results stopping a release, slowing it down, or causing it to be recalled. While all projects must undergo code review, the process might be different for different kinds of projects. The review for low-risk projects might rely more heavily on automation (see [CR1.4]), for example, whereas high-risk projects might have no upper bound on the amount of time spent by reviewers. Having a minimum acceptable standard forces projects that don’t pass to be fixed and reevaluated. A code review tool with nearly all the rules turned off (so it can run at CI/CD automation speeds, for example) won’t provide sufficient defect coverage. Similarly, peer code review or tools focused on quality and style won’t provide useful security results.
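As an illustration of such a minimum acceptable standard, here is a minimal sketch of a CI release gate that fails the pipeline when security review findings exceed policy thresholds. The `findings.json` format, the severity names, and the threshold values are all assumptions for the example, not the output of any particular tool:

```python
#!/usr/bin/env python3
"""Hypothetical CI release gate: stop the release when security code
review findings exceed a minimum acceptable standard."""
import json
import sys

# Assumed policy: no critical/high findings may ship; mediums are capped.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}

def gate(findings_path: str) -> int:
    with open(findings_path) as fh:
        findings = json.load(fh)  # expected: [{"severity": "high", ...}, ...]

    counts: dict[str, int] = {}
    for finding in findings:
        sev = finding.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1

    failures = [
        f"{sev}: {counts.get(sev, 0)} found, {limit} allowed"
        for sev, limit in MAX_ALLOWED.items()
        if counts.get(sev, 0) > limit
    ]
    if failures:
        print("Release gate FAILED:\n  " + "\n  ".join(failures))
        return 1  # nonzero exit stops the CI/CD pipeline
    print("Release gate passed:", counts or "no findings")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```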
[CR2.6: 24] USE CUSTOM RULES WITH AUTOMATED CODE REVIEW TOOLS.
Create and use custom rules in code review tools to help uncover security defects specific to the organization’s coding standards or to the framework-based or cloud-provided middleware the organization uses. The same group that provides tool mentoring (see [CR1.7]) will likely spearhead this customization. Custom rules are often explicitly tied to proper usage of technology stacks in a positive sense and avoidance of errors commonly encountered in a firm’s codebase in a negative sense. Custom rules are also an easy way to check for adherence to coding standards (see [CR3.5]). To reduce the workload for everyone, many organizations also create rules to remove repeated false positives and to turn off checks that aren’t relevant.
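To make the idea concrete, the following sketch implements a custom rule as a standalone AST check, assuming a hypothetical organization-specific standard: an in-house `legacy_crypto` module is banned in favor of an approved wrapper. The module and function names are invented for the example; in practice the same rule would be codified in the organization’s SAST tool:

```python
"""Minimal sketch of a custom code review rule for an in-house API ban.
The `legacy_crypto` names below are hypothetical."""
import ast
import sys

BANNED_CALLS = {
    "legacy_crypto.md5_sign",     # hypothetical: superseded in-house helper
    "legacy_crypto.des_encrypt",  # hypothetical: weak cipher wrapper
}

def qualified_name(node: ast.AST) -> str:
    """Render dotted call targets like `pkg.mod.func`."""
    if isinstance(node, ast.Attribute):
        return f"{qualified_name(node.value)}.{node.attr}"
    if isinstance(node, ast.Name):
        return node.id
    return ""

def check(path: str) -> list[str]:
    tree = ast.parse(open(path).read(), filename=path)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = qualified_name(node.func)
            if name in BANNED_CALLS:
                hits.append(f"{path}:{node.lineno}: banned call {name}()")
    return hits

if __name__ == "__main__":
    findings = [hit for path in sys.argv[1:] for hit in check(path)]
    print("\n".join(findings) or "no custom-rule violations")
    sys.exit(1 if findings else 0)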
[CR2.7: 19] USE A TOP N BUGS LIST (REAL DATA PREFERRED).
Maintain a living list of the most important kinds of bugs the organization wants to eliminate from its code and use it to drive change. Many organizations start with a generic list pulled from public sources, but broad-based lists such as the OWASP Top 10 rarely reflect an organization’s bug priorities. Build a valuable list by using real data gathered from code review (see [CR2.8]), testing (see [PT1.2]), software composition analysis (see [SE3.8]), and actual incidents (see [CMVM1.1]), then prioritize it for prevention efforts. Simply sorting the day’s bug data by number of occurrences won’t produce a satisfactory list because the data changes so often. To increase interest, the SSG can periodically publish a “most wanted” report after updating the list. One potential pitfall with a top N list is that it tends to include only known problems. Of course, just building the list won’t accomplish anything—everyone has to use it to find and fix bugs.
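One way to go beyond sorting a single day’s counts is to score categories across sources with a recency decay. The sketch below illustrates this under stated assumptions: the record format, the source weights, and the half-life are all invented for the example and would need tuning to real organizational data:

```python
"""Illustrative sketch of building a top N bugs list from real defect
data rather than one day's raw counts. All weights are assumptions."""
from collections import defaultdict
from datetime import date

# Hypothetical defect records: (category, source, date observed)
RECORDS = [
    ("XSS", "code_review", date(2024, 11, 3)),
    ("XSS", "pen_test", date(2025, 1, 20)),
    ("SQLi", "incident", date(2025, 2, 2)),
    ("Hardcoded secret", "code_review", date(2024, 6, 15)),
    ("SQLi", "pen_test", date(2025, 1, 9)),
]

# Assumed weighting: incidents matter more than review findings.
SOURCE_WEIGHT = {"incident": 3.0, "pen_test": 2.0, "code_review": 1.0}
HALF_LIFE_DAYS = 180  # older findings count for progressively less

def top_n(records, n=10, today=date(2025, 3, 1)):
    scores = defaultdict(float)
    for category, source, seen in records:
        decay = 0.5 ** ((today - seen).days / HALF_LIFE_DAYS)
        scores[category] += SOURCE_WEIGHT.get(source, 1.0) * decay
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

if __name__ == "__main__":
    for rank, (category, score) in enumerate(top_n(RECORDS, n=3), start=1):
        print(f"{rank}. {category} (score {score:.2f})")
```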
[CR2.8: 27] USE CENTRALIZED DEFECT REPORTING TO CLOSE THE KNOWLEDGE LOOP.
The defects found during code review are tracked in a centralized repository that makes it possible to do both summary and trend reporting for the organization. Reported defects drive engineering improvements such as enhancing processes, updating standards, adopting reusable frameworks, etc. For example, code review information is usually incorporated into a CISO-level dashboard that can include feeds from other security testing efforts (e.g., penetration testing, composition analysis, threat modeling). Given the historical code review data, the SSG can also use the reports to demonstrate progress (see [SM3.3]) or drive the training curriculum. Individual bugs make excellent training examples (see [T2.8]). Some organizations have moved toward analyzing this data and using the results to drive automation (see [ST3.6]).
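A minimal sketch of the trend-reporting side, assuming the centralized repository can export a list of dated, categorized defects (the format here is invented; real data would come from the tracking system’s API):

```python
"""Sketch of per-quarter trend reporting over a centralized defect
repository export, suitable as a dashboard feed."""
from collections import Counter
from datetime import date

DEFECTS = [  # hypothetical export: (date found, category)
    (date(2024, 2, 10), "XSS"),
    (date(2024, 5, 21), "XSS"),
    (date(2024, 8, 2), "SQLi"),
    (date(2024, 11, 30), "XSS"),
    (date(2025, 1, 14), "SQLi"),
]

def quarter(d: date) -> str:
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

def trend_report(defects):
    """Count defects per quarter per category."""
    buckets = Counter((quarter(found), cat) for found, cat in defects)
    for (qtr, cat), count in sorted(buckets.items()):
        print(f"{qtr}  {cat:<6} {count}")

if __name__ == "__main__":
    trend_report(DEFECTS)
```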
[CR3.3: 6] CREATE CAPABILITY TO ERADICATE BUGS.
When a security bug is found during code review (see [CR1.2], [CR1.4]), the organization searches for and then fixes all occurrences of the bug, not just the instance originally discovered. Searching with custom rules (see [CR2.6]) makes it possible to eradicate the specific bug entirely without waiting for every project to reach the code review portion of its lifecycle. This doesn’t mean finding every instance of every kind of cross-site scripting bug when a specific example is found—it means going after that specific example everywhere. A firm with only a handful of software applications built on a single technology stack will have an easier time with this activity than firms with many large applications built on a diverse set of technology stacks. A new development framework or library, rules in RASP or a next-generation firewall, or cloud configuration tools that provide guardrails can often help in (but not replace) eradication efforts.
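The “go after that specific example everywhere” step can be as simple as a one-off codebase scan for the exact signature that was found. The sketch below assumes a hypothetical discovered bug, f-string-interpolated SQL passed to `cursor.execute`; a durable version of this check would live in the SAST tool as a custom rule (see [CR2.6]):

```python
"""Sketch of a one-off eradication search across a repository checkout
for one specific bug signature. The pattern is illustrative only."""
import pathlib
import re
import sys

# Hypothetical signature of the specific bug being eradicated:
# f-string SQL such as cursor.execute(f"... WHERE id={uid}")
PATTERN = re.compile(r'cursor\.execute\(\s*f["\']')

def scan(root: str) -> list[str]:
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    findings = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(findings) or "no occurrences found")
    sys.exit(1 if findings else 0)
```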
[CR3.5: 6] ENFORCE SECURE CODING STANDARDS.
A violation of secure coding standards is sufficient grounds for rejecting a piece of code. This rejection can take one or more forms, such as denying a pull request, breaking a build, failing quality assurance, removing from production, or moving the code into a different development workstream where repairs or exceptions can be worked out. The enforced portions of an organization’s secure coding standards (see [SR3.3]) often start out as a simple list of banned functions or required frameworks. Code review against standards must be objective—it shouldn’t become a debate about whether the noncompliant code is exploitable. In some cases, coding standards are specific to language constructs and enforced with tools (e.g., codified into SAST rules). In other cases, published coding standards are specific to technology stacks and enforced during the code review process or by using automation. Standards can be positive (“do it this way”) or negative (“do not use this API”), but they must be enforced.
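As a sketch of objective enforcement, the check below codifies two stand-in rules from a hypothetical published standard: the `eval()` builtin is banned, and no call may pass `shell=True` (which catches `subprocess.run`, `Popen`, and similar). A nonzero exit denies the pull request with no debate about exploitability:

```python
"""Sketch of enforcing a secure coding standard as an objective
pre-merge check. The two rules are illustrative stand-ins."""
import ast
import sys

def violations(path: str) -> list[str]:
    tree = ast.parse(open(path).read(), filename=path)
    out = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Negative rule: the eval() builtin is banned outright.
        if isinstance(node.func, ast.Name) and node.func.id == "eval":
            out.append(f"{path}:{node.lineno}: banned function eval()")
        # Negative rule: no call may pass shell=True
        # (catches subprocess.run, subprocess.Popen, etc.).
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) \
                    and kw.value.value is True:
                out.append(f"{path}:{node.lineno}: shell=True is not allowed")
    return out

if __name__ == "__main__":
    found = [v for path in sys.argv[1:] for v in violations(path)]
    print("\n".join(found) or "standard: OK")
    sys.exit(1 if found else 0)  # nonzero exit denies the merge
```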