System and services acquisition


The controls and activities in the System and services acquisition (SA) family deal with the contracting of products and services required to support the implementation and operation of organizational systems. They ensure that sufficient resources are allocated for the adequate protection of organizational systems, and they support system development lifecycle processes that incorporate security considerations.

SA-01 System and services acquisition policy and procedures

Activity

  1. Develop, document, and disseminate to [Assignment: organization-defined personnel or roles]
    1. [Selection (1 or more): Organization-level; Mission/business process-level; System-level] system and services acquisition policy that
      1. addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance
      2. is consistent with applicable laws, Orders in Council, directives, regulations, policies, standards, and guidelines
    2. procedures to facilitate the implementation of the system and services acquisition policy and the associated system and services acquisition controls
  2. Designate an [Assignment: organization-defined official] to manage the development, documentation, and dissemination of the system and services acquisition policy and procedures
  3. Review and update the current system and services acquisition
    1. policy [Assignment: organization-defined frequency] and following [Assignment: organization-defined events]
    2. procedures [Assignment: organization-defined frequency] and following [Assignment: organization-defined events]

Discussion

System and services acquisition policy and procedures address the controls in the SA family that are implemented within systems and organizations. The risk management strategy is an important factor in establishing such policies and procedures. Policies and procedures contribute to security and privacy assurance. Therefore, it is important that security and privacy programs collaborate on the development of system and services acquisition policy and procedures.

In general, security and privacy program policies and procedures at the organization level are preferable and may remove the need for mission- or system-specific policies and procedures. The policy can be included as part of the general security and privacy policy or be represented by multiple policies that reflect the complex nature of organizations.

Procedures can be established for security and privacy programs, for mission or business processes, and for systems, if needed. Procedures describe how the policies or controls are implemented and can be directed at the individual or role that is the object of the procedure. Procedures can be documented in system security and privacy plans or in one or more separate documents.

Events that may precipitate an update to system and services acquisition policy and procedures include assessment or audit findings, security incidents or breaches, or changes in laws, Orders in Council, directives, regulations, policies, standards, and guidelines. Simply restating controls does not constitute an organizational policy or procedure.

Related controls and activities

PM-09, PS-08, SA-08, SI-02, SI-12.

Enhancements

None.

References

 

SA-02 Allocation of resources

Control

  1. Determine the high-level information security and privacy requirements for the system or system service in mission and business process planning
  2. Determine, document, and allocate the resources required to protect the system or system service as part of the organizational capital planning and investment control process
  3. Establish a discrete line item for information security and privacy in organizational programming and budgeting documentation

Discussion

Resource allocation for information security and privacy includes funding for system and services acquisition, sustainment, and supply-chain-related risks throughout the system development lifecycle.

Related controls and activities

PL-07, PM-03, PM-11, SA-09, SR-03, SR-05.

Enhancements

None.

References

 

SA-03 System development lifecycle

Activity

  1. Acquire, develop, and manage the system using [Assignment: organization-defined system development lifecycle] that incorporates information security and privacy considerations
  2. Define and document information security and privacy roles and responsibilities throughout the system development lifecycle
  3. Identify individuals having information security and privacy roles and responsibilities
  4. Integrate the organizational information security and privacy risk management process into system development lifecycle activities

Discussion

A system development lifecycle process provides the foundation for the successful development, implementation, and operation of organizational systems. The integration of security and privacy considerations early in the system development lifecycle is a foundational principle of systems security engineering and privacy engineering. Applying the required controls within the system development lifecycle requires a basic understanding of information security and privacy, threats, vulnerabilities, adverse impacts, and risk to critical mission and business functions.

The security engineering principles in SA-08 help individuals properly design, code, and test systems and system components. Organizations include qualified personnel (e.g., senior officials in the department’s security governance, appropriate privacy senior officials or executives, security and privacy architects, and security and privacy engineers) in system development lifecycle processes to ensure that established security and privacy requirements are incorporated into organizational systems. Role-based security and privacy training programs can ensure that individuals with key security and privacy roles and responsibilities have the experience, skills, and expertise to conduct assigned system development lifecycle activities.

The effective integration of security and privacy requirements into enterprise architecture also helps ensure that important security and privacy considerations are addressed throughout the system lifecycle and that those considerations are directly related to organizational mission and business processes. This process also facilitates the integration of the information security and privacy architectures into the enterprise architecture, consistent with the risk management strategy of the organization. Because the system development lifecycle involves multiple organizations (e.g., external suppliers, developers, integrators, service providers), acquisition and supply chain risk management functions and controls play significant roles in the effective management of the system during the lifecycle.

Related controls and activities

AT-03, PL-08, PM-07, SA-04, SA-05, SA-08, SA-11, SA-15, SA-17, SA-22, SA-400, SR-03, SR-04, SR-05, SR-09.

Enhancements

  • (01) System development lifecycle: Manage pre-production environment
    • Protect system pre-production environments commensurate with risk throughout the system development lifecycle for the system, system component, or system service.
    • Discussion: The pre-production environment includes development, test, and integration environments. Criticality analysis and the application of controls on developers also contribute to a more secure system development environment.
    • Related controls and activities: CM-02, CM-04, RA-03, RA-09, SA-04.
  • (02) System development lifecycle: Use of live or operational data
      1. Approve, document, and control the use of live data in pre-production environments for the system, system component, or system service
      2. Protect pre-production environments for the system, system component, or system service at the same impact or classification level as any live data in use within the pre-production environments
    • Discussion: Live data is also referred to as operational data. The use of live or operational data in pre-production (i.e., development, test, and integration) environments can result in significant risks to organizations. In addition, the use of live data that is personal information in testing, research, and training increases the risk of unauthorized disclosure or misuse of such information. Therefore, it is important for the organization to manage any additional risks that may result from the use of live or operational data.
      Organizations can minimize such risks by using test or dummy data during the design, development, and testing of systems, system components, and system services. Risk assessment techniques may be used to determine if the risk of using live or operational data is acceptable. The use of personal information for testing requires that individuals are informed about the use in advance.
    • GC discussion: When using personal information in testing activities, organizations may rely on testing as a consistent use of the information, in accordance with paragraph 8(2)(a) of the Privacy Act. For this to be acceptable, individuals must have been given notice, at the point the personal information was collected, that it may be used for testing and, where appropriate, the associated personal information banks (PIBs) need to reflect the use of the information for testing purposes. If individuals were not provided with notice at the point of collection, or if the use of the information for testing purposes is not listed as a consistent use in the associated PIB, program officials should contact their privacy officials for advice on whether this would be considered a consistent use or a secondary use.
    • Related controls and activities: PM-25, RA-03.
  • (03) System development lifecycle: Technology refresh
    • Plan for and implement a technology refresh schedule for the system throughout the system development lifecycle.
    • Discussion: Technology refresh planning may encompass hardware, software, firmware, processes, personnel skill sets, suppliers, service providers, and facilities. The use of obsolete or nearly obsolete technology may increase the security and privacy risks associated with unsupported components, counterfeit or repurposed components, components unable to implement security or privacy requirements, slow or inoperable components, components from untrusted sources, inadvertent personnel error, or increased complexity. Technology refreshes typically occur during the operations and maintenance stage of the system development lifecycle.
    • Related controls and activities: MA-06.
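The risk-reduction approach in enhancement (02), substituting test or dummy data for live data, can be sketched as a simple pseudonymization pass over records before they enter a pre-production environment. This is a minimal illustration under stated assumptions, not an approved de-identification method: the field names, salt handling, and digest truncation are all invented for the example, and residual re-identification risk would still need to be assessed.

```python
import hashlib

# Illustrative field names; a real system would derive these from a data
# classification exercise, not a hard-coded list.
IDENTIFYING_FIELDS = {"name", "sin", "email"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace identifying fields with salted one-way hashes so live records
    can be used in pre-production with reduced privacy risk. Protection
    still depends on salt secrecy and on remaining quasi-identifiers."""
    masked = {}
    for field, value in record.items():
        if field in IDENTIFYING_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8"))
            masked[field] = digest.hexdigest()[:16]  # truncated for readability
        else:
            masked[field] = value  # non-identifying fields pass through
    return masked

record = {"name": "Alice Example", "sin": "000-000-000", "postal_code": "K1A"}
masked = pseudonymize(record, salt="per-environment-secret")
assert masked["name"] != record["name"]
assert masked["postal_code"] == record["postal_code"]
```

Because the hash is deterministic for a given salt, referential integrity across test datasets is preserved, which is often why teams reach for live data in the first place.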

References

 

SA-04 Acquisition process

Control

Include the following requirements, descriptions, and criteria, explicitly or by reference, using [Selection (1 or more): standardized contract language; [Assignment: organization-defined contract language]] in the acquisition contract for the system, system component, or system service:

  1. security and privacy functional requirements
  2. strength of mechanism requirements
  3. security and privacy assurance requirements
  4. controls needed to satisfy the security and privacy requirements
  5. security and privacy documentation requirements
  6. requirements for protecting security and privacy documentation
  7. a description of the system development environment and the environment in which the system is intended to operate
  8. the allocation of responsibility or identification of parties responsible for information security, privacy, and supply chain risk management
  9. acceptance criteria

Discussion

Security and privacy functional requirements are typically derived from the high-level security and privacy requirements described in SA-02. The derived requirements include security and privacy capabilities, functions, and mechanisms. Strength requirements associated with such capabilities, functions, and mechanisms include degree of correctness, completeness, resistance to tampering or bypass, and resistance to direct attack.

Assurance requirements can be specified in a statement of work as part of a contract and can include development processes, procedures, and methodologies, as well as the evidence from development and assessment activities that provide grounds for confidence that the required functionality is implemented and possesses the required strength of mechanism. The Cyber Centre publication System lifecycle cyber security and privacy risk management activities (ITSP.10.037) describes the process of requirements engineering as part of the system development lifecycle.

Controls can be viewed as descriptions of the safeguards and protection capabilities appropriate for achieving the particular security and privacy objectives of the organization and for reflecting the security and privacy requirements of stakeholders. Controls are selected and implemented to satisfy system requirements and include developer and organizational responsibilities. Controls can include technical, administrative, privacy, and physical aspects. In some cases, the selection and implementation of a control may necessitate additional specification by the organization in the form of derived requirements or instantiated control parameter values. The derived requirements and control parameter values may be necessary to provide the appropriate level of implementation detail for controls within the system development lifecycle.

Security and privacy documentation requirements address all stages of the system development lifecycle. Documentation provides user and administrator guidance for the implementation and operation of controls. The level of detail required in such documentation is based on the security categorization or classification level of the system and the degree to which organizations depend on the capabilities, functions, or mechanisms to meet risk response expectations. Requirements can include mandated configuration settings that specify allowed functions, ports, protocols, and services. Acceptance criteria for systems, system components, and system services are defined in the same manner as the criteria for any organizational acquisition or procurement.

Organizations can determine other requirements that support security and operations, including responsibilities for the organization and the developer, and notification and timing requirements for support, maintenance, and updates.
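Mandated configuration settings that specify allowed functions, ports, protocols, and services lend themselves to machine-checkable acceptance criteria. The sketch below shows one way such a check could look; the allowlist entries, manifest format, and service names are illustrative assumptions, not a prescribed contract format.

```python
# Organization-mandated settings (illustrative): each tuple is an allowed
# (protocol, port, service) combination for the delivered component.
ALLOWED = {
    ("tcp", 443, "https"),
    ("tcp", 22, "ssh"),
}

# Hypothetical developer-declared service manifest for the component.
declared = [
    {"protocol": "tcp", "port": 443, "service": "https"},
    {"protocol": "tcp", "port": 23, "service": "telnet"},  # not permitted
]

def acceptance_violations(manifest, allowed):
    """Return the declared entries that fall outside the mandated settings."""
    return [
        entry for entry in manifest
        if (entry["protocol"], entry["port"], entry["service"]) not in allowed
    ]

violations = acceptance_violations(declared, ALLOWED)
assert [v["service"] for v in violations] == ["telnet"]
```

Running such a check against a developer-supplied manifest turns the acceptance criterion into a repeatable test rather than a manual document review.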

GC discussion

Functional requirements for the protection of personal information can be found in the PSPC Supply Manual, Chapter 1, Section 1.70 – Privacy in contracting. The TBS guidance document Taking Privacy into Account Before Making Contracting Decisions presents additional information on the handling of personal information. Functional requirements include contractual obligations for the ownership of personal information; the maintenance of the accuracy, privacy, and integrity of personal information; limitations on secondary use or disclosure without consent; the disposition of records containing personal information, including returning the records to the GC if required; acceptance of possible privacy audits; privacy breach and privacy complaint mechanisms; and legal requirements to disclose personal information.

Related controls and activities

CM-06, CM-08, PS-07, SA-03, SA-05, SA-08, SA-11, SA-15, SA-16, SA-17, SA-21, SA-400, SR-03, SR-05.

Enhancements

  • (01) Acquisition process: Functional properties of controls
    • Require the developer of the system, system component, or system service to provide a description of the functional properties of the controls to be implemented.
    • Discussion: Functional properties of security and privacy controls describe the functionality (i.e., security or privacy capability, functions, or mechanisms) visible at the interfaces of the controls and specifically exclude functionality and data structures internal to the operation of the controls.
    • Related controls and activities: None.
  • (02) Acquisition process: Design and implementation information for controls
    • Require the developer of the system, system component, or system service to provide design and implementation information for the controls that includes: [Selection (1 or more): security-relevant external system interfaces; high-level design; low-level design; source code or hardware schematics; [Assignment: organization-defined design and implementation information]] at [Assignment: organization-defined level of detail].
    • Discussion: Organizations may require different levels of detail in the documentation for the design and implementation of controls in organizational systems, system components, or system services based on mission and business requirements, requirements for resiliency and robustness, and requirements for analysis and testing. Systems can be partitioned into multiple subsystems. Each subsystem within the system can contain one or more modules. The high-level design for the system is expressed in terms of subsystems and the interfaces between subsystems providing security-relevant functionality. The low-level design for the system is expressed in terms of modules and the interfaces between modules providing security-relevant functionality. Design and implementation documentation can include manufacturer, version, serial number, verification hash signature, software libraries used, date of purchase or download, and the vendor or download source. Source code and hardware schematics are referred to as the implementation representation of the system.
    • Related controls and activities: None.
  • (03) Acquisition process: Development methods, techniques, and practices
    • Require the developer of the system, system component, or system service to demonstrate the use of a system development lifecycle process that includes:
      1. [Assignment: organization-defined systems engineering methods]
      2. [Selection (1 or more): [Assignment: organization-defined systems security engineering methods]; [Assignment: organization-defined privacy engineering methods]]
      3. [Assignment: organization-defined software development methods; testing, evaluation, assessment, verification, and validation methods; and quality control processes]
    • Discussion: Following a system development lifecycle that includes state-of-the-practice software development methods, systems engineering methods, systems security and privacy engineering methods, and quality control processes helps to reduce the number and severity of latent errors within systems, system components, and system services. Reducing the number and severity of such errors reduces the number of vulnerabilities in those systems, components, and services.
      Transparency in the methods and techniques that developers select and implement for systems engineering, systems security and privacy engineering, software development, component and system assessments, and quality control processes provides an increased level of assurance in the security functionality of the system, system component, or system service being acquired.
    • Related controls and activities: None.
  • (04) Acquisition process: Assignment of components to systems
    • Withdrawn: Incorporated into CM-08(09).
  • (05) Acquisition process: System, component, and service configurations
    • Require the developer of the system, system component, or system service to:
      1. deliver the system, component, or service with [Assignment: organization-defined security configurations] implemented
      2. use the configurations as the default for any subsequent system, component, or service reinstallation or upgrade
    • Discussion: Examples of security configurations include the Cyber Centre’s Baseline security requirements for network security zones (ITSP.80.022), Security Technical Implementation Guides (STIGs), and any limitations on functions, ports, protocols, and services. Security characteristics can include requiring that default passwords have been changed.
    • GC discussion: GC departments and agencies can also refer to the GC System Management Configuration Requirements, as well as to the TBS Standard on Enterprise IT Service Common Configurations.
    • Related controls and activities: None.
  • (06) Acquisition process: Use of cyber security products
      1. Employ only government off-the-shelf or commercial off-the-shelf (COTS) cyber security and cyber security-enabled IT products that compose a Cyber Centre-approved solution to protect classified information when the networks used to transmit the information are at a lower classification level than the information being transmitted
      2. Ensure that these products have been evaluated or validated by the Cyber Centre or in accordance with Cyber Centre-approved procedures
    • Discussion: None.
    • GC discussion: COTS cyber security or cyber security-enabled IT products used to protect classified information by cryptographic means may be required to use Cyber Centre-approved key management. See Government of Canada Key Management Infrastructure (GC KMI) for more information (restricted to GC).
    • Related controls and activities: SC-08, SC-12, SC-13.
  • (07) Acquisition process: Canadian Common Criteria Program - recognized protection profiles
      1. Limit the use of commercially provided cyber security and cyber security-enabled IT products to those products that have been successfully evaluated against a Canadian Common Criteria Program (CCCP) recognized protection profile for a specific technology type, if such a profile exists
      2. If no CCCP-recognized protection profile exists for a specific technology type but a commercially provided IT product relies on cryptographic functionality to enforce its security policy, require that the cryptographic module is FIPS-validated or CCCP-recognized
    • Discussion: The Cyber Centre operates the CCCP to certify products. See the Common Criteria Portal for additional information on protection profiles. See the Cyber Centre’s Cryptographic Module Validation Program (CMVP) for additional information on FIPS-validated cryptographic modules.
    • Related controls and activities: IA-07, SC-12, SC-13.
  • (08) Acquisition process: Continuous monitoring plan for controls
    • Require the developer of the system, system component, or system service to produce a plan for continuous monitoring of control effectiveness that is consistent with the continuous monitoring program of the organization.
    • Discussion: The objective of continuous monitoring plans is to determine if the planned, required, and deployed controls within the system, system component, or system service continue to be effective over time based on the inevitable changes that occur.
      Developer continuous monitoring plans include a sufficient level of detail such that the information can be incorporated into continuous monitoring programs implemented by organizations. Continuous monitoring plans can include the types of control assessment and monitoring activities planned, frequency of control monitoring, and actions to be taken when controls fail or become ineffective.
    • Related controls and activities: CA-07.
  • (09) Acquisition process: Functions, ports, protocols, and services in use
    • Require the developer of the system, system component, or system service to identify the functions, ports, protocols, and services intended for organizational use.
    • Discussion: The identification of functions, ports, protocols, and services early in the system development lifecycle (e.g., during the initial requirements definition and design stages) allows organizations to influence the design of the system, system component, or system service.
      This early involvement in the system development lifecycle helps organizations avoid or minimize the use of functions, ports, protocols, or services that pose unnecessarily high risks and understand the trade-offs involved in blocking specific ports, protocols, or services or requiring system service providers to do so.
      Early identification of functions, ports, protocols, and services avoids costly retrofitting of controls after the system, component, or system service has been implemented. SA-09 describes the requirements for external system services. Organizations identify which functions, ports, protocols, and services are provided from external sources.
    • Related controls and activities: CM-07, SA-09.
  • (10) Acquisition process: Use of approved digital credentials products
    • Employ only IT products that are recommended by the Cyber Centre for digital credentials capability implemented within organizational systems.
    • Discussion: Products should meet the Cyber Centre requirements outlined in the publication User authentication guidance for information technology systems (ITSP.30.031) for authentication level of assurance (LoA) and robustness level (RL) for digital credentials of GC employees and contractors. Authentication tokens such as smart cards are used for multi-factor authentication (MFA) in systems and organizations.
    • Related controls and activities: IA-02, IA-08, PM-09.
  • (11) Acquisition process: System of record
    • Include [Assignment: organization-defined Privacy Act requirements] in the acquisition contract for the operation of a system of record on behalf of an organization to accomplish an organizational mission or function.
    • Discussion: When, by contract, an organization provides for the operation of a system of record to accomplish an organizational mission or function, the organization, consistent with its authority, causes the requirements of the Privacy Act to be applied to the system of record.
      At times, contractors will represent the GC and collect personal information associated with operating programs or services. Contractors must be transparent that they are representing the GC and inform individuals about how their personal information may be shared with the government. When contractors are representing the GC, privacy and security obligations apply. Clarify in the acquisition contract custodial rights to personal information, in the event that the contractor will be representing the GC and collecting personal information from individuals.
    • Related controls and activities: PT-06.
  • (12) Acquisition process: Data ownership
      1. Include organizational data ownership requirements in the acquisition contract
      2. Require all data to be removed from the contractor’s system and returned to the organization within [Assignment: organization-defined time frame]
    • Discussion: Contractors who operate a system that contains data owned by an organization initiating the contract have policies and procedures in place to remove the data from their systems and/or return the data in a timeframe defined by the contract.
    • Related controls and activities: None.
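The plan elements named in enhancement (08), assessment types, monitoring frequency, and actions on control failure, can be captured in a simple structured form that an acquirer could fold into its own continuous monitoring program. The fields, control identifiers, and interval scheme below are assumptions made for illustration, not a required plan format.

```python
from dataclasses import dataclass

@dataclass
class MonitoredControl:
    """One entry of a developer continuous monitoring plan: the assessment
    activity, how often it runs, and what happens when the control fails
    or becomes ineffective. Field values are illustrative."""
    control_id: str
    assessment_type: str   # e.g., automated scan, manual review
    frequency_days: int
    on_failure: str        # e.g., notify acquirer, open remediation ticket

plan = [
    MonitoredControl("AC-02", "automated account audit", 30, "notify acquirer"),
    MonitoredControl("SI-02", "patch compliance scan", 7, "open remediation ticket"),
]

def due_for_assessment(plan, days_since_last):
    """Return controls whose monitoring interval has elapsed."""
    return [c.control_id for c in plan
            if days_since_last.get(c.control_id, 0) >= c.frequency_days]

assert due_for_assessment(plan, {"AC-02": 31, "SI-02": 3}) == ["AC-02"]
```

Expressing the plan as data, rather than prose, is what makes it possible to incorporate developer monitoring results directly into the organization's own program, as the enhancement intends.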

References

 

SA-05 System documentation

Activity

  1. Obtain or develop administrator documentation for the system, system component, or system service that describes
    1. secure configuration, installation, and operation of the system, component, or service
    2. effective use and maintenance of security and privacy functions and mechanisms
    3. known vulnerabilities regarding configuration and use of administrative or privileged functions
  2. Obtain or develop user documentation for the system, system component, or system service that describes
    1. user-accessible security and privacy functions and mechanisms and how to effectively use those functions and mechanisms
    2. methods for user interaction, which enable individuals to use the system, component, or service in a more secure manner and protect individual privacy
    3. user responsibilities in maintaining the security of the system, component, or service and privacy of individuals
  3. Document attempts to obtain system, system component, or system service documentation when such documentation is either unavailable or non-existent and take [Assignment: organization-defined actions] in response
  4. Distribute documentation to [Assignment: organization-defined personnel or roles]

Discussion

System artifacts and documentation created by the developer help organizational personnel understand the implementation and operation of controls. Organizations consider establishing specific measures to determine the quality and completeness of the content provided. System documentation may be used to delineate the roles, responsibilities, and expectations of the developer and the organization, and to support supply chain risk management, incident response, and other functions. Personnel or roles that require documentation include system owners, system security officers, and system administrators. Attempts to obtain documentation include contacting manufacturers or suppliers and conducting web-based searches.

The inability to obtain documentation may occur due to the age of the system or component or the lack of support from developers and contractors. When documentation cannot be obtained, organizations may need to recreate the documentation if it is essential to the implementation or operation of the controls.

The protection provided for the documentation is commensurate with the security category or classification of the system. Documentation that addresses system vulnerabilities may require an increased level of protection. Secure operation of the system includes initially starting the system and resuming secure system operation after a lapse in system operation.

Related controls and activities

CM-04, CM-06, CM-07, CM-08, PL-02, PL-04, PL-08, PS-02, SA-03, SA-04, SA-08, SA-09, SA-10, SA-11, SA-15, SA-16, SA-17, SI-12, SR-03.

Enhancements

  • (01) System documentation: Functional properties of security controls
    • Withdrawn: Incorporated into SA-04(01).
  • (02) System documentation: Security-relevant external system interfaces
    • Withdrawn: Incorporated into SA-04(02).
  • (03) System documentation: High-level design
    • Withdrawn: Incorporated into SA-04(02).
  • (04) System documentation: Low-level design
    • Withdrawn: Incorporated into SA-04(02).
  • (05) System documentation: Source code
    • Withdrawn: Incorporated into SA-04(02).

References

None.

 

SA-06 Software usage restrictions

Withdrawn: Incorporated into CM-10 and SI-07.

 

SA-07 User-installed software

Withdrawn: Incorporated into CM-11 and SI-07.

 

SA-08 Security and privacy engineering principles

Control

Apply the following systems security and privacy engineering principles in the specification, design, development, implementation, and modification of the system and system components: [Assignment: organization-defined systems security and privacy engineering principles].

Discussion

Systems security and privacy engineering principles are closely related to and implemented throughout the system development lifecycle (see SA-03). Organizations can apply systems security and privacy engineering principles to new systems under development or to systems undergoing upgrades. For existing systems, organizations apply systems security and privacy engineering principles to system upgrades and modifications to the extent feasible, given the current state of hardware, software, and firmware components within those systems.

The application of systems security and privacy engineering principles helps organizations develop trustworthy, secure, and resilient systems and reduces the susceptibility to disruptions, hazards, threats, and the creation of privacy problems for individuals. Examples of system security engineering principles include developing layered protections; establishing security and privacy policies, architecture, and controls as the foundation for design and development; incorporating security and privacy requirements into the system development lifecycle; delineating physical and logical security boundaries; ensuring that developers are trained on how to build secure software; tailoring controls to meet organizational needs; and performing threat modelling to identify use cases, threat agents, attack vectors and patterns, design patterns, and compensating controls needed to mitigate risk.

Organizations that apply systems security and privacy engineering concepts and principles can facilitate the development of trustworthy, secure systems, system components, and system services; reduce risk to acceptable levels; and make informed risk management decisions. System security engineering principles can also be used to protect against certain supply chain risks, including incorporating tamper-resistant hardware into a design.

GC discussion

There are 10 fair information principles that guide privacy protections for federal programs. In general, organizations need policies that govern privacy practices; limit the collection, use, disclosure, and retention of personal information; and ensure individuals are informed about the handling of their personal information and have the right to access it.

Related controls and activities

PL-08, PM-07, RA-02, RA-03, RA-09, SA-03, SA-04, SA-15, SA-17, SA-20, SC-02, SC-03, SC-32, SC-39, SI-400, SR-02, SR-03, SR-04, SR-05.

Enhancements

  • (01) Security and privacy engineering principles: Clear abstractions
    • Implement the security design principle of clear abstractions.
    • Discussion: The principle of clear abstractions states that a system has simple, well-defined interfaces and functions that provide a consistent and intuitive view of the data and how the data is managed. The clarity, simplicity, necessity, and sufficiency of the system interfaces — combined with a precise definition of their functional behaviour — promote ease of analysis, inspection, and testing, as well as the correct and secure use of the system. The clarity of an abstraction is subjective.
      Examples that reflect the application of this principle include avoiding redundant, unused interfaces; information hiding; and avoiding semantic overloading of interfaces or their parameters. Information hiding (i.e., representation-independent programming) is a design discipline used to ensure that the internal representation of information in one system component is not visible to another system component invoking or calling the first component, such that the published abstraction is not influenced by how the data may be managed internally.
    • Related controls and activities: None.
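As an illustration only (not part of the control text), the information-hiding discipline described above can be sketched in a few lines: callers use a small, stable interface, while the internal representation remains private and free to change. The class and its methods below are hypothetical.

```python
# Hypothetical sketch of information hiding (representation-independent
# programming): callers see a simple interface, never the internal storage.

class TemperatureLog:
    """Records temperatures in Celsius; the storage format is private."""

    def __init__(self):
        self._samples = []  # internal representation, free to change

    def record(self, celsius: float) -> None:
        # Stored internally in tenths of a degree; callers never depend on this.
        self._samples.append(round(celsius * 10))

    def maximum(self) -> float:
        return max(self._samples) / 10
```

Because the published abstraction (record/maximum in degrees Celsius) is independent of the internal encoding, the representation can later change without affecting any invoking component.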
  • (02) Security and privacy engineering principles: Least common mechanism
    • Implement the security design principle of least common mechanism in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of least common mechanism states that the amount of mechanism common to more than one user and depended on by all users is minimized (Popek, 1974). Mechanism minimization implies that different components of a system refrain from using the same mechanism to access a system resource.
      Every shared mechanism (especially a mechanism involving shared variables) represents a potential information path between users and is designed with care to ensure that it does not unintentionally compromise security (Saltzer and Schroeder, 1975). Implementing the principle of least common mechanism helps to reduce the adverse consequences of sharing the system state among different programs.
      A single program that corrupts a shared state (including shared variables) has the potential to corrupt other programs that are dependent on the state. The principle of least common mechanism also supports the principle of simplicity of design and addresses the issue of covert storage channels (Lampson, 1973).
    • Related controls and activities: None.
  • (03) Security and privacy engineering principles: Modularity and layering
    • Implement the security design principles of modularity and layering in [Assignment: organization-defined systems or system components].
    • Discussion: The principles of modularity and layering are fundamental across system engineering disciplines. Modularity and layering derived from functional decomposition are effective in managing system complexity by making it possible to comprehend the structure of the system. Modular decomposition, or refinement in system design, is challenging and resists general statements of principle.
      Modularity serves to isolate functions and related data structures into well-defined logical units. Layering allows the relationships of these units to be better understood so that dependencies are clear and undesired complexity can be avoided. The security design principle of modularity extends functional modularity to include considerations based on trust relationships, privilege, and security policy.
      Security-informed modular decomposition includes the allocation of policies to systems in a network, separation of system applications into processes with distinct address spaces, allocation of system policies to layers, and separation of processes into subjects with distinct privileges based on hardware-supported privilege domains.
    • Related controls and activities: SC-02, SC-03.
  • (04) Security and privacy engineering principles: Partially ordered dependencies
    • Implement the security design principle of partially ordered dependencies in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of partially ordered dependencies states that the synchronization, calling, and other dependencies in the system are partially ordered. A fundamental concept in system design is layering, whereby the system is organized into well-defined, functionally related modules or components. The layers are linearly ordered with respect to inter-layer dependencies, such that higher layers are dependent on lower layers.
      While providing functionality to higher layers, some layers can be self-contained and are not dependent on lower layers. Although a partial ordering of all functions in a given system may not be possible, if circular dependencies are constrained to occur within layers, the inherent problems of circularity can be more easily managed.
      Partially ordered dependencies and system layering contribute significantly to the simplicity and coherency of the system design. Partially ordered dependencies also facilitate system testing and analysis.
    • Related controls and activities: None.
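A partial ordering of dependencies can be checked mechanically. The following illustrative function (hypothetical, not derived from the control text) detects the circular dependencies that the principle constrains:

```python
# Illustrative check that inter-component dependencies form a partial
# order, i.e., contain no circular dependencies (back edges).

def has_cycle(deps):
    """deps maps a component to the components it depends on."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: circular dependency found
        visiting.add(node)
        if any(visit(d) for d in deps.get(node, ())):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in list(deps))
```

A strictly layered design (application depends on OS, OS depends on hardware) passes the check; mutually dependent components do not.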
  • (05) Security and privacy engineering principles: Efficiently mediated access
    • Implement the security design principle of efficiently mediated access in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of efficiently mediated access states that policy enforcement mechanisms utilize the least common mechanism available while satisfying stakeholder requirements within expressed constraints. The mediation of access to system resources (i.e., CPU, memory, devices, communication ports, services, infrastructure, data, and information) is often the predominant security function of secure systems. It also enables the realization of protections for the capability provided to stakeholders by the system.
      Mediation of resource access can result in performance bottlenecks if the system is not designed correctly. For example, efficiently mediated access can be achieved by using hardware mechanisms. Once access to a low-level resource such as memory has been obtained, hardware protection mechanisms can ensure that out-of-bounds access does not occur.
    • Related controls and activities: AC-25.
  • (06) Security and privacy engineering principles: Minimized sharing
    • Implement the security design principle of minimized sharing in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of minimized sharing states that no computer resource is shared between system components (e.g., subjects, processes, functions) unless it is absolutely necessary to do so. Minimized sharing helps to simplify system design and implementation.
      To protect user-domain resources from arbitrary active entities, no resource is shared unless that sharing has been explicitly requested and granted. The need for resource sharing can be motivated by the design principle of least common mechanism in the case of internal entities or driven by stakeholder requirements. However, internal sharing is carefully designed to avoid performance, covert storage channel, and timing channel problems.
      Sharing via common mechanism can increase the susceptibility of data and information to unauthorized access, disclosure, use, or modification and can adversely affect the inherent capability provided by the system. To minimize sharing induced by common mechanisms, such mechanisms can be designed to be re-entrant or virtualized to preserve separation. Moreover, the use of global data to share information is carefully scrutinized. The lack of encapsulation may obfuscate relationships among the sharing entities.
    • Related controls and activities: SC-31.
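The re-entrant, per-client alternative to a shared mechanism mentioned above can be illustrated as follows (a hypothetical sketch, not part of the control): each client receives its own unshared state, so one client's corruption cannot propagate to another.

```python
# Hypothetical illustration of minimized sharing: per-client state rather
# than a shared global, preserving separation between clients.

class Counter:
    """Each client gets its own counter; nothing is shared between clients."""

    def __init__(self):
        self._count = 0

    def increment(self):
        self._count += 1
        return self._count

def new_session():
    return Counter()  # a fresh, unshared mechanism per client
```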
  • (07) Security and privacy engineering principles: Reduced complexity
    • Implement the security design principle of reduced complexity in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of reduced complexity states that the system design is as simple and small as possible. A small and simple design is more understandable, more analyzable, and less prone to error. The reduced complexity principle applies to any aspect of a system, but it has particular importance for security due to the various analyses performed to obtain evidence about the emergent security property of the system. For such analyses to be successful, a small and simple design is essential.
      Application of the principle of reduced complexity contributes to the ability of system developers to understand the correctness and completeness of system security functions. It also facilitates the identification of potential vulnerabilities. The corollary of reduced complexity states that the complexity of the system is directly related to the number of vulnerabilities it will contain; that is, simpler systems contain fewer vulnerabilities.
      Benefits of reduced complexity are that it is easier to understand whether the intended security policy has been captured in the system design and that fewer vulnerabilities are likely to be introduced during engineering development. An additional benefit is that any such conclusion about correctness, completeness, and the existence of vulnerabilities can be reached with a higher degree of assurance in contrast to conclusions reached in situations where the system design is inherently more complex.
      Transitioning from older technologies to newer technologies (e.g., transitioning from IPv4 to IPv6) may require implementing the older and newer technologies simultaneously during the transition period. This may result in a temporary increase in system complexity during the transition.
    • Related controls and activities: None.
  • (08) Security and privacy engineering principles: Secure evolvability
    • Implement the security design principle of secure evolvability in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure evolvability states that a system is developed to facilitate the maintenance of its security properties when there are changes to the system’s structure, interfaces, interconnections (i.e., system architecture), functionality, or configuration (i.e., security policy enforcement). Changes include a new, enhanced, or upgraded system capability; maintenance and sustainment activities; and reconfiguration.
      Although it is not possible to plan for every aspect of system evolution, system upgrades and changes can be anticipated by analyses of mission or business strategic direction, anticipated changes in the threat environment, and anticipated maintenance and sustainment needs. It is unrealistic to expect that complex systems remain secure in contexts not envisioned during development, whether such contexts are related to the operational environment or to usage.
      A system may be secure in some new contexts, but there is no guarantee that its emergent behaviour will always be secure. It is easier to build robustness into a system from the outset, and it follows that the sustainment of system robustness requires planning for change as opposed to adapting in an ad hoc or non-methodical manner.
      The benefits of this principle include reduced vendor lifecycle costs, reduced cost of ownership, improved system security, more effective management of security risk, and less risk uncertainty.
    • Related controls and activities: CM-03.
  • (09) Security and privacy engineering principles: Trusted components
    • Implement the security design principle of trusted components in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of trusted components states that a component is both trustworthy and robust to at least a level commensurate with the security dependencies it supports (i.e., how much it is trusted to perform its security functions by other components). This principle enables the composition of components such that trustworthiness and robustness are not inadvertently diminished, and the trust is not consequently misplaced. Ultimately, this principle demands some metric by which the trust in a component and the trustworthiness and robustness of a component can be measured on the same abstract scale.
      The principle of trusted components is particularly relevant when considering systems and components in which there are complex chains of trust dependencies. A trust dependency is also referred to as a trust relationship and there may be chains of trust relationships.
      The principle of trusted components also applies to a compound component that consists of subcomponents (e.g., a subsystem), which may have varying levels of trustworthiness. The conservative assumption is that the trustworthiness of a compound component is that of its least trustworthy subcomponent. It may be possible to provide a security engineering rationale that the trustworthiness of a particular compound component is greater than the conservative assumption. However, any such rationale reflects logical reasoning based on a clear statement of the trustworthiness objectives as well as relevant and credible evidence. The trustworthiness of a compound component is not the same as increased application of defence-in-depth layering within the component or a replication of components. Defence-in-depth techniques do not increase the trustworthiness of the whole above that of the least trustworthy component.
    • Related controls and activities: None.
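The conservative assumption stated above reduces to a simple rule that can be expressed directly (an illustrative sketch; the function name and ordinal scale are hypothetical):

```python
# Sketch of the conservative assumption: absent an engineering rationale,
# a compound component is only as trustworthy as its least trustworthy
# subcomponent (levels are ordinal; higher = more trustworthy).

def compound_trust(subcomponent_levels):
    return min(subcomponent_levels)
```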
  • (10) Security and privacy engineering principles: Hierarchical trust
    • Implement the security design principle of hierarchical trust in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of hierarchical trust for components builds on the principle of trusted components and states that the security dependencies in a system will form a partial ordering if they preserve the principle of trusted components. The partial ordering provides the basis for both trustworthiness and robustness reasoning when composing a secure system from independently developed or sourced components.
      To analyze a system composed of independently developed or sourced components for both trustworthiness and robustness, it is essential to eliminate circular dependencies. If a more trustworthy or robust component located in a lower layer of the system were to depend on a less trustworthy or less robust component in a higher layer, this would reduce the overall trust or the robustness of the system to that of the weaker element that is being depended upon.
      Trust relationships, or chains of trust, can have various manifestations. For example, the root certificate of a certificate hierarchy is the most trusted node in the hierarchy, whereas the leaves in the hierarchy may be less trustworthy nodes. Another example occurs in a layered high-assurance system where the security kernel (including the hardware base), which is located at the lowest layer of the system, is the most trustworthy component.
      The principle of hierarchical trust, however, does not prohibit the use of overly trustworthy components. There may be cases in a system of low trustworthiness where it is reasonable to employ a highly trustworthy component rather than one that is less trustworthy (e.g., due to availability or another cost-benefit driver). For such a case, any dependency of the highly trustworthy component upon a less trustworthy component does not degrade the trustworthiness of the resulting low-trust system, nor does it raise its trustworthiness or robustness.
    • Related controls and activities: None.
  • (11) Security and privacy engineering principles: Inverse modification threshold
    • Implement the security design principle of inverse modification threshold in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of inverse modification threshold builds on the principles of trusted components and of hierarchical trust and states that the degree of protection provided to a component is commensurate with its trustworthiness. As the trust placed in a component increases, the protection against unauthorized modification of the component increases to the same degree.
      Protection from unauthorized modification can come in the form of the component’s own self-protection and innate robustness, or it can come from the protections afforded to the component from other elements or attributes of the security architecture (to include protections in the environment of operation).
    • Related controls and activities: None.
  • (12) Security and privacy engineering principles: Hierarchical protection
    • Implement the security design principle of hierarchical protection in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of hierarchical protection states that a component need not be protected from more trustworthy components. In the degenerate case of the most trusted component, it protects itself from all other components. For example, if an operating system kernel is deemed the most trustworthy component in a system, then it protects itself from all untrusted applications it supports, but the applications, conversely, do not need to protect themselves from the kernel.
      The trustworthiness of users is a consideration for applying the principle of hierarchical protection. A trusted system need not protect itself from an equally trustworthy user, reflecting use of untrusted systems in system high environments where users are highly trustworthy and where other protections are put in place to bound and protect the system high execution environment.
    • Related controls and activities: None.
  • (13) Security and privacy engineering principles: Minimized security elements
    • Implement the security design principle of minimized security elements in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of minimized security elements states that the system does not have extraneous trusted components. The principle of minimized security elements has 2 aspects: the overall cost of security analysis and the complexity of security analysis.
      Trusted components are generally costlier to construct and implement, owing to the increased rigour of development processes and supply chain integrity requirements. Trusted components require greater security analysis to qualify their robustness and enhance supply chain controls to qualify their trustworthiness. Thus, to reduce the cost and decrease the complexity of the security analysis, a system contains as few trusted components as possible.
      The analysis of the interaction of trusted components with other components of the system is one of the most important aspects of system security verification. If the interactions between components are unnecessarily complex, the security of the system will be more difficult to ascertain than that of a system whose internal trust relationships are simple and elegantly constructed. In general, fewer trusted components result in fewer internal trust relationships and a simpler system.
    • Related controls and activities: None.
  • (14) Security and privacy engineering principles: Least privilege
    • Implement the security design principle of least privilege in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of least privilege states that each system component is allocated sufficient privileges to accomplish its specified functions but no more. Applying the principle of least privilege limits the scope of the component’s actions, which has 2 desirable effects: a failure, corruption, or misuse of the component will have a smaller security impact, and the security analysis of the component will be simplified. Least privilege is a pervasive principle that is reflected in all aspects of the secure system design. Interfaces used to invoke component capability are available to only certain subsets of the user population, and component design supports a sufficiently fine granularity of privilege decomposition.
      For example, in the case of an audit mechanism, there may be an interface for the audit manager, who configures the audit settings; an interface for the audit operator, who ensures that audit data is safely collected and stored; and, finally, yet another interface for the audit reviewer, who only needs to view the audit data that has been collected but does not need to perform operations on that data.
      In addition to its manifestations at the system interface, least privilege can be used as a guiding principle for the internal structure of the system itself. One aspect of internal least privilege is to construct modules so that only the elements encapsulated by the module are directly operated on by the functions within the module. Elements external to a module that may be affected by the module’s operation are indirectly accessed through interaction (e.g., via a function call) with the module that contains those elements. Another aspect of internal least privilege is that the scope of a given module or component includes only those system elements that are necessary for its functionality and that the access modes for the elements (e.g., read, write) are minimal. An example of least privilege in software development is minimizing the functions that operate with elevated privileges (for example, limiting the tools and functions that operate in kernel mode).
    • Related controls and activities: AC-06, CM-07.
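The audit example in the discussion above can be sketched as role-scoped interfaces, where each role is handed an object exposing only the operations that role needs (an illustrative sketch; class and method names are hypothetical):

```python
# Hedged sketch of least privilege at the interface level: manager,
# operator, and reviewer each get a distinct, minimal interface.

class AuditStore:
    def __init__(self):
        self.settings = {"level": "basic"}
        self.records = []

class AuditManager:
    """Configures audit settings; does not handle audit data directly."""
    def __init__(self, store):
        self._store = store

    def configure(self, level):
        self._store.settings["level"] = level

class AuditOperator:
    """Collects and stores audit data; cannot reconfigure auditing."""
    def __init__(self, store):
        self._store = store

    def collect(self, record):
        self._store.records.append(record)

class AuditReviewer:
    """Read-only view of audit data; cannot modify settings or records."""
    def __init__(self, store):
        self._store = store

    def view(self):
        return list(self._store.records)  # a copy, so callers cannot mutate
```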
  • (15) Security and privacy engineering principles: Predicate permission
    • Implement the security design principle of predicate permission in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of predicate permission states that system designers consider requiring multiple authorized entities to provide consent before a highly critical operation or access to highly sensitive data, information, or resources is allowed to proceed.
      Saltzer and Schroeder (1975) originally named predicate permission the separation of privilege. It is also equivalent to separation of duty. The division of privilege among multiple parties decreases the likelihood of abuse and provides the safeguard that no single accident, deception, or breach of trust is sufficient to enable an unrecoverable action that can lead to significantly damaging effects.
      The design options for such a mechanism may require simultaneous action (e.g., the firing of a nuclear weapon requires 2 different authorized individuals to give the correct command within a small time window) or a sequence of operations where each successive action is enabled by some prior action, but no single individual is able to enable more than one action.
    • Related controls and activities: AC-05.
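A two-party consent gate of the kind described above (simultaneous authorization within a small time window) can be sketched as follows; the class and its parameters are hypothetical illustrations, not part of the control:

```python
# Illustrative two-person rule: a critical operation is permitted only
# when two distinct authorized parties have approved within a short window.

import time

class TwoPersonRule:
    def __init__(self, authorized, window_seconds=30):
        self.authorized = set(authorized)
        self.window = window_seconds
        self.approvals = {}  # party -> timestamp of most recent approval

    def approve(self, party, now=None):
        if party not in self.authorized:
            raise PermissionError(f"{party} is not authorized")
        self.approvals[party] = time.monotonic() if now is None else now

    def permitted(self, now=None):
        now = time.monotonic() if now is None else now
        recent = [p for p, t in self.approvals.items() if now - t <= self.window]
        return len(set(recent)) >= 2
```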
  • (16) Security and privacy engineering principles: Self-reliant trustworthiness
    • Implement the security design principle of self-reliant trustworthiness in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of self-reliant trustworthiness states that systems minimize their reliance on other systems for their own trustworthiness. A system is trustworthy by default, and any connection to an external entity is used to supplement its function. If a system were required to maintain a connection with another external entity in order to maintain its trustworthiness, then that system would be vulnerable to malicious and non-malicious threats that could result in the loss or degradation of that connection.
      The benefit of the principle of self-reliant trustworthiness is that the isolation of a system will make it less vulnerable to attack. A corollary to this principle relates to the ability of the system (or system component) to operate in isolation and then resynchronize with other components when it is rejoined with them.
    • Related controls and activities: None.
  • (17) Security and privacy engineering principles: Secure distributed composition
    • Implement the security design principle of secure distributed composition in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure distributed composition states that the composition of distributed components that enforce the same system security policy results in a system that enforces that policy at least as well as the individual components do.
      Many of the design principles for secure systems deal with how components can or should interact. The need to create or enable a capability from the composition of distributed components can magnify the relevancy of these principles. In particular, the translation of security policy from a standalone to a distributed system or a system-of-systems can have unexpected or emergent results.
      Communication protocols and distributed data consistency mechanisms help to ensure consistent policy enforcement across a distributed system. To ensure a system-wide level of assurance of correct policy enforcement, the security architecture of a distributed composite system is thoroughly analyzed.
    • Related controls and activities: None.
  • (18) Security and privacy engineering principles: Trusted communications channels
    • Implement the security design principle of trusted communications channels in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of trusted communication channels states that when composing a system where there is a potential threat to communications between components (i.e., the interconnections between components), each communication channel is trustworthy to a level commensurate with the security dependencies it supports (i.e., how much it is trusted by other components to perform its security functions). Trusted communication channels are achieved by a combination of restricting access to the communication channel (to ensure an acceptable match in the trustworthiness and the robustness of the endpoints involved in the communication) and employing end-to-end protections for the data transmitted over the communication channel (to protect against interception and modification and to further increase the assurance of proper end-to-end communication).
    • Related controls and activities: SC-08, SC-12, SC-13.
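One of the end-to-end protections mentioned above — detecting modification of data in transit — can be sketched with a keyed integrity tag using Python's standard `hmac` module (an illustrative fragment only; real trusted channels would typically also provide confidentiality, e.g., via TLS):

```python
# Minimal sketch of end-to-end integrity protection for a channel:
# an HMAC tag over the message makes in-transit modification detectable.

import hashlib
import hmac

def seal(key: bytes, message: bytes) -> bytes:
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return tag + message

def unseal(key: bytes, packet: bytes) -> bytes:
    tag, message = packet[:32], packet[32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("message was modified in transit")
    return message
```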
  • (19) Security and privacy engineering principles: Continuous protection
    • Implement the security design principle of continuous protection in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of continuous protection states that components and data used to enforce the security policy have uninterrupted protection that is consistent with the security policy and the security architecture assumptions. No assurances that the system can provide the confidentiality, integrity, availability, and privacy protections for its design capability can be made if there are gaps in the protection. Any assurances about the ability to secure a delivered capability require that data and information are continuously protected. That is, there are no periods during which data and information are left unprotected while under control of the system (i.e., during the creation, storage, processing, or communication of the data and information, as well as during system initialization, execution, failure, interruption, and shutdown).
      Continuous protection requires adherence to the precepts of the reference monitor concept (i.e., every request is validated by the reference monitor; the reference monitor is able to protect itself from tampering; and sufficient assurance of the correctness and completeness of the mechanism can be ascertained from analysis and testing) and the principle of secure failure and recovery (i.e., preservation of a secure state during error, fault, failure, and successful attack; preservation of a secure state during recovery to normal, degraded, or alternative operational modes).
      Continuous protection also applies to systems designed to operate in varying configurations, including those that deliver full operational capability and degraded-mode configurations that deliver partial operational capability. The continuous protection principle requires that changes to the system security policies be traceable to the operational need that drives the configuration and be verifiable (i.e., it is possible to verify that the proposed changes will not put the system into an insecure state).
      Insufficient traceability and verification may lead to inconsistent states or protection discontinuities due to the complex or undecidable nature of the problem. The use of pre-verified configuration definitions that reflect the new security policy enables analysis to determine that a transition from old to new policies is essentially atomic and that any residual effects from the old policy are guaranteed to not conflict with the new policy. The ability to demonstrate continuous protection is rooted in the clear articulation of lifecycle protection needs as stakeholder security requirements.
    • Related controls and activities: AC-25.
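The reference monitor precepts cited above — every request validated, the mechanism itself protected from tampering — can be sketched in miniature (a hypothetical toy, not a real enforcement mechanism):

```python
# Toy reference-monitor sketch: every access request is validated against
# an immutable policy; there is no unmediated path to the objects.

class ReferenceMonitor:
    def __init__(self, policy):
        # policy: set of (subject, operation, object) triples that are allowed
        self._policy = frozenset(policy)  # immutable, resisting tampering
        self._objects = {}

    def write(self, subject, name, value):
        if (subject, "write", name) not in self._policy:
            raise PermissionError("denied")
        self._objects[name] = value

    def read(self, subject, name):
        if (subject, "read", name) not in self._policy:
            raise PermissionError("denied")
        return self._objects[name]
```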
  • (20) Security and privacy engineering principles: Secure metadata management
    • Implement the security design principle of secure metadata management in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure metadata management states that metadata is a first-class object with respect to security policy when the policy requires either complete protection of information or that the security subsystem be self-protecting. The principle of secure metadata management is driven by the recognition that a system, subsystem, or component cannot achieve self-protection unless it protects the data it relies on for correct execution.
      Data is generally not interpreted by the system that stores it. It may have semantic value (i.e., it comprises information) to users and programs that process the data. In contrast, metadata is information about data, such as a file name or the date when the file was created. Metadata is bound to the target data that it describes in a way that the system can interpret, but it need not be stored inside of or proximate to its target data. There may be metadata whose target is itself metadata (e.g., the classification level or impact level of a file name), including self-referential metadata.
      The apparent secondary nature of metadata can lead to neglect of its legitimate need for protection, resulting in a violation of the security policy that includes the exfiltration of information. A particular concern associated with insufficient protections for metadata is associated with multilevel secure (MLS) systems. MLS systems mediate access by a subject to an object based on relative sensitivity levels. It follows that all subjects and objects in the scope of control of the MLS system are either directly labelled or indirectly attributed with sensitivity levels.
      The corollary of labelled metadata for MLS systems states that objects containing metadata are labelled. As with protection needs assessments for data, attention is given to ensure that the confidentiality and integrity protections are individually assessed, specified, and allocated to metadata, as would be done for mission, business, and system data.
    • Related controls and activities: None.
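To make the idea of metadata as a first-class, labelled object concrete, the following is a minimal sketch of uniform MLS-style mediation over both data and its metadata. All names, labels, and the three-level lattice are hypothetical illustrations, not definitions from this catalogue:

```python
from dataclasses import dataclass, field

# Hypothetical sensitivity lattice, ordered low to high.
LEVELS = {"unclassified": 0, "protected_a": 1, "protected_b": 2}

@dataclass
class LabelledObject:
    """Any object in the policy's scope of control carries a label,
    including metadata objects, which are first-class here."""
    name: str
    level: str
    content: bytes = b""

@dataclass
class FileObject(LabelledObject):
    # Metadata (e.g., the file-name record) is itself a LabelledObject,
    # so the same mediation applies to it as to the target data.
    metadata: list = field(default_factory=list)

def can_read(subject_level: str, obj: LabelledObject) -> bool:
    """Simple MLS read check applied uniformly to data and metadata."""
    return LEVELS[subject_level] >= LEVELS[obj.level]

# A Protected B file whose *name* is itself Protected A information.
doc = FileObject("doc-0001", "protected_b", b"payload",
                 metadata=[LabelledObject("filename:budget-plan", "protected_a")])

assert not can_read("unclassified", doc)              # target data denied
assert not can_read("unclassified", doc.metadata[0])  # metadata denied too
assert can_read("protected_b", doc.metadata[0])
```

The point of the sketch is the corollary stated above: because the file-name record is modelled as a labelled object, it receives its own individually assessed confidentiality protection rather than inheriting neglect as a "secondary" artifact.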
  • (21) Security and privacy engineering principles: Self-analysis
    • Implement the security design principle of self-analysis in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of self-analysis states that a system component is able to assess its internal state and functionality to a limited extent at various stages of execution, and that this self-analysis capability is commensurate with the level of robustness invested in the system. At the system level, self-analysis can be achieved through hierarchical assessments of robustness established in a bottom-up fashion. In this approach, the lower-level components check for data integrity and correct functionality (to a limited extent) of higher-level components. For example, trusted boot sequences involve a trusted lower-level component that attests to the integrity of the next higher-level components so that a transitive chain of trust can be established.
      At the root, a component attests to itself, which usually involves an axiomatic or environmentally enforced assumption about its integrity. Results of the self-analyses can be used to guard against externally induced errors, internal malfunction, or transient errors. By following this principle, some simple malfunctions or errors can be detected without allowing the effects of the error or malfunction to propagate outside of the component. Further, the self-test can be used to attest to the configuration of the component, detecting any potential conflicts in configuration with respect to the expected configuration.
    • Related controls and activities: CA-07.
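The trusted-boot example in the discussion above can be sketched as a hash-chain attestation, in which each lower-level stage verifies the integrity of the next stage before handing off control. The stage names, images, and manifest are illustrative assumptions, not part of any real boot implementation:

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical boot stages; expected digests are recorded at build time.
stage2 = b"os-loader-image"
stage3 = b"kernel-image"

manifest = {
    "stage1_expects": digest(stage2),  # root of trust: stage1's own integrity
    "stage2_expects": digest(stage3),  # is an axiomatic assumption
}

def attest_chain(images, expectations):
    """Each lower-level stage checks the next higher-level stage's integrity
    before transferring control, building a transitive chain of trust."""
    for image, expected in zip(images, expectations):
        if digest(image) != expected:
            return False  # halt: do not hand off to a tampered component
    return True

assert attest_chain([stage2, stage3],
                    [manifest["stage1_expects"], manifest["stage2_expects"]])
assert not attest_chain([b"tampered", stage3],
                        [manifest["stage1_expects"], manifest["stage2_expects"]])
```

Note how the failure is contained: a stage that fails attestation is simply never given control, so the effect of the malfunction does not propagate outside the component, as the principle requires.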
  • (22) Security and privacy engineering principles: Accountability and traceability
    • Implement the security design principle of accountability and traceability in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of accountability and traceability states that it is possible to trace security-relevant actions (i.e., subject-object interactions) to the entity on whose behalf the action is being taken. The principle of accountability and traceability requires a trustworthy infrastructure that can record details about actions that affect system security (e.g., an audit subsystem).
      To record the details about actions, the system is able to uniquely identify the entity on whose behalf the action is being carried out and also record the relevant sequence of actions that is carried out. The accountability policy also requires that the audit trail itself be protected from unauthorized access and modification.
      The principle of least privilege assists in tracing actions to particular entities, as it increases the granularity of accountability. Associating specific actions with system entities, and ultimately with users, and making the audit trail secure against unauthorized access and modification provide non-repudiation because once an action is recorded, it is not possible to change the audit trail.
      Another important function that accountability and traceability serves is in the routine and forensic analysis of events associated with the violation of security policy. Analysis of audit logs may provide additional information that may be helpful in determining the path or component that allowed the violation of the security policy and the actions of individuals associated with the violation of the security policy.
    • Related controls and activities: AC-06, AU-02, AU-03, AU-06, AU-09, AU-10, AU-12, IA-02, IR-04.
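A minimal way to bind actions to entities while making after-the-fact modification detectable is a hash-chained, append-only audit record. The sketch below is a simplified illustration of that pattern (field names and the tamper scenario are invented for the example); a real audit subsystem would also need access controls on the log itself:

```python
import hashlib
import json

class AuditTrail:
    """Append-only audit log: each record binds the acting entity to the
    action and chains to the previous record's digest, so any later
    modification breaks the chain and is detectable."""

    def __init__(self):
        self.records = []
        self._last = "0" * 64  # genesis digest

    def record(self, subject: str, action: str, obj: str):
        entry = {"subject": subject, "action": action, "object": obj,
                 "prev": self._last}
        self._last = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["digest"] = self._last
        self.records.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.records:
            body = {k: e[k] for k in ("subject", "action", "object", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["digest"] != recomputed:
                return False
            prev = e["digest"]
        return True

log = AuditTrail()
log.record("alice", "read", "file-42")
log.record("bob", "delete", "file-42")
assert log.verify()
log.records[0]["subject"] = "mallory"  # attempted after-the-fact tampering
assert not log.verify()
```

Because each digest covers the previous one, rewriting any single record invalidates every record after it, which is what supports the forensic analysis described in the discussion.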
  • (23) Security and privacy engineering principles: Secure defaults
    • Implement the security design principle of secure defaults in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure defaults states that the default configuration of a system (including its constituent subsystems, components, and mechanisms) reflects a restrictive and conservative enforcement of security policy. The principle of secure defaults applies to the initial (i.e., default) configuration of a system as well as to the security engineering and design of access control and other security functions that follow a deny-unless-explicitly-authorized strategy.
      The initial configuration aspect of this principle requires that any as-shipped configuration of a system, subsystem, or system component does not aid in the violation of the security policy and can prevent the system from operating in the default configuration for those cases where the security policy itself requires configuration by the operational user.
      Restrictive defaults mean that the system will operate as shipped with adequate self-protection and will be able to prevent security breaches before the intended security policy and system configuration is established. In cases where the protection provided by the as-shipped product is inadequate, stakeholders assess the risk of using it prior to establishing a secure initial state.
      Adherence to the principle of secure defaults guarantees that a system is established in a secure state upon successfully completing initialization. In situations where the system fails to complete initialization, it will perform a requested operation using secure defaults or it will not perform the operation. Refer to the principles of continuous protection and secure failure and recovery that parallel this principle to provide the ability to detect and recover from failure.
      The security engineering approach to this principle states that security mechanisms deny requests unless the request is found to be well-formed and consistent with the security policy. The insecure alternative is to allow a request unless it is shown to be inconsistent with the policy. In a large system, the conditions that are satisfied to grant a request that is denied by default are often far more compact and complete than those that would need to be checked in order to deny a request that is granted by default.
    • Related controls and activities: CM-02, CM-06, SA-04.
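The deny-unless-explicitly-authorized strategy described above can be sketched in a few lines: the security mechanism checks membership in a compact allow set rather than attempting to enumerate everything that should be forbidden. Subjects, actions, and objects here are hypothetical placeholders:

```python
# Hypothetical access rules: only explicitly granted
# (subject, action, object) tuples are permitted.
ALLOW = {
    ("backup-svc", "read", "database"),
    ("admin", "configure", "firewall"),
}

def authorize(subject: str, action: str, obj: str) -> bool:
    """Deny by default: the condition checked is the compact allow set,
    not an open-ended list of requests to refuse."""
    return (subject, action, obj) in ALLOW

assert authorize("backup-svc", "read", "database")
assert not authorize("backup-svc", "write", "database")  # never granted
assert not authorize("guest", "read", "database")        # unknown subject
```

The insecure alternative, a deny list, fails open for any request the designer did not anticipate; the allow set above fails closed, which matches the principle's observation that allow conditions are typically more compact and complete.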
  • (24) Security and privacy engineering principles: Secure failure and recovery
    • Implement the security design principle of secure failure and recovery in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure failure and recovery states that neither a failure in a system function or mechanism nor any recovery action in response to failure leads to a violation of security policy. The principle of secure failure and recovery parallels the principle of continuous protection to ensure that a system is capable of detecting (within limits) actual and impending failure at any stage of its operation (i.e., initialization, normal operation, shutdown, and maintenance) and to take appropriate steps to ensure that security policies are not violated. In addition, when specified, the system is capable of recovering from impending or actual failure to resume normal, degraded, or alternative secure operations while ensuring that a secure state is maintained such that security policies are not violated.
      Failure is a condition in which the behaviour of a component deviates from its specified or expected behaviour for an explicitly documented input. Once a failed security function is detected, the system may reconfigure itself to circumvent the failed component while maintaining security and provide all or part of the functionality of the original system, or it may completely shut itself down to prevent any further violation of security policies. For this to occur, the reconfiguration functions of the system are designed to ensure continuous enforcement of security policy during the various phases of reconfiguration.
      Another technique that can be used to recover from failures is to perform a rollback to a secure state (which may be the initial state) and then either shut down or replace the service or component that failed such that secure operations may resume. Failure of a component may or may not be detectable to the components using it. The principle of secure failure indicates that components fail in a state that denies rather than grants access.
      For example, a nominally atomic operation interrupted before completion does not violate security policy and is designed to handle interruption events by employing higher-level atomicity and rollback mechanisms (e.g., transactions). If a service is being used, its atomicity properties are well-documented and characterized so that the component availing itself of that service can detect and handle interruption events appropriately. For example, a system is designed to gracefully respond to disconnection and support resynchronization and data consistency after disconnection.
      Failure protection strategies that employ replication of policy enforcement mechanisms, sometimes called defence in depth, can allow the system to continue in a secure state even when one mechanism has failed to protect the system. If the mechanisms are similar, however, the additional protection may be illusory, as the adversary can simply attack in series. Similarly, in a networked system, breaking the security on one system or service may enable an attacker to do the same on other similar replicated systems and services.
      By employing multiple protection mechanisms whose features are significantly different, the possibility of attack replication or repetition can be reduced. Analyses are conducted to weigh the costs and benefits of such redundancy techniques against increased resource usage and adverse effects on the overall system performance.
      Additional analyses are conducted as the complexity of these mechanisms increases, as could be the case for dynamic behaviours. Increased complexity generally reduces robustness. When a resource cannot be continuously protected, it is critical to detect and repair any security breaches before the resource is once again used in a secure context.
    • Related controls and activities: CP-10, CP-12, SC-07, SC-08, SC-24, SI-13.
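The rollback-to-a-secure-state technique described above can be illustrated with an atomic, transactional update: if any step of a multi-step policy change fails, the system reverts to the last committed state instead of operating with a partial policy. This sketch uses Python's built-in `sqlite3` as a stand-in for protected system state; the table and rule data are invented for the example:

```python
import sqlite3

# In-memory database standing in for protected system state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acl (subject TEXT, object TEXT)")
conn.execute("INSERT INTO acl VALUES ('admin', 'firewall')")
conn.commit()  # last known secure state

def apply_policy_update(conn, rows):
    """Apply a multi-step update atomically: on any failure, roll back to
    the previously committed (secure) state rather than leaving the
    enforcement mechanism with a partially applied policy."""
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute("DELETE FROM acl")
            for subject, obj in rows:
                if subject is None:
                    raise ValueError("malformed rule")  # simulated failure
                conn.execute("INSERT INTO acl VALUES (?, ?)", (subject, obj))
        return True
    except ValueError:
        return False

# A failed update leaves the original secure state intact.
ok = apply_policy_update(conn, [("ops", "router"), (None, "firewall")])
assert not ok
assert conn.execute("SELECT COUNT(*) FROM acl").fetchone()[0] == 1
```

This is the "nominally atomic operation" case from the discussion: the interruption (here, a malformed rule) is handled by a higher-level atomicity and rollback mechanism so that security policy enforcement is never in an inconsistent intermediate state.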
  • (25) Security and privacy engineering principles: Economic security
    • Implement the security design principle of economic security in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of economic security states that security mechanisms are not costlier than the potential damage that could occur from a security breach. This is the security-relevant form of the cost-benefit analyses used in risk management. The cost assumptions of cost-benefit analysis prevent the system designer from incorporating security mechanisms of greater strength than necessary, where strength of mechanism is proportional to cost. The principle of economic security also requires analysis of the benefits of assurance relative to the cost of that assurance in terms of the effort expended to obtain relevant and credible evidence.
    • Related controls and activities: RA-03.
  • (26) Security and privacy engineering principles: Performance security
    • Implement the security design principle of performance security in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of performance security states that security mechanisms are constructed so that they do not degrade system performance unnecessarily. Stakeholder and system design requirements for performance and security are precisely articulated and prioritized. For the system implementation to meet its design requirements and be found acceptable to stakeholders (i.e., validation against stakeholder requirements), the designers adhere to the specified constraints that capability performance needs place on protection needs.
      The overall impact of computationally intensive security services (e.g., cryptography) is assessed and demonstrated to pose no significant impact to higher-priority performance considerations or is deemed to provide an acceptable trade-off of performance for trustworthy protection. The trade-off considerations include the use of less computationally intensive security services unless such services are unavailable or insufficient. The insufficiency of a security service is determined by functional capability and strength of mechanism. The strength of mechanism is selected with respect to security requirements, performance-critical overhead issues (e.g., cryptographic key management), and an assessment of the capability of the threat.
      The principle of performance security leads to the incorporation of features that help in the enforcement of security policy but incur minimum overhead, such as low-level hardware mechanisms upon which higher-level services can be built. Such low-level mechanisms are usually very specific, have very limited functionality, and are optimized for performance. For example, once access rights to a portion of memory are granted, many systems use hardware mechanisms to ensure that all further accesses involve the correct memory address and access mode.
      Application of this principle reinforces the need to design security into the system from the ground up and to incorporate simple mechanisms at the lower layers that can be used as building blocks for higher-level mechanisms.
    • Related controls and activities: SC-12, SC-13, SI-02, SI-07.
  • (27) Security and privacy engineering principles: Human-factored security
    • Implement the security design principle of human-factored security in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of human-factored security states that the user interface for security functions and supporting services is intuitive, user-friendly, and provides feedback for user actions that affect such policy and its enforcement. The mechanisms that enforce security policy are not intrusive to the user and are designed not to degrade user efficiency.
      Security policy enforcement mechanisms also provide the user with meaningful, clear, and relevant feedback and warnings when insecure choices are being made. Particular attention is given to interfaces through which personnel responsible for system administration and operation configure and set up the security policies. Ideally, these personnel are able to understand the impact of their choices.
      Personnel with system administrative and operational responsibilities are able to configure systems before start-up and administer them during runtime with confidence that their intent is correctly mapped to the system’s mechanisms. Security services, functions, and mechanisms do not impede or unnecessarily complicate the intended use of the system. There is a trade-off between system usability and the strictness necessary for security policy enforcement. If security mechanisms are frustrating or difficult to use, then users may disable them, avoid them, or use them in ways inconsistent with the security requirements and protection needs that the mechanisms were designed to satisfy.
    • Related controls and activities: None.
  • (28) Security and privacy engineering principles: Acceptable security
    • Implement the security design principle of acceptable security in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of acceptable security requires that the level of privacy and performance that the system provides is consistent with the users’ expectations. The perception of personal privacy may affect user behaviour, morale, and effectiveness. Based on the organizational privacy policy and the system design, users should be able to restrict their actions to protect their privacy. When systems fail to provide intuitive interfaces or to meet privacy and performance expectations, users may either choose to completely avoid the system or use it in ways that may be inefficient or even insecure.
    • Related controls and activities: None.
  • (29) Security and privacy engineering principles: Repeatable and documented procedures
    • Implement the security design principle of repeatable and documented procedures in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of repeatable and documented procedures states that the techniques and methods employed to construct a system component permit the same component to be completely and correctly reconstructed at a later time. Repeatable and documented procedures support the development of a component that is identical to the component created earlier, which may be in widespread use.
      In the case of other system artifacts (e.g., documentation and testing results), repeatability supports consistency and the ability to inspect the artifacts. Repeatable and documented procedures can be introduced at various stages within the system development lifecycle and can contribute to the ability to evaluate assurance claims for the system. Examples include systematic procedures for code development and review, procedures for the configuration management of development tools and system artifacts, and procedures for system delivery.
    • Related controls and activities: CM-01, SA-01, SA-10, SA-11, SA-15, SA-17, SC-01, SI-01.
  • (30) Security and privacy engineering principles: Procedural rigour
    • Implement the security design principle of procedural rigour in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of procedural rigour states that the rigour of a system lifecycle process is commensurate with its intended trustworthiness and robustness. Procedural rigour defines the scope, depth, and detail of the system lifecycle procedures. Rigorous system lifecycle procedures contribute to the assurance that the system is correct and free of unintended functionality in several ways. First, the procedures impose checks and balances on the lifecycle process such that the introduction of unspecified functionality is prevented.
      Second, rigorous procedures applied to systems security engineering activities that produce specifications and other system design documents contribute to the ability to understand the system as it has been built rather than trusting that the component, as implemented, is the authoritative (and potentially misleading) specification.
      Finally, it is easier to make modifications to an existing system component when there are detailed specifications that describe its current design than it is to study source code or schematics to try to understand how the component works.
      Procedural rigour helps ensure that security functional and assurance requirements have been satisfied, and it contributes to a better-informed basis for determining trustworthiness and robustness. Procedural rigour is commensurate with the degree of assurance desired for the system. If the required assurance of the system is low, a high level of procedural rigour may add unnecessary cost, whereas when high assurance is critical, the cost of high procedural rigour is merited.
    • Related controls and activities: None.
  • (31) Security and privacy engineering principles: Secure system modification
    • Implement the security design principle of secure system modification in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of secure system modification states that system modification maintains system security with respect to the security requirements and risk tolerance of stakeholders. Upgrades or modifications to systems can transform secure systems into systems that are not secure. The procedures for system modification ensure that if the system is to maintain its trustworthiness and robustness, the same rigour that was applied to its initial development is applied to any system changes. Because modifications can affect the ability of the system to maintain its secure state, a careful security analysis of the modification is needed prior to its implementation and deployment. This principle parallels the principle of secure evolvability.
    • Related controls and activities: CM-03, CM-04.
  • (32) Security and privacy engineering principles: Sufficient documentation
    • Implement the security design principle of sufficient documentation in [Assignment: organization-defined systems or system components].
    • Discussion: The principle of sufficient documentation states that organizational personnel with responsibilities to interact with the system are provided with adequate documentation and other information such that the personnel contribute to, rather than detract from, system security. Despite attempts to comply with principles such as human-factored security and acceptable security, systems are inherently complex, and the design intent for the use of security mechanisms and the ramifications of the misuse or misconfiguration of security mechanisms are not always obvious.
      Uninformed and insufficiently trained users can introduce vulnerabilities due to errors of omission and commission. The availability of documentation and training can help to ensure a knowledgeable cadre of personnel, all of whom have a critical role in the achievement of principles such as continuous protection. Documentation is written clearly and supported by training that provides security awareness and understanding of security-relevant responsibilities.
    • Related controls and activities: AT-02, AT-03, SA-05.
  • (33) Security and privacy engineering principles: Minimization
    • Implement the privacy principle of minimization using [Assignment: organization-defined processes].
    • Discussion: The principle of minimization states that organizations must collect personal or sensitive information only if it relates directly to an operating program or activity of the institution. Along the same lines, organizations must retain personal or sensitive information only for as long as is necessary to accomplish that purpose. In the event that personal or sensitive information is required to be shared with other programs or organizations, the personal or sensitive information shared must be limited to that which is expressly required by the recipient program or organization. Organizations should have processes in place, consistent with applicable laws and policies, to implement the principle of minimization.
    • GC discussion: This enhancement is mandatory for GC departments and agencies.
    • Related controls and activities: PE-08, PM-25, SC-42, SI-12.
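A retention-limit purge is one concrete process that can support this principle: records are kept only while they remain within the retention period defined for their collecting purpose, and records whose purpose is not tied to a recognized program are not kept at all. The purposes, periods, and record layout below are hypothetical illustrations, not policy values:

```python
from datetime import date, timedelta

# Hypothetical per-purpose retention limits.
RETENTION = {
    "benefits-claim": timedelta(days=730),
    "newsletter": timedelta(days=90),
}

def purge_expired(records, today):
    """Keep only records still within the retention period for their
    purpose; records without a recognized purpose are dropped, mirroring
    collect-only-if-directly-related."""
    kept = []
    for r in records:
        limit = RETENTION.get(r["purpose"])
        if limit is not None and today - r["collected"] <= limit:
            kept.append(r)
    return kept

records = [
    {"name": "A", "purpose": "benefits-claim", "collected": date(2024, 1, 1)},
    {"name": "B", "purpose": "newsletter", "collected": date(2023, 1, 1)},
    {"name": "C", "purpose": "marketing-extra", "collected": date(2024, 6, 1)},
]
kept = purge_expired(records, today=date(2024, 6, 30))
assert [r["name"] for r in kept] == ["A"]
```

In practice such a purge would run on a schedule defined by the organization's retention and disposition standards, with disposal itself logged for accountability.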
  • (400) Security and privacy engineering principles: Licensed and certified engineers
    • Employ licensed and certified security engineers who assume responsibility for the specification, design, development, and implementation of information system security and privacy solutions.
    • Discussion: None.
    • Related controls and activities: None.

References


SA-09 External system services

Control

  1. Require that providers of external system services comply with organizational security and privacy requirements and employ the following controls: [Assignment: organization-defined controls]
  2. Define and document organizational oversight and user roles and responsibilities with regard to external system services
  3. Employ the following processes, methods, and techniques to monitor control compliance by external service providers on an ongoing basis: [Assignment: organization-defined processes, methods, and techniques]

Discussion

External system services are provided by an external provider, and the organization has no direct control over the implementation of the required controls or the assessment of control effectiveness. Organizations establish relationships with external service providers in a variety of ways, including through business partnerships, contracts, inter-agency agreements, lines of business arrangements, licensing agreements, joint ventures, and supply chain exchanges.
The responsibility for managing risks from using external system services remains with authorizing officials. For services external to organizations, a chain of trust requires that organizations establish and retain a certain level of confidence that each provider in the consumer-provider relationship provides adequate protection for the services rendered. The extent and nature of this chain of trust vary based on relationships between organizations and the external providers. Organizations document the basis for the trust relationships so that the relationships can be monitored.
External system services documentation includes government, service provider, and end-user security roles and responsibilities, as well as service-level agreements. Service-level agreements define the performance expectations for implemented controls, describe measurable outcomes, and identify remedies and response requirements for identified instances of non-compliance.

Related controls and activities

AC-20, CA-03, CP-02, IR-04, IR-07, PL-10, PL-11, PS-07, SA-02, SA-04, SR-03, SR-05, SA-400.

Enhancements

  • (01) External system services: Risk assessments and organizational approvals
      1. Conduct an organizational risk assessment prior to acquiring or outsourcing information security services
      2. Verify that the acquisition or outsourcing of dedicated information security services is approved by [Assignment: organization-defined personnel or roles]
    • Discussion: Information security services include the operation of security devices, such as firewalls or key management services, as well as incident monitoring, analysis, and response. Risks assessed can include system, mission or business, security, privacy, or supply chain risks.
    • Related controls and activities: CA-06, RA-03, RA-08.
  • (02) External system services: Identification of functions, ports, protocols, and services
    • Require providers of the following external system services to identify the functions, ports, protocols, and other services required for the use of such services: [Assignment: organization-defined external system services].
    • Discussion: Information from external service providers regarding the specific functions, ports, protocols, and services used in the provision of such services can be useful when the need arises to understand the trade-offs involved in restricting certain functions and services or blocking certain ports and protocols.
    • Related controls and activities: CM-06, CM-07.
  • (03) External system services: Establish and maintain trust relationship with providers
    • Establish, document, and maintain trust relationships with external service providers based on the following requirements, properties, factors, or conditions: [Assignment: organization-defined security and privacy requirements, properties, factors, or conditions defining acceptable trust relationships].
    • Discussion: Trust relationships between organizations and external service providers reflect the degree of confidence that the risk from using external services is at an acceptable level. Trust relationships can help organizations gain increased levels of confidence that service providers are providing adequate protection for the services rendered. They can also be useful when conducting incident response or when planning for upgrades or obsolescence.
      Trust relationships can be complicated due to the potentially large number of entities participating in the consumer-provider interactions, subordinate relationships and levels of trust, and types of interactions between the parties. In some cases, the degree of trust is based on the level of control that organizations can exert on external service providers regarding the controls necessary for the protection of the service, information, or individual privacy and on the evidence brought forth as to the effectiveness of the implemented controls. The level of control is established by the terms and conditions of the contracts or service-level agreements.
    • Related controls and activities: SR-02.
  • (04) External system services: Consistent interests of consumers and providers
    • Take the following actions to verify that the interests of [Assignment: organization-defined external service providers] are consistent with and reflect organizational interests: [Assignment: organization-defined actions].
    • Discussion: As organizations increasingly use external service providers, it is possible that the interests of the service providers may diverge from organizational interests. In such situations, simply having the required technical, management, or operational controls in place may not be sufficient if the providers that implement and manage those controls are not operating in a manner consistent with the interests of the consuming organizations. Actions that organizations can take to address such concerns include requiring background checks for selected service provider personnel; examining ownership records; employing only trustworthy service providers, such as providers with which organizations have had successful trust relationships; and conducting routine, periodic, and unscheduled visits to service provider facilities.
    • Related controls and activities: None.
  • (05) External system services: Processing, storage, and service location
    • Restrict the location of [Selection (1 or more): information processing; information or data; system services] to [Assignment: organization-defined locations] based on [Assignment: organization-defined requirements or conditions].
    • Discussion: The location of information processing, information and data storage, or system services can have a direct impact on the ability of organizations to successfully execute their mission and business functions. The impact occurs when external providers control the location of processing, storage, or services. The criteria that external providers use to select processing, storage, or service locations may be different from the criteria that organizations use.
      For example, organizations may want data or information storage locations to be restricted to certain locations to facilitate incident response activities in case of information security incidents or breaches. Incident response activities, including forensic analyses and after-the-fact investigations, may be adversely affected by the governing laws, policies, or protocols in the locations where processing and storage occur and/or the locations from which system services emanate.
    • Related controls and activities: SA-05, SA-400, SR-04.
  • (06) External system services: Organization-controlled cryptographic keys
    • Maintain exclusive control of cryptographic keys for encrypted material stored or transmitted through an external system.
    • Discussion: Maintaining exclusive control of cryptographic keys in an external system prevents decryption of organizational data by external system staff. Organizational control of cryptographic keys can be implemented by encrypting and decrypting data inside the organization as data is sent to and received from the external system or by employing a component that permits encryption and decryption functions to be local to the external system but allows exclusive organizational access to the encryption keys.
    • Related controls and activities: SC-12, SC-13, SI-04.
  • (07) External system services: Organization-controlled integrity checking
    • Provide the capability to check the integrity of information while it resides in the external system.
    • Discussion: Storage of organizational information in an external system could limit visibility into the security status of its data. The ability of the organization to verify and validate the integrity of its stored data without transferring it out of the external system provides such visibility.
    • Related controls and activities: SI-07.
  • (08) External system services: Processing and storage location — within Canada
    • Restrict the geographic location of information processing and data storage to facilities located within the legal jurisdictional boundary of Canada.
    • Discussion: The geographic location of information processing and data storage can have a direct impact on the ability of organizations to successfully execute their mission and business functions. A compromise or breach of high-impact information and systems can have severe adverse or catastrophic impacts on organizational assets and operations, individuals, other organizations, and Canada. Restricting the processing and storage of high-impact information to facilities within the legal jurisdictional boundary of Canada provides greater control over such processing and storage.
    • Related controls and activities: SA-05, SA-400, SR-04.
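The organization-controlled key and integrity-checking patterns in enhancements (06) and (07) can be illustrated with a minimal sketch. All names and structure below are assumptions for illustration, not part of the control: the organization retains an HMAC key and records a keyed digest of each object before it is sent to the external system; because the key never leaves the organization, external-system staff cannot forge a matching tag.

```python
import hashlib
import hmac
import secrets

class IntegrityLedger:
    """Records HMAC-SHA256 tags for objects at upload time and verifies
    objects against them later. The key stays with the organization, so
    an external party cannot forge a tag that matches altered data."""

    def __init__(self, key: bytes):
        self._key = key
        self._tags: dict[str, str] = {}

    def record(self, object_id: str, data: bytes) -> None:
        # Capture the keyed digest before the object leaves the organization.
        self._tags[object_id] = hmac.new(self._key, data, hashlib.sha256).hexdigest()

    def verify(self, object_id: str, data: bytes) -> bool:
        expected = self._tags.get(object_id)
        if expected is None:
            return False
        actual = hmac.new(self._key, data, hashlib.sha256).hexdigest()
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(expected, actual)

key = secrets.token_bytes(32)
ledger = IntegrityLedger(key)
ledger.record("report.pdf", b"original contents")
print(ledger.verify("report.pdf", b"original contents"))  # True
print(ledger.verify("report.pdf", b"tampered contents"))  # False
```

This sketch covers integrity only; satisfying enhancement (06) in practice also requires confidentiality, typically via authenticated encryption (e.g., AES-GCM) with organization-held keys.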

References

 

SA-10 Developer configuration management

Activity: Require the developer of the system, system component, or system service to:

  1. perform configuration management during system, component, or service [Selection (1 or more): design; development; implementation; operation; disposal]
  2. document, manage, and control the integrity of changes to [Assignment: organization-defined configuration items under configuration management]
  3. implement only organization-approved changes to the system, component, or service
  4. document approved changes to the system, component, or service and the potential security and privacy impacts of such changes
  5. track security flaws and flaw resolution within the system, component, or service and report findings to [Assignment: organization-defined personnel]

Discussion

Organizations consider the quality and completeness of configuration management activities conducted by developers as direct evidence that effective security controls have been applied. Controls include protecting the master copies of material used to generate security-relevant portions of the system hardware, software, and firmware from unauthorized modification or destruction. Maintaining the integrity of changes to the system, system component, or system service requires strict configuration control throughout the system development lifecycle to track authorized changes and prevent unauthorized changes.

The configuration items that are placed under configuration management include the formal model; the functional, high-level, and low-level design specifications; other design data; implementation documentation; source code and hardware schematics; the current running version of the object code; tools for comparing new versions of security-relevant hardware descriptions and source code with previous versions; and test fixtures and documentation.

Depending on the mission and business needs of organizations and the nature of the contractual relationships in place, developers may provide configuration management support during the operations and maintenance stage of the system development lifecycle.
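The strict configuration control described above can be sketched as a digest baseline over configuration items that detects modified, added, or removed items between versions. This is an illustrative outline under stated assumptions (item names and contents are invented), not a substitute for a configuration management system:

```python
import hashlib

def baseline(items: dict[str, bytes]) -> dict[str, str]:
    """Capture a SHA-256 digest for each configuration item."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in items.items()}

def diff(base: dict[str, str], items: dict[str, bytes]) -> dict[str, list[str]]:
    """Compare current configuration items against the recorded baseline."""
    current = baseline(items)
    return {
        "modified": sorted(n for n in base if n in current and current[n] != base[n]),
        "added": sorted(current.keys() - base.keys()),
        "removed": sorted(base.keys() - current.keys()),
    }

approved = {"main.c": b"int main(void){return 0;}", "build.cfg": b"opt=2"}
snapshot = baseline(approved)
# An unauthorized edit and an unapproved file appear later:
later = {"main.c": b"int main(void){return 1;}", "build.cfg": b"opt=2", "patch.bin": b"\x90"}
print(diff(snapshot, later))
# {'modified': ['main.c'], 'added': ['patch.bin'], 'removed': []}
```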

Related controls and activities

CM-02, CM-03, CM-04, CM-07, CM-09, SA-04, SA-05, SA-08, SA-15, SI-02, SR-03, SR-04, SR-05, SR-06.

Enhancements

  • (01) Developer configuration management: Software and firmware integrity verification
    • Require the developer of the system, system component, or system service to enable integrity verification of software and firmware components.
    • Discussion: Software and firmware integrity verification allows organizations to detect unauthorized changes to software and firmware components using developer-provided tools, techniques, and mechanisms. The integrity checking mechanisms can also address counterfeiting of software and firmware components. Organizations verify the integrity of software and firmware components, for example, through secure one-way hashes provided by developers. Delivered software and firmware components also include any updates to such components.
    • Related controls and activities: SI-07, SR-11.
  • (02) Developer configuration management: Alternative configuration management processes
    • Provide an alternate configuration management process using organizational personnel in the absence of a dedicated developer configuration management team.
    • Discussion: Alternate configuration management processes may be required when organizations use COTS IT products. Alternate configuration management processes include organizational personnel who review and approve proposed changes to systems, system components, and system services and who conduct security and privacy impact analyses prior to the implementation of changes to systems, components, or services.
    • Related controls and activities: None.
  • (03) Developer configuration management: Hardware integrity verification
    • Require the developer of the system, system component, or system service to enable integrity verification of hardware components.
    • Discussion: Hardware integrity verification allows organizations to detect unauthorized changes to hardware components using developer-provided tools, techniques, methods, and mechanisms. Organizations may verify the integrity of hardware components with hard-to-copy labels or verifiable serial numbers provided by developers, and by requiring the use of anti-tamper technologies. Delivered hardware components also include hardware and firmware updates to such components.
    • Related controls and activities: SI-07.
  • (04) Developer configuration management: Trusted generation
    • Require the developer of the system, system component, or system service to employ tools for comparing newly generated versions of security-relevant hardware descriptions, source code, and object code with previous versions.
    • Discussion: The trusted generation of descriptions, source code, and object code addresses authorized changes to hardware, software, and firmware components between versions during development. The focus is on the efficacy of the developer’s configuration management process to ensure that newly generated versions of security-relevant hardware descriptions, source code, and object code continue to enforce the security policy for the system, system component, or system service. In contrast, SA-10(01) and SA-10(03) allow organizations to detect unauthorized changes to hardware, software, and firmware components using tools, techniques, or mechanisms provided by developers.
    • Related controls and activities: None.
  • (05) Developer configuration management: Mapping integrity for version control
    • Require the developer of the system, system component, or system service to maintain the integrity of the mapping between the master build data describing the current version of security-relevant hardware, software, and firmware and the onsite master copy of the data for the current version.
    • Discussion: Mapping integrity for version control addresses changes to hardware, software, and firmware components during initial development and system development lifecycle updates. Maintaining the integrity between the master copies of security-relevant hardware, software, and firmware (including designs, hardware drawings, and source code) and the equivalent data in master copies in operational environments is essential to ensuring the availability of organizational systems that support critical mission and business functions.
    • Related controls and activities: None.
  • (06) Developer configuration management: Trusted distribution
    • Require the developer of the system, system component, or system service to execute procedures for ensuring that security-relevant hardware, software, and firmware updates distributed to the organization are exactly as specified by the master copies.
    • Discussion: The trusted distribution of security-relevant hardware, software, and firmware updates helps ensure that the updates are correct representations of the master copies maintained by the developer and have not been tampered with during distribution.
    • Related controls and activities: None.
  • (07) Developer configuration management: Security and privacy representatives
    • Require [Assignment: organization-defined security and privacy representatives] to be included in the [Assignment: organization-defined configuration change management and control process].
    • Discussion: Information security and privacy representatives can include system security officers, senior officials in the department’s security governance, appropriate privacy senior officials or executives, and system privacy officers. Representation by personnel with information security and privacy expertise is important because changes to system configurations can have unintended side effects, some of which may be security- or privacy-relevant. Detecting such changes early in the process can help avoid unintended, negative consequences that could ultimately affect the security and privacy posture of systems. The configuration change management and control process in this control enhancement refers to the process that organizations define under activity 2 of SA-10.
    • Related controls and activities: None.
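A minimal sketch of the developer-enabled integrity verification in enhancement (01) and the trusted-distribution check in (06), assuming the developer publishes a SHA-256 digest out of band (the component bytes and names here are placeholders for illustration):

```python
import hashlib
import hmac

def verify_delivery(component: bytes, published_sha256: str) -> bool:
    """Check a delivered component against the digest published by the developer."""
    actual = hashlib.sha256(component).hexdigest()
    # compare_digest gives a constant-time comparison of the two hex strings.
    return hmac.compare_digest(actual, published_sha256)

# The digest would normally arrive out of band (e.g., on the developer's
# release page); here it is derived locally just to exercise the check.
firmware = b"example firmware image bytes"
published = hashlib.sha256(firmware).hexdigest()

print(verify_delivery(firmware, published))            # True
print(verify_delivery(firmware + b"\x00", published))  # False
```

Real distributions typically sign the digest (or a manifest of digests) so that the published value itself cannot be substituted in transit.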

References

 

SA-11 Developer testing and evaluation

Activity

At all post-design stages of the system development lifecycle, require the developer of the system, system component, or system service to:

  1. develop and implement a plan for ongoing security and privacy control assessments
  2. perform [Selection (1 or more): unit; integration; system; regression] testing/evaluation [Assignment: organization-defined frequency] at [Assignment: organization-defined depth and coverage]
  3. produce evidence of the execution of the assessment plan and the results of the testing and evaluation
  4. implement a verifiable flaw remediation process
  5. correct flaws identified during testing and evaluation

Discussion

Developmental testing and evaluation confirm that the required controls are implemented correctly, operating as intended, enforcing the desired security and privacy policies, and meeting established security and privacy requirements. Security properties of systems and the privacy of individuals may be affected by the interconnection of system components or changes to those components. The interconnections or changes — including upgrading or replacing applications, operating systems, and firmware — may adversely affect previously implemented controls.

Ongoing assessment during development allows for additional types of testing and evaluation that developers can conduct to reduce or eliminate potential flaws. Testing custom software applications may require approaches such as manual code review, security architecture review, and penetration testing, as well as static analysis, dynamic analysis, binary analysis, or a hybrid of these analysis approaches.

Developers can use the analysis approaches, along with security instrumentation and fuzzing, in a variety of tools and in source code reviews. The security and privacy assessment plans include the specific activities that developers plan to carry out, including the types of analyses, testing, evaluation, and reviews of software and firmware components; the degree of rigour to be applied; the frequency of the ongoing testing and evaluation; and the types of artifacts produced during those processes.

The depth of testing and evaluation refers to the rigour and level of detail associated with the assessment process. The coverage of testing and evaluation refers to the scope (i.e., number and type) of the artifacts included in the assessment process. Contracts specify the acceptance criteria for security and privacy assessment plans, flaw remediation processes, and the evidence that the plans and processes have been diligently applied. Methods for reviewing and protecting assessment plans, evidence, and documentation are commensurate with the security category or classification level of the system. Contracts may specify protection requirements for documentation.

Related controls and activities

CA-02, CA-07, CM-04, SA-03, SA-04, SA-05, SA-08, SA-15, SA-17, SI-02, SR-05, SR-06, SR-07.

Enhancements

  • (01) Developer testing and evaluation: Static code analysis
    • Require the developer of the system, system component, or system service to employ static code analysis tools to identify common flaws and to document the results of the analysis.
    • Discussion: Static code analysis provides a technology and methodology for security reviews and includes checking for weaknesses in the code as well as for the incorporation of libraries or other included code that contain known vulnerabilities or that are out of date and not supported. Static code analysis can be used to identify vulnerabilities and enforce secure coding practices. It is most effective when used early in the development process, when each code change can automatically be scanned for potential weaknesses. Static code analysis can provide clear remediation guidance and identify defects for developers to fix. Evidence of the correct implementation of static analysis can include aggregate defect density for critical defect types; evidence that defects were inspected by developers or security professionals; and evidence that defects were remediated. A high density of ignored findings, commonly referred to as false positives, indicates a potential problem with the analysis process or the analysis tool. In such cases, organizations weigh the validity of the evidence against evidence from other sources.
    • Related controls and activities: None.
  • (02) Developer testing and evaluation: Threat modelling and vulnerability analyses
    • Require the developer of the system, system component, or system service to perform threat modelling and vulnerability analyses during the development and the subsequent testing and evaluation of the system, component, or service that:
      1. uses the following contextual information: [Assignment: organization-defined information concerning impact, environment of operations, known or assumed threats, and acceptable risk levels]
      2. employs the following tools and methods: [Assignment: organization-defined tools and methods]
      3. conducts the modelling and analyses at the following level of rigour: [Assignment: organization-defined breadth and depth of modelling and analyses]
      4. produces evidence that meets the following acceptance criteria: [Assignment: organization-defined acceptance criteria]
    • Discussion: Systems, system components, and system services may deviate significantly from the functional and design specifications created during the requirements and design stages of the system development lifecycle. Therefore, updates to threat modelling and vulnerability analyses of those systems, system components, and system services during development and prior to delivery are critical to the effective operation of those systems, components, and services. Threat modelling and vulnerability analyses at this stage of the system development lifecycle ensure that design and implementation changes have been accounted for and that vulnerabilities created because of those changes have been reviewed and mitigated.
    • Related controls and activities: PM-15, RA-03, RA-05.
  • (03) Developer testing and evaluation: Independent verification of assessment plans and evidence
      1. Require an independent agent satisfying [Assignment: organization-defined independence criteria] to verify the correct implementation of the developer security and privacy assessment plans and the evidence produced during testing and evaluation
      2. Verify that the independent agent is provided with sufficient information to complete the verification process or granted the authority to obtain such information
    • Discussion: Independent agents have the qualifications — including the expertise, skills, training, certifications, and experience — to verify the correct implementation of developer security and privacy assessment plans.
    • Related controls and activities: AT-03, RA-05.
  • (04) Developer testing and evaluation: Manual code reviews
    • Require the developer of the system, system component, or system service to perform a manual code review of [Assignment: organization-defined specific code] using the following processes, procedures, and/or techniques: [Assignment: organization-defined processes, procedures, and/or techniques].
    • Discussion: Manual code reviews are usually reserved for the critical software and firmware components of systems. Manual code reviews are effective at identifying weaknesses that require knowledge of the application’s requirements or context that, in most cases, is unavailable to automated analytic tools and techniques, such as static and dynamic analysis. The benefits of manual code review include the ability to verify access control matrices against application controls and review detailed aspects of cryptographic implementations and controls.
    • Related controls and activities: None.
  • (05) Developer testing and evaluation: Penetration testing
    • Require the developer of the system, system component, or system service to perform penetration testing:
      1. at the following level of rigour: [Assignment: organization-defined breadth and depth of testing]
      2. under the following constraints: [Assignment: organization-defined constraints]
    • Discussion: Penetration testing is an assessment methodology in which assessors, using all available IT product or system documentation and working under specific constraints, attempt to circumvent the implemented security and privacy features of IT products and systems.
      Useful information for assessors who conduct penetration testing includes product and system design specifications, source code, and administrator and operator manuals. Penetration testing can include open-box, grey-box, or closed-box testing with analyses performed by skilled professionals who simulate adversary actions.
      The objective of penetration testing is to discover vulnerabilities in systems, system components, and services that result from implementation errors, configuration faults, or other operational weaknesses or deficiencies. Penetration tests can be performed in conjunction with automated and manual code reviews to provide a greater level of analysis than would ordinarily be possible.
      When user session information and other personal information is captured or recorded during penetration testing, such information is handled appropriately to protect privacy.
    • Related controls and activities: CA-08, PM-14, PM-25, PT-02, SA-03, SI-02, SI-06.
  • (06) Developer testing and evaluation: Attack surface reviews
    • Require the developer of the system, system component, or system service to perform attack surface reviews.
    • Discussion: Attack surfaces of systems and system components are exposed areas that make those systems more vulnerable to attacks. Attack surfaces include any accessible areas where weaknesses or deficiencies in the hardware, software, and firmware components provide opportunities for adversaries to exploit vulnerabilities. Attack surface reviews ensure that developers analyze the design and implementation changes to systems and mitigate attack vectors generated as a result of the changes. The correction of identified flaws includes deprecation of unsafe functions.
    • Related controls and activities: SA-15.
  • (07) Developer testing and evaluation: Verify scope of testing and evaluation
    • Require the developer of the system, system component, or system service to verify that the scope of testing and evaluation provides complete coverage of the required controls at the following level of rigour: [Assignment: organization-defined breadth and depth of testing and evaluation].
    • Discussion: Verifying that testing and evaluation provides complete coverage of required controls can be accomplished by a variety of informal and formal analytic techniques. Each of these techniques provides an increasing level of assurance that corresponds to the degree of formality of the analysis. Rigorously demonstrating control coverage at the highest levels of assurance can be achieved using formal modelling and analysis techniques, including correlation between control implementation and corresponding test cases.
    • Related controls and activities: SA-15.
  • (08) Developer testing and evaluation: Dynamic code analysis
    • Require the developer of the system, system component, or system service to employ dynamic code analysis tools to identify common flaws and document the results of the analysis.
    • Discussion: Dynamic code analysis provides runtime verification of software programs using tools capable of monitoring programs for memory corruption, user privilege issues, and other potential security problems. Dynamic code analysis employs runtime tools to ensure that security functionality performs in the way it was designed.
      A type of dynamic analysis, known as fuzz testing, induces program failures by deliberately introducing malformed or random data into software programs. Fuzz testing strategies are derived from the intended use of applications and the functional and design specifications for the applications.
      To understand the scope of dynamic code analysis and the assurance provided, organizations may also consider conducting code coverage analysis (i.e., checking the degree to which the code has been tested using metrics such as percentage of subroutines tested or percentage of program statements called during execution of the test suite) and/or concordance analysis (i.e., checking for words that are out of place in software code, such as non-English or non-French language words or derogatory terms).
    • Related controls and activities: None.
  • (09) Developer testing and evaluation: Interactive application security testing
    • Require the developer of the system, system component, or system service to employ interactive application security testing tools to identify flaws and document the results.
    • Discussion: Interactive (also known as instrumentation-based) application security testing is a method of detecting vulnerabilities by observing applications as they run during testing. The use of instrumentation relies on direct measurements of the actual running applications and uses access to the code, user interaction, libraries, frameworks, backend connections, and configurations to directly measure control effectiveness.
      When combined with analysis techniques, interactive application security testing can identify potential vulnerabilities and confirm control effectiveness. Instrumentation-based testing works in real time and can be used continuously throughout the system development lifecycle.
    • Related controls and activities: None.
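The static code analysis described in enhancement (01) can be illustrated with a toy checker built on Python’s `ast` module. This is a sketch under stated assumptions (the "risky builtins" rule is invented for illustration); production tools additionally track data flow and known-vulnerable dependencies:

```python
import ast

# Illustrative rule: flag direct calls to builtins commonly abused
# for code injection.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, function name) for each call to a risky builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "x = input()\nresult = eval(x)\n"
print(find_risky_calls(sample))  # [(2, 'eval')]
```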
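The fuzz testing described in enhancement (08) can be sketched as a harness that feeds short random byte strings to a parser and tallies outcomes; the parser and its length-prefixed input format are invented for illustration. Expected rejections (here, `ValueError`) are distinguished from unexpected exceptions, which indicate flaws to triage:

```python
import random

def parse_length_prefixed(buf: bytes) -> bytes:
    """Toy parser: first byte is a length, followed by that many payload bytes."""
    if not buf:
        raise ValueError("empty input")
    n = buf[0]
    if len(buf) - 1 < n:
        raise ValueError("truncated payload")
    return buf[1 : 1 + n]

def fuzz(target, trials: int = 1000, seed: int = 0) -> dict[str, int]:
    """Feed random byte strings to the target and tally the outcomes."""
    rng = random.Random(seed)
    outcomes = {"ok": 0, "rejected": 0, "crashed": 0}
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
            outcomes["ok"] += 1
        except ValueError:
            outcomes["rejected"] += 1  # input rejected as designed
        except Exception:
            outcomes["crashed"] += 1   # unexpected failure: a flaw to triage
    return outcomes

print(fuzz(parse_length_prefixed))
```

Production fuzzers (coverage-guided, grammar-aware) derive inputs from the application’s functional and design specifications rather than purely random bytes, as the discussion above notes.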

References

 

SA-12 Supply chain protection

Withdrawn: Moved to SR Family.

Enhancements

  • (01) Supply chain protection: Acquisition strategies / tools / methods
    • Withdrawn: Moved to SR-05.
  • (02) Supply chain protection: Supplier reviews
    • Withdrawn: Moved to SR-06.
  • (03) Supply chain protection: Trusted shipping and warehousing
    • Withdrawn: Incorporated into SR-03.
  • (04) Supply chain protection: Diversity of suppliers
    • Withdrawn: Moved to SR-03(01).
  • (05) Supply chain protection: Limitation of harm
    • Withdrawn: Moved to SR-03(02).
  • (06) Supply chain protection: Minimizing procurement time
    • Withdrawn: Incorporated into SR-05(01).
  • (07) Supply chain protection: Assessments prior to selection / acceptance / update
    • Withdrawn: Moved to SR-05(02).
  • (08) Supply chain protection: Use of all-source intelligence
    • Withdrawn: Incorporated into RA-03(02).
  • (09) Supply chain protection: Operations security
    • Withdrawn: Moved to SR-07.
  • (10) Supply chain protection: Validate as genuine and not altered
    • Withdrawn: Moved to SR-04(03).
  • (11) Supply chain protection: Penetration testing / analysis of elements, processes, and actors
    • Withdrawn: Moved to SR-06(01).
  • (12) Supply chain protection: Inter-organizational agreements
    • Withdrawn: Moved to SR-08.
  • (13) Supply chain protection: Critical information system components
    • Withdrawn: Incorporated into MA-06 and RA-09.
  • (14) Supply chain protection: Identity and traceability
    • Withdrawn: Moved to SR-04(01) and SR-04(02).
  • (15) Supply chain protection: Processes to address weaknesses or deficiencies
    • Withdrawn: Incorporated into SR-03.
 

SA-13 Trustworthiness

Withdrawn: Incorporated into SA-08.

 

SA-14 Criticality analysis

Withdrawn: Incorporated into RA-09.

Enhancements

  • (01) Criticality analysis: Critical components with no viable alternative sourcing
    • Withdrawn: Incorporated into SA-20.
 

SA-15 Development process, standards, and tools

Activity

  1. Require the developer of the system, system component, or system service to follow a documented development process that
    1. explicitly addresses security and privacy requirements
    2. identifies the standards and tools used in the development process
    3. documents the specific tool options and tool configurations used in the development process
    4. documents, manages, and ensures the integrity of changes to the process and/or tools used in development
  2. Review the development process, standards, tools, tool options, and tool configurations [Assignment: organization-defined frequency] to determine if the process, standards, tools, tool options, and tool configurations selected and employed can satisfy the following security and privacy requirements: [Assignment: organization-defined security and privacy requirements]

Discussion

Development tools include programming languages and computer-aided design systems. Reviews of development processes include the use of maturity models to determine the potential effectiveness of such processes. Maintaining the integrity of changes to tools and processes facilitates effective supply chain risk assessment and mitigation. Such integrity requires configuration control throughout the system development lifecycle to track authorized changes and prevent unauthorized changes.

Related controls and activities

MA-06, SA-03, SA-04, SA-08, SA-10, SA-11, SI-02, SR-03, SR-04, SR-05, SR-06, SR-09.

Enhancements

  • (01) Development process, standards, and tools: Quality metrics
    • Require the developer of the system, system component, or system service to:
      1. define quality metrics at the beginning of the development process
      2. provide evidence of meeting the quality metrics [Selection (1 or more): [Assignment: organization-defined frequency]; [Assignment: organization-defined program review milestones]; upon delivery]
    • Discussion: Organizations use quality metrics to establish acceptable levels of system quality. Metrics can include quality gates, which are collections of completion criteria or sufficiency standards that represent the satisfactory execution of specific phases of the system development project. For example, a quality gate may require the elimination of all compiler warnings or a determination that such warnings have no impact on the effectiveness of required security or privacy capabilities. During the execution phases of development projects, quality gates provide clear, unambiguous indications of progress. Other metrics apply to the entire development project. Metrics can include defining the severity thresholds of vulnerabilities in accordance with organizational risk tolerance, such as requiring no known vulnerabilities in the delivered system with a CVSS severity of medium or high.
    • Related controls and activities: None.
  • (02) Development process, standards, and tools: Security and privacy control traceability tools
    • Require the developer of the system, system component, or system service to select and employ security and privacy controls traceability tools for use during the development process.
    • Discussion: The objective is to create a security and privacy requirements traceability matrix. System development teams select and deploy security and privacy tracking tools, including vulnerability or work item tracking systems that facilitate assignment, sorting, filtering, and tracking of completed work items or tasks associated with development processes.
    • Related controls and activities: SA-11.
  • (03) Development process, standards, and tools: Criticality analysis
    • Require the developer of the system, system component, or system service to perform a criticality analysis:
      1. at the following decision points in the system development lifecycle: [Assignment: organization-defined decision points in the system development lifecycle]
      2. at the following level of rigour: [Assignment: organization-defined breadth and depth of criticality analysis]
    • Discussion: Criticality analysis performed by the developer provides input to the criticality analysis performed by organizations. Developer input is essential to organizational criticality analysis because organizations may not have access to detailed design documentation for system components that are developed as COTS products. Such design documentation includes functional specifications, high-level designs, low-level designs, source code, and hardware schematics.
      Criticality analysis is important for organizational systems that are designated as high-value assets. High-value assets can be moderate- or high-impact systems due to heightened adversarial interest or potential adverse effects on the federal enterprise. Developer input is especially important when organizations conduct supply chain criticality analyses.
    • Related controls and activities: RA-09.
  • (04) Development process, standards, and tools: Threat modelling and vulnerability analysis
    • Withdrawn: Incorporated into SA-11(02).
  • (05) Development process, standards, and tools: Attack surface reduction
    • Require the developer of the system, system component, or system service to reduce attack surfaces to [Assignment: organization-defined thresholds].
    • Discussion: Attack surface reduction is closely aligned with threat and vulnerability analyses and system architecture and design. Attack surface reduction is a means of reducing risk to organizations by giving attackers less opportunity to exploit weaknesses or deficiencies (i.e., potential vulnerabilities) within systems, system components, and system services. Attack surface reduction includes implementing the concept of layered defences, applying the principles of least privilege and least functionality, applying secure software development practices, deprecating unsafe functions, reducing entry points available to unauthorized users, reducing the amount of code that executes, and eliminating APIs that are vulnerable to attacks.
    • Related controls and activities: AC-06, CM-07, RA-03, SA-11.
  • (06) Development process, standards, and tools: Continuous improvement
    • Require the developer of the system, system component, or system service to implement an explicit process to continuously improve the development process.
    • Discussion: Developers of systems, system components, and system services consider the effectiveness and efficiency of their development processes in meeting quality objectives and addressing the security and privacy capabilities in current threat environments.
    • Related controls and activities: None.
  • (07) Development process, standards, and tools: Automated vulnerability analysis
    • Require the developer of the system, system component, or system service [Assignment: organization-defined frequency] to:
      1. perform an automated vulnerability analysis using [Assignment: organization-defined tools]
      2. determine the exploitation potential for discovered vulnerabilities
      3. determine potential risk mitigations for discovered vulnerabilities
      4. deliver the outputs of the tools and results of the analysis to [Assignment: organization-defined personnel or roles]
    • Discussion: Automated tools can be more effective at analyzing exploitable weaknesses or deficiencies in large and complex systems, prioritizing vulnerabilities by severity, and providing recommendations for risk mitigations.
    • Related controls and activities: RA-05, SA-11.
  • (08) Development process, standards, and tools: Reuse of threat and vulnerability information
    • Require the developer of the system, system component, or system service to use threat modelling and vulnerability analyses from similar systems, components, or services to inform the current development process.
    • Discussion: Analysis of vulnerabilities found in similar software applications can inform potential design and implementation issues for systems under development. Similar systems or system components may exist within developer organizations. Vulnerability information is available from a variety of public and private sector sources, including the NIST National Vulnerability Database.
    • Related controls and activities: None.
  • (09) Development process, standards, and tools: Use of live data
    • Withdrawn: Incorporated into SA-03(02).
  • (10) Development process, standards, and tools: Incident response plan
    • Require the developer of the system, system component, or system service to provide, implement, and test an incident response plan.
    • Discussion: The incident response plan provided by developers may provide information not readily available to organizations and may be incorporated into organizational incident response plans. Developer information may also be extremely helpful, such as when organizations respond to vulnerabilities in COTS products.
    • Related controls and activities: IR-08.
  • (11) Development process, standards, and tools: Archive system or component
    • Require the developer of the system or system component to archive the system or component to be released or delivered together with the corresponding evidence supporting the final security and privacy review.
    • Discussion: Archiving system or system components requires the developer to retain key development artifacts, including hardware specifications, source code, object code, and relevant documentation from the development process that can provide a readily available configuration baseline for system and component upgrades or modifications.
    • Related controls and activities: CM-02.
  • (12) Development process, standards, and tools: Minimize personal information
    • Require the developer of the system or system component to minimize the use of personal information in development and test environments.
    • Discussion: Organizations can minimize the risk to an individual’s privacy by using techniques such as de-identification or synthetic data. Limiting the use of personal information in development and test environments helps reduce the level of privacy risk created by a system.
    • GC discussion: Development and testing should not be conducted using identifiable personal information, except in exceptional circumstances. It is recommended that system testing be completed using dummy data, scrambled data, or personal information that is otherwise obfuscated. At times, system testing may be outsourced. Even with obfuscated personal information, it is recommended that contracting documents require the third-party vendor to delete testing data when development or testing is complete, to protect against the possibility of re-identification and breach.
    • Related controls and activities: PM-25, SA-03, SA-08.
  • (13) Development process, standards, and tools: Logging syntax
    • Require the developer of the system, system component, or system service to use [Assignment: organization-defined secure logging format] to log [Assignment: organization-defined event types] at [Assignment: organization-defined level of detail].
    • Discussion: In support of better incident response and the ability to more quickly reconstruct security-related actions, identifying specific requirements for secure logging facilitates the ability to connect application-produced audit event logs with operational data. Event types are consistent with the event types defined in AU-02.
    • Related controls and activities: AU-02, AU-03, IR-04, IR-08, SA-05.
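
The secure logging requirement in enhancement (13) can be illustrated with a structured (JSON) log format, which makes application-produced audit events straightforward to parse and correlate with operational data. This is a minimal sketch: the event type name and fields shown are illustrative stand-ins for the organization-defined values, not part of any standard.

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each audit event as one JSON object per line so that
    application logs can be machine-parsed and correlated."""
    def format(self, record):
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # "event_type" mirrors the AU-02 event types; "unspecified" is a fallback
            "event_type": getattr(record, "event_type", "unspecified"),
            "severity": record.levelname,
            "message": record.getMessage(),
        }
        return json.dumps(event, sort_keys=True)

logger = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# "logon_failure" stands in for an organization-defined event type (see AU-02)
logger.info("authentication failed for user alice",
            extra={"event_type": "logon_failure"})
```

A contract could then specify the required keys, the event-type vocabulary, and the level of detail per event type, rather than leaving the log format to developer discretion.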

References

 

SA-16 Developer-provided training

Control

Require the developer of the system, system component, or system service to provide the following training on the correct use and operation of the implemented security and privacy functions, controls, and/or mechanisms: [Assignment: organization-defined training].

Discussion

Developer-provided training applies to external and internal (in-house) developers. Personnel training is essential to ensuring the effectiveness of the controls implemented within organizational systems. Training may be web-based, computer-based, classroom style, or hands-on (including micro-training). Organizations can also request training materials from developers to conduct in-house training or offer self-training to organizational personnel. Organizations determine the type of training necessary and may require different types of training for different security and privacy functions, controls, and mechanisms.

Related controls and activities

AT-02, AT-03, PE-03, SA-04, SA-05.

Enhancements

None.

References

TBS Directive on Security Management, Appendix H: Mandatory Procedures for Security Awareness and Training Control

 

SA-17 Developer security and privacy architecture and design

Activity

Require the developer of the system, system component, or system service to produce a design specification and security and privacy architecture that:

  1. is consistent with the organization’s security and privacy architecture, which is an integral part of the organization’s enterprise architecture
  2. accurately and completely describes the required security and privacy functionality and the allocation of controls among physical and logical components
  3. expresses how individual security and privacy functions, mechanisms, and services work together to provide required security and privacy capabilities and a unified approach to protection

Discussion

Developer security and privacy architecture and design are directed at external developers, although they could also be applied to internal (in-house) development. In contrast, PL-08 is directed at internal developers to ensure that organizations develop a security and privacy architecture that is integrated with the enterprise architecture. The distinction between SA-17 and PL-08 is especially important when organizations outsource the development of systems, system components, or system services and when there is a requirement to demonstrate consistency with the enterprise architecture and security and privacy architecture of the organization.
ISO/IEC 15408-2, ISO/IEC 15408-3, and the Cyber Centre’s System lifecycle cyber security and privacy risk management activities (ITSP.10.037) provide information on security architecture and design, including formal policy models, security-relevant components, formal and informal correspondence, conceptually simple design, and structuring for least privilege and testing.

Related controls and activities

PL-02, PL-08, PM-07, SA-03, SA-04, SA-08, SC-07.

Enhancements

  • (01) Developer security and privacy architecture and design: Formal policy model
    • Require the developer of the system, system component, or system service to:
      1. produce, as an integral part of the development process, a formal policy model describing the [Assignment: organization-defined elements of organizational security and privacy policy] to be enforced
      2. prove that the formal policy model is internally consistent and sufficient to enforce the defined elements of the organizational security and privacy policy when implemented
    • Discussion: Formal models describe specific behaviours or security and privacy policies using formal languages, thus enabling the correctness of those behaviours and policies to be formally proven. Not all components of systems can be modelled. Generally, formal specifications are scoped to the behaviours or policies of interest, such as non-discretionary access control policies. Organizations choose the formal modelling language and approach based on the nature of the behaviours and policies to be described and the available tools.
    • Related controls and activities: AC-03, AC-04, AC-25.
  • (02) Developer security and privacy architecture and design: Security-relevant components
    • Require the developer of the system, system component, or system service to:
      1. define security-relevant hardware, software, and firmware
      2. provide a rationale that the definition for security-relevant hardware, software, and firmware is complete
    • Discussion: The security-relevant hardware, software, and firmware represent the portion of the system, component, or service that is trusted to perform correctly to maintain required security properties.
    • Related controls and activities: AC-25, SA-05.
  • (03) Developer security and privacy architecture and design: Formal correspondence
    • Require the developer of the system, system component, or system service to:
      1. produce, as an integral part of the development process, a formal top-level specification that specifies the interfaces to security-relevant hardware, software, and firmware in terms of exceptions, error messages, and effects
      2. show, via proof to the extent feasible with additional informal demonstration as necessary, that the formal top-level specification is consistent with the formal policy model
      3. show, via informal demonstration, that the formal top-level specification completely covers the interfaces to security-relevant hardware, software, and firmware
      4. show that the formal top-level specification is an accurate description of the implemented security-relevant hardware, software, and firmware
      5. describe the security-relevant hardware, software, and firmware mechanisms not addressed in the formal top-level specification but strictly internal to the security-relevant hardware, software, and firmware
    • Discussion: Correspondence is an important part of the assurance gained through modelling. It demonstrates that the implementation is an accurate transformation of the model, and that any additional code or implementation details that are present have no impact on the behaviours or policies being modelled.
      Formal methods can be used to show that the high-level security properties are satisfied by the formal system description, and that the formal system description is correctly implemented by a description of some lower level, including a hardware description.
      Consistency between the formal top-level specification and the formal policy models is generally not amenable to being fully proven. Therefore, a combination of formal and informal methods may be needed to demonstrate such consistency. Consistency between the formal top-level specification and the actual implementation may require the use of an informal demonstration due to limitations on the applicability of formal methods to prove that the specification accurately reflects the implementation. Hardware, software, and firmware mechanisms internal to security-relevant components include mapping registers and direct memory input and output.
    • Related controls and activities: AC-03, AC-04, AC-25, SA-04, SA-05.
  • (04) Developer security and privacy architecture and design: Informal correspondence
    • Require the developer of the system, system component, or system service to:
      1. produce, as an integral part of the development process, an informal descriptive top-level specification that specifies the interfaces to security-relevant hardware, software, and firmware in terms of exceptions, error messages, and effects
      2. show, via [Selection (1): informal demonstration; convincing argument with formal methods as feasible], that the descriptive top-level specification is consistent with the formal policy model
      3. show, via informal demonstration, that the descriptive top-level specification completely covers the interfaces to security-relevant hardware, software, and firmware
      4. show that the descriptive top-level specification is an accurate description of the interfaces to security-relevant hardware, software, and firmware
      5. describe the security-relevant hardware, software, and firmware mechanisms not addressed in the descriptive top-level specification but strictly internal to the security-relevant hardware, software, and firmware
    • Discussion: Correspondence is an important part of the assurance gained through modelling. It demonstrates that the implementation is an accurate transformation of the model, and that additional code or implementation detail has no impact on the behaviours or policies being modelled. Consistency between the descriptive top-level specification (i.e., high-level/low-level design) and the formal policy model is generally not amenable to being fully proven. Therefore, a combination of formal and informal methods may be needed to show such consistency. Hardware, software, and firmware mechanisms strictly internal to security-relevant hardware, software, and firmware include mapping registers and direct memory input and output.
    • Related controls and activities: AC-03, AC-04, AC-25, SA-04, SA-05.
  • (05) Developer security and privacy architecture and design: Conceptually simple design
    • Require the developer of the system, system component, or system service to:
      1. design and structure the security-relevant hardware, software, and firmware to use a complete, conceptually simple protection mechanism with precisely defined semantics
      2. internally structure the security-relevant hardware, software, and firmware with specific regard for this mechanism
    • Discussion: The principle of reduced complexity states that the system design is as simple and small as possible (see SA-08(07)). A small and simple design is easier to understand and analyze and is also less prone to error (see AC-25, SA-08(13)). The principle of reduced complexity applies to any aspect of a system, but it has particular importance for security due to the various analyses performed to obtain evidence about the emergent security property of the system. For such analyses to be successful, a small and simple design is essential.
      Application of the principle of reduced complexity helps system developers to understand the correctness and completeness of system security functions and facilitates the identification of potential vulnerabilities. The corollary of reduced complexity states that the complexity of a system is directly related to the number of vulnerabilities it will contain; that is, simpler systems contain fewer vulnerabilities.
      Two important benefits of reduced complexity are that it is easier to understand whether the security policy has been captured in the system design and that fewer vulnerabilities are likely to be introduced during engineering development. An additional benefit is that any such conclusion about correctness, completeness, and existence of vulnerabilities can be reached with a higher degree of assurance in contrast to conclusions reached in situations where the system design is inherently more complex.
    • Related controls and activities: AC-25, SA-08, SC-03.
  • (06) Developer security and privacy architecture and design: Structure for testing
    • Require the developer of the system, system component, or system service to structure security-relevant hardware, software, and firmware to facilitate testing.
    • Discussion: Applying the security design principles in NIST SP 800-160 Volume 1 Revision 1 and the development framework in System lifecycle cyber security and privacy risk management activities (ITSP.10.037) promotes complete, consistent, and comprehensive testing and evaluation of systems, system components, and services. The thoroughness of such testing contributes to the evidence produced to generate an effective assurance case or argument as to the robustness of the system, system component, or service.
    • Related controls and activities: SA-05, SA-11.
  • (07) Developer security and privacy architecture and design: Structure for least privilege
    • Require the developer of the system, system component, or system service to structure security-relevant hardware, software, and firmware to facilitate controlling access with least privilege.
    • Discussion: The principle of least privilege states that each component is allocated sufficient privileges to accomplish its specified functions but no more (see SA-08(14)). Applying the principle of least privilege limits the scope of the component’s actions, which has 2 desirable effects. First, the security impact of a failure, corruption, or misuse of the component is minimized. Second, the security analysis of the component is simplified.
      Least privilege is a pervasive principle that is reflected in all aspects of the secure system design. Interfaces used to invoke component capability are available to only certain subsets of the user population, and component design supports a sufficiently fine granularity of privilege decomposition.
      For example, in the case of an audit mechanism, there may be an interface for the audit manager, who configures the audit settings; an interface for the audit operator, who ensures that audit data is safely collected and stored; and, finally, yet another interface for the audit reviewer, who only has a need to view the audit data that has been collected but no need to perform operations on that data.
      In addition to its manifestations at the system interface, least privilege can be used as a guiding principle for the internal structure of the system itself. One aspect of internal least privilege is to construct modules so that only the elements encapsulated by the module are directly operated upon by the functions within the module. Elements external to a module that may be affected by the module’s operation are indirectly accessed through interaction (e.g., via a function call) with the module that contains those elements.
      Another aspect of internal least privilege is that the scope of a given module or component includes only those system elements that are necessary for its functionality, and the access modes to the elements (e.g., read, write) are minimal.
    • Related controls and activities: AC-05, AC-06, SA-08.
  • (08) Developer security and privacy architecture and design: Orchestration
    • Design [Assignment: organization-defined critical systems or system components] with coordinated behaviour to implement the following capabilities: [Assignment: organization-defined capabilities, by system or component].
    • Discussion: Security resources that are distributed or located at different layers or in different system elements can interact in unforeseen or incorrect ways. Adverse consequences can include cascading failures, interference, or coverage gaps. Coordinating the behaviour of security resources (e.g., by ensuring that a patch is installed across all resources before making a configuration change that assumes the patch has propagated) can avert such negative interactions.
    • Related controls and activities: None.
  • (09) Developer security and privacy architecture and design: Design diversity
    • Use different designs for [Assignment: organization-defined critical systems or system components] to satisfy a common set of requirements or to provide equivalent functionality.
    • Discussion: Design diversity is achieved by supplying the same requirements specification to multiple developers, each of whom is responsible for developing a variant of the system or system component that meets the requirements. Variants can be in software design, in hardware design, or in both hardware and software design. Differences in the designs of the variants can result from developer experience (e.g., prior use of a design pattern), design style (e.g., when decomposing a required function into smaller tasks, determining what constitutes a separate task and how far to decompose tasks into sub-tasks), selection of libraries to incorporate into the variant, and the development environment (e.g., different design tools make some design patterns easier to visualize).
      Hardware design diversity includes making different decisions about what information to keep in analog form and what information to convert to digital form, transmitting the same information at different times, and introducing delays in sampling (temporal diversity). Design diversity is commonly used to support fault tolerance.
    • Related controls and activities: None.
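
The audit-mechanism example in enhancement (07) can be sketched as three narrow interfaces over one store, each exposing only the operations its role needs. The class and method names below are illustrative, not drawn from any standard API; the point is the decomposition of privilege, not the specific mechanism.

```python
class AuditStore:
    """Holds audit settings and collected records; only the role
    interfaces below are meant to touch its internals."""
    def __init__(self):
        self._records = []
        self._settings = {"level": "basic"}

class AuditManagerInterface:
    """Configures audit settings; cannot read or alter collected records."""
    def __init__(self, store: AuditStore):
        self._store = store
    def configure(self, level: str) -> None:
        self._store._settings["level"] = level

class AuditOperatorInterface:
    """Collects and stores records; cannot change settings or view data."""
    def __init__(self, store: AuditStore):
        self._store = store
    def collect(self, record: str) -> None:
        self._store._records.append(record)

class AuditReviewerInterface:
    """Read-only view of collected records; no write or configure operations."""
    def __init__(self, store: AuditStore):
        self._store = store
    def view(self) -> tuple:
        # return an immutable copy so reviewers cannot modify the store
        return tuple(self._store._records)

store = AuditStore()
AuditOperatorInterface(store).collect("logon_failure: alice")
print(AuditReviewerInterface(store).view())  # -> ('logon_failure: alice',)
```

Because each interface carries only the methods its role requires, a compromised reviewer credential, for example, yields no path to deleting records or weakening the audit configuration.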

References

 

SA-18 Tamper resistance and detection

Withdrawn: Moved to SR-09.

Enhancements

  • (01) Tamper resistance and detection: Multiple phases of system development lifecycle
    • Withdrawn: Moved to SR-09(01).
  • (02) Tamper resistance and detection: Inspection of systems or components
    • Withdrawn: Moved to SR-10.
 

SA-19 Component authenticity

Withdrawn: Moved to SR-11.

Enhancements

  • (01) Component authenticity: Anti-counterfeit training
    • Withdrawn: Moved to SR-11(01).
  • (02) Component authenticity: Configuration control for component service and repair
    • Withdrawn: Moved to SR-11(02).
  • (03) Component authenticity: Component disposal
    • Withdrawn: Moved to SR-12.
  • (04) Component authenticity: Anti-counterfeit scanning
    • Withdrawn: Moved to SR-11(03).
 

SA-20 Customized development of critical components

Control

Reimplement or custom develop the following critical system components: [Assignment: organization-defined critical system components].

Discussion

Organizations determine that certain system components likely cannot be trusted due to specific threats to and vulnerabilities in those components for which there are no viable security controls to adequately mitigate risk. Reimplementation or custom development of such components may satisfy requirements for higher assurance and is carried out by initiating changes to system components (including hardware, software, and firmware) such that the standard attacks by adversaries are less likely to succeed.

In situations where no alternative sourcing is available and organizations choose not to reimplement or custom develop critical system components, additional controls can be employed. Controls include enhanced auditing, restrictions on source code and system utility access, and protection from deletion of system and application files.

Related controls and activities

CP-02, RA-09, SA-08.

Enhancements

None.

References

System lifecycle cyber security and privacy risk management activities (ITSP.10.037)

 

SA-21 Developer screening

Control

Require that the developer of [Assignment: organization-defined system, system component, or system service]:

  1. has appropriate access authorizations as determined by assigned [Assignment: organization-defined official government duties]
  2. satisfies the following additional personnel screening criteria: [Assignment: organization-defined additional personnel screening criteria]

Discussion

Developer screening is directed at external developers. Internal developer screening is addressed by PS-03. Because the system, system component, or system service may be used in critical activities essential to the national or economic security interests of Canada, organizations have a strong interest in ensuring that developers are trustworthy. The degree of trust required of developers may need to be consistent with that of the individuals who access the systems, system components, or system services once deployed.

Authorization and personnel screening criteria include clearances, background checks, citizenship, and nationality. Developer trustworthiness may also include a review and analysis of company ownership and relationships that the company has with entities that may potentially affect the quality and reliability of the systems, components, or services being developed.

Satisfying the required access authorizations and personnel screening criteria includes providing a list of all individuals who are authorized to perform development activities on the selected system, system component, or system service so that organizations can validate that the developer has satisfied the authorization and screening requirements.

Related controls and activities

PS-02, PS-03, PS-06, PS-07, SA-04, SR-06.

Enhancements

  • (01) Developer screening: Validation of screening
    • Withdrawn: Incorporated into SA-21.

References

 

SA-22 Unsupported system components

Control

  1. Replace system components when support for the components is no longer available from the developer, vendor, or manufacturer
  2. Provide the following options for alternative sources for continued support for unsupported components [Selection (1 or more): in-house support; [Assignment: organization-defined support from external providers]]

Discussion

Support for system components includes software patches, firmware updates, replacement parts, and maintenance contracts. An example of unsupported components is when vendors no longer provide critical software patches or product updates, which can result in an opportunity for adversaries to exploit weaknesses in the installed components. Exceptions to replacing unsupported system components include systems that provide critical mission or business capabilities where newer technologies are not available or where the systems are so isolated that installing replacement components is not an option.

Alternative sources for support address the need to provide continued support for system components that are no longer supported by the original manufacturers, developers, or vendors when such components remain essential to organizational mission and business functions. If necessary, organizations can establish in-house support by developing customized patches for critical software components or, alternatively, obtain the services of external providers who provide ongoing support for the designated unsupported components through contractual relationships. Such contractual relationships can include open-source software value-added vendors. The increased risk of using unsupported system components can be mitigated, for example, by prohibiting the connection of such components to public or uncontrolled networks or implementing other forms of isolation.
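
Identifying components that have fallen out of vendor support lends itself to automation against an asset inventory. The sketch below assumes a hypothetical inventory keyed by component name with vendor end-of-support dates; real inventories would draw these from vendor lifecycle data.

```python
from datetime import date

# Hypothetical inventory: component name -> vendor end-of-support date
inventory = {
    "legacy-db-server": date(2021, 6, 30),   # vendor support has ended
    "edge-router-fw": date(2030, 1, 15),     # still supported
}

def unsupported(components: dict, today: date) -> list:
    """Return the names of components whose end-of-support date has passed."""
    return sorted(name for name, eos in components.items() if eos < today)

print(unsupported(inventory, date(2024, 1, 1)))  # -> ['legacy-db-server']
```

Flagged components then feed the decision in this control: replace them, arrange in-house or contracted support, or isolate them while an exception is documented.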

Related controls and activities

PL-02, SA-03, SA-400.

Enhancements

  • (01) Unsupported system components: Alternative sources for continued support
    • Withdrawn: Incorporated into SA-22.

References

TBS Directive on Service and Digital, Appendix H: Standard on At-Risk Information Technology

 

SA-23 Specialization

Activity

Employ [Selection (1 or more): design; modification; augmentation; reconfiguration] on [Assignment: organization-defined systems or system components] supporting mission-essential services or functions to increase the robustness of those systems or components.

Discussion

It is often necessary to enhance a system or system component that supports mission-essential services or functions to maximize the robustness of the resource. Sometimes this enhancement is done at the design level. In other instances, it is done post-design, either by modifying the system in question or by augmenting the system with additional components. For example, supplemental authentication or non-repudiation functions may be added to the system to enhance the identity of critical resources to other resources that depend on the organization-defined resources.

Related controls and activities

RA-09, SA-08.

Enhancements

None.

References

System lifecycle cyber security and privacy risk management activities (ITSP.10.037)

 

SA-24 Design for cyber resiliency

Control

  1. Design organizational systems, system components, or system services to achieve cyber resiliency by:
    1. defining the following cyber resiliency goals: [Assignment: organization-defined cyber resiliency goals]
    2. defining the following cyber resiliency objectives: [Assignment: organization-defined cyber resiliency objectives]
    3. defining the following cyber resiliency techniques: [Assignment: organization-defined cyber resiliency techniques]
    4. defining the following cyber resiliency implementation approaches: [Assignment: organization-defined cyber resiliency implementation approaches]
    5. defining the following cyber resiliency design principles: [Assignment: organization-defined cyber resiliency design principles]
  2. Implement and review the selected cyber resiliency goals, objectives, techniques, implementation approaches, and design principles as part of an organizational risk management process or systems security engineering process

Discussion

Cyber resiliency is critical to ensuring the survivability of mission-critical systems and high-value assets. Cyber resiliency focuses on limiting the damage from adversity or the conditions that can cause a loss of assets. Damage can affect (1) organizations (for example, loss of reputation, increased existential risk), (2) mission or business functions (for example, decreased capability to complete current missions and to accomplish future missions), (3) security (for example, decreased capability to achieve security objectives or to prevent, detect, and respond to cyber incidents), (4) systems (for example, unauthorized use of system resources or decreased capability to meet system requirements), or (5) specific system elements (for example, physical destruction; corruption, modification, or fabrication of information).

Cyber resiliency goals are intended to help organizations maintain a state of informed preparedness for adversity, continue essential mission or business functions despite adversity, restore mission or business functions during and after adversity, and modify mission or business functions and their supporting capabilities in response to predicted changes in technical, operational, or threat environments.

NIST SP 800-160 Volume 2 Revision 1 provides additional information on the Cyber Resiliency Engineering Framework, including detailed descriptions of cyber resiliency goals, objectives, techniques, implementation approaches, and design principles. NIST SP 800-160 Volume 1 Revision 1 provides additional information on achieving cyber resiliency as an emergent property of an engineered system.

Related controls and activities

CA-07, CP-02, CP-04, CP-09, CP-10, CP-11, CP-12, CP-13, IR-04, IR-05, PE-11, PE-12, PE-17, PL-08, PM-07, PM-16, PM-30, PM-31, RA-03, RA-05, RA-09, SA-03, SA-08, SA-09, SA-17, SC-03, SC-05, SC-07, SC-10, SC-11, SC-29, SC-30, SC-34, SC-35, SC-36, SC-37, SC-39, SC-44, SC-47, SC-48, SC-49, SC-50, SC-51, SI-03, SI-04, SI-14, SI-17, SI-20, SI-21, SI-22, SI-23, SR-03, SR-04, SR-05, SR-07.

Enhancements

None.

References

 

SA-400 Sovereignty and jurisdiction

Control

Require the organizational business function to conduct a sovereignty and jurisdiction threat and risk assessment process at the [Selection (1 or more): organization-level, mission/business-level, system-level] that:

  1. conducts an injury assessment to determine the maximum potential injuries that may be suffered due to external legal compulsion of the business functions or information assets by
    1. considering the business needs for security including any laws or regulations which require that data not be disclosed or compromised
    2. updating the security categorization of the business functions or information assets
    3. documenting the negative consequences of legal compulsion
  2. performs jurisdiction-specific threat assessment that establishes a likelihood of being targeted
  3. undertakes a vulnerability assessment to evaluate the potential means by which the external jurisdiction could exploit the business functions or information assets
  4. completes a jurisdiction-specific risk assessment
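The four assessment steps above could be tracked in a simple record structure; a minimal sketch, assuming string-valued ratings (all names, labels, and values here are illustrative, not prescribed by the control):

```python
from dataclasses import dataclass, field

@dataclass
class JurisdictionTRA:
    """Illustrative record of the 4 sovereignty/jurisdiction TRA steps."""
    business_function: str
    jurisdictions: list[str]                    # all applicable levels (municipal .. national)
    injury_level: str = "unassessed"            # step 1: maximum potential injury from compulsion
    compulsion_likelihood: str = "unassessed"   # step 2: jurisdiction-specific threat assessment
    exploitable_means: list[str] = field(default_factory=list)  # step 3: vulnerability assessment
    residual_risk: str = "unassessed"           # step 4: jurisdiction-specific risk conclusion

# Hypothetical example: a business function with statutory non-disclosure obligations
tra = JurisdictionTRA(
    business_function="payroll processing",
    jurisdictions=["Canada (federal)", "Province X", "foreign cloud region Y"],
)
tra.injury_level = "high"                  # e.g., regulation requires data not be disclosed
tra.compulsion_likelihood = "medium"
tra.exploitable_means = ["provider admin access", "data-at-rest seizure"]
tra.residual_risk = "unacceptable - avoid" # some scenarios must be avoided outright
```

A structured record like this also supports the case noted in the Discussion where risk must be avoided rather than managed, since the conclusion is documented per jurisdiction.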

Discussion

Data sovereignty refers to Canada’s right to control access to and disclosure of digital information that is subject to its legislation. Data residency refers to the physical or geographical location of an organization’s data while at rest. Jurisdiction refers to a realm of authority; in this context, the key threat is that authority’s ability to compel compliance from entities within its realm.

A concern with having data hosted in a foreign jurisdiction is that the data could be observed, modified, or denied through legal compulsion, which could not happen if the organization retained sovereignty over its data. Exposure of a business function to a different legal jurisdiction can effectively remove sovereign legal barriers to compromise within the original jurisdiction. For example, the protection of citizen privacy is often not afforded to non-citizens, and service providers may be legally compelled to provide access to data or services that would have been legally protected in the original jurisdiction. In the case of business functions, consider, for example, operational technology (OT) that controls critical infrastructure being modified or rendered unavailable by a hostile foreign jurisdiction while hosted in the cloud.

The techniques employed may be very sophisticated and will generally be executed with full insider administrative privilege. Legal compulsion by sophisticated jurisdictions is exceedingly effective at bypassing other controls. Multiple jurisdictions may apply (e.g., municipality, city, county, reservation, district, region, province, state, canton, territory, or nation). All levels must be identified to assess capabilities for compulsion.

Other jurisdictions may compel the compromise of business functions in terms of confidentiality, integrity, or availability. An injury assessment must document the negative consequences of legal compulsion, including any specific business needs for security (e.g., laws or regulations that require that data not be compromised).

An organization simply may not be at liberty to take a risk management approach (e.g., due to legislative mandate or indigenous data sovereignty) and must avoid risk scenarios outright.

GC discussion

The TBS Directive on Service and Digital, section 4.4.3.14 requires that sensitive information or data that is classified or categorized as Protected B or Protected C be located within Canadian jurisdiction as the principal delivery option. Canadian jurisdiction refers to the geographic boundaries of Canada or within the boundary of a diplomatic or consular mission of the GC abroad. The Cyber Centre recommends that all data and business functions of significant sensitivity be retained within Canadian jurisdictions.

Related controls and activities

AT-02, CA-02, RA-01, RA-02.

Enhancements

  • (01) Sovereignty and jurisdiction: Threat and risk assessment
    • Require the organization to perform threat and risk assessments in relation to data sovereignty, considering the jurisdictions involved and the aspects through which legal compulsion may be effective.
    • Discussion: The organization establishes the injury value of business functions proposed to occupy other jurisdictions. Each of the 4 surfaces (logical, physical, personnel, and lifecycle) discussed in System lifecycle cyber security and privacy risk management activities (ITSP.10.037) may be vulnerable.
    • Related controls and activities: CA-02, RA-03.
  • (02) Sovereignty and jurisdiction: Legal and contractual assessment
    • Evaluate security assurance for business processes via legal or contractual mechanisms.
    • Discussion: Many jurisdictions will have different opportunities for protecting business functions from legal compulsion, at different assurance levels. International/multilateral agreements may help improve protection. Contract terms may be available from suppliers that can assist with reducing exposure or reporting on compulsion (note that some forms of compulsion may additionally compel non-disclosure in any form).
      Where service delivery involves third-party vendors, ensure contractual documents reflect data sovereignty and jurisdiction obligations. If organizations choose to address sovereignty and jurisdiction obligations at the organization or project level, this obligation should be included in associated contractual obligations.
      Organizations should seek specialist advice from the target jurisdiction if legal or contractual mechanisms are relied upon to prevent foreign legal compulsion, since any compulsive legislation is likely to take precedence over protective contracts or legislation. What constitutes due process in other jurisdictions may not be satisfactory from a Canadian perspective.
    • Related controls and activities: CA-02, SA-09.
  • (03) Sovereignty and jurisdiction: Business process attribute marking
    • Ensure business processes receive the intended handling with respect to legal jurisdiction.
    • Discussion: Metadata attributes can be defined and attached to business processes or data to assist management processes in correctly handling the intended legal jurisdiction. For example, documents may be marked with “Canadian Jurisdiction Only” or “Sovereign Indigenous” as an attribute. The automated handling of metadata attributes can be of great assistance for shared-infrastructure providers who may have multiple differing jurisdictional requirements.
    • Related controls and activities: AC-16, MP-03.
  • (04) Sovereignty and jurisdiction: Protection of data at rest
    • Prevent data from residing in any other legal jurisdiction.
    • Discussion: Permitting data at rest to remain in other jurisdictions provides the greatest opportunity for compulsion of the data at any time. Hosting data at rest within Canadian jurisdictions will increase the work factor and may eliminate some capabilities. Encryption of data at rest is of limited value but may be somewhat effective in preventing physical retrieval of the data.
      There is no minimum time with respect to the data being “at rest”. If the data could, even momentarily, be written to non-volatile media, it must be considered at rest. This can particularly affect emergency/unexpected situations such as core dumps, memory caching/swapping and disaster recovery operations.
      Consideration of data at rest must also include how the stored data will eventually be decommissioned and disposed of, to ensure that the data will remain outside foreign compulsion at the end of the system lifespan.
    • Related controls and activities: CP-06, CP-09, MP-06, MP-08, SA-03, SA-04, SA-22, SC-28.
  • (05) Sovereignty and jurisdiction: Protection of data in transit
    • Prevent data from transiting through any other legal jurisdiction.
    • Discussion: Permitting data in transit to pass through other jurisdictions provides numerous opportunities for compulsion of the data. Routing data strictly within Canadian jurisdictions eliminates many capabilities for compulsion. Encrypting data in transit may also have value.
    • Related controls and activities: CP-08, SC-08.
  • (06) Sovereignty and jurisdiction: Protection of data in use
    • Prevent data from being used in any other legal jurisdiction.
    • Discussion: Permitting data in use to exist in other jurisdictions provides opportunity for compulsion of the data. Processing data strictly within trusted jurisdictions eliminates many capabilities for compulsion.
      There is currently no approved method of encryption to protect data in use. In principle, a form of encryption referred to as homomorphic encryption could permit certain kinds of processing while the data remains encrypted, but the Cyber Centre does not currently assess homomorphic encryption to be sufficient for cryptographic use as per the Cyber Centre’s Cryptographic algorithms for UNCLASSIFIED, PROTECTED A, and PROTECTED B information (ITSP.40.111). Therefore, if homomorphic encryption is employed to protect data in use, it must currently be assessed as cleartext (i.e., not providing cryptographic protection as per SC-13).
      There is no minimum time with respect to data being in use. Presence in a system for a single CPU cycle would be sufficient for compulsion.
    • Related controls and activities: CP-07, SC-13.
  • (07) Sovereignty and jurisdiction: Protection against extraterritorial compulsion
    • Prevent business functions from being compromised by individuals or corporations that are being compelled by a different legal jurisdiction.
    • Discussion: Adversaries may have means to compel individuals and/or corporations to compromise business functions even while those functions are kept entirely outside the adversary’s jurisdiction. Consider the loyalty of individuals or corporations, direct ties between entities (e.g., affiliated corporations, family members, foreign assets) and indirect ties (e.g., partnerships and relationships).
      Individuals residing in Canada who hold both Canadian and foreign citizenships may find themselves compelled by their other nationalities, despite also being bound to Canadian law, and could then be forced to choose between legal consequences in either jurisdiction.
    • Related controls and activities: MA-05.
  • (08) Sovereignty and jurisdiction: Protection of business lifecycle
    • Prevent business functions from being compromised by lifecycle attacks embedded via legal compulsion in or from a different legal jurisdiction.
    • Discussion: Employing products or services (e.g., software, firmware, hardware, design services, consultancy, etc.) that have previously been within the scope of compulsion may expose business functions to embedded compromises in those products or services. Most current systems contain elements from multiple legal jurisdictions, and a risk-based approach will be required to select solutions that address identified threats, according to the business needs for security.
      Strong network isolation is recommended as a compensating control for any embedded compromises.
    • Related controls and activities: SA-03, SA-04, SC-07.
  • (09) Sovereignty and jurisdiction: Public ownership
    • Prevent business functions from being compromised by ownership transfer of lifecycle functions to a different legal jurisdiction.
    • Discussion: While system dependencies might be secured against foreign compulsion at the outset of implementation, if a transfer of ownership occurs during the system lifespan, the system could become subject to lifecycle attacks from foreign compulsion. To prevent this possibility, organizations select lifecycle support and supply arrangements that will be under Canadian public ownership for the entire lifespan of the system.
    • Related controls and activities: SA-03, SA-04, SA-22.
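The attribute marking described in enhancement (03) lends itself to automated handling, for example by refusing to place marked assets in storage regions outside Canadian jurisdiction, supporting enhancement (04). A minimal sketch, assuming simple string labels and an illustrative region allow-list (none of these names are prescribed values):

```python
# Illustrative jurisdiction-attribute check for shared-infrastructure handling.
# Labels and the region allow-list are assumptions for this sketch only.

ALLOWED_REGIONS = {"ca-central", "gc-mission-abroad"}  # "Canadian jurisdiction" per the directive

def may_store(asset_attributes: set[str], target_region: str) -> bool:
    """Refuse placement outside Canadian jurisdiction for marked assets."""
    jurisdiction_marks = {"Canadian Jurisdiction Only", "Sovereign Indigenous"}
    if asset_attributes & jurisdiction_marks:
        # A real policy would likely treat each mark differently; both are
        # restricted to the allow-list here for simplicity.
        return target_region in ALLOWED_REGIONS
    return True  # unmarked assets fall back to the organization's default policy

print(may_store({"Canadian Jurisdiction Only"}, "us-east"))     # False
print(may_store({"Canadian Jurisdiction Only"}, "ca-central"))  # True
```

Automating the check in placement and provisioning workflows is what makes the marking useful to shared-infrastructure providers serving multiple differing jurisdictional requirements.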

References

 