
NASA Procedural Requirements
NPR 8705.2C
Effective Date: July 10, 2017
Expiration Date: July 10, 2024
COMPLIANCE IS MANDATORY FOR NASA EMPLOYEES

Subject: Human-Rating Requirements for Space Systems (Updated w/Change 2)

Responsible Office: Office of Safety and Mission Assurance



Chapter 2. Human-Rating Certification Requirements

2.1 Overview

The Human-Rating Certification requirements are designed to lead the Program Manager through the certification process and define the contents of the HRCP. The certification requirements are divided into five categories:

a. Process and Standards

b. Designing the System

c. Verifying and Validating the System Capabilities and Performance

d. Flight Testing the System

e. Certifying and Operating the Human-Rated System

2.2 Process and Standards

2.2.1 HRCP. The Program Manager shall develop and maintain an HRCP for crewed space systems that require NASA Human-Rating Certification.

Note 1: The contents of the HRCP are specified in the following certification requirements. The HRCP reflects the program's progress toward Human-Rating Certification at various milestones and, therefore, is maintained under configuration management control to clearly document changes. When multiple systems of the same configuration are produced from the same design, a single HRCP may apply to all the systems. Paragraph 2.6.4 applies when design changes, configuration changes, block updates, or other changes are incorporated.

Note 2: The Human-Rating Certification is granted to the crewed space system, but the certification process and requirements affect functions and elements of other mission systems, such as control centers, launch pads, and communication systems. Refer to the definitions in Appendix A for further information.

2.2.2 Human-Rating Waivers, Deviations, and Exceptions. At SRR, the Program Manager shall summarize, in the HRCP, all requests for waivers, deviations, and exceptions to the certification process defined in this NPR and technical requirements referenced by this NPR, as well as any exemptions to the failure tolerance requirement and provide access to the program documentation that contains the waivers, deviations, and exceptions. (This is updated at SDR, PDR, CDR, and ORR.)

Note: For the purposes of this NPR, the term "exception" is equivalent to and interchangeable with a "Determination of nonapplicability" as described in NPR 8715.3. The method for documenting approved exceptions should be described in the Safety and Mission Assurance Plan summary (see 2.2.4). Requests for waivers, deviations, and exceptions are submitted in accordance with the requirements contained within NPR 8715.3. The Safety and Mission Assurance Technical Authority dispositions requests for waivers, deviations, and exceptions to the requirements of this NPR. Approved exceptions indicate that a requirement is not applicable and do not represent a non-compliance. The HRCP documents all requests for exceptions, deviations, and waivers submitted for approval by the Technical Authorities and includes the final disposition from the Technical Authorities. Existing program configuration management processes and systems may be used to track these exceptions, deviations, and waivers and support documentation within the HRCP. Individual waivers, deviations, and exceptions to the applicable standards are not to be included in the HRCP.

2.2.3 Safety Analysis Processes. At SRR, the Program Manager shall document in the HRCP, implement, and maintain (for the life of the program) a process for identifying hazards, understanding risk implications of the hazards, modeling hazard scenarios, quantifying and ranking risks to crew safety, and mitigating risks and deficiencies.

Note 1: The intent is that this process for identifying and understanding the hazards (including those resulting from software behavior and human error) and for defining and modeling the scenarios (refer to NPR 8715.3) used to assess and rank the associated crew safety risks becomes an integral part of the overall iterative design and development process that eliminates hazards, controls the initiating events or enabling conditions related to hazards, and mitigates the resulting effects related to the hazard. This encompasses the use of the reference missions for scenario definition and hazard identification. Integration and consistency between these efforts and any other engineering modeling and assessment activities are also essential.

Note 2: Common approaches or tools for performance of this activity include, but are not limited to, traditional safety and reliability analysis techniques (Hazard Analyses, Fault Tree Analyses, Failure Modes and Effects Analysis, Damage Modes and Effects Analysis, Critical Items Lists), Probabilistic Risk Assessment (PRA) including causes due to human health and human error, Human Error Analysis, simulation modeling techniques (e.g., physics-based abort effectiveness and trigger analyses), and accident precursor analysis. The inter-relationship of these analysis techniques provides a comprehensive risk assessment in which these analytical techniques support and feed each other. Risk assessments should utilize the most current NASA-accepted data and environmental models within any hazard analysis or safety assessment. This requirement explicitly refers to the loss of crew which is the primary emphasis of this NPR; requirements related to hazards associated with the loss of a mission are covered within the content of other 8000 series NASA directives.

Note 3: The process does not need to be documented in a stand-alone document; it may be incorporated in other program documentation such as the integrated Safety and Mission Assurance Plan described in paragraph 2.2.4 of this NPR or in the System Safety Technical Plan described in NPR 8715.3. This requirement will be considered satisfied when the Technical Authorities verify the process has been implemented and documented.
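
For illustration only, the following minimal Python sketch shows the basic arithmetic of quantifying and ranking scenario-level risk to the crew in the manner Note 2 describes. The scenario names, probabilities, and independence assumptions are hypothetical placeholders; an actual program would apply the PRA techniques, data, and environmental models identified above.

    # Hypothetical scenario set: names and probabilities are placeholders, not values
    # from this NPR or any NASA program.  Each entry is (P(initiating event),
    # P(hazard controls fail | event), P(crew survival capability fails | controls fail)).
    scenarios = {
        "ascent engine failure":     (1e-2, 0.1, 0.2),
        "MMOD penetration on orbit": (5e-3, 0.5, 0.5),
        "entry guidance failure":    (1e-3, 0.2, 1.0),  # no abort capability assumed during entry
    }

    # Loss-of-crew contribution per scenario, assuming independence of the three factors.
    contributions = {
        name: p_event * p_controls_fail * p_survival_fails
        for name, (p_event, p_controls_fail, p_survival_fails) in scenarios.items()
    }

    total = sum(contributions.values())
    for name, p in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:28s} P(LOC) ~ {p:.1e}  ({p / total:.0%} of modeled total)")
    print(f"{'modeled total':28s} P(LOC) ~ {total:.1e}")

The ranked contributors are then used, as described in Note 1, to focus hazard elimination, control of initiating events or enabling conditions, and mitigation effort on the dominant scenarios.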

2.2.4 Safety and Mission Assurance Plan. Prior to SRR, the Program Manager shall summarize, in the HRCP, the safety and mission assurance plan (including implementation of Independent Verification and Validation requirements for software) established in accordance with NPR 8715.3. (This is updated at SDR, PDR, CDR, and ORR.)

Note 1: The program may document the planned safety and mission assurance activities and outcomes in a stand-alone Safety and Mission Assurance Plan or in a combined form with another program level plan. This plan may be separate from the HRCP. Verification by the Technical Authorities that the program is in place, properly documented, and referenced in the HRCP, satisfies this requirement.

Note 2: The Human-Rating Certification effort focuses on key elements of the overall safety and mission assurance, health, and systems engineering efforts. The effectiveness of implementation of these key elements depends upon the framework and integration of the activities encompassed in the overall safety and mission assurance program. Implementation and subsequent maintenance of all of the elements of the safety and mission assurance program are essential to establish a basis for Human-Rating Certification.

2.2.5 Applicable Standards. The Program Manager shall comply with the following standards:

a. NASA-STD-8719.29.

b. NASA-STD-3001 Volume 1.

c. NASA-STD-3001 Volume 2.

d. FAA HFDS - Human Factors Design Standard.

Note: The standards listed are levied onto the program as applicable standards. These standards consist of human-system integration standards, which are unique to human space systems and other standards deemed mandatory by the Technical Authorities. Exceptions, deviations, and waivers to the applicable standards require the approval of the Technical Authorities (see paragraph 2.2.2, Human-Rating Waivers, Deviations, and Exceptions). In all cases, the application of standards remains under the control of the Technical Authorities (see paragraph 2.2.6, Other Standards Mandated by the Technical Authorities). Refer to NPR 7120.10, Technical Standards Products for NASA Programs and Projects.

2.2.6 Other Standards Mandated by the Technical Authorities. At SRR, the Program Manager shall document, in the HRCP, the list of additional program-level standards mandated by the Technical Authorities as relevant to human-rating, per paragraph 1.4 of this NPR.

Rationale: The intent of this requirement is to ensure that the program has identified and applied the necessary standards early in the system development. The Technical Authorities may mandate standards or topic areas which require standards through other NASA directives or by written direction to the program. In all cases, the standards established by the program are approved by the Technical Authorities, and the application of the standards remains under the control of the Technical Authorities. Refer to NPD 7120.4, NASA Engineering and Program/Project Management Policy.

2.2.7 Summarizing Exceptions, Deviations, and Waivers to the Applicable Standards. At SRR, the Program Manager shall summarize, in the HRCP, the exceptions, deviations, and waivers to the applicable standards listed in paragraphs 2.2.5 and 2.2.6 and provide access to the program documentation that contains the exceptions, deviations, and waivers. (This is updated at SDR, PDR, CDR, and ORR.)

Rationale: The intent of this requirement is to have the program collectively evaluate the impact to human-rating of the waivers, deviations, and exceptions to the standards mandated by the Technical Authorities for the particular system to be human-rated. It will be left to the program and the Technical Authorities to determine which waivers, deviations, and exceptions are significant and relevant to human-rating. The individual waivers, deviations, and exceptions are not documented in the HRCP, but the program provides the location of and access to the actual waivers, deviations, and exceptions for review.

2.3 Designing the System

2.3.1 Reference Missions. At SRR, the Program Manager shall document, in the HRCP, a description of the crewed space system, its functional interfaces to other systems, and the reference missions that will be certified for human-rating.

Rationale: Defining reference missions establishes the scope of the program to be human-rated and also provides a framework that supports, among other things, identification of crew survival strategies and establishment of scenarios to be used for hazard analysis and risk assessments. The reference missions also define the interfaces with other systems, such as mission control centers, that functionally interact with the crewed space systems.

2.3.2 Identifying System Capabilities for Crew Survival. At SDR, the Program Manager shall document, in the HRCP, a description of the crew survival strategy for all phases of the reference missions and the system capabilities required to execute the strategy. (This is updated at PDR, CDR, and ORR.)

Rationale: The reference missions establish a basis and framework that the program can use to establish the operational scenarios and document the strategies that will be used to enhance crew survival. Incorporating and preserving the capability for the crew to safely return from the mission is a fundamental tenet of human-rating. The scenarios should include system failures and emergencies (such as fire, collision, toxic atmosphere, decreasing atmospheric pressure, and medical emergencies) with specific capabilities (such as abort, safe haven, rescue, emergency egress, emergency systems, and emergency medical equipment or access to emergency medical care) identified to protect the crew. Some specific capabilities, such as abort, are mandated by the technical requirements in NASA-STD-8719.29 referenced by this NPR. The intent of those requirements is to have the program identify additional capabilities for their specific design that enhance crew survival. Additionally, the program describes how the survival capabilities will be maintained during the scenarios. The broad strategies and the process used to develop both the reference missions and the strategies that respond to the scenarios help to establish a focus within the program of making crew survival an integral element of the design process. Continued challenges to (and deliberations concerning) the scenarios themselves and the assumptions, analyses, and design decisions that flow from these scenarios are essential to successfully obtaining Human-Rating Certification.

2.3.3 Documenting the Design Philosophy for Utilization of the Crew. At SRR, the Program Manager shall document, in the HRCP, a description of the design philosophy which will be followed to develop a system that utilizes the crew's capabilities to execute the reference missions, prevent aborts, and prevent catastrophic events.

Rationale: The integration of the crew with the space system and utilization of the crew's capabilities to improve safety and mission success comprise the second tenet in the human-rating definition. Establishing and documenting a design philosophy for utilization of the crew are important steps in actually producing such a system. When unexpected conditions or failures occur, the capability of the crew to control the system can be used to prevent catastrophic events and aborts. These capabilities are determined via task analysis for those tasks where there is a crew interface and documented in operation concepts and, later, referenced in the design of crew interfaces and the development of flight procedures.

2.3.4 Incorporating Capabilities into the System Design. At SDR, the Program Manager shall document, in the HRCP, a description of the implementation of the survival capabilities identified in the requirement in paragraph 2.3.2 and provide clear traceability to the highest level program documentation. (This is updated and reviewed at PDR and CDR.)

Note: At SDR, if the design is not determined, describing the implementation consists of identifying the trade studies and analysis to be used to determine implementation. At PDR and CDR, the design that implements the capability is described in increasing detail with traceability to the highest level requirements in program documentation.

2.3.5 Implementing the Referenced Technical Requirements. At SRR, the Program Manager shall document, in the HRCP, a description of the implementation of the applicable requirements of NASA-STD-8719.29 referenced by this NPR and provide clear traceability to the highest level program documentation. (This is updated and reviewed at SDR, PDR, and CDR.)

Note: At SRR, if the design is not determined, describing the implementation consists of identifying the trade studies and analysis to be used to determine implementation. At SDR, PDR, and CDR, the design that implements the requirement is described in increasing detail with traceability to the highest level requirements in program documentation. The description of the implementation of the failure tolerance requirements includes rationale for the level and type of redundancy for critical systems and subsystems.

2.3.6 Allocation of Safety Goals and Thresholds. At SRR, the Program Manager shall document, in the HRCP, probabilistic safety requirements derived from the Agency-level safety goals and safety thresholds, including any allocations to mission phases and system elements (to be updated at PDR and CDR).

Rationale: Top-level allocations of probabilistic safety requirements are documented in the HRCP to allow for comparison with the risk estimates produced as part of the design and safety analyses. Allocations established during the earlier phases of the program are treated as preliminary and may be updated as the design matures.
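
As a purely hypothetical illustration of such an allocation (the threshold and phase split below are placeholders, not Agency-level safety goals, thresholds, or program values), a top-level probabilistic requirement might be apportioned to mission phases and then checked to confirm the allocations roll up to the total. A minimal Python sketch:

    # Hypothetical allocation sketch: the overall threshold and the phase split are
    # placeholders, not Agency-level safety goals, thresholds, or program requirements.
    P_LOC_REQUIREMENT = 1.0 / 500     # overall P(loss of crew) limit for the reference mission

    allocation = {                    # fraction of the overall risk budget assigned to each phase
        "ascent": 0.40,
        "orbit":  0.25,
        "entry":  0.35,
    }

    # The phase allocations must roll up to the top-level requirement.
    assert abs(sum(allocation.values()) - 1.0) < 1e-9

    for phase, share in allocation.items():
        print(f"{phase:8s} allocated P(LOC) <= {share * P_LOC_REQUIREMENT:.2e}")

In practice, allocations are risk-informed by the integrated design and safety analyses of paragraph 2.3.7 rather than fixed fractions, and they are treated as preliminary until the design matures.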

2.3.7 Integration of Design and Safety Analyses

2.3.7.1 The Program Manager shall integrate design and safety analyses to determine the following:

Note 1: This NPR places the responsibility on the program to determine the appropriate implementation of risk reduction measures such as failure tolerance. The program integrates the design and safety analyses to make such determinations based on an understanding of individual risk contributions as well as the total level of risk to the crew.

Note 2: As explained in the note to the requirement in paragraph 2.2.3, safety analyses, as defined by this NPR, combine existing techniques such as Hazard Analysis, Fault Tree Analysis, Failure Modes and Effects Analysis, Damage Modes and Effects Analysis, Critical Items Lists, as well as scenario-based probabilistic risk analyses including human error analysis and simulation modeling techniques (e.g., physics-based abort effectiveness and trigger analyses).

Note 3: The integration of design and safety analysis consists of the active and iterative application of these techniques and the use of the collective results from these analyses to inform design decisions. The integrated analysis is done in a consistent manner throughout the program and at the overall system level. This implies that techniques such as Hazard Analysis, Failure Modes and Effects Analysis, and probabilistic risk analyses cannot be performed in isolation and that such analyses should be internally consistent.

Note 4: The resulting assessments and rankings, along with probabilistic safety requirements, serve to inform decisions regarding safety enhancing measures such as necessary failure tolerance levels, margins, abort triggers, and crew survival capabilities.

Note 5: While the results of the design and safety analysis processes are formally submitted for endorsement by stakeholders such as the Technical Authorities and representatives of the crew at major review milestones, it is intended that these stakeholders are an ongoing part of the analysis and design deliberations, enabling them to challenge the rationale for design decisions and help identify hazards and safer alternatives.

a. A list of the significant risk contributors that together constitute the majority of the total risk to which the crew is subjected.

Rationale: A ranking of risk contributors such as accident scenarios or classes of accident scenarios enables the identification of the significant risk contributors that collectively represent the majority of risk to the crew. Ranking is done based on the estimated risk to the crew, accounting for hazard controls, crew survival capabilities, and other risk reduction measures.

b. The appropriate hazard controls and mitigations to reduce the risk to the crew, including the level and implementation of failure tolerance to catastrophic events for the space system.

Rationale: This requirement is tied to paragraphs 4.3.1 and 4.3.2 of NASA-STD-8719.29, which require the crewed space system to be failure tolerant.

c. Specific rationale for dynamic flight phases where dissimilar redundancy, backup systems, or abort capabilities are not available to limit the likelihood of a catastrophic event or the loss of crew.

Rationale: The intent of these requirements is to ensure that the program has analyzed and considered the benefits of dissimilar redundancy and backup systems. Where possible, the crewed space system should provide a backup capability for entry to protect for loss of the primary attitude control and guidance system. Specific focus is placed on dynamic flight phases that do not have an abort option, such as Earth reentry and lunar ascent (other than potentially an abort to lunar orbit), because they can be very unforgiving when multiple or common cause failures occur. There is very limited time for system troubleshooting or reconfiguration and the "time to effect" for loss of a critical capability is often short.

d. The effectiveness of crew survival capabilities under conditions and time constraints to be encountered during high-risk accident conditions and their impact on the risk to the crew.

Note: An evaluation of crew survival design and operational capabilities and limitations (functionality, performance, reliability, availability, autonomy, response, activation features, and whether the design requires human interaction) will be used to determine their effectiveness given anticipated conditions and time constraints following the defeat of preventative controls, as well as their impact on the risk to the crew. Evaluations may be qualitative or quantitative and are prioritized based on the risk associated with the accident condition. At a minimum, quantitative (probabilistic) evaluations are performed for crew survival capabilities that are credited with significant reductions of risk to the crew.

e. The level of risk to the crew and associated uncertainty determined via analysis performed in accordance with accepted probabilistic safety analysis protocols and supported by documented evidence including ground and flight test data.

Rationale: This requirement is tied to paragraph 4.2.2 of NASA-STD-8719.29, which requires satisfaction of probabilistic safety requirements with a high degree of certainty. At a minimum, the determination of risk is performed for the system and any phase or system element for which an allocation is established. Other risk contributions are determined in order to decide on risk reduction measures such as failure tolerance.

Note: Types of evidence to support risk estimates commonly include design information and functional allocations, performance analyses, success criteria, other safety and reliability analyses and ground test, flight test, and operational reliability performance data.
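
As a minimal, hypothetical sketch of how a risk estimate and its uncertainty might be produced (the distributions, medians, and error factors below are placeholders, and the lognormal/error-factor treatment is only one common PRA simplification), per-phase uncertainties can be propagated to a total estimate by simple Monte Carlo sampling:

    # Hypothetical uncertainty propagation: per-phase P(LOC) is represented by a
    # lognormal distribution given as (median, error factor).  All values are placeholders.
    import math
    import random
    import statistics

    random.seed(0)

    phases = {
        "ascent": (1.5e-3, 3.0),
        "orbit":  (8.0e-4, 5.0),
        "entry":  (1.2e-3, 3.0),
    }

    def sample_phase(median, error_factor):
        # Error factor taken here as the ratio of the 95th percentile to the median,
        # so sigma = ln(EF) / 1.645 for a lognormal distribution.
        sigma = math.log(error_factor) / 1.645
        return random.lognormvariate(math.log(median), sigma)

    samples = sorted(sum(sample_phase(m, ef) for m, ef in phases.values())
                     for _ in range(20_000))
    print(f"mean total P(LOC) ~ {statistics.fmean(samples):.2e}, "
          f"95th percentile ~ {samples[int(0.95 * len(samples))]:.2e}")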

2.3.7.2 At SDR, the Program Manager shall summarize, in the HRCP, and present the current understanding of risks and uncertainties and related decisions regarding the system design and application of testing, based on the results of the design and safety analyses performed in accordance with paragraph 2.3.7.1 (this is updated and reviewed at PDR, CDR, and ORR).

Note 1: The Technical Authorities determine compliance with this requirement during the milestone reviews indicated. A formally scheduled discussion, as part of the review milestone with the Technical Authorities and the review board, satisfies the presentation aspect of this requirement. The intent is for the program to show that safety analyses are iteratively used to make design decisions to eliminate hazards, control initiating events or enabling conditions related to hazards, and mitigate the resulting effects related to the hazard. The intent is not to track all decisions and provide a linkage to the assessment that influenced those decisions; rather, the intent is to summarize how the analyses were used.

Note 2: The effectiveness of tools such as Hazard Analyses, Failure Modes and Effects Analysis, Damage Modes and Effects Analysis, Critical Items List, Fault Trees, and PRA is dependent on their integrated use in design activities and the information and data on which they are based. Specific implementation requirements concerning the models and assessment techniques and processes (including the hazard reduction precedence) to be used in relation to this requirement are defined in NPR 8715.3 and accepted standards regarding the conduct of PRA. Accepted standards and guidance for the conduct of PRA include ISO 11231:2019, Space systems - Probabilistic risk assessment (PRA), and NASA/SP-2011-3421, Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. The Technical Authorities may accept other standards for use by programs and projects. The demonstration here shows how these tools were used in the deliberations that examined design alternatives, identified key uncertainties (e.g., uncertainty in system performance, uncertainty in human performance, or in understanding phenomena) related to the design options, established confidence in the analyses and the resulting design, and identified focus areas for testing, as well as the subsequent decisions that resulted from those deliberations. Because any modeling or analysis process is an abstraction of the design (it uses assumptions, limits the scenarios modeled, and uses both program-specific and generic data), the rigorous use of deliberation to identify the thresholds, as well as to defend and challenge design options, is of greater significance than a final number that results from the analysis.

Note 3: SDR, PDR, and CDR are the key milestones where the requirements, architectures, and design are developed and solidified. These are also the milestones where demonstration and discussion of the use of the techniques and their results are expected. This information can be documented as a part of the safety analysis report described in NPR 8715.3. A ranking of the safety risks to which the crew is subjected, and an assessment of the achievement of probabilistic safety requirements derived from the Agency-level safety goals and thresholds should be provided.

2.3.8 Human-Systems Integration Team. At SRR, the Program Manager shall establish a Human-Systems Integration (HSI) team comprising representation from the system's user community (e.g., astronauts, mission operations personnel, training personnel, ground processing personnel, human factors and human-systems SMEs, etc.), with defined authority, responsibility, and accountability in support of the program's HSI Plan for the crewed space system.

Rationale: Past experience with development of spacecraft and military aircraft has shown that, when a correctly staffed human-system integration team is given the authority, responsibility, and accountability for human-system design and integration, the best possible system is achieved within the schedule and budget constraints. This team focuses on all human-system interfaces (e.g., crew, launch control, and applicable ground processing operations) and ensures an acceptable crew health and performance environment in the space systems. See NPR 7123.1, NASA/SP-2016-6105 Rev 2, and NASA/SP-2015-3709 for more guidance.

Note: NPR 7123.1 requires that a Human-Systems Integration (HSI) Plan be created and updated throughout the development cycle of a human-rated system. The plan defines how human-system considerations are integrated into the full systems engineering design, verification, and validation life cycle. Updates are required to document the implementation of an HSI design approach to the system and its mission and to demonstrate how the design accommodates human capabilities and limitations. This requirement is consistent with NASA-STD-3001 and other standards for human-centered design and with Federal Agency HSI best practices for development of systems that involve humans and further builds on standards such as NASA-STD-5005. The intent is to ensure that, through developing and executing the HSI Plan, the PM expends the effort to integrate HSI expertise, capture HSI approach, and track HSI metrics throughout the life cycle of the program to increase safety, human performance, and mission success. HSI domains include safety; human factors engineering; operational resources management; training; maintainability and supportability; and habitability and environment. Lessons learned from previous programs and projects have shown that by including stakeholders with expertise in relevant HSI domains, the best possible outcome is achieved for operations and mission success. HSI focuses on all human-system interaction (crew, ground control, and ground processing) that can cause or prevent a catastrophic failure.

2.3.9 Evaluating Crew Workload. At SRR, the Program Manager shall document, in the HRCP, a description of how the crew and ground control workload for the reference mission(s) will be evaluated. (This is updated and reviewed at PDR and CDR.)

Rationale: The design of the system can have a significant impact on crew and ground control workload and productivity. Integration of the human into the system is a fundamental tenet of human-rating. Understanding how the system design affects workload is part of the integration process. Additionally, if the resultant workload during a mission is too high, crew fatigue can affect safety. The expectation is that the evaluation of workload would be tasked to the human-systems integration team. Evaluation of the workload requires the program to establish criteria for the evaluation.
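
For illustration only, one widely used workload instrument is the NASA Task Load Index (TLX); the Python sketch below shows a simple screening criterion built on unweighted TLX subscale ratings. The tasks, ratings, and threshold are hypothetical placeholders, and the actual evaluation criteria are established by the program and its human-systems integration team.

    # Hypothetical workload screening: NASA-TLX subscale ratings (0-100) per task and the
    # acceptance threshold are placeholders, not program criteria.
    TLX_SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

    task_ratings = {
        "manual docking approach": (80, 35, 85, 50, 75, 60),
        "routine systems check":   (35, 10, 25, 15, 30, 10),
    }

    THRESHOLD = 60  # hypothetical unweighted-mean score above which mitigation is evaluated

    for task, ratings in task_ratings.items():
        score = sum(ratings) / len(TLX_SUBSCALES)
        status = "exceeds criterion - evaluate mitigation" if score > THRESHOLD else "acceptable"
        print(f"{task:26s} mean TLX = {score:5.1f}  ({status})")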

2.3.10 Human-in-the-Loop Integration Evaluation.

2.3.10.1 The Program Manager shall conduct human-in-the-loop usability evaluation for the human-system interfaces and integrated human-system performance testing, with human performance criteria, for critical system and subsystem operations involving crew and ground control performance during crewed operations.

2.3.10.2 At PDR, the Program Manager shall summarize, in the HRCP, and present how the human-in-the-loop usability evaluations for human-system interfaces and integrated human-system performance evaluation results (to date) were used to influence the system design and provide access to the detailed evaluation plans and results. (This is updated at CDR.)

2.3.10.3 At ORR, the Program Manager shall summarize, in the HRCP, how the integrated human-system performance test results were used to validate the system design and provide access to the detailed test plans and results.

Rationale: The expectation is that human-in-the-loop testing is conducted during the development life cycle and is intended to ensure that the integrated system requirements and operational concepts are progressively met. Tests and analyses are the methods used to demonstrate that the operational concepts and human-system interface design requirements are met. Test and analysis data are used to verify and validate the integrated performance of the space system hardware, software, and human operators in simulated vehicle and mission operations environments. Testing can include quantitative and objective human-in-the-loop testing and simulations of flight-critical systems, vehicle, and mission-level operations in ground-based simulators. In addition, integrated test data should be complemented by usability evaluation data and analysis of human-system interfaces. These data can also be used to inform and validate human error analysis.

2.3.11 Human Error Analysis.

2.3.11.1 The Program Manager shall conduct a human error analysis for all mission phases to include operations planned for response to system failures.

2.3.11.2 At PDR, the Program Manager shall summarize, in the HRCP, and present how the human error analysis (to date) was used to: (This is updated at CDR and ORR.)

a. Understand and manage potential catastrophic hazards which could be caused by human errors.

b. Understand the relative risks and uncertainties within the system design.

c. Influence decisions related to the system design, operational use, and application of testing.

Rationale: Personnel trained in human error analysis (HEA) need to be part of the human-system integration team to perform this analysis. The intent is to show that the HEA (which includes hazard identification, analysis [including process failure modes and effects analysis], and modeling of human behavior) is iteratively used to make design decisions. The effectiveness of HEA tools is dependent on their integrated use in design activities, upgrades, enhancements, and operation-risk trades.

Note: The human error analysis includes all mission operations while the crew is interacting with the space system - including crew and ground control operations, and ground processing operations with flight crew interfaces. This analysis covers response to system failures and abort scenarios. While the potential errors of ground processing personnel are to be considered, their personal safety is not addressed by this NPR. A formally scheduled discussion as part of the review milestone with the Technical Authorities and the review board is necessary to satisfy the presentation aspect of this requirement. The intent of this human error analysis requirement is to have the program:

1) Identify inadvertent operator actions and failure to act which would cause a catastrophic event and determine the appropriate level of tolerance.

2) Identify other types of human error that would result in a catastrophic event.

3) Apply the appropriate error management (per paragraph 2.3.12).

2.3.12 The Program Manager shall design the system to manage human error according to the following precedence:

a. Design the system to prevent human error in the operation and control of the system.

b. Design the system to reduce the likelihood of human error and provide the capability for the human to detect and correct or recover from the error.

c. Design the system to limit the negative effects of errors.
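
For illustration only, the following hypothetical Python sketch shows how the precedence above might appear in a single command-handling function: invalid inputs are rejected outright (prevent), hazardous settings require explicit confirmation (detect and correct), and the commanded change is rate limited (limit the negative effects). All names, ranges, and thresholds are placeholders.

    # Hypothetical command-handling sketch showing the precedence in 2.3.12.
    # All names, ranges, and thresholds are placeholders.
    VALID_RANGE = (0.0, 100.0)   # acceptable throttle settings, percent
    HAZARDOUS_ABOVE = 90.0       # settings above this level are treated as hazardous commands
    RATE_LIMIT = 5.0             # maximum change applied per command

    def handle_throttle_command(requested, current, confirmed):
        lo, hi = VALID_RANGE
        if not (lo <= requested <= hi):
            # (a) Prevent the error: invalid input is rejected before it can affect the system.
            return current, "rejected: setting out of range"
        if requested > HAZARDOUS_ABOVE and not confirmed:
            # (b) Detect and allow correction: hazardous settings need explicit confirmation.
            return current, "held: confirmation required for hazardous setting"
        # (c) Limit the negative effects: the commanded change is rate limited.
        step = max(-RATE_LIMIT, min(RATE_LIMIT, requested - current))
        return current + step, "applied (rate limited)"

    print(handle_throttle_command(150.0, 40.0, confirmed=False))
    print(handle_throttle_command(95.0, 40.0, confirmed=False))
    print(handle_throttle_command(60.0, 40.0, confirmed=True))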

2.4 Verifying and Validating the System Capabilities and Performance

2.4.1 Verifying and Validating Implementation of the Technical Requirements. At SRR, the Program Manager shall document, as part of the HRCP, how the implementation of the technical requirements in NASA-STD-8719.29 referenced by this NPR will be verified and validated (with rationale). (This is updated at SDR, PDR, and CDR.)

Rationale: This is linked to the certification requirement in paragraph 2.3.5. From a human-rating perspective, it is important to understand how the implementation of the technical requirements in NASA-STD-8719.29 will be validated, which may not be demonstrated by requirements verification alone.

2.4.2 Verifying and Validating Survival Capabilities. At CDR, the Program Manager shall document, as part of the HRCP, how the implementation of survival capabilities from the requirement contained in paragraph 2.3.4 will be verified and validated (with rationale).

Note: This is linked to certification requirement in paragraph 2.3.4. These are the capabilities identified by the program that are unique to the reference mission and the system.

2.4.3 Verifying and Validating Critical System and Subsystem Performance. At CDR, the Program Manager shall document, as part of the HRCP, how the critical system and subsystem performance will be verified and validated (with rationale).

Rationale: The intent of this requirement is to have the program prove that the critical (sub)system actually performs its functions properly, which may or may not be demonstrated by requirements verification alone. Testing provides the last line of defense and opportunity to discover unexpected interactions and the ability to validate and verify models used during design. The axiom is "Test Like You Fly." The "Test Like You Fly" approach, covering nominal and off-nominal scenarios, assures the system can, in fact, accomplish the mission with the intended safety controls and robustness to mission success. It is acknowledged that testing is not possible for all types of systems and that testing is combined with analysis and other methods. Therefore, the second intent of this requirement is to have the program justify the cases where a "Test Like You Fly" approach cannot or should not be used and to describe how validation is accomplished assuring sufficient coverage of the expected flight environments and operational sequences demonstrating critical (sub)system functions, performance, and margins. A detailed summarization of the plans and procedures for performing the verification and validation with respect to the critical system and subsystem performance is sufficient to meet this requirement, provided that complete references are included to the detailed plans and procedures that document the verification and validation activities.

2.4.4 Integrated Verification and Validation of Critical Systems and Subsystems. At CDR, the Program Manager shall document, as part of the HRCP, how critical system and subsystem performance will be verified and validated at the integrated system level to ensure that (sub)system interactions will not cause a catastrophic hazard (with rationale).

Rationale: The intent of this requirement is to have the program prove that the critical (sub)systems actually perform their functions properly in an integrated environment and to demonstrate that (sub)system interactions do not cause a catastrophic hazard. Testing provides an opportunity to discover unexpected interactions and allows the program to validate and verify models used during design. The axiom is "Test Like You Fly." The "Test Like You Fly" approach, covering nominal and off-nominal scenarios, assures the system can, in fact, accomplish the mission with the intended safety controls and robustness to mission success. It is acknowledged that testing is not possible for all types of systems and that testing is combined with analysis and other methods. Therefore, the second intent of this requirement is to have the program justify the cases where a "Test Like You Fly" approach cannot or should not be used and to describe how validation is accomplished assuring sufficient coverage of the expected flight environments and operational sequences demonstrating critical (sub)system functions, performance, and margins.

2.4.5 Verifying and Validating Critical Software Performance.

2.4.5.1 At CDR, the Program Manager shall document, as part of the HRCP, how testing will be used to verify and validate the performance, security, and safety of all critical software across the entire performance envelope (or flight envelope) including mission functions, modes, and transitions (with rationale).

2.4.5.2 At CDR, the Program Manager shall also document, as part of the HRCP, how testing will be used to verify and validate the performance, security, and safety of all critical software under additional off-nominal, contingency, and stress testing (with faults injected) (with rationale).

Rationale: The intent of these requirements is to have the program fully describe the verification and validation approach that will be used, including fidelity of test environment and extent of stress testing to be performed. Critical mission software, which may include both flight and ground software, should be tested using the highest fidelity closed-loop test environment possible; for example, when a flight-equivalent avionics test bed is not used, the program needs to provide the rationale and strategy for the alternate approach.
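
For illustration only, the following hypothetical Python sketch shows the flavor of off-nominal testing with faults injected: a toy three-channel altitude-selection function is exercised with a stuck channel and with all channels failed. The function, failure modes, and test cases are placeholders and do not represent any program's flight software, ground software, or test environment.

    # Hypothetical fault-injection sketch: a toy three-channel altitude-selection
    # function exercised under injected faults.
    import unittest

    def select_altitude(readings, failed_flags):
        """Mid-value select across three altimeter channels, excluding flagged channels."""
        good = sorted(r for r, failed in zip(readings, failed_flags) if not failed)
        if not good:
            raise RuntimeError("no valid altitude source")  # caller must transition to a safe mode
        return good[len(good) // 2]

    class FaultInjectionTests(unittest.TestCase):
        def test_nominal_mid_value(self):
            self.assertEqual(select_altitude([1000.0, 1002.0, 998.0], [False, False, False]), 1000.0)

        def test_injected_stuck_high_channel_is_excluded(self):
            self.assertEqual(select_altitude([1000.0, 50000.0, 998.0], [False, True, False]), 1000.0)

        def test_all_channels_failed_raises(self):
            with self.assertRaises(RuntimeError):
                select_altitude([0.0, 0.0, 0.0], [True, True, True])

    if __name__ == "__main__":
        unittest.main()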

2.4.6 System Design Verification and Validation Results. At ORR, the Program Manager shall summarize, as part of the HRCP, the results of the verification and validation performed per requirements 2.4.1 and 2.4.2, along with access to the detailed results.

2.4.7 Critical System and Subsystem Performance Verification and Validation. At ORR, the Program Manager shall summarize, as part of the HRCP, the results of the critical system and subsystem verification and validation performed per requirements 2.4.3 and 2.4.4, along with access to the detailed results.

2.4.8 Software Verification and Validation Results. At ORR, the Program Manager shall summarize, as part of the HRCP, the results of the critical software testing performed per requirement 2.4.5, along with access to the detailed results.

2.4.9 Validating Crew Workload. At ORR, the Program Manager shall document, in the HRCP, how the crew and ground control workload was validated for the reference mission(s) and how the Program identified and implemented necessary mitigations to significant findings.

2.4.10 Updating Safety Models to Support System Validation. At the ORR, the Program Manager shall describe, in the HRCP, how the safety analysis documented in paragraph 2.2.3 related to loss of crew was updated based on the results of validation and verification testing and used to support validation and verification of the design in circumstances where testing was not accomplished.

Rationale: This requirement is verified by the Technical Authorities at ORR. A formally scheduled discussion with the Technical Authorities and the review board is a satisfactory method for the delivery of the information. When a program prepares for system acceptance, it is essential to examine the system in a comprehensive manner. The system capabilities need to be examined in relationship to the overall safety and mission assurance framework that is documented in the overall safety analyses defined in paragraphs 2.2.3 and 2.3.7. Only in looking at these in a collective sense can uncertainties related to uncontrolled or unidentified hazards be reduced and confidence in the results be established to the point necessary to obtain Human-Rating Certification.

Rationale: Also, while testing is the preferred approach to validate and verify the design, there will be situations where testing will not be performed. The intent here is to show where these tools and analyses are used to support validation and verification when testing is not performed.

2.5 Flight Testing the System

2.5.1 Establishing the Flight Test Program. At SDR, the Program Manager shall document, as part of the HRCP, the flight test program, including the type and number of test flights that will be performed.

Rationale: Since flight tests are typically major factors in program and budget planning, it is important to review the flight test program at a high level early in the development process. The program may elect to bring forward the flight test program at an earlier milestone for concurrence.

2.5.2 At PDR, the Program Manager shall update the flight test program documented in the HRCP to include the flight test objectives with linkage to specific program requirements that are validated by flight test. (This is updated and reviewed at CDR.)

Note 1: The flight test program provides two important functions. First, the flight test program uses testing to validate the integrated performance of the space system hardware, software, and, for crewed test flights, the human, in the operational flight environment. Second, the flight test program uses testing to validate the analytical models that are the foundation of all other analyses, including those used to define operating boundaries not expected to be approached during normal flight.

Note 2: Flight and ground tests are needed to ensure that the data for the analytical models can be used to confidently predict the performance of the space systems at the edges of the operational envelopes and to predict the margins of the critical design parameters.

Note 3: In order to minimize risk to the crew, it is preferred that an uncrewed flight test be conducted prior to a crewed flight test. It is acknowledged that this may not be feasible for all phases of flight and may not be necessary for some systems.

2.5.3 Flight Test Results. At ORR, the Program Manager shall summarize, as part of the HRCP, the results of the flight test program to date and each test objective, along with access to the detailed test results.

Rationale: The results of the flight test program may force modifications or changes to the system. It is imperative that any changes are fully understood and properly verified and validated.

2.6 Certifying and Operating the Human-Rated System

2.6.1 Maintaining the System and System Configuration Control. At ORR, the Program Manager shall provide, as part of the HRCP, a configuration management and maintenance plan that documents the processes that the program will use to ensure that the space system remains in the "as-certified" condition through the end of the life cycle to include system disposal.

Rationale: The plan is used to define how the human-rating for the system remains current in the face of configuration or operational changes that may require re-evaluation. The processes documented may include (but are not limited to) raw material selection criteria and control, fabrication, inspection, acceptance tests, audits, and maintenance processes.

2.6.2 Data Collection, Management, and Analysis. At ORR, the Program Manager shall provide, as part of the HRCP, a data collection, management, and analysis plan that documents the processes that the program will use to ensure that the appropriate space system data is collected, stored, and analyzed throughout its life cycle in support of the analyses to understand the risks associated with each mission.

Note: These data and processes may include (but are not limited to) time to failure of critical components, operating histories (operating times and demands), thermal and structural-related data used to verify design parameters, test data, updated environment models, repair times, acceptance tests, and maintenance processes.

2.6.3 System Certification. Prior to the first crewed flight, the Program Manager shall obtain from the NASA Administrator, as the authority for human-rating, a Human-Rating Certification for the crewed space system based on the reference (or test) missions.

Note: The specific administrative process is detailed in Chapter 1 of this NPR. The certification request will specify the duration of the certification. See Appendix F for the request form.

2.6.4 Evaluating Changes to the System.

2.6.4.1 After Human-Rating Certification, the Program Manager, the Technical Authorities, and the Director, JSC, shall collectively evaluate design changes, manufacturing (or refurbishment) process changes, testing changes to the space system, and temporary exemptions to the failure tolerance requirement.

2.6.4.2 If the Program Manager, any of the Technical Authorities, or the Director, JSC, determines that a re-rating is required, the Program Manager shall submit a request for Human-Rating Recertification, with a revised HRCP, to the NASA Administrator, as the authority for human-rating.

Rationale 1: When changes to the design, manufacturing or refurbishment process, or acceptance testing are made, the Human-Rating Certification is reevaluated. In some cases, the Technical Authorities and the Director, JSC, may decide that the changes do not affect the certification. In this case, the change should be documented and certified for flight at the appropriate level.

Rationale 2: Major hardware and software changes in requirements, design, major upgrades, major modifications or changes to the process, or testing that affect form, fit, performance, timing, or function, or the structural integrity and structural life of the system should be evaluated through a recertification process. Recertification is completed prior to the next flight/mission readiness review process.

2.6.5 Operating the System within the Certification. As part of each flight or mission readiness review, the Program Manager shall review the Human-Rating Certification to include the following:

a. Compliance with the Configuration Management and Maintenance Plan.

b. Verification that the human-rated system will be operated within the certified envelope of the reference mission(s).

c. Anomalies from the previous flight/mission that affect the Human-Rating Certification and their resolution.

d. Design changes, manufacturing (or refurbishment) process changes, and testing changes that were made as part of the Program's safety upgrade and improvement program that are expected to affect risk to the crew.

Rationale: Human-Rating of a space flight system is a process that is embedded throughout the life cycle of a program from development through operations. The applicability of the Human-Rating Certification is part of the program review process, including the program boards and flight readiness reviews. However, more important than the certification or process, human-rating is a state of mind that enables each member of a program design team to constantly work to reduce uncertainties, reduce risk, and design, build, test, and operate the safest practical system for the mission. As a part of this effort, analytical models for the system are updated using the anomaly and operational and flight performance data to accurately reflect the risk associated with future missions.




DISTRIBUTION:
NODIS


This document does not bind the public, except as authorized by law or as incorporated into a contract. This document is uncontrolled when printed. Check the NASA Online Directives Information System (NODIS) Library to verify that this is the correct version before use: https://nodis3.gsfc.nasa.gov.