
21 CFR 820, the regulation that outlines the quality management system requirements for medical devices manufactured or distributed in the US, defines requirements for production and process controls in sec. 820.70 to ‘ensure that a device conforms to its specifications.’ Section 820.70(i) lists the regulation’s only requirements for software validation in automated processes, requirements that have grown more relevant each year since the current QSR was issued in 1997, and indeed since the origins of this particular GMP requirement, which dates back to 1978.
Computers and automated equipment are used extensively throughout all aspects of medical device design, laboratory testing and analysis, product inspection and acceptance, production and process control, environmental controls, packaging, labeling, traceability, document control, complaint management, and many other aspects of the quality system. Increasingly, plant floor operations can involve extensive use of embedded systems in:
- Programmable logic controllers;
- Digital function controllers;
- Statistical process control;
- Supervisory control and data acquisition;
- Robotics;
- Human-machine interfaces;
- Input/output devices; and
- Computer operating systems.
Section 820.70(i) states:
When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol. All software changes shall be validated before approval and issuance. These validation activities and results shall be documented.
“This requirement [820.70(i)] applies to any software used to automate device design, testing, component acceptance, manufacturing, labeling, packaging, distribution, complaint handling, or to automate any other aspect of the quality system…computer systems used to create, modify, and maintain electronic records and to manage electronic signatures are also subject to the validation requirements.”
The draft guidance General Principles of Software Validation, Version 1.1, was issued by the FDA in June of 1997 and superseded by the final guidance document, Version 2.0, in 2002. The 2002 version remains the current guidance, which the FDA ‘considers to be applicable to the validation of medical device software or the validation of software used to design, develop, or manufacture medical devices.’ It defines software validation as “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”
The 2002 version of the guidance cites staggering figures to illustrate the importance of following its tenets:
“The FDA’s analysis of 3140 medical device recalls conducted between 1992 and 1998 reveals that 242 of them (7.7%) are attributable to software failures. Of those software related recalls, 192 (or 79%) were caused by software defects that were introduced when changes were made to the software after its initial production and distribution. Software validation and other related good software engineering practices discussed in this guidance are a principal means of avoiding such defects and resultant recalls.”
The guidance acknowledges the burden placed on the entities that must adhere to the regulation’s requirements, and how challenging it can be to reach the desired level of confidence that software-automated functions of a medical device meet specifications and user expectations, noting that ‘developer[s] cannot test forever, and it is hard to know how much evidence is enough.’ It directs medical device manufacturers to take a risk-based approach: identify the hazards posed by the automated functions of the device, use that analysis to determine how much testing is needed to reach the desired level of confidence in functionality, and be prepared to convince regulators of the same.
The guidance also urges manufacturers to consider software requirements as a part of system design, deriving software requirements – the criteria for validation testing – from the overall system requirements for the device. The specifications for the software, then, should be an output of the design and development process, representing the user’s needs and intended uses that prompted the development of the device in which functions are being automated.
The current guidance states that “the vast majority of software problems are traceable to errors made during the design and development process…the quality of a software product is dependent primarily on design and development with a minimum concern for software manufacture,” and it goes on to say, “unlike hardware, software is not a physical entity and does not wear out. In fact, software may improve with age, as latent defects are discovered and removed. However, as software is constantly updated and changed, such improvements are sometimes countered by new defects introduced into the software during the change.” Both statements highlight the importance of proper planning for software development during the design and development phase, as well as the critical need to manage changes as software is improved to ensure that updates do not have a negative effect elsewhere.
These two items lead to a warning to manufacturers not to become lax with engineering efforts, which often happens in the case of software engineering vs. traditional engineering in manufacturing environments. As software engineers embrace agile frameworks and develop and update software at a rapid pace, a belief that less planning is required and mistakes can be quickly corrected begins to proliferate – a detrimental notion that can have grave consequences in regulated industries, where software errors caused by hasty development or lack of validation testing can lead to the death of an end user. “Because of its complexity, the development process for software should be even more tightly controlled than for hardware, in order to prevent problems that cannot be easily detected later in the development process.” This sentiment rings true for the author of this article, as it was also the topic of his graduate capstone.
Obviously, a lot has changed since 2002 and, while all of the tenets of the 2002 guidance currently in place still ring true, the need for robust software validation practices is more critical now than it has ever been, a reality that will only become more pronounced with each passing year. The FDA’s 2022 draft guidance, Computer Software Assurance for Production and Quality System Software, acknowledges as much:
“In recent years, advances in manufacturing technologies, including the adoption of automation, robotics, simulation, and other digital capabilities, have allowed manufacturers to reduce sources of error, optimize resources, and reduce patient risk. FDA recognizes the potential for these technologies to provide significant benefits for enhancing the quality, availability and safety of medical devices, and has undertaken several efforts to help foster the adoption and use of such technologies.”
The draft guidance, meant to supplement rather than supersede the 2002 final guidance, doubles down on the risk-based approach to validation activity, describing computer software assurance as ‘a risk-based approach to establish confidence in the automation used for production or quality systems, and identify where additional rigor may be appropriate,’ as well as continuing to stress the importance of robust design and development practices to prevent the introduction of defects in software products.
“Software testing alone is often insufficient to establish confidence that the software is fit for its intended use. Instead, the Software Validation guidance recommends ‘software quality assurance’ focus on preventing the introduction of defects into the software development process, and it encourages the use of a risk-based approach for establishing confidence that software is fit for the intended use.”
The draft supplemental guidance continues to expand on the risk-based model initially outlined in the 2002 guidance, acknowledging that software used in medical devices and quality systems might have a broad range of intended uses, or perhaps even multiple intended uses, with varying degrees of risk and potential associated hazards. Firms will need to understand all of the software used in their products and in the quality systems used to produce those products, as well as every intended use of each software application, in order to conduct proper risk analysis and determine appropriate validation methods. An off-the-shelf email client, after all, poses next to no risk to a customer, despite being critical to the production and provision of product.
With ISO 14971 being internationally recognized as the definitive standard for risk management for medical devices, the supplemental draft guidance issued the following clarification:
“Note that conducting a risk-based analysis for computer software assurance for production or quality system software is distinct from performing a risk analysis for a medical device as described in ISO 14971:2019 – Medical devices – Application of risk management to medical devices. Unlike risks contemplated in ISO 14971:2019 for analysis (medical device risks), failures of the production or the quality system software to perform as intended do not occur in a probabilistic manner where an assessment of the likelihood of occurrence for a particular risk could be estimated based on historical data or modeling.”
Instead, the guidance emphasizes proper configuration and management of software applications, focusing on the distinction of software features, functions, or operations that pose a high process risk: those whose failure to perform as intended may result in a quality problem that foreseeably compromises safety, and therefore an increased medical device risk. Examples include software functions that maintain process parameters affecting the physical properties of a product or manufacturing process, such as temperature, pressure, or humidity.
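To make the “high process risk” idea concrete, here is a minimal sketch of a parameter-guard function of the kind the guidance would treat as high process risk. The parameter names and limits below are hypothetical illustrations, not values drawn from the guidance or any standard:

```python
# Minimal sketch of a process-parameter guard; the parameter names and
# limits are illustrative assumptions, not values from the FDA guidance.

# Hypothetical validated operating limits for a sterilization cycle.
LIMITS = {
    "temperature_c": (121.0, 124.0),
    "pressure_kpa": (205.0, 232.0),
}

def out_of_limit(readings):
    """Return alarm strings for any reading outside its validated limits.

    A silent failure here could foreseeably compromise sterility, which
    is exactly why such a function would warrant extensive validation.
    """
    alarms = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

# A low temperature reading is flagged; an in-range pressure is not.
print(out_of_limit({"temperature_c": 120.2, "pressure_kpa": 210.0}))
```

Under the guidance’s logic, it is this alarm-raising behavior (not, say, the package’s reporting cosmetics) that would be the focus of validation effort.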
Where the final guidance distinguishes between what is required of a software developer and of a software end user with regard to validation, and who actually bears the obligations outlined in the QSR, the updated draft supplemental guidance gives developers a great deal of technical guidance on incorporating the design and execution of testing into the design process. It also emphasizes the need for user site testing, which should follow a written plan and include formal acceptance procedures to ensure that hardware and software are installed and configured as specified. While the user site validation guidance is intentionally much more general than the guidance for developer validation, the supplemental guidance is certainly more robust in how it makes this distinction and in the individual guidance it offers for each scenario and for validation throughout the life cycle, including additional guidance for testing when changes are introduced after the initial product release.
The supplemental guidance is as ambiguous as the final guidance regarding how much testing is necessary in a given scenario for the manufacturer to achieve an expected level of confidence in software functionality, but the ambiguity is somewhat hidden – or at least more appropriately framed – within the context of examples where little validation might be needed versus examples where extensive validation is more appropriate. Instead of simply saying, ‘it may be hard to tell,’ the guidance first illustrates a manufacturing process using a CNC machine in which little validation would be needed because the process output can be fully verified against specifications prior to release. This is then contrasted with cases in which extensive validation would be more appropriate: electronic record and electronic signature systems used throughout a manufacturing plant, a PLC that automates a sterilization process, or automated test equipment whose results determine acceptance against acceptance criteria for a life-sustaining or life-supporting device.
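The contrast between these examples can be paraphrased as a simple decision rule. The sketch below is one illustrative way to encode that logic; the categories and the function itself are assumptions of this sketch, not an algorithm prescribed by the guidance:

```python
# Illustrative encoding of the guidance's examples as a decision rule;
# the categories and logic are a sketch, not an official FDA algorithm.

def validation_rigor(output_fully_verifiable: bool, high_process_risk: bool) -> str:
    """Suggest a level of validation rigor for a production or quality-system function."""
    if output_fully_verifiable:
        # e.g., a CNC-machined part whose output is fully verified
        # against specifications prior to release.
        return "little"
    if high_process_risk:
        # e.g., a PLC automating sterilization, or automated test equipment
        # whose results determine acceptance for a life-supporting device.
        return "extensive"
    return "commensurate with risk"

print(validation_rigor(output_fully_verifiable=True, high_process_risk=True))   # little
print(validation_rigor(output_fully_verifiable=False, high_process_risk=True))  # extensive
```

The point of the sketch is the ordering of the checks: full verifiability of the output short-circuits the risk question, which mirrors the guidance’s CNC example.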
Further, the guidance highlights that only the functions of a software package that are actually used, or on whose results the manufacturer depends as part of the quality system, need to be validated, giving examples such as a statistical software package used for SPC or a database used for tracking CAPA data. Some features of such packages might not be utilized by the manufacturer, or the manufacturer might not depend on their data output as part of the quality system (in stark contrast to the automated inspection equipment example above), and those unused features need not go through a validation process. The guidance warns, however, that high-risk applications should not run in the same operating environment as non-validated software functions, even if those functions are not being used.
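As a concrete illustration of the SPC case, the sketch below computes individuals-chart control limits – the kind of calculation a manufacturer might actually rely on and therefore need to validate, while other features of the same package go unused and unvalidated. The three-sigma limits are a standard SPC convention; the sample data are invented:

```python
# Minimal sketch of the kind of SPC calculation a statistical package
# performs; the measurement data below are invented for illustration.
from statistics import mean, stdev

def control_limits(samples, k=3.0):
    """Return (LCL, center line, UCL) for an individuals chart at +/- k sigma."""
    center = mean(samples)
    sigma = stdev(samples)
    return center - k * sigma, center, center + k * sigma

def out_of_control(samples, k=3.0):
    """Return the points falling outside the control limits."""
    lcl, _, ucl = control_limits(samples, k)
    return [x for x in samples if x < lcl or x > ucl]

measurements = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8]
print(out_of_control(measurements))  # no points out of control for this stable data
```

Note that a production individuals chart typically estimates sigma from the average moving range rather than the sample standard deviation; the simplification here is deliberate, for brevity.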
The supplemental guidance re-emphasizes that manufacturers should use OTS (off-the-shelf) vendor validation documentation, when available, as a starting point for validation activity for OTS software purchased for a specific intended use, but manufacturers remain responsible for ensuring that the software does indeed work as intended for their use. In some cases, vendor validation information may be proprietary, or the vendor may refuse to provide it; yet even when it is freely provided, this does not absolve the manufacturer of responsibility. The author has seen many cases in which a manufacturer simply ‘takes the vendor’s word’ when validation information is provided, without doing the due diligence of planning its own validation activity – a dangerous practice that the guidance rightly warns against.
In the face of the ever-increasing proliferation of interconnected devices and functionality controlled by software, the updated draft supplemental guidance is timely, just as it is time for folks to go back and re-familiarize themselves with the precepts of the 2002 final guidance, which is still in place today and remains as relevant as ever. You can download the new draft guidance here.