Tests Tab

Complete field reference, Verification vs. Validation guidance, TAID method explanations, and per-standard compliance mapping for the Tests tab in the RTMify requirements traceability template.

The Tests tab contains test groups and individual test cases. A Test Group (TG-001) maps to one or more requirements via the Requirements tab's Test Group IDs column. Each group contains one or more individual tests (T-001, T-002…), each typed as Verification or Validation and assigned a method (Test, Analysis, Inspection, or Demonstration). Tests form the third pillar of the requirements traceability matrix, linking every requirement to a specific verification activity. Across all regulated standards (AS9100, ISO 13485, DO-178C, IEC 62304, ISO 26262, ASPICE), the Tests tab ensures that every requirement is independently verifiable and that verification evidence is recorded and traceable.
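The relationships above (requirement → test group → tests) can be sketched as a small data model. This is an illustrative Python sketch; the class and field names are assumptions for explanation, not RTMify's internal schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for the Tests tab; names are illustrative only.

@dataclass
class Test:
    test_id: str     # "T-001", scoped to its test group
    test_type: str   # "Verification" or "Validation"
    method: str      # TAID: "Test", "Analysis", "Inspection", or "Demonstration"
    notes: str = ""

@dataclass
class TestGroup:
    group_id: str    # "TG-001"
    name: str        # descriptive group name
    tests: List[Test] = field(default_factory=list)

@dataclass
class Requirement:
    req_id: str      # e.g. "REQ-045"
    test_group_ids: List[str] = field(default_factory=list)

# One group holding one test, and one requirement referencing the group by ID,
# which is what gives upward traceability from tests to requirements.
tg = TestGroup("TG-001", "Bench verification", [Test("T-001", "Verification", "Test")])
req = Requirement("REQ-045", ["TG-001"])
```

Note that the requirement stores only group IDs, mirroring the template: the Requirements tab's Test Group IDs column is the single place where the requirement-to-test link is made.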

Field-by-Field Reference

The Tests tab contains six columns that systematically define test groups, individual tests, and verification methods.

Column A: Test Group ID
  Description: Unique identifier for a test group that verifies one or more requirements. Groups related test cases and provides upward traceability to requirements.
  Valid values & tips: TG-001, TG-002, … Auto-populated, sequential format. One test group may contain multiple test cases (T-001, T-002, T-003…), and multiple requirements may reference the same test group. Give each test group a descriptive name.

Column B: Test ID
  Description: Unique identifier for an individual test case within a test group. Enables detailed traceability and test result tracking.
  Valid values & tips: T-001, T-002, … Scoped to the test group. Give each test case a concise title.

Column C: Test Type
  Description: Indicates whether the test is Verification or Validation. Critical for regulated audits.
  Valid values & tips: Verification or Validation. Verification: lab test, bench test. Validation: field trial, clinical study, UAT.

Column D: Test Method
  Description: Specifies how verification or validation is performed, chosen from the TAID methods.
  Valid values & tips: Test | Analysis | Inspection | Demonstration

Column E: Notes
  Description: Free-text field for test details, standards compliance references, test results, and traceability metadata.
  Valid values & tips: Link external test results, compliance references, and auditor-relevant information.

Column F: RTMify Status
  Description: Automatically populated field indicating whether this test is fully traced.
  Valid values & tips: Complete | In Progress | Not Started
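The column constraints above lend themselves to a simple row check. The sketch below is a hypothetical validator, assuming the ID formats and enumerations described in the field reference; it is not part of RTMify itself:

```python
import re

VALID_TYPES = {"Verification", "Validation"}
VALID_METHODS = {"Test", "Analysis", "Inspection", "Demonstration"}
VALID_STATUSES = {"Complete", "In Progress", "Not Started"}

def validate_row(group_id, test_id, test_type, method, status):
    """Return a list of problems found in one Tests-tab row (empty = clean)."""
    problems = []
    if not re.fullmatch(r"TG-\d{3,}", group_id):
        problems.append(f"malformed Test Group ID: {group_id!r}")
    if not re.fullmatch(r"T-\d{3,}", test_id):
        problems.append(f"malformed Test ID: {test_id!r}")
    if test_type not in VALID_TYPES:
        problems.append(f"Test Type must be Verification or Validation, got {test_type!r}")
    if method not in VALID_METHODS:
        problems.append(f"Test Method must be a TAID value, got {method!r}")
    if status not in VALID_STATUSES:
        problems.append(f"unknown RTMify Status: {status!r}")
    return problems
```

Running such a check before an audit catches typos (e.g. "Verify" instead of "Verification") that would otherwise surface as traceability gaps.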

Verification vs. Validation Deep-Dive

Verification and Validation are complementary but distinct activities. This distinction is critical for ISO 13485 audits, where auditors specifically check that V&V are recorded separately and that validation includes evidence of intended use.

Definition
  Verification: Confirms that outputs (design, code, product) meet inputs (requirements, design intent). "Did we build it right?"
  Validation: Confirms that the product meets user needs and is fit for intended use. "Did we build the right thing?"

Scope
  Verification: Component-, integration-, and system-level bench testing in a lab environment.
  Validation: System-level testing in a realistic or operational environment (field trial, clinical study, customer use).

Timing
  Verification: Throughout development: unit, integration, and system testing in the lab.
  Validation: Late in development: after verification is complete, before release. Often continued as post-release in-service monitoring.

Independence (regulated)
  Verification: May require an independent tester (e.g., DO-178C DAL A/B, ISO 26262 ASIL D, IEC 62304 Class C) to show impartiality.
  Validation: Often requires an independent evaluator (clinical validation, field trial monitoring).

Test Type column
  Verification: Test Type = "Verification"
  Validation: Test Type = "Validation"

Example (medical device)
  Verification: Lab bench test: confirm the blood glucose meter displays correct values when tested with calibration solutions of known glucose concentration.
  Validation: Clinical validation: 50 diabetic patients use the device over 30 days and achieve ≥95% accuracy compared to reference lab measurements in real-world conditions.

Auditor focus (ISO 13485)
  Verification: Auditors verify that all design outputs (requirements) have corresponding design verification tests (bench tests), and review test results and the traceability matrix.
  Validation: Auditors SPECIFICALLY check that design validation is documented separately, with recorded clinical or field validation data. This distinction is a key audit point.

Standard emphasis
  Verification: DO-178C (§6.4 Software Testing): all requirements must be covered by test cases. ISO 26262: all ASIL requirements must be verified. IEC 62304 (§5.6): all software requirements must have integration tests.
  Validation: ISO 13485 (§7.3.7): design validation must demonstrate the product meets user needs. IEC 62304 (§5.7/§5.8): software system testing and release. ASPICE (SWE.6): qualification testing confirms the product is ready for release.

Auditor Focus (ISO 13485): During medical device audits, ISO 13485 inspectors closely examine the Requirements and Tests tabs. They verify that: (1) Every design output (requirement) has a linked test (verification activity). (2) Design verification tests are recorded separately from design validation tests. (3) Clinical validation activities are documented with evidence. (4) The RTMify status shows complete traceability. If your Tests tab shows only verification tests with no validation tests, auditors will ask: "Where is your clinical/field validation evidence?"
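The audit checks above can be approximated programmatically. This is a hedged sketch, assuming you have exported the Requirements and Tests tabs into plain dictionaries (the data shapes and function name are illustrative, not an RTMify API):

```python
def audit_tests_tab(req_to_groups, tests_by_group):
    """Flag requirements with no linked test, and warn when no Validation
    entries exist anywhere in the Tests tab (a common ISO 13485 finding).

    req_to_groups:  {req_id: [test_group_id, ...]}
    tests_by_group: {test_group_id: [(test_id, test_type), ...]}
    """
    findings = []
    seen_types = set()
    for req_id, group_ids in req_to_groups.items():
        linked = [t for g in group_ids for t in tests_by_group.get(g, [])]
        if not linked:
            findings.append(f"{req_id}: no linked test group")
        seen_types.update(test_type for _test_id, test_type in linked)
    if "Validation" not in seen_types:
        findings.append("no Validation tests: clinical/field evidence missing?")
    return findings

# Hypothetical export: REQ-002 is untraced, and no Validation tests exist.
reqs = {"REQ-001": ["TG-001"], "REQ-002": []}
tests = {"TG-001": [("T-001", "Verification")]}
```

Running the check on this example would reproduce both findings an auditor would raise: an untraced requirement and a Tests tab with verification only.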

Per-Standard Guidance on V&V

DO-178C: Verification covers reviews, analyses, and testing (§6.3 Reviews & Analyses, §6.4 Software Testing, including structural coverage analysis). Validation is system-level qualification. Verification objectives apply across DAL A–D, with rigor decreasing from A to D, and verification must achieve structural coverage targets based on DAL. Independence of verification is mandatory for DAL A/B.

ISO 26262: Verification activities are defined in Part 6 (software unit verification, integration and verification, embedded software testing). Safety validation confirms at the system/vehicle level that safety goals and ASIL requirements are met. ASIL D requires the highest verification rigor. Validation evidence must link back to the HARA safety goals.

IEC 62304: Verification spans software unit verification (§5.5), integration testing (§5.6), and system testing (§5.7); software release criteria are defined in §5.8. Software safety class (A/B/C) determines verification rigor, with Class C the most stringent. Ensure every software requirement is traced to a test.

TAID Methods Explained

TAID stands for Test, Analysis, Inspection, Demonstration—the four fundamental methods for verification and validation. Each method is suited to different types of requirements and provides different types of evidence.

Test

Executing the system (hardware, software, or integrated product) and measuring output against expected results.

Examples:
  • Bench test: Power-on self-test, functional test suite executed on the device
  • Environmental test: Temperature cycling, humidity, vibration, or shock testing
  • EMC test: Electromagnetic compatibility testing per FCC/CE standards
  • Protocol test: CAN bus message validation, network packet injection and response verification
  • Stress test: Load testing, boundary-condition testing (max/min values, null inputs)
  • System test: End-to-end test of the integrated product in a controlled lab environment
Standards:
  • DO-178C §6.4 (Software Testing)
  • ISO 26262 Part 6 §11
  • IEC 62304 §5.6 (Integration Testing), §5.7 (System Testing)
When to use: Use Test when you need to execute and measure the actual product behavior. This is the most direct and high-confidence verification method. Suitable for requirements where observable output (return codes, state changes, physical measurements) exists.

Analysis

Evaluating the system through mathematical, computational, or logical analysis without executing the product.

Examples:
  • Static analysis: Code review for adherence to coding standards, security vulnerabilities, memory leaks (using tools or manual review)
  • Stress analysis: Finite element analysis (FEA) of mechanical components under load
  • Thermal analysis: Computational fluid dynamics (CFD) or thermal simulation to verify temperature limits
  • Formal proof: Mathematical proof that an algorithm or logic is correct (e.g., proving a sorting function always terminates)
  • FMEA (Failure Mode & Effects Analysis): Systematic evaluation of failure modes and their impact
  • Traceability analysis: Verification that all requirements are traced and all user needs are covered
  • Security analysis: Threat modeling, attack surface analysis, crypto algorithm review
Standards:
  • DO-178C §6.3 (Reviews & Analyses)
  • ISO 26262 Part 6 §9
  • IEC 62304 §5.3.6
When to use: Use Analysis when test execution is impractical, costly, or unsafe (e.g., testing a failure mode for a safety system would damage the device). Analysis is also preferred for algorithm correctness, formal properties, and security claims. Analysis typically produces a detailed report or proof artifact.

Inspection

Visual, dimensional, or structural examination of the product (hardware, code, documents, drawings) to verify conformance.

Examples:
  • Code review: Peer review of source code for logic errors, style compliance, commenting adequacy
  • PCB inspection: Visual inspection of solder joints, component placement, board layout per IPC standards
  • Drawing review: Checking mechanical drawings for dimensioning accuracy, tolerances, surface finishes
  • Design review: Walk-through of architecture diagram, control flow diagram, or design specification
  • Configuration audit: Verification that produced code/documents match the design specification
  • Trace code inspection: Manual tracing of control flow to verify a requirement is implemented correctly
Standards:
  • DO-178C §6.3 (Reviews & Analyses)
  • ISO 13485 §7.3.5 (Design Review)
  • ASPICE SWE.3
When to use: Use Inspection when you need to verify implementation details, adherence to standards, or presence of expected artifacts. Inspection is effective for checking design correctness before full system test. It is often combined with Test (e.g., code review + unit test, drawing check + hardware test).

Demonstration

Showing that the product functions correctly in realistic or operational conditions (field trial, user acceptance test, live environment).

Examples:
  • Field trial: Real-world deployment with end-users under normal operating conditions
  • User acceptance test (UAT): End-user validation that the product meets their needs
  • Clinical validation: Patient use or clinical study demonstrating safety and effectiveness (ISO 13485)
  • In-service monitoring: Operational data collection showing that the product functions as intended in the field
  • Bench demonstration: Live demonstration to stakeholders showing the product executing a key use case
  • Environmental field test: Operating the product in its actual deployment environment (e.g., automotive on test track, aerospace in flight)
Standards:
  • ISO 13485 §7.3.7 (Design Validation)
  • ISO 26262 Part 6 §11
  • IEC 62304 §5.7 (System Testing), §5.8 (Release)
When to use: Use Demonstration for validation (verifying the product meets user needs in intended use) and for requirements where realistic conditions are critical. Demonstrations are typically used for system-level validation, user acceptance, and compliance with user needs. They provide high confidence but may be expensive and time-consuming.

Combining TAID Methods

A complete verification strategy often combines multiple TAID methods for comprehensive coverage. Examples:

  • Requirement: "Buffer overflow vulnerabilities SHALL be prevented." Methods: (1) Analysis: static analysis tool scanning. (2) Inspection: code review. (3) Test: fuzz testing with oversized inputs.
  • Requirement: "System SHALL operate correctly in temperatures 0–50°C." Methods: (1) Analysis: thermal simulation (FEA). (2) Test: environmental chamber testing. (3) Demonstration: field trial.
  • Requirement: "Software SHALL implement CRC-32 error detection." Methods: (1) Inspection: design review. (2) Analysis: formal proof of CRC detection. (3) Test: protocol test with corrupted packets.

Coverage Strategy: For critical requirements (safety, security, regulatory), use at least two TAID methods. This provides defense-in-depth: if one method has a blind spot, another may catch it. A combination of methods provides higher confidence in requirement closure.
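The two-method rule above is easy to enforce mechanically. The sketch below assumes you can list critical requirement IDs and the TAID methods of their linked tests (hypothetical data shapes, not an RTMify API):

```python
def weak_critical_requirements(critical_req_ids, methods_by_req):
    """Return critical requirements closed by fewer than two distinct
    TAID methods, i.e. those lacking the defense-in-depth suggested above.

    methods_by_req: {req_id: ["Test", "Analysis", ...]} from linked tests.
    """
    return [req for req in critical_req_ids
            if len(set(methods_by_req.get(req, []))) < 2]

# Hypothetical example mirroring the buffer-overflow requirement above.
methods = {
    "REQ-010": ["Analysis", "Inspection", "Test"],  # three methods: fine
    "REQ-011": ["Test"],                            # one method: flagged
}
```

Note the `set(...)` call: five tests all using the Test method still count as a single method, which is exactly the blind-spot risk the coverage strategy warns about.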

Per-Standard Compliance Mapping

The Tests tab satisfies specific clauses and processes in each regulated standard. Below is a standard-by-standard breakdown showing how to structure tests, annotate them, and ensure compliance.

AS9100 Rev D

§8.3.4 Design and Development Controls (Verification & Validation)

Design verification and validation must be performed to ensure design outputs meet design inputs and the intended use. Configuration management applies throughout.

Guidance: Verify each design output (requirement) against its corresponding design input (user need). Use the Test Group IDs column to link one or more test groups to a requirement. Document all V&V activities in the Tests tab: include test method (TAID), test results, and any nonconformance findings. Maintain traceability from each test back to the requirement it verifies and the requirement back to the user need. Baseline all approved test cases for configuration management.

ISO 13485:2016

§7.3.6 Design Verification & §7.3.7 Design Validation

Verification confirms outputs meet inputs. Validation confirms the product meets user needs (intended use). Clinical or field validation activities must be recorded separately from bench verification. Auditors SPECIFICALLY check this distinction.

Guidance: Separate verification tests (bench, lab, component-level: Test Type = "Verification") from validation tests (clinical, field, system-level: Test Type = "Validation"). In the Notes column, mark which tests are clinical validation (e.g., "Clinical validation study conducted per ISO 14155" or "Post-market clinical follow-up data"). Each test must have a test method (Analysis, Test, Inspection, Demonstration). Record pass/fail and any deviations. ISO 13485 auditors will verify that verification and validation are recorded as distinct activities and that validation includes evidence of intended use.

DO-178C

§6.3 Reviews & Analyses and §6.4 Software Testing (Structural Coverage)

Software must be verified through structural testing (code coverage by test cases) and through analyses/reviews. DAL A/B require independence of verification. Coverage metrics (Statement, Decision, MC/DC) depend on DAL.

Guidance: Each test must have a test method: "Test" (execution), "Analysis" (code review, FMEA, static analysis), "Inspection" (peer review of code), or "Demonstration" (field demonstration). Record coverage metrics in Notes (e.g., "Statement Coverage: 98%" or "MC/DC Coverage: 95% for DAL A"). DAL A/B tests require an independent test engineer (not the developer); document the independence claim in Notes. For DAL A, MC/DC (Modified Condition/Decision Coverage) is typically required; for DAL B, decision coverage; for DAL C, statement coverage; DAL D and E carry reduced or no structural coverage objectives. Reference DO-178C Annex A objectives (Tables A-1 through A-10) by objective number in Notes to show alignment.
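If coverage figures are recorded in Notes using a consistent convention such as "MC/DC Coverage: 95%", they can be extracted and checked automatically. The sketch below assumes that convention; the metric-per-DAL mapping paraphrases the guidance above, and the exact normative objectives live in DO-178C Annex A:

```python
import re

def coverage_from_notes(notes):
    """Extract '<Metric> Coverage: NN%' entries from a Notes cell."""
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"([\w/]+) Coverage:\s*(\d+)%", notes)}

# Assumed mapping of DAL to the coverage metric that should be recorded;
# DAL D/E are omitted (reduced or no structural coverage objectives).
REQUIRED_METRIC = {"A": "MC/DC", "B": "Decision", "C": "Statement"}

def coverage_gap(dal, notes):
    """Return a finding string if the Notes cell does not record the
    metric this DAL calls for, else None."""
    metric = REQUIRED_METRIC.get(dal)
    if metric is None:
        return None
    if metric not in coverage_from_notes(notes):
        return f"DAL {dal}: no {metric} coverage recorded"
    return None

notes = "Statement Coverage: 98%; MC/DC Coverage: 95% for DAL A"
```

A scan like this over the Tests tab quickly surfaces rows claiming a DAL without the matching coverage evidence in Notes.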

IEC 62304

§5.6 Software Integration Testing & §5.7 Software System Testing (Release: §5.8)

Software safety class determines verification rigor. Class C requires the most rigorous, fully documented testing; Class A permits simplified testing. Integration and system testing must be traced to software requirements.

Guidance: Annotate each test with the software safety class it addresses (Class A/B/C) in Notes. For Class C, all tests must be documented with detailed test cases, expected results, and actual results. Class B requires documented tests; Class A may allow simpler test records. Use Test Type "Verification" for unit/integration/system tests; use "Validation" for system-level tests verifying intended use. Test Method should reflect the verification activity: "Test" for execution, "Analysis" for code review, "Inspection" for walk-throughs. Ensure all software requirements have corresponding tests; unmapped requirements indicate incomplete verification planning.

ISO 26262

ISO 26262-6: Software Unit Verification (§9), Integration & Verification (§10), Embedded Software Testing (§11)

ASIL (Automotive Safety Integrity Level) determines test coverage and independence requirements. ASIL D requires the highest rigor (formal methods, back-to-back testing, independent verification). Untagged requirements are QM (Quality Management) and use standard testing.

Guidance: Link each test to the requirements it verifies via Test Group ID. For safety-critical requirements marked with ASIL, ensure the test method and rigor match the ASIL level. ASIL A: basic testing acceptable. ASIL B: statement or decision coverage. ASIL C: decision coverage. ASIL D: MC/DC coverage and independent verification (separate tester). In Notes, record the ASIL level being verified and coverage metric achieved (e.g., "ASIL D – MC/DC 98%, Independent Tester: Jane Smith"). Back-to-back testing (comparing two independent implementations) is often required for ASIL D; document the back-to-back pair in Notes. Non-safety-critical (QM) requirements use standard V&V practices.
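The ASIL-to-rigor ladder above can be encoded as a lookup for automated checking. This is a paraphrase of the guidance in this section, not the normative ISO 26262-6 method tables; whether ASIL C also needs an independent tester is a project interpretation, so it is left False here:

```python
# Assumed rigor ladder, paraphrasing the guidance above.
ASIL_RIGOR = {
    "QM": {"coverage": None,        "independent": False},
    "A":  {"coverage": "Statement", "independent": False},
    "B":  {"coverage": "Decision",  "independent": False},
    "C":  {"coverage": "Decision",  "independent": False},  # independence: project call
    "D":  {"coverage": "MC/DC",     "independent": True},
}

def asil_findings(asil, coverage_metric, has_independent_tester):
    """Compare one test record's recorded rigor against its claimed ASIL."""
    need = ASIL_RIGOR[asil]
    findings = []
    if need["coverage"] and coverage_metric != need["coverage"]:
        findings.append(f"need {need['coverage']} coverage, recorded {coverage_metric}")
    if need["independent"] and not has_independent_tester:
        findings.append("independent tester required")
    return findings
```

A Notes entry like "ASIL D – MC/DC 98%, Independent Tester: Jane Smith" would pass this check; an ASIL D row with only decision coverage and no named tester would raise both findings.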

ASPICE

SWE.4 (Unit Verification), SWE.5 (Integration Test), SWE.6 (Qualification Test), SYS.4 (System Integration Test)

ASPICE separates unit, integration, and qualification testing. Each level must be traceable to corresponding requirements (LLR → Unit tests, HLR → Integration tests, SYS-xxx → System tests). CL2+ requires bidirectional traceability.

Guidance: Use Test Group IDs to link tests to requirements at each level: unit tests (SWE.4) verify LLR (low-level requirements), integration tests (SWE.5) verify HLR (high-level requirements), qualification tests (SWE.6) verify system-level SYS-xxx requirements. Document the test level in Notes (e.g., "Unit Test: LLR-045", "Integration Test: HLR-012", "System Test: SYS-005"). For CL2+ maturity, ensure every requirement has at least one traced test and every test has a parent requirement. Record test results (pass/fail), metrics (coverage, defect density), and any rework needed for CL3+ maturity claims.
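The CL2+ bidirectional-traceability requirement above reduces to two set checks: no requirement without a traced test, and no test group without a parent requirement. A hedged sketch, assuming exported dictionaries (not an RTMify API):

```python
def traceability_gaps(req_to_groups, group_to_tests):
    """Bidirectional check: every requirement needs at least one traced
    test, and every test group needs at least one parent requirement.

    req_to_groups:  {req_id: [group_id, ...]}
    group_to_tests: {group_id: [test_id, ...]}
    """
    orphan_reqs = sorted(r for r, gs in req_to_groups.items()
                         if not any(group_to_tests.get(g) for g in gs))
    referenced = {g for gs in req_to_groups.values() for g in gs}
    orphan_groups = sorted(g for g in group_to_tests if g not in referenced)
    return orphan_reqs, orphan_groups

# Hypothetical export: HLR-012 has no tests, TG-099 has no parent requirement.
reqs = {"LLR-045": ["TG-004"], "HLR-012": []}
groups = {"TG-004": ["T-001"], "TG-099": ["T-002"]}
```

Both orphan lists must be empty before claiming CL2+ bidirectional traceability.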

Key Compliance Requirements by Standard

AS9100 Rev D (Aerospace)

Design verification and validation (§8.3.4, design and development controls) must ensure design outputs meet design inputs. Maintain configuration management of all approved test cases. Create a Design Verification and Validation Plan (DV&V Plan) documenting the test strategy, methods, and acceptance criteria. Link each test group to the requirements it verifies. Record test results with traceability to the test case and date of execution. All critical tests require sign-off by the configuration manager or quality engineer.

ISO 13485:2016 (Medical Device)

Separate design verification (§7.3.6) from design validation (§7.3.7). Verification tests confirm design outputs meet design inputs. Validation tests confirm the product meets user needs and is safe and effective in intended use. Auditors will specifically request to see: (1) Design verification test plan and results. (2) Design validation plan with clinical or field trial evidence. (3) Traceability linking requirements to verification tests and validation evidence. If you lack clinical validation data, document the rationale. Record all V&V evidence and maintain it for audit.

DO-178C (Aerospace Software)

All requirements must be verified through structural testing (code coverage) and/or reviews/analyses. DAL A/B require independence: the test engineer must be different from the developer. Document independence in Notes. Coverage metrics are mandatory at the higher levels: Statement Coverage (SC), Decision Coverage (DC), and Modified Condition/Decision Coverage (MC/DC). For DAL A, MC/DC is typically required; for DAL B, DC may be acceptable; for DAL C, SC may suffice; DAL D and E carry reduced or no structural coverage objectives. Track coverage metrics for each test group. Reference the DO-178C Annex A objective tables in your test documentation.

IEC 62304 (Medical Device Software)

Software safety classification (Class A/B/C) drives verification rigor. Class C requires the most comprehensive testing: detailed test cases, expected results, actual results, and sign-off by an independent reviewer. Class B requires documented tests and documented results. Class A may allow simpler test documentation. Annotate each test with the software safety class addressed in Notes. Ensure all software requirements have corresponding integration and system tests. Record integration test results and system test results separately.

ISO 26262 (Automotive Functional Safety)

ASIL determines verification rigor and independence requirements. ASIL A requires standard verification. ASIL B requires decision coverage. ASIL C requires decision coverage and likely independent verification. ASIL D requires MC/DC coverage and mandatory independent verification. Annotate each safety-critical test with the ASIL level and coverage achieved. Back-to-back testing (two independent implementations compared) is often required for ASIL D. QM (Quality Management, non-safety) requirements use standard V&V.

ASPICE (Automotive Development Process)

ASPICE separates unit verification (SWE.4), integration testing (SWE.5), and qualification testing (SWE.6) from system integration testing (SYS.4). Each level must trace to corresponding requirements: LLR (low-level requirements) are verified by unit tests, HLR (high-level requirements) by integration tests, and SYS-xxx (system requirements) by system/qualification tests. Document the test level in Notes for traceability. For CL2+ maturity, ensure bidirectional traceability. Record test results, metrics, and any rework needed.

Frequently Asked Questions

How many tests should a test group have?
A test group (TG-001) may have one or more individual test cases (T-001, T-002, T-003…). The number depends on the complexity of the requirement. A simple requirement (e.g., "System SHALL respond within 200ms") might be verified by a single test. A complex requirement with multiple conditions might require multiple tests. Example: Requirement "System SHALL operate correctly at 0–50°C and survive storage at −40 to +70°C" might be verified by a test group containing T-001 (operation at 25°C nominal), T-002 (operation at 0°C), T-003 (operation at 50°C), T-004 (storage at −40°C), T-005 (storage at +70°C): five tests total. Ensure that all test conditions and boundary cases are covered by the test group. If you have more than 10 tests in a single group, consider breaking the group into multiple test groups for clarity.
Can one test group cover multiple requirements?
Yes. A system-level integration test (e.g., TG-010 "End-to-End Functional Test") may verify multiple related requirements simultaneously. Example: A single test exercising the complete login flow might verify REQ-045 (password validation), REQ-046 (rate limiting), and REQ-047 (session creation) in one test run. This is common and appropriate for integration and system tests. However, the traceability matrix must clearly show which test verifies which requirement. In the Requirements tab, the Test Group ID column for REQ-045, REQ-046, and REQ-047 would all reference TG-010. If you need to ensure fine-grained traceability, you might create multiple test groups (TG-010a, TG-010b, TG-010c) for each requirement, even if the underlying test execution is combined. For audit purposes, document the relationship (e.g., in Notes: "TG-010a, TG-010b, TG-010c are executed as part of the integrated test TG-010: End-to-End Functional Test").
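The many-requirements-to-one-group relationship described above is easiest to review as an inverted map. A small sketch, assuming the Requirements tab has been exported as (requirement ID, group IDs) pairs (hypothetical shapes, not an RTMify API):

```python
from collections import defaultdict

def group_to_requirements(req_rows):
    """Invert the Requirements tab's Test Group IDs column so an auditor
    can see every requirement that a shared test group verifies."""
    reverse = defaultdict(list)
    for req_id, group_ids in req_rows:
        for group_id in group_ids:
            reverse[group_id].append(req_id)
    return dict(reverse)

# The login-flow example above: three requirements all reference TG-010.
rows = [("REQ-045", ["TG-010"]),
        ("REQ-046", ["TG-010"]),
        ("REQ-047", ["TG-010"])]
```

Presenting the inverted map alongside the matrix answers the auditor's question "which requirements does this test verify?" without re-reading the Requirements tab row by row.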
When do I use Analysis vs. Inspection?
Analysis and Inspection are both non-execution methods. Use Analysis when you need to evaluate properties through computation, logic, or mathematical reasoning. Use Inspection when you need to examine the artifact (code, design, drawing) visually or structurally. Example: For a requirement "The software SHALL not have buffer overflows," you might use (1) Analysis: static analysis tool (e.g., Coverity, Clang Static Analyzer) scanning the code for buffer overflow patterns, and (2) Inspection: peer code review examining each buffer operation for bounds checking. For a requirement "The algorithm SHALL have a worst-case time complexity of O(n log n)," use Analysis: mathematical proof or code tracing showing the algorithm adheres to the complexity bound. For a requirement "The PCB SHALL have solder joints per IPC-A-610 Grade A," use Inspection: visual or X-ray inspection of solder joints comparing to IPC standard photographs. In practice, many requirements use both: Analysis (FMEA of failure modes) + Inspection (design review of the FMEA) + Test (functional test of the mitigation).
Where do I record test results and pass/fail status?
The Tests tab documents the test cases and methods. The actual test results (pass/fail, measured values, any deviations) should be recorded in a separate Test Results or Test Execution log, which may be a separate sheet, a linked document, or an external test management tool (e.g., TestRail, JIRA). The Notes column in the Tests tab can reference the test result location: e.g., "Results in TestRail project XYZ, test run 2024-02-15" or "See ATP-001-Results.xlsx, Sheet 'Test Execution.'" Some teams include a Notes entry like "PASS" or "PASS (3/3 conditions met)" directly in the Tests tab for quick visibility, but official results are typically recorded externally for audit trail and version control. For regulated environments, ensure your test results are time-stamped, signed by the test engineer, and linked to the corresponding test case in the Tests tab. Create a bidirectional reference: from the test case to the result document and back. For compliance audits, reviewers will ask to see the test results; have them organized and easily accessible.
What about regression testing?
Regression testing verifies that recent changes (code fixes, requirement clarifications) do not break previously verified functionality. Regression tests are typically a subset of your full test suite, re-executed after each change. In the RTMify template, you can handle regression testing in a few ways: (1) Create a separate Test Group for regression: TG-REGR-001, TG-REGR-002, … listing the specific tests that must be re-run. Link these to the requirements affected by the change. (2) Add a Notes entry: "Regression Test – execute after CR-2024-045 approved" to flag which tests must be re-run. (3) For impact analysis, track which requirements are affected by a change and ensure their linked test groups are re-executed. Many teams maintain a "Regression Test Suite" document listing the critical test cases that must pass on every build. If you use test management tools (TestRail, JIRA, Azure Test Plans), mark test cases as part of the "regression suite" and execute them automatically on each build. The key is traceability: the RTMify Tests tab should link to the external test results, and you should have a clear policy for what triggers regression testing and when the full suite must be re-run.
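The impact-analysis step described in option (3) above is a straightforward traversal of the traceability links. A hedged sketch, assuming changed requirement IDs come from the change request and the requirement-to-group map from the Requirements tab (hypothetical data shapes):

```python
def regression_scope(changed_req_ids, req_to_groups):
    """Map requirements touched by a change request to the test groups
    that must be re-executed, sorted for a stable regression checklist."""
    groups = set()
    for req_id in changed_req_ids:
        groups.update(req_to_groups.get(req_id, []))
    return sorted(groups)

# Hypothetical change request touching two requirements.
req_to_groups = {"REQ-045": ["TG-010", "TG-011"], "REQ-046": ["TG-010"]}
```

Because the result is derived from the same links the matrix records, the regression checklist stays consistent with the Tests tab instead of drifting in a separate document.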

Related Documentation

  • Requirements Tab — Defining testable SHALL statements and requirement traceability
  • User Needs Tab — Capturing stakeholder expectations and regulatory inputs
  • Traceability — Building and validating traceability matrices across all tabs
  • Status Codes — Understanding RTMify status indicators and completion criteria
  • AS9100 Guidance — Design verification and validation, configuration management
  • ISO 13485 Guidance — Design verification and validation, clinical validation requirements
  • DO-178C Guidance — Structural testing, code coverage, independence requirements
  • IEC 62304 Guidance — Software safety classification, testing by class level
  • ISO 26262 Guidance — ASIL-determined verification rigor and coverage metrics
  • ASPICE Guidance — Unit, integration, and qualification testing levels and traceability

Last Updated: March 22, 2026

Standards Covered: AS9100 Rev D, ISO 13485:2016, DO-178C, IEC 62304, ISO 26262:2018, ASPICE 3.1

RTMify is a compliance-aware requirements traceability manager designed for regulated engineering environments. This documentation reflects best practices across aerospace, medical device, automotive, and industrial safety standards.