DO-178C, the software considerations standard for airborne systems, is one of the most rigorous software certification frameworks ever developed. Its requirement for bidirectional traceability — from high-level requirements to low-level design to code to tests and back — has made it the reference point for safety-critical software development across aerospace, defense, and increasingly, medical devices and automotive.

AI systems present a fundamental challenge to DO-178C-style certification: the "requirements" encoded in a trained model are never specified by humans and then verified against that specification; they emerge from the training data. The gap between DO-178C's traceability requirements and the operational reality of trained models has been one of the central problems in certifying AI for mission-critical applications.

Where DO-178C Principles Apply to AI

Several DO-178C principles translate directly to AI governance, even where the standard itself does not apply: bidirectional traceability from requirements through implementation to verification evidence, configuration control over every artifact that reaches deployment, independence between those who build and those who verify, and structured problem reporting under change control.

DO-178C did not anticipate large language models. But the safety engineering discipline it embodies — make the system's behaviour deterministic and verifiable — applies directly.
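
To see what that discipline looks like applied to an AI deployment, here is a minimal sketch of a bidirectional trace matrix, the data structure behind DO-178C's requirement-to-evidence links. The identifiers and the idea of tracing a requirement to a red-team evaluation are hypothetical; real trace data lives in qualified lifecycle tooling, not an in-memory dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class TraceMatrix:
    """Bidirectional links between requirements and verification artifacts.

    Illustrative only: identifiers and artifact names are invented.
    """
    req_to_artifacts: dict[str, set[str]] = field(default_factory=dict)
    artifact_to_reqs: dict[str, set[str]] = field(default_factory=dict)

    def link(self, req_id: str, artifact_id: str) -> None:
        # Record the link in both directions so traces can be walked
        # forward (requirement -> evidence) and backward (evidence -> requirement).
        self.req_to_artifacts.setdefault(req_id, set()).add(artifact_id)
        self.artifact_to_reqs.setdefault(artifact_id, set()).add(req_id)

    def unverified(self, all_reqs: list[str]) -> list[str]:
        # Requirements with no linked verification artifact: the gaps
        # a certification audit would flag.
        return [r for r in all_reqs if not self.req_to_artifacts.get(r)]

matrix = TraceMatrix()
matrix.link("HLR-042", "TEST-oob-input-rejection")  # hypothetical IDs
matrix.link("HLR-042", "EVAL-redteam-2024-Q3")
print(matrix.unverified(["HLR-042", "HLR-043"]))  # -> ['HLR-043']
```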

The Certification Gap and How Governance Infrastructure Addresses It

The core certification gap for AI systems is that trained models do not have "requirements" in the DO-178C sense. Their behaviour in any given operational context is an emergent property of training, not a specified property that can be verified against a requirements document.

Hardware-anchored governance does not solve this gap — it manages it. Instead of certifying that the model will behave correctly in all possible contexts, governance infrastructure certifies the constraints within which the model is authorised to operate. The certification scope is not "this model is safe in all contexts" — it is "this model, in this configuration, operating within these parameters, under these governance conditions, has been assessed and approved."
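
Concretely, that scope can be expressed as a machine-checkable authorisation envelope evaluated before any request is served. The sketch below is ours, not Claviger.AI's schema; GovernanceScope and every field in it are hypothetical, chosen only to mirror the "this model, in this configuration, within these parameters" framing above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernanceScope:
    """The certified envelope: not 'the model is safe', but 'this model,
    in this configuration, within these parameters, is authorised'."""
    model_digest: str          # hash of the exact approved weights
    config_digest: str         # hash of the approved runtime configuration
    allowed_tasks: frozenset[str]
    max_autonomy_level: int    # e.g. 0 = advisory only, 2 = acts with review

def authorised(scope: GovernanceScope, model_digest: str,
               config_digest: str, task: str, autonomy: int) -> bool:
    # Every condition must hold; anything outside the envelope is
    # outside certification scope and must be refused, not "best-effort".
    return (model_digest == scope.model_digest
            and config_digest == scope.config_digest
            and task in scope.allowed_tasks
            and autonomy <= scope.max_autonomy_level)

scope = GovernanceScope(
    model_digest="sha256:ab12...",   # placeholder digests
    config_digest="sha256:cd34...",
    allowed_tasks=frozenset({"route-advisory"}),
    max_autonomy_level=0,
)
print(authorised(scope, "sha256:ab12...", "sha256:cd34...",
                 "route-advisory", autonomy=0))  # True: inside the envelope
```

The design point is refusal at the boundary: the assessed configuration is the only one the certification statement covers.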

This is a fundamentally different certification approach, but it is one that regulators in aviation, defense, and healthcare are increasingly accepting because it provides the evidence they actually need: proof that the deployment was governed, not proof that the model is perfect.

Applying the Framework in Defense Contexts

For defense contractors deploying AI in systems that interface with DO-178C-certified avionics, the governance certification approach offers a defensible boundary: the AI system operates as a separate subsystem with defined interfaces to the certified avionics, and its governance certification supplies evidence that traffic across those interfaces stays within specified parameters, as the guard sketched below illustrates.
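
A minimal sketch of that boundary enforcement, assuming a single scalar parameter at the interface. The InterfaceSpec type, the parameter name, and the limits are invented for illustration; this is not DO-178C-qualified code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterfaceSpec:
    """Specified parameter range at the AI / certified-avionics boundary."""
    name: str
    min_value: float
    max_value: float
    safe_default: float  # certified fallback if the AI output is out of range

def guard(spec: InterfaceSpec, ai_output: float) -> tuple[float, bool]:
    """Return (value passed to the certified partition, raw output in spec?)."""
    in_spec = spec.min_value <= ai_output <= spec.max_value
    # Out-of-spec outputs never cross the boundary; the certified side
    # only ever sees values the interface specification permits.
    return (ai_output if in_spec else spec.safe_default, in_spec)

bank_angle_cmd = InterfaceSpec("bank_angle_deg", -25.0, 25.0, 0.0)  # hypothetical
value, ok = guard(bank_angle_cmd, 31.7)
print(value, ok)  # -> 0.0 False: default passed through, event logged as evidence
```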

This partition approach — separating AI decision-making from certified safety-critical functions through well-defined, governance-certified interfaces — is becoming the standard architecture for AI integration in defense platforms. The Claviger.AI OS provides the governance infrastructure for the AI partition, with certificate-based evidence that satisfies the interface requirements of the certified partition.
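
The certificate-based evidence itself can be pictured as an append-only, hash-chained record of boundary decisions. The sketch below is an assumption, not Claviger.AI's format: it uses an HMAC as a stand-in for a hardware-anchored signature, whereas a real deployment would keep the key in an HSM or secure element.

```python
import hashlib
import hmac
import json

def evidence_record(key: bytes, prev_hash: str, event: dict) -> dict:
    """Append-only governance evidence: each record binds the event to its
    predecessor and carries a MAC a verifier can check independently."""
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    record_hash = hashlib.sha256(body.encode()).hexdigest()
    # HMAC stands in for a hardware-anchored signature in this sketch.
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "hash": record_hash, "mac": tag}

key = b"demo-key-never-use-in-production"
genesis = "0" * 64
rec = evidence_record(key, genesis, {"interface": "bank_angle_deg",
                                     "in_spec": False, "action": "safe_default"})
# A verifier holding the key can recompute the MAC and walk the hash chain.
assert hmac.compare_digest(
    rec["mac"],
    hmac.new(key, rec["body"].encode(), hashlib.sha256).hexdigest())
print(rec["hash"][:16], "chained to", genesis[:8])
```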

The Path to Formal AI Certification

EASA's concept paper on AI in aviation, FAA's forthcoming AI policy framework, and the emerging EUROCAE/RTCA work on AI certification all point toward a governance-evidence-based approach to AI certification. The organisation that has hardware-anchored governance records from development through deployment will be positioned to satisfy these emerging standards more quickly than organisations relying on policy documentation alone.


Claviger.AI works with defense contractors and aviation system integrators on governance certification frameworks. Contact us to discuss your certification context.