When I was an aerospace engineer working on the NASA Space Shuttle Program, trust was mission-critical. Every bolt, every line of code, every system had to be carefully validated and tested, or the shuttle would never leave the launchpad. After their missions, astronauts would walk through the office and thank the thousands of engineers for getting them back home safely to their families. That’s how deeply ingrained trust and safety were in our systems.
Despite the “move fast and break things” rhetoric, tech should be no different. New technologies need to build trust before they can accelerate growth.
By 2027, about 50% of enterprises are expected to deploy AI agents, and a McKinsey report forecasts that by 2030, as much as 30% of all work could be carried out by AI agents. Many of the cybersecurity leaders I speak with want to bring in AI as fast as they can to enable the business, but they also recognize that these integrations must be done safely and securely, with the right guardrails in place.
For AI to fulfill its promise, business leaders need to trust it. That won’t happen on its own. Security leaders must take a lesson from aerospace engineering and build trust into their processes from day one, or risk missing out on the business growth AI can accelerate.
The relationship between trust and growth is not theoretical. I’ve lived it.
Founding a business based on trust
After NASA’s Space Shuttle program ended, I founded my first company: a platform for professionals and students to showcase and share evidence of their skills and competencies. It was a simple idea, but one that demanded our customers trust us. We quickly discovered universities wouldn’t partner with us until we proved we could handle sensitive student data securely. That meant providing assurance through several avenues: showing a clean SOC 2 attestation, answering long security questionnaires, and completing various compliance certifications through painstakingly manual processes.
That experience shaped the founding of Drata, where my cofounders and I set out to build the trust layer between great companies. By helping GRC leaders and their companies establish and prove their security posture to customers, partners, and auditors, we remove friction and accelerate growth. Our trajectory from $1 million to $100 million in annual recurring revenue in just a few years is proof that businesses see the value, and that they are slowly starting to shift from viewing GRC teams as cost centers to viewing them as business enablers. That translates to real, tangible results: security teams using our SafeBase Trust Center have influenced $18 billion in revenue.
Now, with AI, the stakes are even higher.
Today’s compliance frameworks and regulations — like SOC 2, ISO 27001, and GDPR — were designed for data privacy and security, not for AI systems that generate text, make decisions, or act autonomously.
Thanks to legislation like California’s newly enacted AI safety standards, regulators are slowly starting to catch up. But waiting for new rules and regulations isn’t enough—particularly as businesses rely on new AI technologies to stay ahead.
You wouldn’t launch an untested rocket
In many ways, this moment reminds me of the work I did at NASA. As an aerospace engineer, I never “tested in production.” Every shuttle mission was a meticulously planned operation.
Deploying AI without understanding and acknowledging its risk is like launching an untested rocket: the failure can be immediate and catastrophic. Just as a failed space mission erodes the public’s trust in NASA, an AI misstep taken without understanding the risk or applying guardrails erodes the trust consumers place in that organization.
What we need now is a new trust operating system. To operationalize trust, leaders should create a program that is:
Transparent. In aerospace engineering, exhaustive documentation isn’t bureaucracy; it’s a force for accountability. The same applies to AI and trust: there must be traceability from policy to control to evidence to attestation.
Continuous. Just as NASA monitors its missions around the clock, businesses must invest in trust as a continuous, ongoing process rather than a point-in-time checkbox. Controls, for example, need to be monitored continuously so that audit readiness becomes a state of being, not a last-minute sprint.
Autonomous. Rocket engines today can manage their own operation through embedded computers, sensors, and control loops, without pilots or ground crew directly adjusting valves mid-flight. As AI becomes a more prevalent part of everyday business, the same must be true of our trust programs. If humans, agents, and automated workflows are going to transact, they have to be able to validate trust on their own, deterministically and without ambiguity.
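To make that last idea concrete, here is a minimal sketch in Python of what deterministic trust validation between agents might look like. It is an illustration under assumptions, not any vendor’s API: the attestation fields, the shared key, and the validate_attestation helper are all hypothetical.

```python
import hashlib
import hmac
import json
import time

def validate_attestation(attestation: dict, shared_key: bytes) -> bool:
    """Deterministically accept or reject a machine-readable trust attestation."""
    claims = attestation["claims"]          # e.g. {"control": "SOC2-CC6.1", "status": "pass"}
    expires_at = attestation["expires_at"]  # Unix timestamp after which trust lapses
    signature = bytes.fromhex(attestation["signature"])

    # Recompute the signature over a canonical serialization of the claims,
    # so any tampering with the payload changes the expected value.
    payload = json.dumps({"claims": claims, "expires_at": expires_at},
                         sort_keys=True).encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()

    # Constant-time signature check plus a freshness check: trust is
    # continuous, so a stale attestation fails even if correctly signed.
    return hmac.compare_digest(signature, expected) and time.time() < expires_at

# Hypothetical usage: one agent issues an attestation, another validates it.
KEY = b"demo-shared-secret"  # illustration only; real systems use managed keys
claims = {"control": "SOC2-CC6.1", "status": "pass"}
expires_at = time.time() + 3600  # valid for one hour
payload = json.dumps({"claims": claims, "expires_at": expires_at},
                     sort_keys=True).encode()
attestation = {
    "claims": claims,
    "expires_at": expires_at,
    "signature": hmac.new(KEY, payload, hashlib.sha256).hexdigest(),
}
assert validate_attestation(attestation, KEY)  # unambiguous pass/fail
```

The shape of the check is what matters: a machine-readable claim, a verifiable signature, and an expiry, so an agent gets an unambiguous pass-or-fail answer with no human in the loop and no stale trust.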
When I think back to my aerospace days, what stands out is not just the complexity of space missions, but their interdependence. Tens of thousands of components, built by different teams, had to function together perfectly. Each team trusted that the others were doing their work effectively, and decisions were documented to ensure transparency across the organization. In other words, trust was the layer that held the entire Space Shuttle program together.
The same is true for AI today, especially as we enter this budding era of agentic AI. We’re shifting to a new way of doing business, with hundreds, and someday thousands, of agents, humans, and systems continuously interacting with one another and generating tens of thousands of touchpoints. The tools are powerful and the opportunities vast, but they will be realized only if we can earn and sustain trust in every interaction. Companies that create a culture of transparent, continuous, autonomous trust will lead the next wave of innovation.
The future of AI is already under construction. The question is simple: will you build it on trust?
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

