AI Governance Gets Real: Key Takeaways from NVTC’s Conversation on Trustworthy AI

Artificial Intelligence is moving from possibility to practice, and the Northern Virginia Technology Council’s recent program, “AI Governance Gets Real: From Buzzwords to Business,” made that shift unmistakably clear. With Congressman Don Beyer and Trustible’s Gerald Kierce opening the discussion, followed by industry leaders from Leidos, Marashlian & Donahue, and Mobomo, the morning centered on a single question: How do we build and deploy AI that people can trust? 

The answer, as the panel made clear, isn’t simple, but it’s becoming increasingly urgent. 

AI Trust: More Urgency, Less Alignment

Congressman Beyer described himself as a “huge AI optimist,” especially in areas like healthcare, yet acknowledged that public trust remains fragile. The skepticism today centers on three consistent fears:  

  • Loss of jobs
  • The environmental impact of rapidly expanding data centers, two-thirds of which are located in Virginia
  • The specter of super-intelligent systems that trigger pop-culture anxieties about runaway machines

The result is a landscape where organizations want to adopt AI but consumers don’t feel protected, and frameworks are still catching up. Still, Beyer’s optimism remained clear, particularly in healthcare. He pointed to AI’s extraordinary potential to accelerate medical research, uncover new biological insights, and drive down costs across large-scale systems like dialysis and medication management. As Beyer noted, “there has been a massive business imperative for cybersecurity…but no parallel ramp up for AI trust.” Early federal efforts, from NIST’s emerging frameworks to evolving executive orders, signal progress, but not consensus. 

With no short-term federal solution on the horizon, states have begun to take the lead. California already treats AI trust and accountability as legal issues, while other states lag behind. Meanwhile, companies aren’t stepping back from AI; they’re simply pushing forward without consistent guardrails. 

Transparency Is Becoming Non-Negotiable 

Beyer pointed to the AI Transparency Act as one of the emerging paths forward. The central requirement: organizations must disclose how their models are trained, what data feeds them, and how outputs are monitored for safety. 

That kind of transparency is the precursor to what Beyer described as an AI safety label, a concept still in development but increasingly expected. Today, information flows more readily to national security agencies than to Congress. In the future, transparency will need to be democratized, not centralized. 

The Fastest Movers Will Shape the Standards 

Recent realignments in federal procurement signal a shift toward rapid development and iterative delivery, especially in defense and national security programs. Agencies are being pushed to modernize faster, and that acceleration places pressure on governance processes. 

Beyer expects NIST’s work to evolve into a trust certification parallel to FedRAMP, providing organizations with a recognizable, unified approach to responsible AI. 

The Northern Virginia region, he emphasized, is uniquely positioned to lead. With the arrival of Amazon’s HQ2 and a dense concentration of technical talent, NOVA could become a national center of gravity for AI trust. 

Panel Perspectives: The Path from Governance to Practicality  

Following the fireside discussion, the panel shifted to the realities of building responsible AI inside large organizations. 

Leidos: Governance at Scale 

Geoff Schaefer outlined Leidos’ multi-year journey to create an AI governance framework built on a set of core pillars: 

  • Visibility across all AI use cases (Leidos currently has around 300 use cases)
  • Cross-disciplinary executive oversight
  • Pattern-based governance to streamline controls across hundreds of similar use cases

With more than 600 AI use cases expected next year, a scalable model isn’t optional; it’s an operational necessity. 

Marashlian & Donahue: Start Small, Build the Muscles 

Attorney Brian Alexander described how organizations begin their governance journey: establishing context, building diverse governance teams, creating structured intake processes, and reducing silos between legal, compliance, privacy, and technical groups. Many organizations have governance “pieces,” he noted, but lack integration, especially as AI use cases multiply. 

Mobomo: Agentic AI and the Future of Product Development 

Mobomo CEO Brian Lacey brought a product-builder’s perspective, emphasizing that clients increasingly ask for one thing: Use AI to build products faster, more efficiently, and more intelligently.  

Mobomo integrates AI into its human-centered design process through rapid prototyping with Claude, using AI to generate proto personas that deepen understanding of user jobs and motivations. Internally, Mobomo develops lightweight AI tools to support research, sales, and operational workflows. 
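To make that concrete, here is a rough sketch of what a persona-drafting step might look like using the Anthropic Python SDK; the model name, prompt, and research notes are placeholders for illustration, not Mobomo’s actual prototyping pipeline.

```python
# Hypothetical sketch: drafting a proto persona with Claude via the Anthropic SDK.
# The model name, prompt, and notes below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

research_notes = """
Interviewees: three program analysts at a federal agency.
Recurring themes: manual data entry, weekly reporting deadlines, fragmented tools.
"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever Claude model you have access to
    max_tokens=800,
    system="You are a UX researcher. Turn raw interview notes into a concise proto persona.",
    messages=[{
        "role": "user",
        "content": "Draft a proto persona (name, role, jobs-to-be-done, motivations, "
                   "frustrations) from these notes:\n" + research_notes,
    }],
)

print(response.content[0].text)  # draft persona, reviewed and refined by the design team
```

The draft still goes through human review; the point of the prototype is to give designers a faster starting hypothesis, not a finished persona.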

In Mobomo’s work, responsible AI begins at the design phase: documenting decisions, identifying data considerations, evaluating model options, and assessing risk early so transparency is baked into the product’s foundation. 

Agentic AI: A Workforce Multiplier Comes into Focus 

One of the most forward-leaning themes of the morning centered on agentic AI: systems that can take action autonomously on behalf of a user. 

Schaefer described agents as the “form factor that will finally push AI forward.” Unlike traditional AI tools that require prompting, agents will operate continuously, running in the background, completing tasks, and augmenting human capability at scale.  

For organizations the size of Leidos, this represents a profound shift, a potential “force multiplier” that could effectively double operational capacity. 

But with that opportunity comes risk. Liability for agentic AI systems, panelists agreed, will largely fall on the organizations that deploy them. As agents gain autonomy, legal and compliance teams must evolve just as quickly as developers. Questions about authorization, decision-making boundaries, and contractual responsibility will shape the next generation of AI governance. 
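One way to picture where those boundaries sit is a plain agent loop with an explicit authorization gate. The sketch below is a generic illustration, not any panelist’s architecture; the tool names and approval policy are hypothetical.

```python
# Minimal sketch of an agentic loop with an authorization boundary.
# Generic illustration only; tools, policy, and approval flow are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

# Actions the agent may take on its own vs. those requiring human sign-off.
AUTONOMOUS_TOOLS = {"search_docs", "summarize"}
APPROVAL_REQUIRED = {"send_email", "update_record"}

def authorize(call: ToolCall, ask_human: Callable[[ToolCall], bool]) -> bool:
    """Enforce the decision-making boundary before any action executes."""
    if call.name in AUTONOMOUS_TOOLS:
        return True
    if call.name in APPROVAL_REQUIRED:
        return ask_human(call)   # escalate to a person
    return False                 # unknown tools are denied by default

def run_agent(plan_next: Callable[[list], ToolCall | None],
              execute: Callable[[ToolCall], str],
              ask_human: Callable[[ToolCall], bool]) -> list:
    """Observe -> plan -> authorize -> act, with every step logged for audit."""
    audit_log: list = []
    while (call := plan_next(audit_log)) is not None:
        if authorize(call, ask_human):
            audit_log.append((call, execute(call)))
        else:
            audit_log.append((call, "DENIED"))
            break                # stop rather than act outside policy
    return audit_log
```

The specifics matter less than the pattern: an allow list, a default-deny rule, an escalation path to a human, and an audit log are exactly the kinds of artifacts legal and compliance teams will expect to see as agents take on more work.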

Innovation and Governance Aren’t Opposites, They’re Interdependent  

Throughout the discussion, one consistent thread connected every stakeholder, from policymakers to engineers: speed must be matched with structure. 

Risk must be understood, measured, and contextualized rather than assumed. Standards like ISO 27001 and the emerging ISO 42001 provide familiar, auditable scaffolding for organizations that need to incorporate AI into their security posture. And in product development, responsible AI begins at the design phase: documenting models, assessing data flows, identifying risks, and being transparent about limitations. 
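As a simplified picture of what that design-phase documentation can look like, the sketch below captures model options, data flows, and known risks in a small record; the fields are illustrative examples, not a prescribed ISO 42001 template.

```python
# Illustrative design-phase AI record; fields are examples, not an ISO 42001 template.
from dataclasses import dataclass, field

@dataclass
class AIDesignRecord:
    feature: str                 # what the AI component does in the product
    model_options: list[str]     # candidates evaluated, with the chosen one first
    data_sources: list[str]      # where training/inference data comes from
    personal_data: bool          # triggers privacy review when True
    known_limitations: list[str] # disclosed to users and stakeholders
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

record = AIDesignRecord(
    feature="Draft summaries of uploaded reports",
    model_options=["hosted LLM (chosen)", "fine-tuned open-weights model"],
    data_sources=["user-uploaded documents (not retained for training)"],
    personal_data=True,
    known_limitations=["may miss domain-specific terminology"],
    risks=["hallucinated figures in summaries"],
    mitigations=["human review before publication", "cite source passages"],
)
```

Kept alongside the codebase, a record like this makes later audits and transparency disclosures far less painful.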

Innovation without governance is reckless. Governance without innovation is stagnant. The leaders who thrive will be those who strike the balance.  

Looking Ahead 

If there was a defining energy in the room, it was momentum. AI is accelerating, policy is evolving, and organizations across industry and government are searching for a path that enables transformational capability without compromising trust. 

Mobomo was proud to join this conversation and connect with leaders who are working to advance safe, transparent, and human-centered AI. As agentic AI begins moving from experimentation to enterprise deployment, Mobomo remains deeply focused on designing systems that augment human capability and embed trust at every stage, from design to deployment.  

Follow Mobomo for more insights at the intersection of AI governance, digital transformation, and human-centered technology. 

About Mobomo, LLC 

Mobomo, a private company headquartered in the D.C. metro area, is a CMMI Dev Level 3, ISO 27001:2022, ISO 9001:2015, and CMMC Level 1 provider of digital transformation and system integration services. A premier provider of mobile, web, infrastructure, and cloud applications to federal agencies and large enterprises, Mobomo combines leading-edge technology with human-centered design and strategy to craft next-generation digital experiences. From private sector companies to government agencies, we have amassed deep expertise helping our clients enhance and expand their existing web and mobile suites. Interested in learning more about Mobomo? Take a tour of our capabilities, our portfolio of work, and the team members who make our clients look so fantastic, and feel free to reach out with any questions you might have.