Melinda deHoll

MSN, FACHE, AIGP | Harvard Senior Executive Fellow

Healthcare Operations Leader | AI Adoption Strategist | Clinical Co-Designer

Engineering the human infrastructure AI needs to succeed in healthcare

Get in Touch
Leading AI Adoption in Healthcare

AMAZON
#1 BESTSELLER

The Challenge Every Leadership Team Faces

Healthcare organizations are deploying AI faster than governance can keep pace. When implementations accelerate without infrastructure, the gap doesn't manifest as one organizational problem—it surfaces differently across your entire leadership team.

Your CEO sees:
Board pressure for AI speed while accountability gaps create fiduciary exposure no one can fully explain
Your CFO sees:
$30-40B invested in enterprise AI, 95% failure rate, and pressure to approve budgets while infrastructure gets labeled "overhead"
Your CNO sees:
Quality metrics holding stable while clinicians quietly compensate—sentinel events waiting to happen before anyone notices
Your CHRO sees:
Best clinicians recruited by competitors and vendors who value their expertise—strategic talent loss masquerading as normal turnover
Your CIO sees:
Systems deployed and integrated, but drift signals no one monitors—acceptance rates rising as human judgment quietly withdraws
Your Board sees:
Fiduciary exposure without operational oversight infrastructure—when outcomes are questioned, no audit trail exists
Your Compliance team sees:
AI creating exposure that can't be defended after harm occurs—accuracy alone is insufficient when decision provenance disappears
Your Quality & Safety team sees:
Reactive work after sentinel events instead of proactive drift detection—by the time problems surface, patterns have scaled
Different symptoms. Same root cause: Missing infrastructure.

The Infrastructure Your Leadership Team Needs

These frameworks give your entire leadership team a shared language and executable mechanisms—not seven separate initiatives, but one governance infrastructure that serves every leader's accountability.

OJC Compass

Oversight, Judgment, Connection—three non-negotiable conditions that make AI adoption governable. When any of the three declines, leadership pauses to understand why.

AMOE Protocol

Approve, Modify, Override, Observe, Escalate—makes clinical judgment visible and deliberate. Turns invisible rescue work into learning signals.

Trust Contract

Explicit, enforceable leadership commitments that make psychological safety structural, not aspirational. Teams can invoke publicly; leadership must respond.

Friday Proof

Weekly learning cadence where AMOE patterns surface, leaders decide what to refine, and changes get credited. Catches drift before harm scales.

About Melinda deHoll

Melinda deHoll, MSN, FACHE, AIGP, is a healthcare executive with deep clinical experience and more than three decades of responsibility for large-scale healthcare operations, strategy, training, and governance at a national healthcare organization.

Harvard Senior Executive Fellow

Advanced leadership training in complex systems management and organizational transformation.

A master's-prepared nurse, Melinda has worked across clinical practice, operations, training, leadership development, and executive functions. She has overseen enterprise training portfolios spanning hundreds of facilities and held accountability for systems where clinical judgment, safety, training, workforce management, and human performance intersect at scale.

Clinical AI Co-Designer

She co-designed an enterprise-scale clinical AI prototype through the Veterans Health Administration's competitive innovation pipeline, advancing from VHA/MIT Hacking Medicine to VHA Make-a-thon and ultimately to development collaboration with Microsoft through VHA Venture Studio.

As a certified AI Governance Professional (AIGP), she brings validated expertise in AI governance frameworks, risk management, and responsible AI deployment.

Her work on AI adoption and human factors in healthcare has been published by the American College of Healthcare Executives (ACHE).

Melinda understands what happens after vendors leave and pilots end—the operational reality of governing AI systems that influence clinical judgment under time pressure, at scale, across variability. Her frameworks emerge from that lived experience.

The Book

Leading AI Adoption in Healthcare

Leading AI Adoption in Healthcare: AI Doesn't Adopt Itself

Engineering the Human Infrastructure AI Needs to Succeed

95% of GenAI pilots deliver no measurable return
$30-40B invested in enterprise AI

The technology works. What fails is the human infrastructure—the frameworks, governance, and cultural mechanisms that determine whether AI adoption protects patients or introduces new forms of harm.

This book provides the missing infrastructure: practical frameworks (OJC, AMOE, Trust Contract, Friday Proof) that make AI governance operational, not aspirational. Written for leaders who must build what doesn't yet exist—before speed, pressure, and drift make the choice for them.

Published by Echelon Press | February 2026

How We Work Together

I work with leadership teams navigating AI adoption—from board presentations to implementation consulting to team learning sessions.

Speaking Engagements

  • Board presentations and strategic briefings
  • C-suite workshops on AI governance
  • Grand rounds and clinical leadership sessions
  • Conference keynotes and panels

Implementation Consulting

  • OJC, AMOE, and Friday Proof deployment
  • Trust Contract development
  • Governance infrastructure design
  • Cross-functional team alignment

Team Resources

  • Leadership team book studies
  • Executive coaching and guidance
  • Bulk book orders for departments
  • Custom framework adaptations

Building Forward

If this book changed how you see your system, you're already doing the hardest part.

You're not doing it alone.

The pace of change right now is unlike anything we've faced before. AI systems are learning. Healthcare systems are learning. And we—the leaders, clinicians, and operators building the infrastructure that keeps both aligned—are learning too.

This transformational technology is still rapidly evolving. No one has completely figured it out. There are no experts in preventing problems that didn't exist three years ago.

Instead we have each other.

What I've shared in this book came from hundreds of conversations with leaders navigating these same tensions—people willing to name what they were seeing, share what wasn't working, and build together when no playbook existed.

That's how we succeed: not by waiting until we're certain, but by learning faster than the systems we're trying to govern.

A Note on the Frameworks

The frameworks in this book synthesize decades of safety science—High Reliability Organization principles, psychological safety research, systems learning theory, and AI governance models—applied specifically to clinical AI adoption.

The principles are proven. Every mechanism builds on concepts that have prevented harm in healthcare and other high-reliability industries for years, and the frameworks are informed by direct implementation work in clinical AI environments where these approaches were applied in real workflows.

Formal multi-site validation will take years, and harm is occurring now. Early adopters who build this infrastructure today gain strategic advantage while others are still debating whether it's necessary.

These frameworks aren't meant to be final. If you implement elements in your system and discover what works differently in your context, I want to learn from you. If you find adaptations that serve your organization better, share them. The field will only succeed if we learn faster together than any of us could alone.

I'm always learning from leaders doing this work—and happy to help where I can.

If you want to think through something, share what you're building, or simply continue the conversation, reach out: