MSN, FACHE, AIGP | Harvard Senior Executive Fellow
Healthcare Operations Leader | AI Adoption Strategist | Clinical Co-Designer
Engineering the human infrastructure AI needs to succeed in healthcare
Get in Touch
AMAZON #1 BESTSELLER
Healthcare organizations are deploying AI faster than governance can keep up. When implementations accelerate without infrastructure, the gap doesn't show up as a single organizational problem; it surfaces differently across your entire leadership team.
These frameworks give your entire leadership team a shared language and executable mechanisms—not seven separate initiatives, but one governance infrastructure that serves every leader's accountability.
OJC: Oversight, Judgment, Connection, the three non-negotiable conditions that make AI adoption governable. When any of the three declines, leadership pauses to understand why.
AMOE: Approve, Modify, Override, Observe, Escalate, a decision taxonomy that makes clinical judgment visible and deliberate. It turns invisible rescue work into learning signals.
Trust Contract: explicit, enforceable leadership commitments that make psychological safety structural, not aspirational. Teams can invoke it publicly; leadership must respond.
Friday Proof: a weekly learning cadence where AMOE patterns surface, leaders decide what to refine, and changes get credited. It catches drift before harm scales.
Melinda deHoll, MSN, FACHE, AIGP, is a healthcare executive with deep clinical experience and more than three decades of responsibility for large-scale healthcare operations, strategy, training, and governance at a national healthcare organization.
She has completed advanced leadership training in complex systems management and organizational transformation.
A master's-prepared nurse, Melinda has worked across clinical practice, operations, training, leadership development, and executive functions. She has overseen enterprise training portfolios spanning hundreds of facilities and held accountability for systems where clinical judgment, safety, training, workforce management, and human performance intersect at scale.
She co-designed an enterprise-scale clinical AI prototype through the Veterans Health Administration's competitive innovation pipeline, advancing from VHA/MIT Hacking Medicine to VHA Make-a-thon and ultimately to a development collaboration with Microsoft through VHA Venture Studio.
As a certified AI Governance Professional (AIGP), she brings validated expertise in AI governance frameworks, risk management, and responsible AI deployment.
Her work on AI adoption and human factors in healthcare has been published by the American College of Healthcare Executives (ACHE).
Melinda understands what happens after vendors leave and pilots end—the operational reality of governing AI systems that influence clinical judgment under time pressure, at scale, across variability. Her frameworks emerge from that lived experience.
Engineering the Human Infrastructure AI Needs to Succeed
The technology works. What fails is the human infrastructure—the frameworks, governance, and cultural mechanisms that determine whether AI adoption protects patients or introduces new forms of harm.
This book provides the missing infrastructure: practical frameworks (OJC, AMOE, Trust Contract, Friday Proof) that make AI governance operational, not aspirational. Written for leaders who must build what doesn't yet exist—before speed, pressure, and drift make the choice for them.
Published by Echelon Press | February 2026
I work with leadership teams navigating AI adoption—from board presentations to implementation consulting to team learning sessions.
If this book changed how you see your system, you're already doing the hardest part.
You're not doing it alone.
The pace of change right now is unlike anything we've faced before. AI systems are learning. Healthcare systems are learning. And we—the leaders, clinicians, and operators building the infrastructure that keeps both aligned—are learning too.
This transformational technology is still emerging and changing fast. No one has fully figured it out. There are no experts in preventing problems that didn't exist three years ago.
Instead, we have each other.
What I've shared in this book came from hundreds of conversations with leaders navigating these same tensions—people willing to name what they were seeing, share what wasn't working, and build together when no playbook existed.
That's how we succeed: not by waiting until we're certain, but by learning faster than the systems we're trying to govern.
The frameworks in this book synthesize decades of safety science—High Reliability Organization principles, psychological safety research, systems learning theory, and AI governance models—applied specifically to clinical AI adoption.
The principles are proven: every mechanism builds on concepts that have prevented harm in healthcare and other high-reliability industries for years. The frameworks are also informed by direct implementation work in clinical AI environments, where these approaches were applied in real workflows.
Formal multi-site validation will take years; harm is occurring now. Early adopters who build this infrastructure gain a strategic advantage while others are still debating whether it's necessary.
These frameworks aren't meant to be final. If you implement elements in your system and discover what works differently in your context, I want to learn from you. If you find adaptations that serve your organization better, share them. The field will only succeed if we learn faster together than any of us could alone.
I'm always learning from leaders doing this work—and happy to help where I can.
If you want to think through something, share what you're building, or simply continue the conversation, reach out: