
The Program Director's Playbook for AI in Medical Education (2026)
By Dr. Roupen Odabashian MD, FRCPC, FASC | Hematologist, Oncologist | Founder, MeducationAI
Published April 9, 2026
TL;DR
Program directors should start integrating AI into medical education now using a five-step framework: assess the current state, define competency goals aligned with AAMC and AMA frameworks, identify faculty champions, launch structured learning experiences, and measure outcomes.
You do not need to be an AI expert or have a dedicated budget; start with quick wins you can launch this month: an AI journal club, a program evaluation question, and a shared resource folder.
The most effective AI curricula in GME combine didactic foundations, hands-on exercises on AI-powered educational platforms, and clinical application projects.
A realistic 12-month implementation timeline moves from foundation-building (months 1–3) through structured curriculum development (months 4–6) and integration into existing educational structures (months 7–9) to refinement and dissemination (months 10–12).
Common objections — lack of time, faculty expertise, budget, and AI's rapid pace of change — are addressed through integration into existing structures rather than building from scratch.
AI in medical education requires program directors and fellowship directors to act now — but not all at once. The most effective approach is a phased integration that starts with a current-state assessment and builds toward structured AI competencies over 12 months. The gap between what trainees will face in clinical practice and what programs currently teach them is widening every month. National bodies including the AAMC and the AMA have already released formal guidance and competency frameworks calling on medical educators to embed AI literacy across the continuum of training. Yet many programs still lack a structured AI curriculum, a faculty development plan, or a clear roadmap for getting started.
This article gives you a practical, evidence-informed framework for building that roadmap. It is written for program directors, designated institutional officials, and any faculty member tasked with modernizing a training program's approach to AI. Whether you lead a large academic residency or a community-based fellowship, the principles here scale to fit your context.
Three forces are converging to make AI education in GME urgent rather than optional.
First, the clinical environment has already changed. Trainees are encountering AI-powered tools in electronic health records, radiology reading rooms, pathology labs, and clinical decision support systems. A 2025 review in Frontiers in Education found that while AI tools are rapidly entering the clinical workspace, most residency programs have not yet adapted their curricula to prepare trainees for this reality. Without formal training, fellows and residents are left to evaluate these tools on their own, often without the critical appraisal skills needed to distinguish helpful outputs from harmful ones.
Second, accreditation and policy expectations are shifting. The AMA adopted policy in 2025 calling on medical schools and residency programs to advance AI literacy as a core competency. The AAMC has published detailed AI competencies for medical educators that provide a ready-made framework for curriculum development. A gap analysis published in The Clinical Teacher in 2026 found that most medical education institutions still lack formal AI policies, creating a disconnect between national expectations and local implementation.
Third, our trainees expect it. Today's residents came of age with algorithmic recommendation systems and have quickly adopted large language models and generative AI. They are already using these tools informally for studying, note writing, and literature searches. The question is not whether AI is part of their educational experience. The question is whether we guide that experience with rigor, critical thinking, and clinical context, or whether we leave trainees to figure it out on their own. As a recent analysis in The Lancet Digital Health emphasized, the transformative potential of AI in medical training is only realized when institutions take deliberate steps to structure how learners engage with these technologies.
Over the past two years, I have consulted with program directors across multiple specialties who want to integrate AI but feel overwhelmed by the scope. The framework below distills those conversations and the published evidence into five manageable steps. You do not need to complete them all at once. The goal is forward motion, not perfection.
Step 1: Assess your current state
Before building anything new, map what already exists. Most programs have more AI-adjacent content than they realize — journal clubs that have covered AI studies, faculty who use clinical decision support tools, or trainees who are already experimenting with large language models for board prep.
Conduct a brief survey of your faculty and trainees. Ask three questions: (1) Where are AI tools currently being used in our clinical setting? (2) What AI-related topics have come up in your educational experience here? (3) How confident do you feel evaluating an AI tool's clinical utility? The answers will reveal your baseline and help you prioritize.
Step 2: Define competency goals
You do not need to invent your own AI competency framework from scratch. The AAMC has published AI Competencies for Medical Educators that map directly to the skills your trainees need. These include evaluating AI tools for bias and clinical validity, understanding the regulatory landscape, and communicating about AI with patients.
Pick three to five competencies that align with your specialty and your program's existing milestone framework. Map them to ACGME core competency domains — Practice-Based Learning and Improvement and Systems-Based Practice are the most natural fits, though AI education touches Medical Knowledge, Professionalism, and Interpersonal and Communication Skills as well.
Step 3: Identify faculty champions
You do not need every faculty member to become an AI expert. You need two or three champions who are curious, willing to learn, and able to facilitate discussions. These do not need to be informaticists or data scientists. Clinician educators who are comfortable with critical appraisal and evidence-based medicine make excellent AI curriculum leaders.
Stanford Medicine's integration of AI into its medical school curriculum offers a useful model here. Beginning in fall 2025, Stanford required all MD and PA students to learn about AI, embedding modules on bias detection, prompt engineering, and clinical validation into the existing curriculum. The key lesson from Stanford's approach is that AI education does not require a separate track — it works best when woven into what you already teach.
Step 4: Launch structured learning experiences
The most effective AI curricula in GME combine three modalities: didactic foundations, hands-on exercises, and clinical application.
Didactic foundations can be as simple as a four-session lecture series covering AI basics, bias and fairness, regulatory landscape, and clinical evaluation of AI tools. The STFM's AI in Medical Education Initiative offers adaptable curricular resources.
Hands-on exercises let trainees interact with AI tools in a low-stakes environment. This is where platforms like MeducationAI can play a role, giving trainees the opportunity to practice clinical reasoning with AI-generated feedback, explore differential diagnoses, and build comfort with the technology in a structured educational setting.
Clinical application ties everything back to the bedside. Assign trainees to evaluate one AI tool used in your institution and present a critical appraisal at journal club, using a framework like the TRIPOD+AI reporting guidelines. Ask them to document an instance where an AI recommendation was helpful, misleading, or irrelevant, and discuss it at morbidity and mortality conference.
Step 5: Measure outcomes
Build assessment into your AI curriculum from the beginning. This does not require sophisticated psychometric instruments. Track trainee confidence with AI concepts before and after your intervention using a simple Likert scale survey. Review the quality of trainee AI tool appraisals. Monitor whether AI-related topics come up more frequently in clinical discussions.
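If your survey tool can export responses to a spreadsheet, tabulating the pre/post change takes only a few lines of code. Below is a minimal Python sketch, not a prescribed workflow; the file names (pre_survey.csv, post_survey.csv) and the confidence column are hypothetical stand-ins for whatever your survey platform exports.

```python
# Minimal sketch: compare mean trainee confidence before and after the
# AI curriculum. Assumes two hypothetical CSV exports, each with a
# "confidence" column containing 1-5 Likert responses.
import csv
from statistics import mean

def mean_confidence(path: str) -> float:
    """Average the 1-5 Likert responses in one survey export."""
    with open(path, newline="") as f:
        return mean(int(row["confidence"]) for row in csv.DictReader(f))

pre = mean_confidence("pre_survey.csv")    # baseline, before the curriculum
post = mean_confidence("post_survey.csv")  # after the intervention
print(f"Mean confidence: {pre:.2f} -> {post:.2f} (change {post - pre:+.2f})")
```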
Share your results, even preliminary ones. The field of AI in medical education is moving fast, and program directors learn best from other program directors. A 2026 review in JMIR Medical Education noted that while the transformative potential of AI in medical education is widely acknowledged, there remains a significant gap in rigorous implementation research. Your experience, even at a single-program level, contributes to the evidence base that the entire field needs.
You do not need to wait for a curriculum committee approval cycle to start. Here are five things you can do in the next 30 days.
Launch an AI journal club. Pick one article per month that evaluates a clinical AI tool. Use a structured critical appraisal framework. The Penn LDI report on medical school AI readiness is a strong starting point for discussion.
Add one AI question to your program evaluation survey. Ask graduating trainees: "How prepared do you feel to evaluate and use AI tools in your future practice?" This creates a baseline you can track year over year.
Curate a shared resource folder. Create a simple shared document or folder with links to the AAMC principles, AMA policy statement, and two or three specialty-specific AI articles. Make it accessible to all trainees and faculty.
Host a 30-minute "AI in Our Specialty" informal session. Invite a faculty member to demonstrate one AI tool used in your clinical setting. Keep it conversational. The goal is exposure and demystification, not mastery.
Encourage structured AI experimentation. Give trainees permission to use AI tools for study and clinical reasoning practice, with the expectation that they will discuss their experience during a protected educational session. Platforms purpose-built for medical education provide a safer context for this experimentation than unstructured use of consumer AI products.
When I consult with program directors about AI integration, the same concerns come up repeatedly. Here is how I think about each one.
"We don't have time."
You are right that time is scarce, which is exactly why AI should be integrated into existing structures rather than added on top. Replace one traditional journal club per quarter with an AI-focused one. Add an AI evaluation question to an existing case conference. Thread AI literacy into rotations trainees are already completing. This approach mirrors how successful programs address fellow burnout: by threading wellness into existing structures rather than stacking on new requirements. Integration takes less time than creation.
"Our faculty don't have AI expertise."
Most faculty know more than they think. Clinicians who can critically appraise a randomized controlled trial can learn to critically appraise an AI tool. The knowledge gap is narrower than it appears. Focus on developing two or three faculty champions rather than training your entire department. The AAMC AI Competencies for Medical Educators provide a structured starting point for faculty development.
"Trainees will become over-reliant on AI."
This is a legitimate concern, and it is addressed through curriculum design, not avoidance. Teaching trainees to critically evaluate AI outputs, recognize limitations, and verify recommendations against clinical evidence builds the exact skills that prevent over-reliance. Ignoring AI in your curriculum does not reduce usage — it just reduces the quality of that usage.
"We don't have the budget."
Many AI education activities require no new technology at all. Journal clubs, case discussions, critical appraisal exercises, and policy reviews cost nothing beyond faculty time. When you are ready to explore technology, look for platforms that offer institutional trials or that align with existing educational budgets.
"AI changes too fast for a curriculum to keep up."
This is perhaps the best argument for starting now rather than waiting. A curriculum built on principles — critical appraisal, bias recognition, clinical validation, ethical reasoning — remains relevant regardless of which specific tools emerge. Teach the framework, not the product.
Here is a realistic timeline for a program starting from scratch.
Months 1–3: Build the foundation
Complete the current state assessment (Step 1 from the framework above)
Identify two to three faculty champions and provide them with development resources
Launch a monthly AI journal club
Add AI-related questions to your next program evaluation survey
Review the AAMC principles and AMA policy statement with your program leadership team
Months 4–6: Develop the structured curriculum
Define three to five AI competency objectives mapped to ACGME domains
Develop or adapt a four-session didactic series on AI fundamentals for your specialty
Introduce one hands-on AI exercise (e.g., have trainees use an AI differential diagnosis tool on a standardized case and critique the output)
Begin collecting trainee confidence data with pre- and post-surveys
Months 7–9: Integrate into existing structures
Embed AI topics into two existing educational formats (e.g., morbidity and mortality conference, grand rounds, or journal club presentations)
Assign each trainee an AI tool evaluation project as part of their QI or scholarly activity requirement
Pilot an AI-powered educational platform for clinical reasoning practice
Present early findings at a departmental meeting to build institutional support
Months 10–12: Refine and disseminate
Review trainee assessment data and refine competency objectives
Develop a faculty guide documenting your AI curriculum for onboarding new teaching faculty
Submit a brief report or abstract describing your implementation experience to a medical education journal or conference
Plan Year 2 enhancements based on trainee and faculty feedback
This timeline is deliberately modest. The goal is sustainability. A small, well-integrated AI curriculum that persists and improves is worth far more than an ambitious initiative that collapses after one academic year.
Frequently asked questions
Do I need to be an AI expert to build an AI curriculum?
No. You need to be a thoughtful educator willing to learn alongside your trainees. The most effective AI curriculum leaders are not data scientists — they are clinician educators who understand how to critically appraise evidence and facilitate learning. Start with the AAMC AI competency framework and build your knowledge incrementally as you teach.
How does AI education map to the ACGME core competencies?
AI in medical education maps to at least four of the six ACGME core competencies. Practice-Based Learning and Improvement is the most natural fit, as evaluating AI tools requires the same critical appraisal skills used to evaluate clinical evidence. Systems-Based Practice applies because AI tools are embedded in healthcare delivery systems. Professionalism addresses ethical use and patient communication, while Interpersonal and Communication Skills covers explaining AI-derived insights to patients and colleagues. Medical Knowledge applies when trainees learn the technical foundations of how AI models work.
How do I bring skeptical faculty on board?
Start by framing AI education as an extension of skills faculty already have, not as a new discipline to master. A clinician who teaches evidence-based medicine can teach AI appraisal. Invite skeptical faculty to observe an AI journal club session before committing to lead one. Peer modeling and low-stakes exposure are more effective than top-down mandates.
Which AI tools should I introduce to trainees?
The answer depends on your goals. For clinical reasoning practice, AI-powered simulation platforms such as MeducationAI let trainees work through cases with adaptive feedback. For literature review, tools that summarize and synthesize research can accelerate learning. Require institutional vetting, clear policies on appropriate use, and structured reflection for any AI tool introduced into your program.
Will AI replace the need for traditional clinical training?
No. AI tools augment clinical reasoning — they do not replace the development of clinical judgment that comes from years of supervised patient care. The Lancet Digital Health review on AI and physician training emphasizes that AI's greatest value in medical education is its ability to enhance, personalize, and scale learning experiences, not to substitute for the apprenticeship model central to GME. Trainees still need history-taking skills, physical examination proficiency, procedural competence, and the empathic communication that defines excellent patient care.
How much does it cost to get started?
Many foundational AI education activities — journal clubs, critical appraisal exercises, policy discussions, and faculty development sessions — cost nothing beyond existing faculty time. When you are ready to add technology-based components like AI-powered question banks or simulation platforms, look for tools offering institutional trials or educational pricing. The biggest investment is protected time for faculty champions, not software licenses.
References
1. AAMC. "Principles for the Responsible Use of Artificial Intelligence in and for Medical Education," Version 2.0, July 31, 2025.
2. AAMC. "AI Competencies for Medical Educators."
3. AMA. "AMA Adopts Policy to Advance AI Literacy in Medical Education." November 18, 2025.
4. Frontiers in Education. "The current status and future prospects of AI education in residency training." 2025.
5. STFM. "AI in Medical Education Initiative."
6. The Lancet Digital Health. "How can AI transform the training of medical students and physicians?"
7. Stanford Medicine. "Paging Dr. Algorithm." September 22, 2025.
8. JMIR Medical Education. "AI in Medical Education: Transformative Potential." 2026.
9. Penn LDI. "AI Pushes Medical Schools Into New Era, but Are They Prepared?"
10. Knopf. "Bridging the AI Policy Gap in Medical Education." The Clinical Teacher, 2026.
Access the MeDucation Medical Oncology and Hematology Question Bank and begin building the systematic approach that leads to board certification success.