
April 12, 2026

14 min read

How to Build an AI Curriculum for Your Residency Program: A Step by Step Guide



By Dr. Roupen Odabashian MD, FRCPC, FASC | Hematologist Oncologist | Founder, MeducationAI


The Short Answer: Start Small, Build in Phases, and Anchor Everything to Clinical Relevance

If you are a program director or GME leader reading this, you already sense the urgency. Artificial intelligence is reshaping how we diagnose, treat, and communicate with patients, yet most residency programs still lack a structured curriculum that prepares trainees to use these tools safely and effectively. The good news is that building an AI curriculum does not require a computer science department or a seven figure budget. It requires a phased approach, faculty who are willing to learn alongside their residents, and a commitment to tying every educational objective back to patient care.

This guide walks you through a practical, twelve month roadmap for integrating AI education into your residency program. It is built on published frameworks from the AAMC, AMA, and peer reviewed literature, combined with lessons learned from building clinical AI tools at MeducationAI. Whether you lead a community based program with fifteen residents or an academic powerhouse with two hundred, the principles are the same: start with literacy, move to hands on integration, and iterate based on data. [1][2][3]


Why Your Program Needs an AI Curriculum Now

The case for urgency is no longer theoretical. A 2025 review published in Frontiers in Education found that while residents are increasingly using AI tools in some form, most programs have yet to implement formal training on how to evaluate those tools' outputs or understand their limitations. [4] The American Medical Association responded by adopting a formal policy urging medical schools and residency programs to integrate AI literacy into their curricula. [3] And the AAMC released a comprehensive set of AI competencies for medical educators, signaling that accreditation bodies will soon follow with formal requirements. [2]

The gap between what residents encounter in practice and what programs teach them is widening. Large language models are already being used for clinical documentation, differential diagnosis support, patient communication, and literature synthesis. Without structured education, trainees are left to figure out on their own which tools to trust, when to override an algorithm, and how to explain AI assisted decisions to patients.

There is also a workforce argument. Programs that offer AI training will increasingly attract top applicants. A 2025 analysis in The Lancet Digital Health explored how AI has the potential to transform both undergraduate and graduate medical training. [6] Stanford Medicine began requiring an AI curriculum for all MD and PA students in fall 2025. [5]

The question is no longer whether to build an AI curriculum. It is how quickly you can get one off the ground.


Phase 1: Foundation (Months 1 to 3) | AI Literacy, Policy, and Faculty Buy In

Establish a Curriculum Committee

Before designing a single learning objective, assemble a small working group. This should include at least one program director or associate program director, a chief resident, a faculty member with informatics interest (even if self taught), and ideally someone from your institution's IT or clinical informatics team. The committee's first job is not to create content. It is to define scope: what should residents know about AI by the end of training, and what is outside the boundaries of your program's responsibility?

Define Core Competencies

The AAMC's AI competency framework for medical educators provides an excellent starting point. [2] It organizes competencies into domains including foundational AI knowledge, ethical and equitable use, clinical application, and continuous learning. For a residency curriculum, consider mapping these to your existing ACGME milestones. AI literacy can naturally integrate into competencies like Practice Based Learning and Improvement, Systems Based Practice, and Professionalism.

A practical starting set of competencies might include:

  • Understanding the basics: What is machine learning? What is a large language model? What is the difference between supervised and unsupervised learning?

  • Evaluating AI tools: How do you assess an algorithm's sensitivity, specificity, and calibration? What are common sources of bias?

  • Clinical application: When should you trust an AI recommendation? When should you override it? How do you document AI assisted decisions?

  • Ethics and equity: How do training data biases propagate into clinical outputs? What are the regulatory and liability implications?

  • Communication: How do you explain an AI informed recommendation to a patient?
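To make the "evaluating AI tools" competency concrete, a workshop exercise might have residents compute these metrics by hand on a small labeled case set. The sketch below is illustrative only: the case data is synthetic, and the simple binned calibration check is one of several reasonable approaches.

```python
# Sketch: computing sensitivity, specificity, and a simple calibration
# check for an AI tool's predictions. All data here is synthetic,
# invented for workshop illustration only.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def calibration_bins(y_true, y_prob, n_bins=5):
    """Compare mean predicted probability vs. observed event rate per bin.
    A well-calibrated tool shows these two numbers tracking each other."""
    bins = [[] for _ in range(n_bins)]
    for t, p in zip(y_true, y_prob):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((t, p))
    report = []
    for i, b in enumerate(bins):
        if b:
            mean_pred = sum(p for _, p in b) / len(b)
            obs_rate = sum(t for t, _ in b) / len(b)
            report.append((i, round(mean_pred, 2), round(obs_rate, 2)))
    return report

# Synthetic labels (1 = disease present) and model probabilities,
# thresholded at 0.5 to get binary predictions.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_prob = [0.9, 0.7, 0.4, 0.2, 0.6, 0.1, 0.8, 0.3]
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
print(calibration_bins(y_true, y_prob))
```

Having residents trace where each false negative and false positive comes from, case by case, tends to teach more than the summary numbers themselves.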

Develop an Institutional AI Policy

Knopf (2026) highlights a significant policy gap in medical education, noting that many programs lack clear guidelines on how trainees should use AI tools in clinical and educational settings. [10] Your curriculum should be built on a foundation of explicit policy: which tools are approved, what data can be entered into them, how AI generated content should be attributed, and what constitutes appropriate versus inappropriate use.

The AAMC's principles for responsible AI use provide a national framework, emphasizing transparency, accountability, equity, and ongoing evaluation. [1] Translate these into a one to two page institutional policy document and review it with all incoming residents during orientation.

Secure Faculty Buy In

This is where many programs stall. Faculty may feel unprepared to teach something they are still learning themselves. The key is to reframe the ask: you are not asking faculty to become AI experts. You are asking them to model curiosity, critical evaluation, and responsible adoption, skills they already teach in every other domain of medicine.

Host a single, low pressure faculty development session in Month 1. Show three to four examples of how AI tools are already being used in clinical practice (documentation assistants, imaging interpretation, risk calculators). Let faculty try them. Collect reactions. Use this session to identify early champions who can help codevelop curriculum content.

Phase 2: Integration (Months 4 to 8) | Tools, Case Simulations, and Assessment


Introduce Hands On Learning

Didactic lectures on "what is AI" will not move the needle alone. By Month 4, residents should be actively using AI tools in supervised educational settings. This is where the curriculum transitions from knowledge to application.

A survey of medical school faculty and students found that respondents identified a combination of didactic and experiential approaches as the most promising for AI education. [7] Consider the following formats:

  • AI tool workshops: Two hour sessions where residents use approved AI tools on deidentified cases. For example, have them run the same clinical scenario through a large language model and a traditional clinical decision support tool, then compare the outputs.

  • Case based simulations: AI powered patient simulations allow residents to practice clinical reasoning in a safe environment. Platforms like MeducationAI generate dynamic clinical cases where the patient responds to the learner's questions and decisions in real time, providing immediate feedback on diagnostic and therapeutic reasoning.

  • Journal clubs with an AI lens: Select published studies that used AI models and have residents critically appraise the methodology, focusing on training data, validation, generalizability, and bias.

Align with ACGME Competencies

Integration does not mean adding more to an already packed schedule. The most sustainable approach is to embed AI content into existing educational structures. Morning report can include a monthly "AI case" where a clinical decision support tool's recommendation is discussed alongside the team's reasoning. Morbidity and mortality conferences can examine cases where algorithmic bias or over reliance on automation contributed to an adverse event.

The STFM's AI in Medical Education Initiative offers practical toolkits for weaving AI content into existing curricula without requiring additional protected time. [9]

Build Assessment Into the Curriculum

If you do not assess it, residents will not prioritize it. Create at least two formal assessment touchpoints:

1. A knowledge assessment at the midpoint (Month 6): A short quiz or case based assessment covering AI fundamentals, bias recognition, and appropriate clinical use.

2. An observed skill assessment by Month 8: Have residents demonstrate competency by evaluating a real AI tool's output in a clinical scenario, documenting their reasoning for accepting, modifying, or rejecting the tool's recommendation.

Phase 3: Evaluation and Iteration (Months 9 to 12) | Metrics, Feedback, and Scaling

Collect Meaningful Data

A 2026 article in JMIR Medical Education described AI's transformative potential in medical education but cautioned that most programs lack rigorous outcome measurement. [8] Do not fall into the trap of measuring only satisfaction ("Did you enjoy the AI workshop?"). Track outcomes that reflect genuine competency development:

  • Pre and post knowledge scores on AI fundamentals

  • Resident confidence in evaluating AI tool outputs (self reported, tracked longitudinally)

  • Observed performance on AI related case assessments

  • Frequency and appropriateness of AI tool use in clinical documentation and decision making (chart review)

  • Patient safety events related to AI tool use (ideally zero, but track it)
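For the pre/post knowledge scores above, a paired analysis is the natural fit since the same residents are tested twice. A minimal sketch, using invented placeholder scores and standard-library Python only (a real analysis would use a statistics package and check its assumptions):

```python
# Sketch: summarizing paired pre/post knowledge scores for an AI
# curriculum. The scores below are invented placeholders.
from statistics import mean, stdev

pre  = [55, 60, 48, 70, 62, 58, 65, 50]   # percent correct at orientation
post = [72, 78, 65, 85, 80, 70, 83, 68]   # same residents at Month 12

diffs = [b - a for a, b in zip(pre, post)]
d_mean, d_sd = mean(diffs), stdev(diffs)

# Paired t statistic: mean difference over its standard error.
t_stat = d_mean / (d_sd / len(diffs) ** 0.5)

# Effect size (Cohen's d for paired differences).
cohens_d = d_mean / d_sd

print(f"mean improvement: {d_mean:.1f} points")
print(f"paired t = {t_stat:.2f} (df = {len(diffs) - 1})")
print(f"Cohen's d = {cohens_d:.2f}")
```

Reporting the effect size alongside the p-value matters here: with small resident cohorts, a meaningful improvement can fail to reach statistical significance, and the effect size is what other programs will want when comparing curricula.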

Iterate Based on Feedback

At Month 9, convene your curriculum committee for a structured review. What worked? What did residents find most and least valuable? Which faculty sessions generated the most engagement? Use resident feedback surveys, assessment data, and faculty observations to revise the curriculum for Year 2.

Common adjustments after Year 1 include:

  • Shifting more time from didactics to hands on workshops

  • Adding specialty specific AI applications (radiology AI for diagnostic radiology residents, natural language processing for psychiatry, genomic AI for oncology)

  • Creating resident led AI projects where trainees identify a clinical workflow that could benefit from AI and propose a solution

Plan for Scaling

Once your core curriculum is stable, think about dissemination. Can your AI modules be shared across programs in your institution? Can your assessment tools be adapted for other specialties? Document your curriculum, outcomes, and lessons learned. Publish them. The field desperately needs more program level implementation data.


Faculty Development: Getting Your Team Ready

Faculty development is the single biggest predictor of whether your AI curriculum will succeed or quietly disappear after Year 1. The literature is clear on this point: programs that invest in faculty readiness see higher adoption, better teaching quality, and more sustainable integration. [7][10]

A Practical Faculty Development Plan

Month 1: Awareness Session (2 hours)

Introduce the rationale for AI curriculum. Demonstrate three to four AI tools relevant to your specialty. Let faculty experiment. Collect questions and concerns.

Month 3: Teaching Skills Workshop (3 hours)

Teach faculty how to facilitate AI tool evaluations with residents. Provide case templates, discussion guides, and grading rubrics. Practice with role play scenarios.

Month 6: Peer Learning Circle (ongoing, monthly)

Create a low stakes monthly forum where faculty share AI tools they have encountered, discuss challenges, and workshop teaching approaches. Rotating facilitation keeps it sustainable.

Month 9: Advanced Topics Elective (for interested faculty)

Offer a deeper dive into topics like AI regulation, algorithmic fairness, and implementation science. Partner with your institution's informatics or data science teams. This creates your next generation of AI curriculum leaders.

Addressing Common Faculty Concerns

  • "I do not know enough about AI to teach it." You do not need to be a data scientist. You need to model critical thinking about technology, which is exactly what you already do with new drugs, devices, and guidelines.

  • "This will take time away from clinical teaching." Integration, not addition, is the model. Embedding AI into existing formats (morning report, case conferences, simulation labs) means minimal incremental time.

  • "The technology changes too fast." Teach principles, not products. If residents learn how to evaluate any AI tool's accuracy, bias, and clinical relevance, they will be prepared regardless of which specific tools emerge next year.


Measuring Success: What Metrics to Track

Tracking the right metrics ensures your curriculum evolves with evidence rather than intuition. Organize your measurement strategy across four domains:

Knowledge and Skills

  • Pre and post assessment scores on AI fundamentals (target: statistically significant improvement from PGY 1 to PGY 3)

  • Performance on AI specific case assessments (target: 80% of residents meeting competency benchmarks by end of PGY 2)

  • Resident ability to identify bias in a presented AI tool output (qualitative assessment via objective structured clinical examination)

Attitudes and Confidence

  • Longitudinal survey tracking resident confidence in using and evaluating AI tools (administer at orientation and annually)

  • Faculty comfort with teaching AI content (survey at baseline and after each development session)

Behavior and Application

  • Frequency of appropriate AI tool use in clinical documentation (chart audit, quarterly)

  • Quality of resident led AI related QI projects (rubric scored)

  • Number of residents who select AI related electives or scholarly projects

Program Level Outcomes

  • Applicant interest in your program's AI curriculum (track mentions in interview surveys)

  • Faculty participation in AI development sessions (attendance, sustained engagement)

  • Contribution to the field: publications, presentations, shared curricular materials

Frequently Asked Questions

Do we need a dedicated AI faculty member to run this curriculum?

No. While having a faculty champion with informatics interest is helpful, the phased approach described here is designed to be led by a curriculum committee with distributed responsibilities. Many successful programs start with a single interested faculty member who codevelops content with residents and scales from there.

How much does it cost to implement an AI curriculum?

The core curriculum (didactics, workshops, assessments) can be built with existing faculty time and freely available AI tools. Budget considerations include protected faculty development time (the most important investment), subscriptions to educational AI platforms for clinical simulations, and potentially a small informatics elective stipend. Initial costs vary widely depending on institutional size and existing infrastructure.

What if our institution does not have approved AI tools yet?

Start with AI literacy and critical appraisal, which require no institutional tools. Use publicly available AI models on deidentified, synthetic cases for hands on workshops. Simultaneously, advocate through your GME office for an institutional AI tool evaluation and approval process. Your curriculum development can actually accelerate institutional readiness.

How do we handle concerns about residents using AI to cheat on evaluations?

This is primarily a policy question. Your institutional AI policy (developed in Phase 1) should clearly define acceptable and unacceptable uses. Frame AI use in evaluations the way you frame calculator use in exams: sometimes it is the point (evaluate this AI tool's output), and sometimes it undermines the learning objective (write this note without AI assistance). Clarity prevents most problems.

Can we get ACGME credit for an AI curriculum?

AI content maps naturally to existing ACGME competencies, particularly Practice Based Learning and Improvement and Systems Based Practice. You do not need a separate accreditation pathway. Document your AI objectives within your existing competency mapping and include AI related assessments in your milestones reporting.

How do we keep the curriculum current when AI technology changes so rapidly?

Teach principles rather than specific products. Focus on skills like evaluating model performance, recognizing bias, understanding regulatory frameworks, and communicating AI informed decisions to patients. These skills remain relevant regardless of which tools dominate next year. Update your tool specific workshops annually, but keep your competency framework stable.

What resources exist for programs that are just getting started?

The AAMC's AI competency framework [2] and principles for responsible use [1] are essential starting documents. The STFM's AI in Medical Education Initiative [9] provides practical toolkits. The Frontiers in Education review [4] offers a comprehensive overview of the current landscape. And platforms like MeducationAI provide ready made clinical simulations that integrate AI powered learning without requiring programs to build tools from scratch.

Should we teach residents to code or build AI models?

For most residency programs, no. The goal is AI informed clinicians, not AI engineers. Residents should understand enough about how models work to evaluate them critically, the same way they understand enough biostatistics to read a clinical trial without needing to design one. Programs with strong research tracks may offer optional advanced modules for interested residents.


References

1. AAMC. "Principles for the Responsible Use of Artificial Intelligence in and for Medical Education," Version 2.0, July 31, 2025. https://www.aamc.org/about-us/mission-areas/medical-education/principles-ai-use

2. AAMC. "AI Competencies for Medical Educators." https://www.aamc.org/about-us/mission-areas/medical-education/advancing-ai-resource-collection/artificial-intelligence-competencies-medical-educators

3. AMA. "AMA Adopts Policy to Advance AI Literacy in Medical Education." November 18, 2025. https://www.ama-assn.org/press-center/ama-press-releases/ama-adopts-policy-advance-ai-literacy-medical-education

4. Frontiers in Education. "The current status and future prospects of AI education in residency training." 2025. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1713676/full

5. Stanford Medicine. "Paging Dr. Algorithm." September 22, 2025. https://stanmed.stanford.edu/ai-medical-school-curriculum/

6. The Lancet Digital Health. "How can AI transform the training of medical students and physicians?" 2025. https://www.thelancet.com/journals/landig/article/PIIS2589-7500%2825%2900082-2/fulltext

7. PMC. "Integrating AI into medical education: a roadmap informed by survey of faculty and students." https://pmc.ncbi.nlm.nih.gov/articles/PMC12265092/

8. JMIR Medical Education. "AI in Medical Education: Transformative Potential." 2026. https://mededu.jmir.org/2026/1/e77127

9. STFM. "AI in Medical Education Initiative." https://www.stfm.org/about/keyinitiatives/artificial-intelligence/artificial-intelligence-in-medical-education/

10. Knopf. "Bridging the AI Policy Gap in Medical Education." The Clinical Teacher, 2026. https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/tct.70347
