
April 12, 2026
By Dr. Roupen Odabashian MD, FRCPC, FASCO — Hematologist-Oncologist | Founder, MeducationAI
Your medical education program needs an AI policy today: not next academic year, not after the next curriculum committee meeting, but now. Generative AI tools are already in the hands of your learners. They are using ChatGPT to draft patient notes, Claude to study for board exams, and a growing number of specialized platforms to generate practice questions and clinical reasoning exercises. The question is not whether AI will affect your program. It already has. The question is whether you will guide that impact with intention or react to it after problems surface.
A strong AI policy should contain six essential elements: a clear statement of purpose and scope, a list of approved tools and acceptable use cases, academic integrity guidelines that reflect the realities of AI assisted work, patient data and privacy protections, defined responsibilities for both faculty and learners, and a built in review schedule that keeps the policy current as technology evolves. This article provides a practical, adaptable template for each of these elements, grounded in the AAMC framework and informed by emerging evidence from programs that have already navigated this transition. [1][2]
TL;DR
Most medical education programs still lack formal AI policies despite widespread learner use of generative AI tools, creating inconsistent standards and institutional risk.
The AAMC’s seven principles (transparency, accountability, equity, privacy, educational integrity, faculty development, continuous review) provide the ethical foundation for any institutional AI policy.
This article includes a complete seven-section policy template covering purpose and scope, approved tools, academic integrity, patient data protections, faculty and learner responsibilities, assessment standards, and review schedules.
Implementation should follow a phased rollout over 6–12 months: stakeholder engagement, faculty development, learner orientation, soft launch, then full enforcement with built-in review cycles.
Common pitfalls include writing policy in isolation, being too tool-specific, ignoring the clinical environment, and failing to address equity in AI access and literacy.
Despite the rapid adoption of generative AI by medical students and residents, most training programs still lack formal policies governing its use. A 2026 analysis published in The Clinical Teacher described this as a "policy gap" that leaves institutions vulnerable to inconsistent standards, academic integrity disputes, and missed opportunities for meaningful AI integration. [3]
The gap is not due to a lack of awareness. Program leaders know AI is transforming healthcare education. The challenge is that the technology has moved faster than institutional governance. A study from the University of Pennsylvania's Leonard Davis Institute found that while the majority of medical schools acknowledge AI's growing role, relatively few have translated that awareness into actionable, program level policy. Many schools are still in the exploratory phase, piloting individual courses or modules without a unifying framework. [4]
A survey of US osteopathic medical schools published in JMIR Medical Education confirmed this pattern. Among those who responded, 93% of deans (14 of 15) and 88% of student government presidents (14 of 16) reported that their schools had no formal student AI policies. Most institutions reported relying on existing academic honesty codes that were written long before large language models existed. [5]
This reactive posture carries real risk. Without clear guidelines, individual faculty members create their own rules, resulting in a patchwork of expectations that confuses learners and creates enforcement headaches. One professor may encourage AI assisted literature reviews while another treats any AI use as plagiarism. Learners navigate these contradictions without a compass, and when disputes arise, programs lack the policy infrastructure to resolve them fairly.
The solution is not to ban AI. That ship has sailed. The solution is to build a policy framework that embraces AI as a pedagogical tool while establishing clear boundaries around integrity, safety, and accountability. [6][7]
In 2025, the Association of American Medical Colleges released version 2.0 of its Principles for the Responsible Use of AI in and for Medical Education. This framework offers the most authoritative starting point for any institution developing its own AI policy. [1]
The AAMC principles are organized around seven themes:
1. Transparency and Disclosure. Learners and faculty should be clear about when and how AI tools are used in educational activities. This includes disclosing AI assistance in written work, presentations, and assessments.
2. Accountability. The human user remains responsible for the output of any AI tool. A student who submits AI generated content as their own work bears the same accountability as one who submits plagiarized text.
3. Equity and Access. AI policies should consider disparities in access to technology. Not all learners have equal familiarity with AI tools, and policies should avoid creating advantages for those who do.
4. Privacy and Data Security. Protected health information, institutional data, and personal information must never be entered into public AI tools. Policies must align with HIPAA, FERPA, and institutional data governance standards.
5. Educational Integrity. AI should enhance, not replace, the development of clinical reasoning, critical thinking, and professional identity. Assessments should be designed to measure genuine competency, not the ability to prompt an AI effectively.
6. Faculty Development. Educators need training and support to integrate AI into their teaching and to recognize AI generated work. Policies should include provisions for ongoing faculty development. The AAMC has also published specific AI competencies for medical educators to support this goal. [8]
7. Continuous Review. AI capabilities are evolving rapidly. Policies must include mechanisms for regular review and revision to remain relevant.
These principles provide a philosophical and ethical scaffold. The template that follows translates them into operational language your institution can adopt.
The following template is designed to be adapted to your institution's specific context, governance structure, and learner population. Each section includes guidance notes explaining the rationale and considerations for customization.
[Institution Name] Policy on the Use of Artificial Intelligence in Medical Education
Effective Date: [Date]
Review Date: [12 months from effective date]
Section 1: Purpose and Scope
Purpose: This policy establishes guidelines for the responsible use of artificial intelligence tools in all educational activities within [Institution/Program Name]. It aims to promote innovation and effective learning while safeguarding academic integrity, patient privacy, and equitable access to educational resources.
Scope: This policy applies to all medical students, graduate medical education trainees, faculty, and staff engaged in educational activities including but not limited to coursework, clinical rotations, research, assessments, and scholarly work.
Guidance: Define scope broadly enough to cover formal and informal educational settings. AI use does not stop at the classroom door. Learners use these tools during clinical rotations, research projects, and self directed study.
Section 2: Approved Tools and Use Cases
Approved Tools: The following AI tools have been reviewed and approved for educational use within our program:
[List approved tools with version numbers and links]
[Include institutional subscriptions and free tools separately]
[Note any tools specifically designed for medical education, such as platforms offering AI driven clinical case simulation or adaptive question generation]
Approved Use Cases:
Literature search and synthesis (with verification of all citations)
Study aid generation (flashcards, practice questions, concept summaries)
Writing assistance (grammar, clarity, structure) with full disclosure
Clinical reasoning practice through approved simulation platforms
Data analysis support for research projects (with faculty oversight)
Prohibited Use Cases:
Entering any protected health information into AI tools not approved for PHI handling
Submitting AI generated content as original work without disclosure
Using AI tools during proctored examinations unless explicitly authorized
Using AI to complete clinical documentation that will become part of a patient's medical record without attending supervision and review
Guidance: Be specific. Vague statements like "AI may be used responsibly" invite inconsistent interpretation. Name tools, name use cases, and update this section regularly. Platforms like MeDucation AI that are purpose built for medical education with appropriate data handling may be listed separately from general purpose tools. [9]
Section 3: Academic Integrity and Disclosure
Disclosure Requirements: All use of AI tools in graded assignments, scholarly work, and presentations must be disclosed. Disclosure should include:
The name and version of the AI tool used
The nature of the AI assistance (e.g., "used for initial literature search," "used for grammar editing," "used to generate practice questions for self study")
A statement confirming that all AI generated content was reviewed, verified, and edited by the submitting learner
What Constitutes a Violation:
Submitting AI generated text, images, or data as one's own original work without disclosure
Using AI tools during assessments where such use is not explicitly permitted
Fabricating or falsifying AI assisted research findings
Sharing AI generated clinical reasoning exercises or assessment answers with peers in a manner that undermines evaluation integrity
Consequences: Violations of this policy will be adjudicated through existing institutional academic integrity processes. Consequences may range from assignment failure to dismissal, consistent with the severity and intent of the violation.
Guidance: The key principle is transparency, not prohibition. Learners should feel comfortable disclosing AI use because the policy normalizes appropriate usage. Reserve punitive language for genuine deception.
Section 4: Patient Data and Privacy Protections
Absolute Prohibition: No protected health information (PHI) as defined by HIPAA, no personally identifiable patient data, and no institutional patient records may be entered into any AI tool unless that tool has been specifically approved by [Institution's] Information Security and Compliance Office for PHI handling.
De identification Standards: When using AI tools for educational case analysis, all patient data must be fully de identified in accordance with HIPAA Safe Harbor standards prior to entry into any AI system.
Institutional Data: Proprietary assessment data, unpublished research data, and confidential institutional information must not be entered into external AI tools without written authorization from the relevant department or office.
Guidance: This section should be reviewed by your institution's privacy officer and legal counsel. The consequences for PHI violations should align with your existing HIPAA breach response protocols. [1][4]
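Programs can reinforce the de identification requirement with lightweight automated screening before text leaves the institution. The sketch below is illustrative only: the patterns cover just a handful of the 18 HIPAA Safe Harbor identifier categories, and the note text, MRN format, and pattern names are all hypothetical. It is a teaching aid, not a compliance tool.

```python
import re

# Minimal screening sketch (illustrative only): flags a few common
# identifier patterns before text is pasted into an external AI tool.
# NOT a substitute for full HIPAA Safe Harbor de-identification,
# which covers 18 categories of identifiers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def flag_possible_phi(text: str) -> list[str]:
    """Return the names of identifier patterns detected in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# A hypothetical note that should never reach a public AI tool unscreened
note = "Pt seen 03/14/2026, MRN: 00482913, callback 555-867-5309."
print(flag_possible_phi(note))  # expected: ['phone', 'date', 'mrn']
```

A screen like this catches careless paste errors; it does not replace the training and policy controls described above, because names, rare diagnoses, and free-text identifiers evade simple pattern matching.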
Section 5: Faculty and Learner Responsibilities
Faculty Responsibilities:
Clearly communicate AI use expectations for each course, rotation, or assessment in the syllabus or orientation materials
Complete institutional AI literacy training within [timeframe]
Design assessments that evaluate genuine competency, incorporating AI resistant assessment strategies where appropriate
Provide feedback on appropriate and inappropriate AI use as part of formative assessment
Report suspected policy violations through established channels
Learner Responsibilities:
Review and acknowledge this policy at the start of each academic year
Disclose all AI use in accordance with Section 3
Verify the accuracy of any AI generated content before submission or clinical application
Refrain from using AI tools in any manner that could compromise patient safety or privacy
Participate in institutional AI literacy programming
Shared Responsibilities:
Both faculty and learners are expected to engage in ongoing learning about AI capabilities, limitations, and ethical implications
Both parties should report concerns about AI tool performance, bias, or safety to [designated office]
Guidance: Faculty buy in is critical. If educators do not understand the policy or feel supported in implementing it, compliance will be uneven. Pair this section with a robust faculty development program. The AAMC's AI competencies framework for medical educators provides an excellent roadmap. [2][8]
Section 6: Assessment Standards
Assessment Design Principles:
High stakes summative assessments should be designed to minimize the advantage of unauthorized AI use (e.g., oral examinations, observed clinical encounters, procedural assessments)
Formative assessments may intentionally incorporate AI as a learning tool, provided the learning objectives focus on critical evaluation of AI output rather than mere generation
Assessment rubrics should be updated to address AI assisted work where permitted, with clear criteria for evaluating the human contribution
AI in Evaluation of Learners:
Any use of AI tools in the evaluation of learner performance (e.g., automated essay scoring, AI assisted competency assessment) must be disclosed to learners and approved by the [Assessment Committee/GME Office]
AI generated evaluations must be reviewed and validated by a faculty member before being finalized
Guidance: This is where policy meets pedagogy. The goal is not to create "AI proof" assessments but to design evaluations that authentically measure the competencies you care about. Clinical reasoning, diagnostic accuracy, communication skills, and professional judgment remain best assessed through human observation and interaction. [6][7]
Section 7: Review Schedule and Revision
Annual Review: This policy will be reviewed and updated at least once annually by the [AI Policy Committee/Curriculum Committee/designated body].
Trigger Based Review: In addition to the annual review, this policy will be reviewed within 60 days of any of the following:
Release of major new AI tools or capabilities relevant to medical education
Significant changes to AAMC, LCME, ACGME, or other accreditation body guidance on AI
Institutional incidents involving AI misuse
Changes to federal or state regulations affecting AI use in education or healthcare
Stakeholder Input: Each review cycle will include input from medical students, residents, faculty, IT security, and compliance personnel.
Version Control: All versions of this policy will be archived and accessible. The current version number and effective date will be displayed prominently on the first page.
Guidance: A policy that is not reviewed regularly becomes a liability. AI capabilities shift on a timeline measured in months, not years. Build the review mechanism into the policy itself so it becomes part of institutional governance rather than an afterthought. [3][10]
No template works out of the box. The policy above is a starting framework, and meaningful adaptation requires attention to your institution's specific context.
Undergraduate vs. Graduate Medical Education. UME programs need to address AI use in coursework, preclinical study, and clerkships. GME programs face additional considerations around clinical documentation, patient care decisions, and scholarly activity. A single policy can serve both, but the "Approved Use Cases" section should include level specific guidance.
Research Intensive vs. Community Based Programs. Research intensive institutions need stronger language around AI in methodology, data analysis, and manuscript preparation. Community based programs may focus more on clinical documentation and patient privacy.
Accreditation Alignment. Review your accreditation body's most recent guidance. Accreditation bodies are beginning to consider how AI fits within their frameworks, though formal standards are still evolving. Your policy should explicitly reference compliance with these bodies and position your program to adapt as requirements become more defined. [5][8]
Existing Governance Structures. Map your AI policy to existing committees and reporting lines. Extend your curriculum committee's mandate to include AI related assessment standards rather than creating parallel governance structures.
Drafting a policy is the easier part. Implementation determines whether the document lives or collects dust. Here is a practical rollout sequence.
Phase 1: Stakeholder Engagement (Months 1 to 2). Circulate a draft to student government, resident representatives, faculty senate, IT security, compliance, and legal. Use town halls, focus groups, and anonymous surveys to surface concerns.
Phase 2: Faculty Development (Months 2 to 3). Faculty need training before the policy takes effect, and resources like the program director’s playbook for AI in medical education can help structure this process. Cover approved tools, assessment design within the new framework, and how to have productive conversations with learners about AI use. The AMA's call for advancing AI literacy underscores the urgency of this step. [2]
Phase 3: Learner Orientation (Month 3). Integrate AI policy orientation into existing onboarding. Require not just a signature but a brief module that tests comprehension of key provisions.
Phase 4: Soft Launch (Months 3 to 6). Implement with an emphasis on education over enforcement. Use early violations as teaching opportunities and collect data on how the policy is working.
Phase 5: Full Implementation and First Review (Months 6 to 12). Move to standard enforcement. Conduct the first formal review at the twelve month mark, incorporating lessons from the soft launch.
Writing policy in isolation. A policy drafted entirely by administrators without learner or faculty input will lack legitimacy and practical grounding. The most common objections surface during implementation, not during drafting. Engage stakeholders early.
Treating AI policy as an academic integrity issue alone. AI policy touches pedagogy, patient safety, data governance, equity, and professional development. Siloing it within your honor code misses the broader implications and limits your ability to leverage AI as a positive educational force.
Being too specific about tools. Naming a specific model such as GPT-4.5 in your policy means the policy is outdated the moment the next version ships. Reference categories of tools (large language models, image generators, coding assistants) and maintain a separate, regularly updated appendix of specific approved tools.
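One lightweight way to keep such an appendix current is to maintain it as structured data, separate from the policy text, so specific tools can be added or retired without amending the policy itself. The sketch below is one possible shape under that assumption; every tool name and field is hypothetical.

```python
from datetime import date

# Hypothetical approved-tools appendix kept as structured data.
# Tool categories stay stable in the policy text; specific tools
# live here and can be updated on the appendix's own review cycle.
APPROVED_TOOLS = {
    "large_language_models": [
        {"name": "ExampleLLM", "phi_approved": False, "reviewed": date(2026, 1, 15)},
    ],
    "clinical_simulation": [
        {"name": "ExampleSimPlatform", "phi_approved": False, "reviewed": date(2026, 2, 1)},
    ],
}

def is_approved(tool_name: str) -> bool:
    """Check whether a named tool appears anywhere in the appendix."""
    return any(
        entry["name"].lower() == tool_name.lower()
        for entries in APPROVED_TOOLS.values()
        for entry in entries
    )

print(is_approved("ExampleLLM"))   # expected: True
print(is_approved("UnlistedTool")) # expected: False
```

Rendering the policy's appendix page from a registry like this keeps the human-readable document and the operational list from drifting apart.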
Ignoring the clinical environment. Learners do not stop using AI when they enter the hospital. Policies must address AI use in clinical settings, including documentation assistance, clinical decision support, and the critical distinction between educational use and patient care use. [9][10]
Failing to address equity. If your policy assumes all learners have equal access to AI tools and equal digital literacy, it will disadvantage those who do not. Provide institutional access to key tools and build AI literacy programming into the curriculum rather than treating it as an extracurricular responsibility. Stanford Medicine's integration of AI into the medical school curriculum offers a model for embedding this literacy systematically. [6]
Setting it and forgetting it. An AI policy written in 2026 that is not reviewed until 2028 will be irrelevant long before its review date. Build the cadence of review into institutional rhythms, tied to existing committee schedules and accreditation cycles.
Frequently Asked Questions
Can we just update our existing academic integrity code instead of writing a dedicated AI policy?
You need a dedicated policy. Academic integrity is only one dimension of AI governance in medical education. Patient privacy, assessment design, clinical documentation, faculty development, and equity all require specific guidance that does not fit within a traditional honor code. An updated integrity code is necessary but not sufficient. [3]
Should the institution set one policy, or should individual faculty set their own rules?
The policy should establish a baseline, a set of minimum standards and prohibited uses that apply program wide. Within that framework, individual faculty can set course specific expectations that are more restrictive or more permissive, as long as those expectations are clearly communicated in the syllabus and do not contradict the institutional policy.
Should we use AI detection software to catch unauthorized use?
Detection is a losing game. AI detection tools have high false positive rates and disproportionately flag nonnative English speakers. Instead of investing in surveillance, invest in assessment design that makes unauthorized AI use less advantageous. Oral examinations, clinical performance assessments, and reflective portfolios assessed over time are far more robust than any detection algorithm. [4][7]
How should learners use AI for clinical documentation?
This is an area requiring careful, institution specific guidance. AI assisted documentation is becoming standard in clinical practice, and learners should be prepared to use these tools. However, the educational value of writing clinical notes lies in the reasoning process it demands. A balanced approach allows supervised AI assisted documentation in specific rotations while preserving unassisted documentation requirements in core clinical training.
How often should the policy be reviewed?
At minimum, annually. In practice, you should monitor for triggering events (major new tool releases, accreditation changes, institutional incidents) that warrant interim review. Assign a specific committee or individual the responsibility of monitoring the AI landscape and flagging issues between formal review cycles.
How can faculty develop their own AI literacy?
The AAMC's AI competencies for medical educators provide a structured framework for faculty development. [8] Many institutions are creating peer mentorship programs pairing AI experienced faculty with those who are newer to the technology. National conferences, online workshops from medical education societies, and purpose built educational platforms like MeDucation AI also offer faculty oriented resources. The AMA's policy resolution on AI literacy in medical education signals that this is becoming a professional expectation, not an optional interest. [2]
Can AI be used to evaluate learner performance?
With appropriate safeguards, yes. AI can assist with analyzing assessment data patterns and providing preliminary feedback on written work. However, all AI generated evaluations must be reviewed by a qualified faculty member. No consequential evaluation decision should rest solely on AI output, and learners should know when AI is involved in their evaluation.
How do we keep AI integration equitable for all learners?
Build AI literacy into the curriculum rather than assuming it. Provide institutional access to approved tools so personal finances do not determine who benefits. Design assessments that reward critical thinking rather than prompt engineering skill. The Lancet Digital Health has highlighted that equitable AI integration requires deliberate curricular design, not passive adoption. [7]
The institutions that will navigate AI's impact most effectively are those that start with clear, adaptable policy and invest in the human infrastructure (faculty development, learner engagement, governance capacity) to sustain it. A policy document alone changes nothing. It is the foundation on which a culture of thoughtful AI integration is built.
The template in this article gives you a starting point. Adapt it to your context, engage your community, and commit to the ongoing work of revision. The technology will keep evolving. Your policy framework should evolve with it.
[1] AAMC. "Principles for the Responsible Use of Artificial Intelligence in and for Medical Education." Version 2.0, July 31, 2025. https://www.aamc.org/about-us/mission-areas/medical-education/principles-ai-use
[2] AMA. "AMA Adopts Policy to Advance AI Literacy in Medical Education." November 18, 2025. https://www.ama-assn.org/press-center/ama-press-releases/ama-adopts-policy-advance-ai-literacy-medical-education
[3] Knopf A. "Bridging the AI Policy Gap in Medical Education: Assessing the Lack of Standardised Guidelines in US Medical Schools." The Clinical Teacher, 2026. https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/tct.70347
[4] Penn LDI. "AI Pushes Medical Schools Into New Era, but Are They Prepared?" https://ldi.upenn.edu/our-work/research-updates/ai-pushes-medical-schools-into-new-era-but-are-they-prepared/
[5] JMIR Medical Education. "Generative Artificial Intelligence in Medical Education—Policies and Training at US Osteopathic Medical Schools: Descriptive Cross-Sectional Survey." 2025. https://mededu.jmir.org/2025/1/e58766
[6] Stanford Medicine. "Paging Dr. Algorithm." September 22, 2025. https://stanmed.stanford.edu/ai-medical-school-curriculum/
[7] The Lancet Digital Health. "How Can AI Transform the Training of Medical Students and Physicians?" 2025. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(25
[8] AAMC. "Artificial Intelligence Competencies for Medical Educators." 2025. https://www.aamc.org/about-us/mission-areas/medical-education/advancing-ai-resource-collection/artificial-intelligence-competencies-medical-educators
[9] Frontiers in Education. "The Current Status and Future Prospects of AI Education in Residency Training." 2025. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1713676/full
[10] PMC. "AI in Medical Education: Promise, Pitfalls, Practical Pathways." https://pmc.ncbi.nlm.nih.gov/articles/PMC12176979/
Access the MeDucation Medical Oncology and Hematology Question Bank and begin building the systematic approach that leads to board certification success.