A framework for higher education

Structured AI-Guided Education

Your students are using ChatGPT. SAGE gives you a structured, evidence-based process to let them use AI responsibly — and prove they actually learned something.

The SAGE Cycle

AI-integrated tasks: 1. Generate · 2. Evaluate · 3. Refine · 4. AI Critic · 5. Reflect
Supervised verification: 6. Defend
Steps 1–5 build capability through structured AI-supported learning. Step 6 assures individual attainment through supervised verification.
Why SAGE

A framework, not a policy

Most universities have an AI policy that says whether students may use AI. That is not enough. SAGE tells students how to use it responsibly — step by step — and then checks they actually understood what they produced. It has been tested with over 800 students across multiple Australian campuses.

📋
Teaches students how, not just whether
An AI policy says "you may use AI at Level 3." That tells students nothing about how to use it well. SAGE gives them a repeatable process: check the AI's work against real sources, fix what is wrong, and show their reasoning. The thinking has to be theirs.
🔬
Reveals what students actually know
When a student cannot explain what the AI got wrong, cannot point to a source that contradicts the AI, or cannot defend their own changes — that tells you something important about their learning. SAGE makes these gaps visible, not hidden.
🛡️
Closes the verification gap
In a structured audit, only 12% of unsupervised submissions produced a genuinely traceable evidence trail — even with process logs and AI interaction records. That is why SAGE includes a Defend step: a short, supervised checkpoint where each student shows they can explain and justify their own work.
How it works

Six steps, one assessment

SAGE is not an add-on to your assessment. It is the assessment. Students work through six structured steps — five using AI openly, and one final checkpoint under supervision.

Step 1
Generate
Students use an AI tool to produce an initial response to the task. In early weeks you provide the prompts. Later, students write their own — but must first show they understand the problem before engaging the AI.
Step 2
Evaluate
Students compare the AI output against real sources — industry standards, peer-reviewed research, clinical guidelines, or whatever applies to your discipline. They identify what is correct, what is missing, and what is wrong.
Step 3
Refine
Students fix the AI output based on their evaluation. Every change is documented: what they accepted, what they modified, what they rejected — and why. The result is work the student can genuinely call their own.
Step 4
AI Critic
Students flip the relationship. They tell the AI to act as an expert reviewer and then judge whether the AI's critique is valid or whether the AI itself is wrong. This is where students develop real authority over the tool.
Step 5
Reflect
Students write a structured reflection: what the AI got right, where it failed, what domain errors they found, and what this tells them about when AI can and cannot be trusted in their discipline.
Step 6
Defend
At one controlled moment — a short viva, a live demo, a timed exercise — each student proves under supervision that they can explain and justify their work. This closes the gap between "submitted good work" and "actually understood it."
Get started

Six steps, one framework

SAGE alongside other frameworks

Your university probably already has an AI permission framework — something that sets how much AI students may use in each assessment. SAGE sits alongside it. The permission framework sets the boundary; SAGE provides the structured process students follow within that boundary. Together, they give you a complete response to generative AI in assessment.

The AI Assessment Scale (AIAS) is one widely adopted permission framework. Institutions using AIAS alongside SAGE have both layers covered: policy and pedagogy.

Read the implementation guide
Dimension      | Permission layer     | SAGE
Answers        | How much AI?         | How to use AI?
Unit           | Permission level     | Process cycle
Student action | Follow the rule      | Apply the process
Assurance      | Not addressed        | Defend step
Validation     | Varies by framework  | 800+ students, 8 studies
Publications

The evidence base

SAGE is grounded in eight empirical studies spanning cybersecurity education, systems analysis and design, data analytics, and first-year ICT — delivered across multiple Australian campuses with more than 800 students.

800+ — students across validation studies
8 — empirical studies
73% — independently verified AI claims against authoritative sources
81% — engaged in deep revision (rewriting or iterative refinement)

Core publications

  1. M. Elkhodr, E. Gide, R. Wu, and O. Darwish, “ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis,” STEM Education, vol. 3, no. 2, pp. 70–88, 2023. DOI
  2. R. Sandu, E. Gide, and M. Elkhodr, “The role and impact of ChatGPT in educational practices: Insights from an Australian higher education case study,” Discover Education, vol. 3, no. 1, art. 71, 2024. DOI
  3. M. Elkhodr and E. Gide, “The SAGE framework for developing critical thinking and responsible generative AI use in cybersecurity education,” Discover Education, vol. 4, art. 225, Nov. 2025. DOI
  4. M. Elkhodr and E. Gide, “AI Leads, Humans Lead, or Collaborate? Empirical Findings and the SAGE Roadmap for Embedding GenAI in the Systems Analysis and Design Education,” STEM Education, accepted Feb. 2026. DOI
  5. M. Elkhodr and E. Gide, “AI as Critic: Validating SAGE Pedagogy for Human Authority and Responsible GenAI Use in Systems Analysis and Design Education,” EdArXiv Preprints, 2025. Preprint
  6. M. Elkhodr, A. Azra, and E. Gide, “How First-Year Students Actually Use ChatGPT in Permitted Assessments: Empirical Typologies, Verification Gaps, and the Policy-Practice Divide,” submitted to Discover Education, 2026. Preprint
  7. M. Elkhodr and E. Gide, “The Death of Take-Home Assessment in the Era of GenAI: Here Is the Evidence,” submitted to STEM Education, 2026.
  8. M. Elkhodr and E. Gide, “Assurance by Design: Embedding the SAGE Defend Step in AI-Integrated Higher Education Assessment,” Journal of University Teaching and Learning Practice, 2026.

Join the SAGE network

A national network of educators, researchers, and learning designers launching in 2026. Members access regular seminars, collaborate on evidence-based AI pedagogy, and have the opportunity to publish in the SAGE 2026 International Symposium proceedings in STEM Education.

Register your interest · SAGE 2026 Symposium
About

The developers

SAGE was developed through sustained empirical research at Central Queensland University, with validation studies spanning cybersecurity management, systems analysis and design, data analytics, and first-year ICT units.

Dr Mahmoud Elkhodr
Senior Lecturer in ICT, School of Engineering and Technology, Central Queensland University. His research spans cybersecurity, AI in education, and the scholarship of learning and teaching (h-index 20+, 2,200+ citations). He holds leadership roles in the IEEE NSW Section.
m.elkhodr@cqu.edu.au
Professor Ergun Gide
Professor of Information Systems, School of Engineering and Technology, Central Queensland University. Research and leadership contributions span educational innovation, quality assurance in higher education, and applied AI in learning and teaching contexts.
e.gide1@cqu.edu.au