
1. What Is SAGE

The Structured AI-Guided Education (SAGE) framework is a validated pedagogy for developing students' capacity to orchestrate generative AI outputs rather than passively accept them. It treats GenAI as an assistive tool whose outputs must be accepted, modified, or rejected with explicit, evidence-based justification — not as an authority to mimic nor as a tool to ban.

SAGE was developed through three years of sustained empirical research involving over 800 students across eight studies spanning cybersecurity management, data analytics, professional communication, and systems analysis and design, delivered across multiple Australian campuses. The framework has been referenced by researchers at institutions including the University of Warwick and the National University of Singapore.

Version 2 of this guide introduces a sixth step — Defend — which places assurance of individual competence within the cycle itself rather than relying on an additional supervised assessment. The complete evidence base, discipline-specific examples, implementation templates, and detailed rubrics are available in Version 1 of this guide.

2. The SAGE Cycle

At the core of SAGE is a six-step cycle that moves students from passive AI consumption to critical AI orchestration, culminating in supervised demonstration of the competency developed through the preceding five steps. The cycle is designed to produce graduates who can deploy generative AI as a professional tool within the constraints of their discipline. Students learn to evaluate AI outputs against industry standards, regulatory requirements, and domain-specific evidence rather than accepting them uncritically. This is the competency employers now expect: not the ability to prompt an AI tool, but the ability to judge, correct, and take professional responsibility for what AI produces.

Step 1 Generate: Use GenAI to produce an initial output using structured or autonomous prompts.
Step 2 Evaluate: Compare the output against domain standards, regulations, and authoritative sources.
Step 3 Refine: Modify the output with evidence-based reasoning. Document what was changed and why.
Step 4 AI Critic: Assign AI a domain-specific persona to critically re-evaluate your refined output.
Step 5 Reflect: Conduct metacognitive analysis of the AI's strengths, limitations, and failure patterns.
Step 6 Defend: Demonstrate competency under supervised conditions. The student proves they own the reasoning, not just the document.

The cycle is sequenced to move students progressively through Bloom's revised taxonomy. Steps 1–3 operate at the Apply and Analyse levels: students generate output, compare it against domain standards, and modify it with evidence. Step 4 inverts the AI relationship: the student assigns the AI a critical role and must evaluate the evaluator, operating at Bloom's Evaluate level. Step 5 requires metacognitive synthesis, the highest cognitive operation in the taxonomy, in which students articulate patterns of AI strength and failure across the task. Step 6 closes the cycle by requiring the student to demonstrate, under supervised conditions, that the competency developed through the preceding steps has been individually internalised rather than merely documented.

Design Principle
Steps 1–5 develop the competency through open, AI-integrated tasks. Step 6 verifies it under supervised conditions where the process cannot be simulated. The first five steps serve both learning and formative assessment purposes. Step 6 carries the institutional assurance function.

3. Two-Stage Progression

SAGE operates through two stages that move students from guided practice to independent application with supervised assurance. This progression mirrors the cognitive trajectory defined by Bloom's revised taxonomy: Stage 1 scaffolds the lower-order operations of Apply and Analyse under instructor guidance, while Stage 2 requires students to operate independently at the Evaluate and Create levels before demonstrating competency under supervised conditions. The two-stage design ensures that both open and assurance tasks are addressed within a single integrated teaching sequence rather than treated as separate assessment instruments.

Stage 1 — Formative
Tutorial-Based Scaffolded Learning

Students practise the SAGE cycle under instructor guidance with structured prompts, worked examples, and immediate feedback. Scaffolding is heavy. The goal is to build the evaluation, refinement, and reflection skills that students will apply independently in Stage 2.

  • Generate with standardised prompts provided by instructor
  • Evaluate against supplied checklists, standards, peer-reviewed research, and readings
  • Refine with peer review and tutor feedback
  • AI Critic under guided conditions
  • Reflect in short metacognitive entries (150–250 words)
Stage 2 — Summative
Assessment-Driven Independent Application

Students apply the SAGE cycle autonomously in summative tasks with reduced scaffolding. Process logging remains embedded in the open task for formative purposes. Assurance of individual attainment is placed in the Defend step under supervised conditions.

  • Generate with self-designed prompts; human baseline required first
  • Evaluate against standards, stakeholder input, and research
  • Refine with documented accept/modify/reject decisions
  • AI Critic with domain-specific persona assignment
  • Reflect in evidence-based justification
  • Defend under supervised conditions (format varies by discipline)
Theoretical foundations: Constructivist Learning, Bloom's Taxonomy, Situated Learning, Zone of Proximal Development

4. What Students Do in Each Step

Step 1 Generate

Students use a generative AI tool to produce an initial output for the task at hand. In Stage 1, standardised prompts are provided to ensure comparable outputs across the cohort. In Stage 2, students design their own prompts but must first produce a human baseline (outline, notes, or preliminary analysis) before engaging the AI tool. This baseline demonstrates independent understanding of the problem and prevents complete delegation.
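
The human-baseline requirement can be made concrete with a small record structure. The sketch below is a minimal, hypothetical illustration (the field names are invented for this example and are not part of the SAGE templates) of how a Stage 2 student might record Step 1 so that the baseline demonstrably precedes AI generation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GenerateRecord:
    task: str
    human_baseline: str          # outline, notes, or preliminary analysis written first
    prompt: str                  # the self-designed prompt submitted to the AI tool
    ai_output: str               # the raw AI response, kept verbatim for later audit
    baseline_completed_at: datetime
    generated_at: datetime

    def baseline_precedes_generation(self) -> bool:
        """The baseline must exist before the AI tool is engaged."""
        return self.baseline_completed_at < self.generated_at
```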

Step 2 Evaluate

Students compare the AI output against authoritative sources relevant to their discipline: industry standards, regulatory frameworks, peer-reviewed research, clinical guidelines, or case-study constraints. They identify what is present, what is missing, and what is incorrect. In Stage 1, instructors provide checklists and evaluation templates. In Stage 2, students identify evaluation criteria independently.

Step 3 Refine

Students modify the AI output on the basis of their evaluation. Each modification is documented as an accept, modify, or reject decision with a brief evidence-based justification citing specific standards, sources, or contextual constraints. The refinement produces a professional-standard artefact that the student can defend as their own intellectual product.
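
The accept/modify/reject documentation can be thought of as a simple structured log. The following is a minimal sketch, with hypothetical field names rather than the official SAGE decision-table format, of how each Step 3 decision and its evidence-based justification might be captured.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    MODIFY = "modify"
    REJECT = "reject"

@dataclass
class RefinementEntry:
    ai_claim: str          # the specific AI-generated statement being assessed
    decision: Decision
    evidence: str          # the standard, source, or contextual constraint cited
    justification: str     # brief evidence-based reasoning for the decision

# One illustrative entry; the content is placeholder text, not a model answer.
log = [
    RefinementEntry(
        ai_claim="<statement taken verbatim from the AI output>",
        decision=Decision.MODIFY,
        evidence="<relevant clause of an industry standard or a cited source>",
        justification="<why the statement was changed, with reference to the evidence>",
    ),
]
```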

Step 4 AI Critic

Students assign the AI tool a domain-specific persona (e.g., "You are a senior cybersecurity auditor reviewing this policy for regulatory compliance") and submit their refined output for critical evaluation. They then analyse the AI's feedback, determining which critiques are valid, which reflect the AI's own limitations, and which require human judgement to resolve. This step develops metacognitive sophistication and strengthens the student's authority over the AI.
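
As a rough illustration of how such a persona prompt might be assembled, the sketch below builds the critic instruction from a persona, a list of named standards, and the refined output. The wording and the helper function are illustrative assumptions, not a prescribed SAGE prompt.

```python
def build_critic_prompt(persona: str, standards: list[str], refined_output: str) -> str:
    """Assemble a Step 4 prompt that asks the AI to critique the refined output
    against named standards, so its feedback can itself be evaluated by the student."""
    return (
        f"You are {persona}. Critically review the document below against "
        f"{', '.join(standards)}. List concrete deficiencies, rate their severity, "
        "and state the evidence for each critique.\n\n"
        f"--- DOCUMENT ---\n{refined_output}"
    )

prompt = build_critic_prompt(
    persona="a senior cybersecurity auditor reviewing this policy for regulatory compliance",
    standards=["ISO/IEC 27001:2022"],
    refined_output="<student's refined policy text>",
)
```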

Step 5 Reflect

Students produce a metacognitive analysis documenting the AI's strengths and weaknesses as observed during the task, the specific domain errors identified, the corrective reasoning applied, and the broader implications for AI reliability in their discipline. Functional reflections — those that critique domain content directly — are distinguished from procedural reflections that merely describe workflow benefits.

Step 6 Defend

Students demonstrate, under supervised conditions, that they can reproduce or explain the reasoning documented in their open assessment without reliance on the artefacts that produced it. The format is determined by disciplinary context (see Section 5). The defining criterion is that the student shows they own the competency — they can identify risks, explain trade-offs, justify decisions, and respond to challenge questions — not merely that they submitted a document containing these elements. This step carries the institutional assurance function: it is the evidence that the intended learning outcomes have been individually achieved.

5. Defend — Discipline Implementation Examples

The Defend step is format-agnostic; the principle remains constant: the student demonstrates competency under conditions where the process cannot be simulated. The following examples illustrate how Defend may be implemented across representative disciplines.

Cybersecurity
Defend format: Timed risk-scoring exercise or incident response viva
What the student demonstrates: Classifies and prioritises threats for an unseen scenario using the same frameworks applied in the open task. Explains why specific controls were selected over alternatives. Responds to challenge questions on regulatory alignment.

Programming
Defend format: Supervised code walkthrough or live debugging session
What the student demonstrates: Explains design decisions in submitted code. Debugs a seeded error in a related module under observation. Demonstrates understanding of logic, structure, and trade-offs rather than surface familiarity with the output.

Health Sciences
Defend format: Structured clinical reasoning viva or case interpretation
What the student demonstrates: Applies clinical decision-making to a new patient scenario. Justifies care plan modifications against clinical guidelines. Identifies where AI-generated recommendations require human override based on patient-specific factors.

Business
Defend format: Oral defence of strategic recommendation with examiner challenge
What the student demonstrates: Presents and defends a strategic position. Responds to examiner objections with evidence-based reasoning. Demonstrates understanding of stakeholder constraints, market assumptions, and ethical implications.

Education
Defend format: Teaching demonstration with reflective justification
What the student demonstrates: Delivers a short teaching segment based on a lesson plan developed with AI assistance. Explains pedagogical choices, adaptation for specific learner needs, and why certain AI-suggested approaches were modified or rejected.

Engineering
Defend format: Design defence with constraint interrogation
What the student demonstrates: Defends design decisions under examiner questioning. Explains trade-offs between competing requirements. Demonstrates that safety, sustainability, and regulatory considerations were understood, not merely included in the document.

Law
Defend format: Moot argument or case analysis viva
What the student demonstrates: Argues a legal position with reference to relevant authorities. Responds to judicial challenge on the strength of the analysis. Distinguishes AI-generated legal summaries from independently reasoned legal arguments.
Implementation Note
The Defend step does not require a full examination. A 10–15 minute structured oral per student, a timed in-class exercise, or a supervised practical demonstration is sufficient provided the task requires the student to reproduce reasoning rather than recall content. The objective is verification of competency, not endurance testing.

6. Why Defend

The inclusion of a supervised assurance step is not a theoretical preference. It is an empirically grounded response to a documented structural limitation of unsupervised assessment under GenAI-rich conditions.

In a structured audit of 25 group submissions across two cybersecurity management cohorts, assessments designed using the full SAGE protocol (base prompts, structured decision tables, mandatory AI interaction logs, and reflective commentary) were examined for process fidelity using a five-check protocol. Full traceability between documented AI outputs and human evaluation claims was not achieved in the majority of submissions. Only 3 of 25 submissions (12%) produced evidence chains that were substantially auditable. The remaining submissions exhibited logical inconsistencies between reported AI positions and appended outputs, compliance-pattern text in evaluation cells, and structural indicators consistent with audit trail simulation (Elkhodr & Gide, 2026, STEM Education).
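
The five-check protocol itself is specified in the cited study. Purely as a hypothetical illustration of what "traceability" means in practice, the sketch below checks that every evaluation claim in a decision table points to an AI output that actually appears in the appended interaction log; all names are invented for this example and do not reproduce the published protocol.

```python
from dataclasses import dataclass

@dataclass
class EvaluationClaim:
    claim_id: str
    cited_output_id: str   # which logged AI output the claim refers to
    justification: str

def untraceable_claims(claims: list[EvaluationClaim], logged_output_ids: set[str]) -> list[str]:
    """Return the IDs of claims whose cited AI output is missing from the interaction log."""
    return [c.claim_id for c in claims if c.cited_output_id not in logged_output_ids]
```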

The University of Sydney's two-lane model distinguishes between open tasks for learning and secure tasks for assurance. SAGE with Defend operationalises both functions within a single, integrated pedagogical cycle: Steps 1–5 constitute the open lane, and Step 6 constitutes the secure lane.

References

Core SAGE Framework Publications

  1. M. Elkhodr, E. Gide, R. Wu, and O. Darwish, "ICT students' perceptions towards ChatGPT: An experimental reflective lab analysis," STEM Education, vol. 3, no. 2, pp. 70–88, 2023. [Online]. Available: https://doi.org/10.3934/steme.2023006
  2. R. Sandu, E. Gide, and M. Elkhodr, "The role and impact of ChatGPT in educational practices: Insights from an Australian higher education case study," Discover Education, vol. 3, no. 1, art. 71, 2024. [Online]. Available: https://doi.org/10.1007/s44217-024-00126-6
  3. M. Elkhodr and E. Gide, "The SAGE framework for developing critical thinking and responsible generative AI use in cybersecurity education," Discover Education, vol. 4, art. 225, Nov. 2025. doi: 10.1007/s44217-025-00935-3
  4. M. Elkhodr and E. Gide, "AI Leads, Humans Lead, or Collaborate? Empirical Findings and the SAGE Roadmap for Embedding GenAI in the Systems Analysis and Design Education," STEM Education, accepted Feb. 2026. Preprint: arXiv:2511.17515
  5. M. Elkhodr and E. Gide, "AI as Critic: Validating SAGE Pedagogy for Human Authority and Responsible GenAI Use in Systems Analysis and Design Education," EdArXiv Preprints, 2025. [Online]. Available: https://osf.io/preprints/edarxiv/8j3xf
  6. M. Elkhodr, A. Azra, and E. Gide, "How First-Year Students Actually Use ChatGPT in Permitted Assessments: Empirical Typologies, Verification Gaps, and the Policy-Practice Divide," submitted to Discover Education, 2026. Preprint: https://doi.org/10.21203/rs.3.rs-8628653/v1
  7. M. Elkhodr and E. Gide, "The Death of Take-Home Assessment in the Era of GenAI: Here Is the Evidence," submitted to STEM Education, 2026.

Extended Research Portfolio: GenAI in Education

  1. M. Elkhodr and E. Gide, Eds., Generative Artificial Intelligence Empowered Learning: A New Frontier in Educational Technology, 1st ed. New York, NY: Taylor and Francis, 2025, 248 pp. doi: 10.1201/9781003422433. eBook ISBN: 9781003422433
  2. R. J. Eddine, E. Gide, and A. Al-Sabbagh, "Generative AI in higher education: A cross-sector analysis of ChatGPT's impact on STEM, social sciences, and healthcare," STEM Education, vol. 5, no. 5, pp. 757–801, 2025. doi: 10.3934/steme.2025035
  3. K. Wangsa, S. Karim, E. Gide, and M. Elkhodr, "A systematic review and comprehensive analysis of pioneering AI chatbot models from education to healthcare: ChatGPT, Bard, Llama, Ernie and Grok," Future Internet, vol. 16, no. 7, art. 219, 2024. doi: 10.3390/fi16070219
  4. K. Wangsa, R. Sandu, S. Karim, M. Elkhodr, and E. Gide, "A systematic review and analysis on the potentials and challenges of GenAI chatbots in higher education," in Proc. 21st Int. Conf. Information Technology Based Higher Education and Training (ITHET), Paris, France, Nov. 2024, pp. 1–7. doi: 10.1109/ITHET61869.2024.10837608
  5. A. Al Tawara, J. El-Den, E. Gide, and Y. Sebastian, "A systematic review and comprehensive analysis of AI-enabled re-skilling and upskilling in education: Transformative strategies for the future," in Proc. 21st Int. Conf. Information Technology Based Higher Education and Training (ITHET), Paris, France, Nov. 2024, pp. 1–10. doi: 10.1109/ITHET61869.2024.10837638
  6. A. Al Tawara, E. Gide, and J. El-Den, "Systematic review and comprehensive analysis of integrating human-centered AI in higher education: Enhancing teaching, learning, and ethics," in Proc. 12th Int. Conf. Future Internet of Things and Cloud (FiCloud), Aug. 2025, pp. 305–312. doi: 10.1109/FiCloud61071.2025.00051
  7. H. Ranasinghe, E. Gide, and M. Elkhodr, "The significance of GenAI empowered ERP systems course teaching in quality education," in Proc. 21st Int. Conf. Information Technology Based Higher Education and Training (ITHET), Paris, France, Nov. 2024, pp. 1–7. doi: 10.1109/ITHET61869.2024.10837679
  8. G. Chaudhry, E. Gide, E. Yadegaridehkordi, and R. Tumpa, "Generative AI-powered teaching and learning in engineering and project management higher education: A systematic review," in Joint International Conference on AI, Big Data and Blockchain, Cham, Switzerland: Springer Nature, Aug. 2025, pp. 99–113.
  9. E. Gusman, E. Gide, M. Elkhodr, and G. Chaudhry, "The benefits and challenges of using artificial intelligence in teaching English as a foreign language in higher education," in Proc. 21st Int. Conf. Information Technology Based Higher Education and Training (ITHET), Paris, France, Nov. 2024, pp. 1–7. doi: 10.1109/ITHET61869.2024.10837597
  10. E. Gusman, E. Gide, G. Chaudhry, and M. Elkhodr, "A comprehensive review to identify the challenges and opportunities of using digital technology in English teaching in higher education," in International Society for Technology, Education, and Science, 2023.
  11. R. Sandu, E. Gide, S. Karim, and P. Singh, "A framework for GenAI-empowered curriculum and learning resources: A case study from an Australian higher education," in Proc. 21st Int. Conf. Information Technology Based Higher Education and Training (ITHET), Paris, France, Nov. 2024, pp. 1–8. doi: 10.1109/ITHET61869.2024.10837623
  12. N. Sandu and E. Gide, "Adoption of AI-Chatbots to enhance student learning experience in higher education in India," in Proc. 18th Int. Conf. Information Technology Based Higher Education and Training (ITHET), Sept. 2019, pp. 1–5. doi: 10.1109/ITHET46829.2019.8937382
  13. M. Elkhodr, K. Wangsa, E. Gide, and S. Karim, "A systematic review and multifaceted analysis of the integration of artificial intelligence and blockchain: Shaping the future of Australian higher education," Future Internet, vol. 16, no. 10, art. 378, 2024. doi: 10.3390/fi16100378
  14. M. Elkhodr, E. Gide, and N. Pandey, "Enhancing mental health support for international students: A digital framework for holistic well-being in higher education," STEM Education, vol. 4, no. 4, pp. 466–488, 2024. doi: 10.3934/steme.2024025
  15. N. Abbasi, E. Gide, P. Kalutara, and P. Lawrence, "Barriers and opportunities in online delivery of architecture and building design studios: Australian educators' perspectives," 2024.

Foundational Frameworks and Standards Referenced

  1. L. W. Anderson and D. R. Krathwohl, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York, NY: Longman, 2001.
  2. The University of Sydney, "Academic integrity," 2025. Available: sydney.edu.au
  3. International Organization for Standardization, "ISO/IEC 27001:2022 Information security management systems — Requirements," Oct. 2022. Available: iso.org/standard/27001