What this guide is and how to use it
This is the student-facing component of the SAGE framework. It does not tell students whether they may use AI — that is determined by your unit policy and the AI Assessment Scale level you have set. It tells them how to use AI in a way that develops genuine learning and maintains academic integrity.
The guide is designed to be handed to students at the start of a unit or before a SAGE-integrated assessment task. You can use it verbatim, adapt the step names for your discipline, or reframe the examples to match your context. The log template in Step 3 can be provided as a separate document or embedded in your assessment instructions.
The guide works alongside the educator-facing six-step SAGE cycle. Steps 1–5 here correspond to Steps 1–5 of the full SAGE cycle. The sixth step — Defend — is not included in this guide because it is a supervised assessment checkpoint, not a student-led process. The closing note in Step 5 prepares students for it without revealing the assurance design.
Adapting this guide: You are free to modify the step names, examples, and language for your discipline and student cohort under a CC BY 4.0 licence. The only requirement is that you retain the attribution line: Adapted from the SAGE Framework (Elkhodr & Gide, 2026). sage-framework.com
SAGE: A Student Guide to Working with Generative AI
Student-facing document

Generative AI can produce fluent, confident, and convincing output on almost any topic within seconds. That is precisely why using it well requires a structured process. SAGE guides you through five steps that ensure the thinking behind your work is yours — not the AI's. Each step builds on the last, creating a documented trail of your reasoning that demonstrates what you know.
Before you use generative AI, you need to bring something of your own to the conversation. AI is a tool for extending your thinking, not a substitute for starting it. Your educator will set up this step in one of two ways.
If your lecturer has provided a specific prompt: use it exactly as provided. The prompt has instructions, constraints, and context built in, and it has been designed to produce output that is useful but imperfect. Your job in the steps that follow is to identify what is missing, wrong, or incomplete.
If no prompt has been provided: before you open any AI tool, attempt to answer the task yourself. This does not need to be polished — bullet points, rough notes, a sketch, or a half-formed argument will do. The standard is not quality; it is that your own thinking exists before the AI is involved. Once you have your pre-attempt, build it into your prompt alongside the task description and any standards or readings your lecturer has provided.
If you feel you have no knowledge of the topic at all, write down what you understand the task to be asking, which terms or concepts you recognise, and where your understanding breaks down. A structured statement of what you do not yet know is still a pre-attempt — and it is far more useful than a blank prompt.
In an assessment context: you are expected to arrive with enough understanding of your discipline to construct a starting point. If you cannot produce anything at all before using AI, that is worth pausing on — it may be a signal about your preparation rather than a limitation of the tool.
The output you received in Step 1 will look competent. It may be well-structured, clearly written, and appear to answer the task. That does not mean it is accurate, complete, or appropriate for your context.
Your job is to compare the AI output against the anchors your lecturer has provided. An anchor is any authoritative reference point for the task — a marking rubric, a professional standard, a regulatory guideline, a research paper, a disciplinary framework, or an industry code of practice.
Go through the AI output point by point. For each claim, recommendation, or assertion the AI has made, ask: does this align with what the anchor says? Is anything oversimplified? Is anything missing? Is anything wrong?
If no anchor has been provided: identify the most authoritative standard or source relevant to this task and use that as your reference point. This might be a key reading from your unit, a professional body's guidelines, or the foundational framework in your discipline. If you are unsure what the right anchor is, finding out is part of the work — and it is a skill you will need in every professional context you enter after graduation.
Document every misalignment, gap, or error you find. This documentation feeds directly into Step 3.
This is where you take ownership of the output. Using what you found in Step 2, work through the AI-generated content and make deliberate decisions about every significant element. You may move between Evaluate and Refine more than once — this iteration is expected. The only step designed as a single bounded pass is Step 4.
For each decision, record it in your log using three categories:
- Accept: the AI output is accurate and appropriate as it stands, and you can explain why with reference to your anchor.
- Modify: the AI output is partially correct but needs adjustment. Record what you changed and why, citing the specific standard, evidence, or reasoning that informed your modification.
- Reject: the AI output is wrong, misleading, or inappropriate for the context. Record what you rejected and why, with reference to what the correct position is according to your anchor.
Record each decision using the following structure. Your educator may provide a customised version — if not, use this format as your default.
| AI Output (relevant excerpt) | Decision | Justification (~50–75 words) |
|---|---|---|
| Paste or summarise the relevant AI claim here | Accept / Modify / Reject | Explain your reasoning with reference to your anchor or professional judgement |
| Continue for each significant element | Accept / Modify / Reject | Each justification should cite a specific source, standard, or disciplinary principle |
This log is the core artefact of your work under SAGE. It is not a supplementary document. It is the primary evidence that you engaged critically with the AI output rather than accepting it passively. If you are ever asked to account for your work, this log is your answer.
You have now built a log of documented decisions. In this step, you feed that log back into the AI — but this time you change the AI's role.
Construct a new prompt that instructs the AI to act as a professional reviewer matched to the anchor you used in Step 2. The professional role and the anchor always match: if your anchor was a professional standard, for example, the reviewer is a practitioner who works to that standard; if it was a marking rubric, the reviewer is an assessor applying that rubric.
Give the AI your log from Step 3 and ask it to audit your decisions. Where did you accept something you should have questioned? Where did you reject something that had merit? Where is your justification weak or unsupported?
The AI will return a set of concerns about your decisions. Your job is to make a final adjudication on each one:
- Accept the critique: the AI has identified a genuine weakness in your reasoning. Record what you will change and why.
- Reject the critique: the AI's critique is itself flawed, generic, or does not apply to your specific context. Record why you are rejecting it with reference to your anchor or professional judgement.
This adjudication is the final act of this step. The Audit is a single bounded pass — you do not loop back and forth with the AI indefinitely. One critique, one adjudication, documented. You cannot construct a meaningful Audit prompt without a genuine log from Step 3. No log, no meaningful critique. The steps are designed to build on each other.
This step has two layers.
Review the log you have built across Steps 3 and 4. This record demonstrates what you decided, what you changed, what you rejected, and why. It is your evidence of authorship — not authorship of the AI output, but authorship of the thinking that shaped, corrected, and justified the final work.
The anchors your lecturer provided were specific to this task. The AI limitations you identified were specific to this context. But the habit of holding AI output against an authoritative standard is transferable to any context you will encounter. In your career, the anchor may be a regulatory framework, a professional code of conduct, an industry standard, or an organisational policy. The skill SAGE is building is not reliance on a fixed set of references — it is the judgement to identify the right anchor for any situation and to hold AI output accountable against it.
Take a moment to consider: what did you learn about AI's limitations in this task? What would you do differently next time? Where did your own disciplinary knowledge prove essential in ways the AI could not replicate?
A note on what may come next: In many professional and academic settings, you will be expected to walk others through your reasoning — not just present a finished product. Your educator may include a supervised session where you discuss your work, explain your decisions, and demonstrate your understanding. The log and reflections you have built across these steps are your preparation for that conversation. This is not a test of memory. It is an opportunity to show that the thinking is yours.