Week after week, a quiet but consistent admission emerges from trainers, assessors, and compliance managers across various training organisations:
“Our students are already using AI… but our assessments pretend it doesn’t exist.”
In almost every modern workplace, AI is just another tool on the bench.
Workers are using it to draft emails, summarise reports, check calculations, structure presentations, generate ideas, and interrogate large volumes of information.
If our assessments ignore that reality, we are not assessing workplace performance. We are assessing how well a learner can operate in a world that no longer exists.
The challenge for Australian RTOs is not “Should students be allowed to use AI?” The real question is:
How do we design assessments that remain compliant, fair and valid under the Standards, and reflect the way work is actually done with AI on the desk?
This article explores a practical way to do that.
Important note:
The fundamental mandate of every Australian Registered Training Organisation (RTO) remains unchanged: to certify that a learner possesses the specific skills and knowledge defined in the Training Package. Under the Standards, this obligation is non-negotiable. We are bound by the Principles of Assessment and the Rules of Evidence to ensure that every judgement of competence is valid, reliable, and, above all, authentic. Irrespective of the tools available in a modern workplace, the requirement for individual assessment is absolute; the learner must demonstrate that the work submitted is genuinely their own and that the skills displayed belong to them personally. There are no shortcuts to competence. However, as the tools used by industry evolve, the challenge for RTOs is to uphold these strict standards of individual performance while navigating a reality where Artificial Intelligence is increasingly present on the student’s desk.
Established Facts:
Research consistently shows AI detectors produce false positives, particularly against non-native English speakers (a huge demographic in VET). Relying on them is a violation of the Fairness principle.
You cannot prove AI use with software; you can only prove competence with questioning. If you suspect AI, don't run a scan; run a viva voce (oral questioning).
Students paying $30/month for GPT-4 (or similar advanced models) have a significant advantage over those using free versions. If an assessment requires complex reasoning that a free model can't do, but a paid model can, you are assessing financial capacity, not competency.
Learners often paste workplace data, client names, or proprietary company info into public AI tools to generate reports. This is a privacy breach. Do you have a "Red Zone / Green Zone" data classification guide?
Students often don't know how to admit they used AI. Without a standard format, they hide it.
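One practical response to those last two points is to give learners a standard way to declare AI use as part of every submission. The sketch below is purely illustrative, not anything mandated by the Standards: the field names, the “Red Zone” confirmation and the wording of the declaration are assumptions an RTO would adapt to its own policies and LMS.

```python
# Illustrative sketch only: one possible "standard format" for declaring AI use.
# Field names and the zone confirmation are assumptions, not regulatory terms.
from dataclasses import dataclass
from typing import List

@dataclass
class AIUseDeclaration:
    tool_name: str            # e.g. "ChatGPT", "Copilot", or "none"
    prompts_used: List[str]   # the prompt(s) the learner entered
    ai_output_attached: bool  # raw AI output supplied as evidence
    changes_made: str         # learner's summary of their edits and why
    no_red_zone_data: bool    # learner confirms no client or company data was entered
    own_work_statement: str = (
        "The judgements, edits and final wording in this submission are my own."
    )

    def is_complete(self) -> bool:
        """Basic check an assessor or LMS could run before accepting a submission."""
        if self.tool_name.lower() == "none":
            return True
        return bool(self.prompts_used) and self.ai_output_attached and bool(self.changes_made)

# Example: a learner openly declaring limited AI use on a report task.
declaration = AIUseDeclaration(
    tool_name="ChatGPT",
    prompts_used=["Draft a one-page incident summary for a warehouse near miss."],
    ai_output_attached=True,
    changes_made="Rewrote the summary against our incident procedure and removed generic content.",
    no_red_zone_data=True,
)
print(declaration.is_complete())  # True
```

A format like this makes honest disclosure the path of least resistance: the learner is told exactly what to hand over, and the assessor receives evidence rather than suspicion.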
Stop asking, “Is AI allowed?” and start asking, “What must remain human?”
The most useful shift I have seen in RTO conversations is this one simple reframing.
Instead of:
“Do we let students use ChatGPT / Grok / Claude / Copilot / Gemini in this unit?”
Ask:
“For this unit of competency, which parts of the performance must clearly be the learner’s own judgement, skill and decision-making – even if they use AI in support?”
Once you identify the “non-negotiably human” components, you can deliberately design:
- Where AI may be used as a tool
- Where AI must not be used
- Where AI use is allowed but must be critiqued, checked or supplemented by the learner.
For example:
- A business services learner might use AI to generate a first draft of a report, but the assessment focuses on how they refine, reorganise and correct that draft to meet workplace requirements.
- A community services learner might use AI to summarise a policy, but is assessed on how they identify what is missing, what is inaccurate, and how they would adapt it for a specific client scenario.
- A trades learner might use AI to generate ideas for a job sequence, but is assessed on selecting the safe, feasible option and justifying why.
We stop pretending that AI is not being used, and instead assess the learner’s capacity to work with, question and improve AI outputs.
In some cases, the “non‑negotiably human” element will be technical judgement; in others, it will be ethics, boundaries or communication. A learner in health administration might use AI to draft a client email, but the evidence is their ability to adjust tone, remove inappropriate content and ensure privacy and consent obligations are met. An ICT learner might use AI to propose code snippets and documentation, but the assessment centres on debugging, refactoring and explaining the logic to a non‑technical stakeholder. The same pattern holds across sectors: AI can help generate options, but the professional competence we certify must remain clearly visible in the learner’s own decisions, adaptations and explanations.
What the Legislation Says
The Outcome Standards are very clear about what assessment systems must do. Standard 1.4 states:
“The assessment system ensures assessment is conducted in a way that is fair and appropriate and enables accurate assessment judgment of VET student competency.” (Standard 1.4 – Outcome Standard 1).
It goes on to explain that robust assessment systems are:
“critical to upholding and defending the integrity of an NVR RTO’s assessment decisions, thereby facilitating the integrity of VET.”
The Explanatory material reinforces that assessments must be “evidence‑based, fair, and adequately correspond to the requirements of the training product.” In an AI‑enabled environment, “corresponding to the requirements of the training product” includes matching current industry practice, which in many sectors now assumes access to AI tools.
Put simply:
- The whole assessment system must be fair, appropriate and accurate so assessors can confidently decide if a learner is competent.
- Assessments that pretend AI does not exist are no longer an “accurate” picture of competence in many workplaces.
- To stay fair and accurate, assessment design must reflect an AI‑enabled reality, not a pre‑AI world.
A documented, AI‑aware design – clearly stating what must remain human, where AI is permitted, and how its use is checked – is exactly what a “robust” assessment system looks like under the 2025 Standards.
This is also where validation under Outcome Standard 1.5 becomes important. The Standards require that assessment systems are “quality assured by appropriately skilled and credentialled persons through a regular process of validating assessment practices and judgements”, with every training product validated at least once every five years and more often where risks emerge. Unmanaged AI use is precisely the kind of risk that should trigger more frequent, risk‑based validation: sampling AI‑assisted submissions, checking whether authenticity controls are working, and adjusting tools and instructions where gaps appear.
Principles of Assessment in an AI‑enabled world
Under Standard 1.4, RTOs must show that “the assessment system facilitates assessment which must be conducted in accordance with the following principles”:
- “fairness – assessment accommodates the needs of the VET student, including implementing reasonable adjustments where appropriate and enabling reassessment where necessary”. Under Standard 2.2, RTOs are already required to review each learner’s language, literacy, numeracy and digital literacy before enrolment and to advise whether the course is suitable. That same information can guide how AI is positioned in assessment: cohorts with low digital literacy may need staged support, scaffolded tasks or non‑AI alternatives, while cohorts working in highly digitised industries may reasonably be expected to engage with AI tools as part of competent performance.
- “flexibility – assessment is appropriate to the context, training product and VET student, and assesses the VET student’s skills and knowledge that are relevant to the training product, regardless of how or where the VET student has acquired those skills or that knowledge”.
- “validity – assessment includes practical application components that enable the VET student to demonstrate the relevant skills and knowledge in a practical setting”.
- “reliability – assessment evidence is interpreted consistently by assessors and the outcomes of assessment are comparable irrespective of which assessor is conducting the assessment.”
Assessors must make judgements that are “justified based on the following rules of evidence”:
- “validity – assessment evidence is adequate, such that the assessor can be reasonably assured that the VET student possesses the skills and knowledge described in the training product”.
- “sufficiency – the quality, quantity and relevance of the assessment evidence enables the assessor to make an informed judgement of the VET student’s competency in the skills and knowledge described in the training product”.
- “authenticity – the assessor is assured that a VET student’s assessment evidence is the original and genuine work of that VET student”.
- “currency – the assessment evidence presented to the assessor documents and demonstrates the VET student’s current skills and knowledge.”
In simple terms:
Your whole assessment system must be fair, appropriate and accurate so assessors can confidently decide if a student is competent. Assessments that pretend AI doesn’t exist are no longer “accurate” pictures of competence. To stay fair and accurate, assessments must be designed for an AI‑enabled reality, not a pre‑AI world.
Strong assessment systems are critical to protecting the integrity of an RTO’s decisions and, by extension, the integrity of VET as a whole. The real integrity risk is not “allowing AI” but having weak, outdated systems that ignore actual student behaviour. Documented, AI‑aware design (what must stay human, where AI is allowed, how it is checked) is exactly what a “robust” system looks like.
Outcome Standard 1.5: “The assessment system is quality assured by appropriately skilled and credentialled persons through a regular process of validating assessment practices and judgements.”
Every training product “is validated at least once every five years and on a more frequent basis where the organisation becomes aware of risks to training outcomes, any changes to the training product or receives relevant feedback from VET students, trainers, assessors, and industry.” (Standard 1.5, Performance Indicator 2(b)).
RTOs must “utilise a risk‑based approach” to decide what to validate and sample sizes, with risks including “any instances where the NVR RTO identifies that training outcomes may not be meeting the requirements of the Instrument, the training product or expectations of VET students / relevant stakeholders.”
In simple terms:
Qualified people must regularly check (validate) that assessment tools and judgements are working as intended.
Each training product must be formally validated at least once every five years.
If there are risks (e.g. changed units, odd outcomes, or new technologies like AI), validation should happen more often and with larger/more targeted samples.
Widespread unsupervised AI use is a clear risk to training outcomes, so under the Standards, it should trigger more frequent, risk‑based validation.
Having “no AI position” is a compliance risk; the law expects you to identify and respond to risks like this.
RTOs must, prior to enrolment, review “the skills and competencies of prospective VET students, including their language, literacy and numeracy proficiency and digital literacy.” (Standard 2.2, Performance Indicator 2(a)).
Based on that review, the RTO must “provide advice to each prospective VET student about whether the training product is suitable for them.” (Standard 2.2, 2(b)).
In simple terms:
Before enrolling, RTOs must check LLN and digital literacy, then advise whether the course suits the learner.
Don’t design AI‑heavy assessments without checking that your cohort’s digital literacy can handle it.
If AI use is central to assessment, that must feature in pre‑enrolment advice (for example, “you will need to use digital tools, including AI‑based systems”).
Outcome Standard 2.4: “Reasonable adjustments are made to support VET students with disability to access and participate in training and assessment on an equal basis.”
RTOs must show that:
- “VET students are supported to disclose their disability, if the VET student wishes to do so”.
- “Reasonable adjustments are made for VET students with disability where appropriate”.
RTOs must ensure “all information provided to VET students … is clear, accurate and current”, and that key information (including assessment requirements, modes of delivery, and any equipment or IT students must acquire) is “easily accessible”.
Prior to enrolment or fees, RTOs must give documentation that sets out “any obligations or liabilities which may be imposed by the organisation or third parties on the VET student”.
In simple terms:
In some contexts, using AI as a reading/writing or cognitive support may be a reasonable adjustment, provided the core competency (“what must remain human”) is still demonstrated.
Banning AI outright can conflict with this requirement if AI replicates assistive tech used in workplaces.
Standard 1.8 requires that “Facilities, resources and equipment for each training product are fit‑for‑purpose, safe, accessible and sufficient.” (Outcome Standard 1.8).
RTOs must show how they identify these and ensure “VET students have access to the facilities, resources and equipment they need to participate in the training and assessment relevant to the training product.” (1.8(b)(ii))
Designing AI-aware assessments that still meet the Principles of Assessment
Any assessment that incorporates AI must still comply with the familiar four Principles of Assessment: fairness, flexibility, validity and reliability.
Fairness: AI as a support, not a barrier
Fairness does not mean “everyone must use AI”. It means students are not disadvantaged because of their background, access or disability.
In practice, that means:
- Providing AI as an option where it mirrors workplace practice, not forcing it on learners who lack access or confidence.
- Making expectations explicit: if learners can use AI, they must be told how, when and under what conditions.
- Ensuring learners with disability are not penalised for using AI in ways that are analogous to assistive technology or workplace supports.
The key fairness test is simple: Would a reasonable person say this assessment setup gives different kinds of learners an equal chance to demonstrate competence, regardless of whether they are already “tech savvy”?
Flexibility: more than one way to demonstrate competence
AI-aware assessment design can actually increase flexibility when done well.
For example, you might:
- Allow a learner to use AI to generate alternative formats (dot points, draft emails, summaries) and then select the one that best fits the workplace context.
- Give learners a choice between completing a task with AI support (with their edits documented) or completing it manually in a supervised session.
Different industries and cohorts will sit at different points on the “AI involvement” spectrum. Flexibility allows you to respect that.
Flexibility can also be built into the evidence requirements themselves. For a communication unit, one learner might submit an AI‑assisted email sequence plus annotations showing their edits, while another submits a manually drafted sequence created during a supervised session. Both can meet the same benchmark if the marking guide focuses on clarity, appropriateness and compliance, not on whether AI was used. The key is that the assessment remains appropriate to the context, the training product and the learner, while still targeting the same competency outcomes.
Validity: Are we still assessing the right thing?
The real risk with AI is not “cheating” in the narrow sense. It is construct invalidity: accidentally assessing how well a chatbot performs rather than how well the learner performs.
To keep validity intact, you can:
- Make AI use part of the scenario: “Your workplace has given you access to an AI tool. Use it to draft a response, then identify what you would change, add or remove before sending it to a client.”
- Focus marking guides on the learner’s decisions and corrections, not the raw AI text itself.
- Build tasks where AI cannot reasonably complete the full requirement, such as site-specific procedures, personalised case notes or live role-plays.
In other words, AI becomes part of the context, while the competency standard remains the anchor.
Reliability: consistency in a messy AI landscape
AI tools produce variable outputs. That makes some assessors nervous about reliability.
You can strengthen reliability by:
- Using clear, detailed marking rubrics that focus on the human actions: checking, adapting, justifying, applying and communicating.
- Incorporating a short viva voce or debrief where the learner explains what they did with AI, what they changed and why.
- Moderating samples where AI has been used, so assessors can align expectations and identify over-reliance.
The human evidence (reasoning, choices, explanations) is what allows reliability to hold steady, even when AI outputs differ.
Keeping the Rules of Evidence intact when AI is in the room
The other non-negotiable is the Rules of Evidence: validity, sufficiency, authenticity and currency. AI does not change these rules – it just changes how we demonstrate them.
Validity and sufficiency: more than “copy and paste”
An AI-generated paragraph dropped into an assignment tells us very little about the learner.
To preserve validity and sufficiency, assessors can require:
- The AI prompt(s) used
- The AI output
- The learner’s edited version
- A brief explanation of the changes made and why.
Suddenly, we have a chain of evidence:
- The learner can frame an appropriate prompt (or not – which is also evidence!).
- The learner can evaluate the AI response against the task and the training package requirements.
- The learner can adjust, correct, add and remove content to align with workplace expectations.
Those combined artefacts are usually more informative than a polished final product with no explanation of how it was produced.
This approach also aligns neatly with the validation expectations in Standard 1.5. When validators review clusters of assessments that include prompts, AI outputs, edits and learner reflections, they can see far more clearly whether the tools are producing valid, sufficient, authentic and current evidence. Patterns of over‑reliance on AI, or assessors ignoring obvious red flags, become visible in the validation sample rather than remaining hidden in apparently “perfect” final products.
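For RTOs that collect these artefacts digitally, even a very simple completeness check can support that kind of risk-based sampling. The sketch below is illustrative only (the artefact names simply mirror the list above; nothing here is prescribed by the Standards): it flags submissions in a validation sample that are missing part of the prompt–output–edit–explanation chain so validators know where to look first.

```python
# Illustrative only: flag AI-assisted submissions missing part of the evidence chain
# (prompt, AI output, learner's edited version, explanation of changes).
REQUIRED_ARTEFACTS = {"prompt", "ai_output", "edited_version", "explanation"}

def missing_artefacts(submission: dict) -> set:
    """Return which required artefacts are absent or empty in one submission."""
    return {name for name in REQUIRED_ARTEFACTS if not submission.get(name)}

# A toy validation sample: two submissions, one incomplete.
sample = {
    "student_001": {"prompt": "...", "ai_output": "...", "edited_version": "...", "explanation": "..."},
    "student_002": {"prompt": "...", "ai_output": "...", "edited_version": "", "explanation": ""},
}

for student_id, submission in sample.items():
    gaps = missing_artefacts(submission)
    if gaps:
        print(f"{student_id}: follow up - missing {sorted(gaps)}")
```

The point is not automation for its own sake; it is that validators spend their time on the judgement calls, not on hunting for missing paperwork.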
Authenticity: Is it their work?
This is the anxiety I hear most often.
But authenticity has never meant “no external tools used”. It has always meant “is this an honest demonstration of the learner’s competence?”
Some practical ways to support authenticity in an AI context:
- Ask learners to annotate what is AI-generated and what is their own writing or analysis.
- Include short oral questioning where the learner talks through key parts of their submission. If they cannot explain it, they have not demonstrated competence.
- Use in-class or observed components where you can see the learner working with AI in real time: what they type, what they accept, what they reject.
AI can actually help reveal authenticity when you design tasks that require learners to critique and adapt its output in front of you.
Currency: AI as a pathway to up-to-date practice
If anything, sensible use of AI strengthens currency.
Modern workplaces are shifting rapidly towards AI-enabled workflows. Giving learners the opportunity to use current tools, with current interfaces, in realistic scenarios helps ensure:
- the skills assessed reflect today’s (and tomorrow’s) workplace, not yesterday’s
- learners graduate with confidence in navigating AI ethically and effectively, rather than discovering it for the first time on the job.
This is particularly important in industries undergoing rapid digital change. Where training packages reference current legislation, standards or digital systems, using contemporary AI interfaces in scenarios helps to demonstrate that learners’ skills are genuinely current, not frozen at the time the assessment tool was first written. It also supports the broader intent of the 2025 Standards: that nationally recognised outcomes reflect real, contemporary industry practice rather than outdated workflows.
Simple, concrete examples from the VET classroom
This is where the theory becomes real. A few practical examples in an Australian VET context:
Example A: BSB – Writing a report with AI assistance
Task:
Draft a workplace incident summary report for a near miss in a warehouse.
AI-aware design:
- Learner uses AI to generate a first draft (optional).
- Learner must then refine, correct and contextualise that draft so it meets the workplace incident-reporting procedure.
Evidence: the prompt(s) used, any AI draft, the learner’s final report, and a brief note on what was changed and why.
Assessment focus:
- Understanding of incident reporting requirements
- Ability to integrate policy content
- Judgement about what is appropriate to submit.
Example B: CHC – Case notes and boundaries
Task:
Prepare case notes after a role-played client interaction in a youth work setting.
AI-aware design:
- AI use is not allowed to generate the case notes. Authenticity and privacy are critical here.
- However, learners may use AI to check generic note-writing conventions or to summarise the relevant service policy, provided no client or role-play details are entered into the tool.
Assessment focus:
- Distinguishing between helpful and unhelpful AI suggestions
- Writing accurate, objective notes based on the actual role-play, not generic content.
Example C: CPC – Risk assessment and method statement
Task:
Complete a basic risk assessment and method statement for a small construction task.
AI-aware design:
- A learner can use AI to list generic hazards and controls.
- Learner is then required to identify site-specific hazards or errors the AI has missed, adjust the controls to suit the actual task, and justify the final, safe approach.
Assessment focus:
- Applying WHS knowledge in context
- Recognising gaps or errors in AI outputs
- Making sound safety decisions.
In each example, the AI is present, but the competence we are certifying remains clearly with the learner.
The real compliance risk is pretending AI is not there
From a regulatory and risk perspective, I think the most dangerous position for an RTO in 2025 is not “we allow AI” but:
“We have no documented position on AI and no evidence of how we manage it.”
If learners are using AI privately (and they are), but:
- Your assessment tools never mention it
- Your policies ignore it
- Your assessors have had no PD on it
- Your validators never discuss it,
then you have a growing integrity risk and no coherent story to tell an auditor.
By contrast, an RTO that can show:
- a clear, written position on AI use
- unit-by-unit decisions about where AI is appropriate and where it is not
- assessment tools that explicitly incorporate or exclude AI
- evidence of assessor training and validation discussion
is in a much stronger position – both educationally and from a compliance standpoint.
Adding a short, unit‑level AI statement to each assessment tool can make this visible and auditable. That statement can specify whether AI is permitted, prohibited or permitted with conditions; what must remain the learner’s own work; and what evidence of AI use (such as prompts or screenshots) must be supplied. When these decisions are referenced to the Principles of Assessment and the Rules of Evidence, they give auditors a clear line of sight from legislative requirement to day‑to‑day practice.
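If that unit-level statement is kept as structured data rather than buried in prose, it is easier to keep consistent across assessment tools and to show an auditor. The sketch below is one hypothetical way to record it; the unit code, three-way status and field names are illustrative choices for this example, not requirements of the Standards.

```python
# Illustrative sketch: recording a unit-level AI position so it can be reported and audited.
# The unit code, wording and field names below are examples only.
from dataclasses import dataclass
from enum import Enum
from typing import List

class AIStatus(Enum):
    PERMITTED = "permitted"
    PROHIBITED = "prohibited"
    CONDITIONAL = "permitted with conditions"

@dataclass
class UnitAIStatement:
    unit_code: str
    status: AIStatus
    must_remain_human: List[str]   # the non-negotiably human components
    required_evidence: List[str]   # e.g. prompts, screenshots, annotated edits
    conditions: str = ""

statement = UnitAIStatement(
    unit_code="BSBXXX000",         # hypothetical placeholder code
    status=AIStatus.CONDITIONAL,
    must_remain_human=["final judgements", "editing and correction", "justification of decisions"],
    required_evidence=["prompts used", "raw AI output", "annotated final version"],
    conditions="AI may draft; the learner must critique, correct and explain the final product.",
)
print(f"{statement.unit_code}: {statement.status.value}")
```

A register of these records, one per unit, gives validators and auditors the “line of sight” described above without anyone having to re-read every assessment tool.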
AI as a mirror, not a shortcut
AI is not going away. For many industries, it has become part of the standard toolkit, alongside spreadsheets, email and search engines. Our job in VET is not to pretend otherwise. Our job is to:
- design assessments that reflect how work is really done
- protect the integrity of national qualifications
- ensure that, at the end of the day, the person we are certifying can actually do the job – with or without AI on the screen in front of them.
If we treat AI as a mirror of modern work, not as a shortcut or a threat, we can build assessments that are more authentic, more engaging, and still fully aligned with the Principles of Assessment and the Rules of Evidence.
The revised Standards already give RTOs the mandate to do this work. They ask for fair, flexible, valid and reliable assessment; for evidence that is sufficient, authentic and current; for risk‑based validation; and for facilities, resources and equipment that are fit‑for‑purpose. Thoughtful, AI‑aware assessment design is not an optional innovation project sitting on the side of compliance – it is fast becoming the most practical way to comply.
That is the conversation I would like to see more of in the sector: not “ban it or allow it,” but “how do we assess human competence in an AI-enabled world – and do it well?”
Important Disclaimer & Compliance Notice
Not Legal or Regulatory Advice: The information, frameworks, and examples provided in this article are for general educational and informational purposes only. They do not constitute legal advice, regulatory interpretation, or a guarantee of compliance. While every effort has been made to align this content with current VET practices and the Standards for RTOs, regulations and interpretations by the national regulator (ASQA) may change. Registered Training Organisations (RTOs) should always rely on their own internal policies, legal counsel, and direct advice from the regulator when making compliance decisions.
Individual Assessment & Competency: The integration of Artificial Intelligence (AI) into assessment design does not alter the fundamental requirements of the VET sector. Every learner must be assessed individually. It remains the non-negotiable responsibility of the RTO and the assessor to ensure that the student has personally acquired and demonstrated the specific knowledge and skills required by the Unit of Competency.
Navigating the New World of AI: We acknowledge that the ability to navigate, utilise, and critique Artificial Intelligence is rapidly becoming a critical workplace skill. However, AI must be treated as a tool to support, not replace, human competence. An assessment outcome of "Competent" must reflect the learner’s own ability to perform the task to the standard required in the workplace, independent of the generative capabilities of the tool used.
Verification of Authenticity: RTOs are responsible for implementing robust controls to verify authenticity. Where AI is permitted, it must be used transparently, ethically, and in accordance with the specific instructions of the assessment task. The ultimate judgement of competence rests on the evidence of the learner's own judgement, application, and understanding.
