A new global study reveals what students actually think about AI in education, with direct implications for VET providers still debating whether they need a policy at all.
The Digital Education Council recently published a guide that should be required reading for every training provider in Australia, regardless of whether they consider themselves an "AI-forward" organisation. Titled "Student Voices on AI: An Actionable Guide for Institutions and Faculty," the guide draws on insights from students at universities across multiple continents who were asked a simple but powerful question: What is your lived experience of AI in education right now?
The answers are not theoretical. They are not speculative. They describe what is already happening in classrooms, online platforms, and assessment submissions around the world. And while the guide was produced for the higher education sector, its findings carry direct and urgent implications for Australian vocational education and training.
Because if university students across twelve countries are telling researchers that they are already using AI daily, that institutional policies are unclear, that they are learning AI skills through trial and error rather than structured guidance, and that they want their institutions to lead rather than react, then the same dynamics are unfolding in VET. The only difference is that VET providers are operating under a regulatory framework that makes the absence of clear AI governance not just an educational gap but a compliance risk.
The Silence Is the Problem
The most striking finding in the Digital Education Council report is not about what students are doing with AI. It is about what happens in the absence of institutional guidance. When providers fail to set clear expectations around AI use, students do not stop using AI. They use it discreetly, without shared norms, defaulting to their own personal ethics about what constitutes acceptable use.
This creates an environment of inconsistency that is corrosive to assessment integrity. In one classroom, a student uses AI to generate a first draft of a written assessment and considers this acceptable because nobody said otherwise. In the next classroom, a student avoids AI entirely because they assume any use would constitute academic misconduct. Both students are acting in good faith. Both are guessing. And the RTO has no defensible position because it never told either of them what the rules were.
Under the 2025 Standards, this is not just an educational shortcoming. Standard 1.4 requires assessment to be conducted in accordance with principles of fairness, including that the assessment is appropriate to the context and the student. Standard 2.1 requires that students have access to clear and accurate information, including assessment requirements. When an RTO has no AI use policy, or has a policy so vague that students cannot determine what is permitted, the organisation is creating the conditions for inconsistent assessment outcomes that may not withstand regulatory scrutiny.
The Digital Education Council report captures this precisely: students are calling for clarity, consistency, and shared expectations so that AI can be used openly rather than covertly. In VET terms, they are asking for what the Standards already require: transparent rules, clearly communicated, consistently applied.
Process Over Output: What Assessment Reform Actually Looks Like
The second major insight from the report challenges how many RTOs currently design their assessments. Students themselves expressed concern that overreliance on AI compromises the development of critical thinking and reasoning skills. They are not asking for unlimited AI access. They are asking for assessment designs that protect core cognitive development while recognising AI's role as a support tool.
This is a significant finding because it inverts the assumption many providers make about students and AI. The common fear is that students want to use AI to avoid doing the work. The reality, according to these students, is more nuanced: they want to use AI intelligently, but they also want to be sure they are actually learning something. One student's observation captured this with striking clarity: if you cannot explain your work to a professor without AI's help, then you did not actually learn it.
For VET providers, this insight points toward a fundamental redesign of assessment methodology. The traditional approach of assessing a final written output, whether a report, a case study, or a project plan, is increasingly vulnerable to AI-assisted completion in ways that are difficult to detect and arguably impossible to prevent. The alternative, which students themselves are advocating for, is to assess the process rather than the product.
In practice, this means designing assessments that require students to demonstrate their reasoning journey: graded drafts that show how thinking developed, reflection logs that explain what decisions were made and why, documented revisions that show how feedback was incorporated, and explicit disclosure of where and how AI tools were used. The competency is demonstrated not by what the final document says, but by whether the student can explain, defend, and build upon the work they have submitted.
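For providers that track these evidence points in a learning management system or assessment tool, the requirements above translate naturally into a simple data structure. The following is a minimal sketch in Python, assuming an RTO wants to record drafts, reflections, revisions, and AI disclosures against each submission; the class and field names are illustrative, not drawn from the report, the Standards, or any particular LMS.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseDeclaration:
    """A student's disclosure of where and how an AI tool was used."""
    tool_name: str          # which tool the student used
    task: str               # what it was used for: research, drafting, data analysis
    output_verified: bool   # whether the student checked the output against sources
    notes: str = ""

@dataclass
class ProcessEvidence:
    """Evidence points for one process-based assessment submission."""
    student_id: str
    unit_code: str
    drafts: list[str] = field(default_factory=list)          # references to graded drafts
    reflection_log: list[str] = field(default_factory=list)  # dated notes on decisions made and why
    revisions: list[str] = field(default_factory=list)       # how feedback was incorporated
    ai_declarations: list[AIUseDeclaration] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Crude completeness check: at least one draft, one reflection entry,
        and an explicit AI declaration (a 'no AI used' declaration still counts)."""
        return bool(self.drafts and self.reflection_log and self.ai_declarations)
```

The point of the completeness check is the same point the students are making: the submission is judged on whether the reasoning journey is visible, not only on what the final document says.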
Under Standard 1.1, training must be structured to provide sufficient time for instruction, practice, feedback, and assessment. Assessments designed around process rather than output align naturally with this requirement because they build in the iterative practice and feedback loops that develop genuine competence. Under Standard 1.4, the principles of validity and sufficiency require that assessment evidence adequately demonstrates that the student possesses the skills and knowledge described in the training product. A process-based assessment that includes reasoning documentation, draft iterations, and AI use disclosure provides richer and more defensible evidence of competence than a polished final document that may or may not have been generated by the student.
Discipline-Sensitive AI Policies: One Size Has Never Fitted All
The report's call for differentiated AI policies across disciplines resonates strongly with the VET sector, where the diversity of training products makes a blanket AI policy not just unhelpful but potentially counterproductive.
Consider the difference between a Certificate IV in Business and a Certificate III in Electrotechnology. In the business qualification, AI tools may have a legitimate role in helping students draft communications, analyse data, or develop project plans: skills that mirror how AI is actually used in contemporary business practice. Banning AI from these assessments would arguably misrepresent the reality of the workplace the student is being prepared for.
In the electrotechnology qualification, the critical competencies involve physical skills, safety procedures, and practical application of technical knowledge in live environments. AI has a far more limited role in these contexts, and assessments centred on workplace observation, practical demonstration, and safety compliance are inherently more resistant to AI-assisted shortcuts.
Between these extremes lies a vast range of qualifications where the appropriate role of AI will vary by unit, by assessment method, and by the specific competency being assessed. A community services qualification might appropriately allow AI assistance for research and case study analysis while requiring entirely unassisted performance in client interaction role-plays. A hospitality qualification might permit AI for menu costing exercises while requiring demonstrated practical skills in food preparation and service.
The implication for RTOs is that AI policies need to be developed at the training product level, not the organisational level. A single blanket policy that says "AI is permitted" or "AI is prohibited" fails to account for the diversity of competencies, assessment methods, and industry expectations across different qualifications. What is needed is a framework that allows each qualification's training and assessment strategy (TAS) to specify where AI use is appropriate, where it is restricted, and where it is prohibited, with a clear rationale linked to the competencies being assessed and the industry context the qualification serves.
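To make this concrete, a training-product-level policy can be expressed as a simple lookup keyed by unit and assessment. The sketch below shows one possible shape in Python; the unit codes, assessment names, and rationales are hypothetical examples, not taken from any training package or existing RTO policy.

```python
from enum import Enum

class AIUse(Enum):
    PERMITTED = "permitted"    # AI may be used, with disclosure
    RESTRICTED = "restricted"  # AI allowed only for named tasks
    PROHIBITED = "prohibited"  # no AI use in this assessment

# Hypothetical entries for two assessments, each with a rationale tied to
# the competency being assessed, as the framework above requires.
AI_POLICY = {
    ("BSBXXX401", "Project plan report"): {
        "status": AIUse.RESTRICTED,
        "allowed_tasks": ["research", "first-draft structure"],
        "rationale": "Mirrors contemporary business practice; the final analysis must be the student's own work.",
    },
    ("UEEXXX123", "Wiring practical observation"): {
        "status": AIUse.PROHIBITED,
        "allowed_tasks": [],
        "rationale": "Competency is physical and safety-critical, assessed by direct workplace observation.",
    },
}

DEFAULT_POLICY = {
    "status": AIUse.RESTRICTED,
    "allowed_tasks": [],
    "rationale": "No entry for this assessment; the restrictive default applies until the TAS specifies otherwise.",
}

def policy_for(unit_code: str, assessment: str) -> dict:
    """Look up the AI policy for a specific assessment. Defaulting to
    RESTRICTED means the absence of an entry never reads as 'anything goes'."""
    return AI_POLICY.get((unit_code, assessment), DEFAULT_POLICY)
```

The design choice worth noting is the default: silence from the provider was precisely the problem the report identified, so the lookup never returns silence.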
Teaching Students to Evaluate AI, Not Just Use It
Perhaps the most forward-looking finding in the report is students' own recognition that they need to learn not just how to use AI tools but how to critically evaluate what those tools produce. Students told researchers that institutional AI training focuses almost entirely on prompting: how to ask AI the right questions. What is missing is structured guidance on how to assess the quality, accuracy, and reliability of what comes back.
This is a critical gap because AI tools generate outputs that are fluent, confident, and often entirely wrong. In VET, where assessment evidence must demonstrate genuine competence against nationally defined standards, the ability to identify AI-generated inaccuracies, biases, and fabricated references is not a nice-to-have skill. It is essential to maintain assessment integrity.
Students in the Digital Education Council study specifically asked for what they described as "AI auditing" capabilities: the ability to validate AI-generated arguments logically, verify sources that AI claims to be citing, and exercise contextual judgment about whether an AI output is appropriate for a specific purpose. They are asking, in effect, to be taught the critical thinking skills that will make them effective users of AI rather than passive consumers of its outputs.
For VET providers, this presents both a challenge and an opportunity. The challenge is that most trainers and assessors in the VET workforce have not themselves been trained in AI output evaluation. Under Standard 3.1, providers must facilitate access to continuing professional development to enable staff to effectively perform their role. If the role now includes guiding students through AI-integrated learning environments, then AI literacy for trainers and assessors is no longer optional. It is a workforce capability requirement.
The opportunity is that VET providers who build AI evaluation skills into their training delivery will be producing graduates who are genuinely more capable than those who merely know how to generate AI outputs. An accounting student who can use AI to draft a BAS reconciliation and then verify the output against the actual tax legislation is more competent than a student who simply submits whatever the AI generated. A project management student who can evaluate an AI-produced risk register and identify the gaps is demonstrating higher-order thinking that employers will value.
Standard 1.2 requires that training reflect current industry practice. In 2026, current industry practice increasingly involves working with AI tools. But working with AI effectively requires the ability to evaluate and verify its outputs, not just generate them. RTOs that embed this capability into their training are not just responding to a trend. They are meeting the Standard.
From Fear to Framework: Building Institutional AI Culture
The Digital Education Council report's final major theme is the need for institutions to adopt a positive culture toward AI literacy rather than treating AI as a threat to be managed or a problem to be policed. Students described their AI learning as fragmented and informal, occurring through trial and error rather than through structured institutional guidance. They want their institutions to lead.
This cultural shift is perhaps the most challenging recommendation for the VET sector because it requires providers to move beyond the reactive posture that has characterised much of the sector's response to AI. The dominant conversation in VET compliance circles over the past two years has centred on detection: how do we catch students using AI? How do we prevent AI-assisted cheating? How do we maintain assessment integrity in an environment where AI can generate competent-looking outputs?
These are legitimate concerns, but they represent only half the picture. The other half, which the Digital Education Council report illuminates through the voices of students themselves, is about preparation: how do we ensure students can use AI effectively, ethically, and critically in the workplaces they are being trained for? How do we build AI literacy as a foundational capability rather than treating it as a compliance risk?
The report specifically recommends that institutions reduce their reliance on AI detection tools, a recommendation that will be uncomfortable for many providers but that deserves serious consideration. AI detection tools are unreliable, producing both false positives that penalise innocent students and false negatives that miss AI-generated content. Building an assessment integrity strategy on the foundation of detection tools is building on sand. The alternative, which aligns with the process-based assessment approach discussed earlier, is to design assessments that make AI use visible, documented, and evaluated rather than hidden and policed.
For RTOs operating under the 2025 Standards, the cultural shift from AI-as-threat to AI-as-capability has practical compliance dimensions. Standard 4.4 requires systematic monitoring and evaluation to support continuous improvement. If student feedback, industry engagement, and workforce development data all point toward AI integration as a quality improvement opportunity, then providers have an obligation under their own continuous improvement frameworks to respond. Ignoring AI is not a neutral position. It is a decision to remain static while the environment changes around you.
What VET Providers Should Do Now
The Digital Education Council report is a mirror held up to education providers globally, and the reflection should prompt urgent action in the VET sector.
The first step is to establish explicit AI use guidelines at the training product level, communicated to students at enrolment and reinforced at the start of every assessment. These guidelines should specify which assessments permit AI use, which restrict it, and which prohibit it, with a clear rationale tied to the competencies being assessed. The guidelines should define what constitutes acceptable AI use (research assistance, drafting support, data analysis) and what constitutes unacceptable use (submitting AI-generated work as original without disclosure, using AI to bypass required practical demonstrations, fabricating assessment evidence).
The second step is to redesign assessments to focus on process rather than output. This does not mean abandoning written assessments or practical projects. It means building in the evidence points that demonstrate genuine learning: draft submissions, reflection journals, reasoning logs, AI use declarations, and viva voce components where students must explain and defend their work.
The third step is to invest in AI literacy for both students and staff. For students, this means moving beyond basic awareness of AI tools to structured training in output evaluation, source verification, and ethical use. For trainers and assessors, it means professional development that builds confidence in facilitating AI-integrated learning environments and assessing work that may include AI-assisted components.
The fourth step is to update training and assessment strategies to reflect AI as a dimension of industry currency. For qualifications where AI tools are now part of standard industry practice, the TAS should explicitly address how AI is integrated into delivery and assessment. For qualifications where AI has limited application, the TAS should explain why AI use is restricted in specific assessments.
The fifth step is to treat AI governance as a leadership responsibility, not a compliance afterthought. Under Standard 4.1, governing persons must lead a culture of integrity, fairness, and transparency. In 2026, that culture must include a considered, documented position on how the organisation approaches AI in training and assessment. The absence of such a position is itself a governance gap.
The Students Are Ahead of Us
The most humbling aspect of the Digital Education Council report is that the clearest thinking about AI in education is coming from the students themselves. They are not asking for a free pass to use AI without accountability. They are asking for structured guidance, honest engagement, and assessments that test whether they have actually learned something. They are asking, in essence, for the educational leadership that providers should already be providing.
The VET sector has a choice. It can continue to treat AI as an integrity threat to be policed through detection tools and blanket prohibitions, an approach that the evidence increasingly shows does not work and that students themselves are telling us is counterproductive. Or it can lead: setting clear expectations, redesigning assessments for the AI era, building genuine AI literacy across the workforce, and demonstrating through its actions that quality vocational education can embrace technological change without compromising the competencies that employers need.
The students quoted in this report attend universities on six continents. They speak different languages, study different disciplines, and operate in different cultural contexts. But they are saying the same thing: we are already using AI, we want to use it well, and we need our institutions to help us do that responsibly.
VET students in Australia are no different. The question is whether their RTOs are ready to respond.