Background and definition
In compliance conversations across Australian vocational education and training, one phrase keeps surfacing with uncomfortable regularity: “unduly short training”. The term describes a pattern where registered training organisations deliver qualifications, skill sets, or units in timeframes that are simply too compressed for learners to acquire the knowledge and skills the Australian Qualifications Framework expects. It is possible to defend competency-based training while rejecting the pretence that time does not matter. Competency-based systems were never intended to become duration-free zones. The AQF has always described a “volume of learning” that signals the typical range of effort required for each qualification level, and sector regulators have repeatedly cautioned providers against cutting delivery to a point where practice, feedback, and genuine skill development cannot occur. When delivery is accelerated beyond what evidence and industry practice support, ticked boxes displace capability, and that is when paperwork begins to look pristine while performance falters.
How we got here
The risk was recognised early. Soon after the national regulator commenced, unduly short delivery emerged as a system-wide concern rather than a handful of localised problems. By 2017, the issue had reached the point where the regulator undertook a strategic review. Its analysis of thousands of publicly advertised courses found that more than a quarter promoted durations below the minimum recommended time, and nearly one in twelve were listed at less than half the advised minimum. The review warned that such practices threatened the long-term sustainability of the training system and called for a stronger focus on outcomes, transparency, and enforcement. Those findings have only become more pertinent with time: the market has continued to reward speed while the compliance conversation has lagged behind real-world risk.
Policymakers have not been idle. The AQF has continued to articulate the idea of a typical effort range at each level, and the standards that will govern RTOs from 2025 sharpen expectations about “structure and pacing” so that students have sufficient time for instruction, practice, feedback, and assessment. The guidance for those standards does not mince words about the danger of condensing or diluting engagement, especially for high-risk programs or where practical skills must be built in real environments. These policy signals matter because they turn an abstract dispute about hours into a concrete requirement to demonstrate that delivery is paced so that learners can actually become competent.
The contemporary sector is also large and complex, which magnifies both the risk and the opportunity. As of mid-2024, there were more than 3,700 RTOs regulated federally, with additional providers overseen in Victoria and Western Australia, taking the national total to roughly four thousand. A system of this scale touches almost every industry and region, which is precisely why the stakes are high when duration and depth are squeezed.
Why the problem persists
The first driver is interpretive ambiguity. Training packages, which cover most qualifications, typically specify outcomes without prescribing minimum delivery time, while accredited courses tend to nominate hours. That split has left providers reading the AQF’s volume-of-learning guidance as an optional backdrop rather than a baseline to be engaged with seriously in design, resourcing, and scheduling. The guidance is not perfunctory. It describes indicative ranges that exist precisely because people need exposure, practice, and feedback to move from novice to competent. For example, the qualification type descriptors have long signalled a typical one to two years of effort for Certificate III programs, with clear explanations of what that effort is intended to achieve. When those signals are ignored, the system stops speaking the language of learning and starts speaking only the language of logistics.
The second driver is that market incentives often reward speed over substance. In a highly competitive environment, short and cheap looks attractive to price-sensitive learners, including international students who must manage tight budgets and visa timelines. When two providers display the same national code and promise the same credential, many students understandably choose the faster option. Employers then encounter graduates who hold the right certificate but lack the underlying capability, and confidence drains from the brand of vocational qualifications. Over time, quality-focused providers find themselves undercut by operators who compress delivery, and they either compromise to survive or exit the market. The regulator’s own practice guide now explicitly identifies acceleration that deprives learners of time for skill development as a “known risk to quality outcomes,” which is a diplomatic way of saying that the business model of rapid throughput does not produce reliable competence.
The third driver is enforcement lag. Risk-based regulation is both necessary and sensible, yet it depends on timely, trusted signals. In the unduly short debate, those signals have too often arrived after the harm. Strategic reviews documented the problem years ago, but advertising that spruiks two-week certificates or eight-week care qualifications has continued to appear precisely because market rewards are immediate and regulatory consequences are delayed. When the expected penalty arrives long after the revenue has been banked, the incentive structure is clear. It is to the sector’s credit that the 2025 standards shift the proof burden away from hours on a page toward evidence that the structure and pacing of delivery allow students to attain the learning outcomes in practice. The new test is simple to say and hard to fake: show that your graduates can actually do the job.
What unduly short delivery does to learners and employers
The most visible damage is to learner confidence and safety-critical work. When a care qualification that industry expects to take many months is compressed into a few weeks, the predictable result is that new workers do not recognise deterioration, do not handle medications confidently, and do not manage manual handling safely. In construction, the pattern shows up as superficial exposure to codes and methods without the repeated coached practice that embeds safe habits. In technical fields, it appears as rule-following without problem-solving. When graduates meet real complexity, they revert to guesswork or script-reading, which helps no one. Employers then absorb costs in remedial supervision, productivity loss, and incidents that should never have occurred. Those experiences travel quickly through small industry networks and across regional communities. Over time, they turn what should be a skills pipeline into a reputational liability.
The damage radiates outward. In a system where the national brand rests on portability and trust, persistent rates of undercooked delivery are a slow-acting solvent. The sector has already invested heavily to rebuild after earlier scandals. Each round of unduly short offerings undoes that work, particularly in areas where public interest and safety are directly at stake. When employers cannot trust that a certificate signals capability, they respond by designing their own internal training, diverting resources that would be better spent growing the business. When learners discover their shiny credentials do not open doors, they either give up on the vocation or bear the cost of repeating training with a better provider. None of these dynamics is consistent with a system that is supposed to reduce skills gaps and lift productivity.
Evidence, not assertion
The public record supports the proposition that duration matters because time is a proxy for opportunities to practise, receive feedback, and consolidate. The AQF’s volume-of-learning construct has always aimed to express that reality in a way that respects the flexibility of competency-based systems while anchoring them in the known rhythms of learning. The second edition of the AQF sets out these indicative ranges across the qualification types and details why the ranges exist. Ideas like one to two years for Certificate III are not arbitrary. They reflect the time needed to move beyond routine tasks to a reliable level of independent performance across varying contexts. When delivery takes only a fraction of that effort without compensating structures like extensive workplace learning, the gap shows up in the work.
The regulator’s own strategic review put numbers on the problem: more than a quarter of advertised courses fell short of the minimum advised duration, and eight per cent were listed at less than half of it. Those numbers are not outliers selected to shock. They describe a structural issue in a national market. Nor were the review’s recommendations ambiguous. They called for strengthening the standards framework, improving the regulator’s effectiveness, and increasing transparency so that students and employers could see what they were buying. The fact that similar concerns continue to surface eight years later tells us that a mix of cultural, commercial, and capability barriers has blunted the response. That is not a reason to abandon the project. It is a reason to take the next step with more clarity about what will actually shift practice.
What other countries do differently
There is no virtue in importing another nation’s system wholesale. There is value in noting what stable, high-performing VET ecosystems hold constant. Germany’s apprenticeship model specifies durations in training ordinances for each occupation, commonly three or three and a half years, with some two-year pathways for particular cohorts. Switzerland does likewise, with three- or four-year programs leading to the federal VET diploma, and with the distribution of learning across the workplace, vocational school, and branch training centres set out in the training plan that accompanies each ordinance. Singapore’s polytechnic diplomas are designed as three-year programs. None of these examples makes duration a fetish. They make it a commitment to the depth and breadth of practice needed to become employable in complex work. The lesson is not that Australia should import fixed-time rules everywhere. It is that successful systems align delivery time, practice intensity, and assessment depth so that a credential means what it says.
Why the 2025 standards matter
The most important change in the standards that take effect from 2025 is not a new definition or a novel administrative test. It is the insistence that providers demonstrate that “structure and pacing” allow students enough time for instruction, practice, feedback, and assessment, and that the chosen mode of delivery is appropriate for the skills being developed. The associated guidance goes further. It names risks that providers must actively control, including the hazards of accelerated training and the pitfalls of shifting practical skill development into settings that cannot carry the load. This is a pivot from policing inputs to verifying outcomes, but it also clarifies that time is part of quality. Evidence of the “amount of training” is no longer a tick on a strategy document. It is a claim that must be demonstrated in delivery and then evidenced again in graduates who can perform, not just recall.
That insistence will be most persuasive when it is consistent with the direction of national policy. The National Skills Agreement now frames a five-year reform program across jurisdictions to support a responsive and high-quality VET system. The inaugural National Skills Plan sets a shared vision and signals work to strengthen system architecture and the workforce that sustains it. Those documents are not enforcement tools. They are the macro policy context in which providers, auditors, and quality teams can reposition duration as an enabler of competence rather than a compliance artefact. The message is that governments expect quality outcomes for learners and employers, and that delivery models that cannot show learning time commensurate with complexity are unlikely to meet that test.
What quality practice looks like
In organisations that take learning seriously, program design starts with what graduates must actually be able to do. Working backward from real tasks produces a delivery structure where theory introduces concepts, simulated environments allow early attempts without harm, and workplace or workshop practice builds fluency through repetition under feedback. In such designs, time is not an administrative quota. It is headroom for struggle and improvement. Trainers plan for the dip that every learner experiences when a task becomes more complex. Assessors design instruments that rely on performance, not proxy indicators. Managers watch for evidence that students are improving at the rate the qualification expects. None of this can be rushed without robbing learners of the very experiences that create competence.
The standards now expect providers to show that industry engagement informs choices about mode, sequencing, and the “amount of training” required for learners to meet an industry standard, not merely a theoretical threshold. In practical terms, that means mapping typical workplace tasks against units and identifying where real practice must occur. It means publishing honest program durations and explaining the rationale to learners at enrolment. It means resisting the easy slide from clustered delivery to collapsed practice. Above all, it means confronting the seductive appeal of quick completions with the simple truth that safe and effective workers are not produced by compressed exposure alone.
Transparency, incentives, and enforcement
Information asymmetry has been a stubborn barrier. Learners and employers cannot make informed choices if advertising conceals how much time and practice sit behind a course. The 2017 review emphasised the need for clearer public information about duration and outcomes. Transparency is not a panacea, but it is a precondition for a functional market. When a provider that invests in workshops, supervised practice, and work placements must compete with a provider that compresses delivery into short online blocks, the only way to make the difference visible is to show it. Published program durations, realistic calendars, explicit practice hours, and frank explanations of assessment expectations help learners understand what quality looks like. They also make it much harder for poor operators to hide.
Incentives matter just as much. Funding models that pay on enrolments and completions with little regard for graduate outcomes dilute the motivation to invest in depth. The National Skills Plan’s language about strengthening the architecture and workforce points toward a better alignment between public funding and the capability we say we want. When public money follows quality, providers that deliver genuine competence thrive. When it follows speed, unduly short delivery reproduces itself. The logic of reform is to change the payoffs so that quality is not a courageous choice; it is the rational one.
Enforcement will always be the backstop. Risk-based approaches only work when egregious behaviour is met with timely consequences and when providers know the rules will be applied consistently. The regulation reports and practice guides released over the last year are promising in tone, particularly the frank identification of accelerated delivery as a known risk. The test is now operational. If a qualification is advertised at timeframes that cannot plausibly support the learning outcomes for the cohorts being enrolled, enforcement needs to arrive quickly enough to change behaviour, not years after the cohort has graduated.
A note on international education
International students have been caught in the crossfire of market incentives and variable quality. They are often the most price-sensitive and the least able to detect the difference between a high-intensity program that demands months of coached practice and a low-touch pathway that moves swiftly through content with little supported application. When unduly short delivery is dangled as a fast route to a credential, the immediate attraction can be powerful and the long-term cost severe. The national interest in maintaining a trusted education brand is not served by tolerating training that outruns the time it takes to build capability. That point is as much about ethics as it is about economics.
What CAQA experts advise RTO leaders to do now
Begin with a candid audit of structure and pacing against the program’s learning outcomes, not against historical timetables. Where qualifications include tasks that can only be learned by doing, identify the minimum threshold of supervised practice that reliably produces safe beginners, then schedule it as non-negotiable time rather than an optional practicum that gets trimmed when enrolments spike. Use industry engagement to validate that threshold and document both the rationale and the evidence of attainment. Where programs are delivered online, build deliberate loops of demonstration, coached attempts, feedback, and renewed attempts instead of confining learner activity to readings and quizzes. Make all of this visible to learners at enrolment so that expectations are shared.
Then align the assessment with the reality of work. Replace or supplement recall items with scenario-based tasks that require learners to exercise judgment, handle variation, and integrate multiple skills under time pressure. Calibrate assessor judgements through moderation anchored in genuine work samples. Track early graduate performance with employers and bring that data back into design decisions about time and sequencing. If graduates consistently struggle with particular tasks, stretch the delivery where it is thin rather than squeezing elsewhere to “stay on time.” That is how a competency-based system earns its legitimacy. It adapts delivery to evidence about learning, rather than adapting assessment to fit compressed calendars.
Finally, communicate the value proposition bluntly. Publish program durations and practice expectations, and tell prospective students why they are set where they are. Explain that the difference between a certificate that changes a career and one that gathers dust is often measured in coached hours and repeated attempts, not in slide counts. If that transparency costs a few “fast and cheap” enrolments, it will attract learners and employers who recognise quality when they see it. Those are the relationships that sustain a reputation.
The policy horizon
Reform momentum is real. The National Skills Agreement sets a five-year framework for shared action, and the National Skills Plan outlines how governments intend to strengthen the system’s architecture, including its workforce. Regulation is signalling clearly that providers must demonstrate that delivery is not only compliant on paper but also paced and structured so that students gain the intended capability. The AQF continues to provide a common language for effort, one that honest providers can use to articulate why it takes time to convert interest into competence. None of these instruments is magic. Together, they provide a scaffolding for a sector that wants to be judged by what its graduates can do.
If we do not act
The cost of inaction is already being paid by learners who must retrain, by employers who absorb the risk of preventable errors, and by communities that rely on safe and skilful practice. It is paid by quality-focused providers squeezed by competitors who promise the same credentials in a fraction of the time. It is paid in the erosion of trust in qualifications that should be passports to employment and productivity. Australia does not need to abandon flexibility to recover credibility. It needs to reconnect duration to depth and depth to outcomes. The standards now give the sector permission to do exactly that.
Conclusion
Unduly short training is not a dispute about hours for their own sake. It is a dispute about whether the time learners spend with us is enough to transform interest into capability that holds up under pressure. The evidence base is clear. The AQF has always tied qualifications to an indicative effort range because practice and feedback take time. The regulator’s strategic review documented how cutting corners on that time became a systemic risk. The standards that apply from 2025 close the loop by insisting that structure and pacing match the complexity of the skills being taught. The national policy settings add a longer runway for reform. The task for providers is to lean into that alignment. Publish honest durations. Design for coached practice. Assess in ways that reveal judgment and adaptability. Demonstrate to learners and employers that your graduates can do what the training product promises they can do. When that promise becomes the organising principle, training matrices will still turn green. The difference is that competence will turn green with them.
