Why this update is different, and why it matters for RTOs
The European Commission’s Joint Research Centre has confirmed active development of the next full edition of the Digital Competence Framework for Citizens, with consultation and validation steps running through 2025 and an expected publication window in late 2025. For Australia’s vocational education and training community, this is not a distant European policy note; it is a practical signal that digital competence expectations will soon shift in ways that affect curriculum, assessment, trainer capability and compliance narratives. Preparing now protects learners from whiplash later, and it allows providers to influence practice rather than scramble to keep up.
Where the sector stands today, and what is already certain
DigComp provides a common language for digital capability across five areas and twenty-one competences, supported by an eight-level proficiency ladder that recognises developmental progression. The current version, DigComp 2.2, consolidates that structure and adds more than two hundred and fifty contemporary examples that bring artificial intelligence, remote collaboration, accessibility and sustainability into clearer view without changing the framework’s core architecture. These features are not speculative; they are live references that Australian providers can use now to make learning outcomes and assessment criteria more explicit.
Australia does not start from zero. The national Digital Literacy Skills Framework describes foundational digital skills across personal life, work and education, and it sits alongside the Australian Core Skills Framework that underpins language, literacy and numeracy. DigComp alignment does not replace these foundations; it complements them by giving RTOs a globally recognised taxonomy and level structure that can sit behind training package outcomes and employer expectations. Using both sets of references helps auditors see a coherent story that is local in its compliance anchors and international in its competence language.
What DigComp 3.0 is expected to change, and how to read the signals
The Joint Research Centre describes a structured development program for DigComp 3.0, including expert engagement and stakeholder consultation, and it has stated its expectation of publication in late 2025. Public consultation notices and partner communications reinforce that artificial intelligence will be integrated more deeply across the competence areas, rather than treated as a side note. Providers do not need to guess the exact phrasing to begin work. It is enough to assume that information literacy, communication and collaboration, content creation, safety, and problem solving will all carry clearer expectations about AI use, evaluation and impact.
DigComp 3.0 Framework for Australian VET and Compliance
| Dimension | Description for the Australian VET Sector | Trainer/Assessor Application | Compliance/RTO Expert Application |
| --- | --- | --- | --- |
| 1. Competence Areas | The framework is built on five core competence areas that cover a broad range of digital skills and knowledge. These areas move beyond simple tool use to encompass critical, ethical and responsible engagement with technology. | Use these five areas to structure training programs, learning outcomes and assessment tasks. Address all areas, not just technical skills, to provide a holistic digital literacy education. | Align training packages and units of competency with these five areas. Review existing course materials to confirm they cover the full scope of digital competence as defined by DigComp. |
| 2. Competences | Across the five areas sit 21 specific competences that define the granular skills and knowledge. For example, within the "Information and Data Literacy" area, one competence is "Browsing, searching and filtering data, information and digital content". | Design assessments that directly evaluate these specific competences. Use them as a checklist to confirm your training adequately covers each required skill. | Map specific competences to the performance criteria and knowledge evidence required by Australian training packages. This helps demonstrate that the RTO's offerings are nationally and internationally benchmarked (a minimal mapping sketch follows this table). |
| 3. Proficiency Levels | The framework details eight proficiency levels, from level 1 (Foundation) to level 8 (Highly Specialised). Each level builds on the previous one in autonomy, cognitive complexity and depth of problem-solving. | Use these levels to gauge a learner's starting point and track progress, enabling tailored training plans and differentiated instruction. A learner at the foundation levels might need guided instruction, while a learner at the highest levels can solve complex problems independently. | Use the proficiency levels to design and validate assessment tools. The levels provide a clear benchmark for what a competent person at a given level should be able to do, supporting consistent and valid assessment across the organisation. |
| 4. Knowledge, Skills and Attitudes | This dimension breaks down the knowledge, skills and attitudes required for each competence. It goes beyond "what a person can do" to include the underlying understanding, practical ability and mindset that digital competence demands. | Integrate this dimension into delivery by teaching not only skills but also the "why" (knowledge), and by encouraging a safe, critical and ethical mindset (attitudes). | Ensure the RTO's course documentation and learner guides explicitly address these three elements. This supports compliance by demonstrating that training is comprehensive and aligned with the full scope of the framework. |
| 5. Use Cases | The framework provides practical use cases that show how the competences are applied in contexts such as education, employment and social participation. This is highly relevant for a VET sector focused on job readiness. | Use these use cases to create realistic and engaging scenarios for learners. For example, a use case for digital content creation could be designing a business flyer for a workplace task. | Use the use cases to contextualise and justify the digital skills being taught in training programs. This aligns training outcomes with real-world industry needs and the compliance requirement for industry engagement. |
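To make the mapping in the second row concrete, the sketch below shows one way an RTO might hold competence-to-unit mappings as structured data so that gaps are easy to query. It is a minimal illustration only: the unit codes, performance criteria references and target levels are hypothetical placeholders, not real training package content.

```python
# Minimal sketch of a competence-to-unit mapping register.
# All unit codes, performance criteria and target levels below are
# hypothetical placeholders, not real training package content.
DIGCOMP_MAPPING = {
    "1.1 Browsing, searching and filtering data, information and digital content": {
        "unit": "BSBXXX001",                        # hypothetical unit code
        "performance_criteria": ["PC 1.2", "PC 2.1"],
        "target_level": 4,                          # DigComp proficiency level (1-8)
    },
    "2.4 Collaborating through digital technologies": {
        "unit": "BSBXXX002",
        "performance_criteria": ["PC 3.1"],
        "target_level": 3,
    },
}

def unmapped_competences(required, mapping):
    """Return the required competences that no unit currently covers."""
    return [c for c in required if c not in mapping]

print(unmapped_competences(
    ["1.1 Browsing, searching and filtering data, information and digital content",
     "4.2 Protecting personal data and privacy"],
    DIGCOMP_MAPPING,
))  # ['4.2 Protecting personal data and privacy']
```

A register like this is not a compliance document in itself, but it makes the audit narrative queryable: every competence either points at named evidence requirements or shows up in the gap list.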
Translating the framework into VET curriculum and assessment that works
In VET, the question is always practical. If a unit expects learners to gather and judge information, the assessment brief should name the competence and the intended proficiency level, and it should require the learner to show how sources were found, how algorithmically curated feeds were tested, and how claims were verified. If a capstone task involves producing a digital artefact with AI-assisted tools, the learner should be asked to justify tool selection, disclose the workflow, critique the outputs and attribute appropriately. DigComp’s language becomes part of the task, not a mapping exercise completed after delivery. That simple shift makes validity and sufficiency easier to demonstrate during an audit because the competence, the behaviour and the evidence are aligned in plain view.
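One practical way to enforce that habit is to treat the brief's alignment header as structured data rather than free text, so the competence and level cannot be omitted. The sketch below is illustrative only; the field names and the example task are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentBrief:
    """Illustrative alignment header for an assessment brief."""
    task_title: str
    digcomp_competence: str   # e.g. "1.2 Evaluating data, information and digital content"
    proficiency_level: int    # DigComp level, 1-8
    required_evidence: list[str] = field(default_factory=list)

brief = AssessmentBrief(
    task_title="Research a replacement piece of workplace equipment",
    digcomp_competence="1.2 Evaluating data, information and digital content",
    proficiency_level=4,
    required_evidence=[
        "documented search queries",
        "comparison of sources with reasoning",
        "disclosure of any AI-assisted steps in the workflow",
    ],
)
print(f"{brief.task_title}: {brief.digcomp_competence} at level {brief.proficiency_level}")
```

Whether this lives in a spreadsheet, a learning management system or a simple script matters less than the discipline it encodes: no task ships without a named competence, a stated level and listed evidence.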
Safety and integrity in an AI-enabled environment
Safety sits within the current framework as a full competence area that includes device and data protection, health and wellbeing, and environmental impact. In practice, this now includes awareness of how training data can leak through careless prompts, how synthetic media can be used to deceive, how design patterns can capture attention, and how the energy cost of large models should inform tool selection when alternatives exist. Embedding these concerns as learning outcomes changes the character of digital tasks. Learners are not only taught to operate tools, they are taught to safeguard people, respect information and weigh environmental consequences. That is the line between capability and care.
Translating DigComp 3.0 into VET Curriculum and Assessment
| VET Curriculum Development | VET Assessment Design | Compliance and Audit Evidence |
| --- | --- | --- |
| Integrate Competences into Learning Outcomes: Embed DigComp's specific competences (e.g., "Browsing, searching and filtering data") directly into the unit of competency's learning outcomes and performance criteria. This makes digital skills a fundamental part of what is taught, not an optional extra. | Explicitly Reference Competence and Proficiency: Assessment briefs should clearly state the specific DigComp competence and the intended proficiency level (e.g., "This task assesses your ability to find and evaluate information online at proficiency level 5"). | The learner's work and the assessor's rubric directly reference the DigComp framework. This provides a clear line of sight between the training product and the skills being developed, making it easy to audit. |
| Develop Practical Scenarios: Create learning activities and contextualised scenarios that require learners to apply the digital competences. For example, a task on information literacy would involve a genuine need to find and use information, such as researching a new piece of workplace equipment. | Require Behavioural Evidence: Assessments must ask learners to demonstrate the "how" behind their digital actions. Instead of simply providing an answer, the learner is required to show their workflow, for example by documenting search queries, the sources considered, and the reasoning for selecting or rejecting them. | The evidence submitted by the learner, such as a screenshot of a search history, an annotated bibliography or a detailed process log, provides concrete proof of the digital skills being applied, moving beyond a simple yes/no competence check (see the evidence-log sketch after this table). |
| Contextualise AI and Emerging Tech: For competences related to new technologies such as AI, the curriculum should include specific content on their ethical and practical application. For example, training on digital content creation should cover the responsible use of AI tools. | Require Justification and Critique: For tasks involving AI or other complex tools, learners must justify their choices and critique the outputs. An assessment might ask, "Why did you choose this AI tool over another?" or "What are the limitations of the AI-generated content you produced?" | The learner's justification and critical analysis in the assessment submission serve as direct evidence of competence beyond basic tool operation, demonstrating an understanding of the technology's implications and the ability to engage with it critically. |
| Align Learning with Dimensions of Competency: Structure the curriculum to address all dimensions of competency: knowledge, skills and attitudes. This means teaching not just how to use a tool, but also the underlying concepts and the ethical mindset needed to use it responsibly. | Assess Integrated Competence: Design assessments that require the integration of multiple skills. A single capstone task might assess problem solving (a DigComp competence) by requiring a learner to research, plan and create a digital solution to a workplace issue. | The assessment tools, such as rubrics and marking guides, demonstrate how different competences are assessed together, aligning with the VET requirement for integrated assessment of skills, knowledge and attitudes. |
| Integrate DigComp Language: Use the language of the DigComp framework as standard practice in all curriculum documents, from training plans to learner guides, so that competence titles and proficiency levels become part of the everyday language of trainers and learners. | Make the Framework Visible in Assessment Briefs: The assessment brief becomes the primary alignment document, designed from the ground up with the DigComp framework in mind, making the connection between task and competence obvious to both learner and assessor. | The assessment brief, learner workbook and final evidence are all demonstrably aligned. During an audit, this visible alignment simplifies proving that the RTO's training and assessment are valid, sufficient and meet industry standards. |
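The behavioural evidence described in the second row lends itself to a simple, append-only process log. The sketch below assumes a flat CSV file and illustrative field names; an RTO's actual evidence system will differ, but the shape of the record is the point.

```python
import csv
import os
from datetime import date

# Illustrative field names for a process evidence log.
FIELDS = ["date", "learner_id", "search_query", "source", "decision", "reasoning"]

def log_evidence(path, record):
    """Append one evidence record, writing the header on first use."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

log_evidence("evidence_log.csv", {
    "date": date.today().isoformat(),
    "learner_id": "L0042",                                   # hypothetical learner
    "search_query": "forklift pre-start checklist standard",
    "source": "manufacturer operator manual",
    "decision": "accepted",
    "reasoning": "primary source, matches the model in the scenario",
})
```

A log like this turns "show your workflow" from an instruction into an artefact that the assessor and the auditor can both read.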
Trainer capability that scales without burning teams out
RTOs cannot prepare for DigComp 3.0 by asking every trainer to become an expert in every new tool. The focus should be on shared understanding of the framework, on purposeful task design, on feedback that builds judgment, and on ethical decision-making when new tools are introduced. High leverage supports include sample assessment briefs written in DigComp language, model prompts and reflection templates that reveal process as well as product, accessibility checklists that ensure materials work for all learners, and moderation guides that help assessors recognise performance at the appropriate proficiency level. Building this common kit reduces the pressure on individuals and creates a repeatable practice that survives staff turnover.
A phased transition plan that respects real delivery constraints
A three-phase approach fits normal quality cycles. The first phase is diagnostic, where a representative set of programs is reviewed to identify which DigComp competences and levels already appear in current tasks. Many units will be closer than expected, because everyday workplace practice has been digital for years. The second phase is design, where a small number of units or clusters are rewritten as exemplars that pair training package outcomes with DigComp competences and levels in their learning outcomes and assessment briefs. The third phase is evaluation, where learner work, moderation records and employer feedback are used to refine the approach and decide where to extend next. This work can be documented through existing continuous improvement processes, which reduces administrative friction and gives auditors clear evidence of deliberate transition.
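The first, diagnostic phase can be partly automated. Where assessment briefs exist as text files, a rough scan for competence names already in use gives a starting coverage picture. The folder path and the abbreviated competence list below are assumptions for illustration; a real diagnostic would still need human review of what the matches mean.

```python
from pathlib import Path

# Abbreviated competence list for illustration; the full framework has 21.
COMPETENCES = [
    "Browsing, searching and filtering data",
    "Evaluating data, information and digital content",
    "Collaborating through digital technologies",
    "Protecting personal data and privacy",
]

def coverage_report(brief_dir):
    """Map each competence to the brief files that already mention it."""
    hits = {c: [] for c in COMPETENCES}
    for path in Path(brief_dir).glob("*.txt"):   # assumed file layout
        text = path.read_text(encoding="utf-8").lower()
        for c in COMPETENCES:
            if c.lower() in text:
                hits[c].append(path.name)
    return hits

for competence, files in coverage_report("briefs").items():
    print(f"{competence}: {', '.join(files) if files else 'NOT YET REFERENCED'}")
```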
Technology and infrastructure choices that will not age badly
The aim is not to chase every new platform; it is to create stable conditions for authentic tasks. Cloud-based learning environments allow teams to update materials quickly when DigComp 3.0 lands. Assessment systems that can accept multiple evidence types and capture process as well as product make AI-assisted work visible and assessable. Learning analytics, used ethically and with transparency, help trainers spot who is stuck and who needs extension. Virtual and augmented reality can support safe practice in high-risk environments when scenarios are tied explicitly to training package outcomes and DigComp competences. Each of these choices supports the same outcome, which is to make judgment, collaboration and problem-solving visible in evidence, not just implied in a final submission.
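Used transparently, even very small analytics can do the "who is stuck" work described above. The sketch below flags learners whose most recent evidence submission falls outside a set window; the records and the fourteen-day threshold are illustrative assumptions, not a recommended policy.

```python
from datetime import date, timedelta

# Illustrative submission history: learner id -> dates of evidence submissions.
submissions = {
    "L0042": [date(2025, 5, 1), date(2025, 5, 20)],
    "L0077": [date(2025, 4, 2)],
}

def stalled(records, today, window_days=14):
    """Return learners with no submission inside the window."""
    cutoff = today - timedelta(days=window_days)
    return [learner for learner, dates in records.items() if max(dates) < cutoff]

print(stalled(submissions, date(2025, 6, 1)))  # ['L0077']
```

The ethical requirements stand regardless of scale: learners should know what is collected, why it is collected, and what a flag triggers, which should be support rather than sanction.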
Partnering with industry and regulators to build trust
Industry wants graduates who can apply tools with judgment, not simply operate software. Using DigComp language with advisory committees helps providers describe capability in terms that employers can recognise and test. Regulators are also signalling what they will look for as online delivery and digital assessment evolve. ASQA’s guidance makes clear that standards apply across delivery modes, that online students must receive the same quality outcomes as on-campus cohorts, and that providers should plan online training and assessment with integrity and support in view. Connecting these threads gives RTOs a defensible position. The competence is named, the level is stated, the evidence is authentic, and the support is visible.
Equity and inclusion as non-negotiable design principles
Digital competence agendas will fail if they widen existing gaps. The national Digital Literacy Skills Framework provides a reminder that foundation skills and participation are the starting points for fair design. Providers should keep tasks accessible on common devices, ensure captions and alternative text are standard, provide offline or low-bandwidth pathways where possible, and make explicit how privacy and confidentiality will be protected when digital tools are used in assessment. As AI becomes part of tasks, learners should be taught to question outputs, to check for bias, and to bring local knowledge to bear when tools are trained on data from other contexts. These practices are not extras; they are the conditions under which competence can be claimed with integrity.
Measuring impact, not just adoption
Counting mapped units is not the same as building capability. Providers should track progression across selected competences, analyse where learners struggle to move from tool operation to judgment, and ask employers for structured feedback about observed digital behaviours in the first months of employment. Over time, this evidence shows whether the new assessment briefs and learning activities are producing graduates who can locate reliable information, collaborate with purpose, create responsibly, protect people and data, and solve problems in contexts where AI is part of the workflow. As DigComp 3.0 introduces clearer outcome statements, consistency in measurement will improve, and continuous improvement can be anchored in specific competence gains rather than general satisfaction measures.
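In practice, "specific competence gains" can be reported as simply as a baseline level against the latest assessed level for each competence. The records below are illustrative; the point is that the unit of reporting is a competence and a level movement, not a satisfaction score.

```python
# Illustrative progression records: baseline and latest assessed DigComp levels.
records = [
    {"competence": "1.1 Browsing, searching and filtering data", "baseline": 2, "latest": 4},
    {"competence": "5.2 Identifying needs and technological responses", "baseline": 3, "latest": 3},
]

for r in records:
    gain = r["latest"] - r["baseline"]
    flag = "progressing" if gain > 0 else "review the task design"
    print(f'{r["competence"]}: level {r["baseline"]} -> level {r["latest"]} ({flag})')
```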
Timeline and immediate next steps for 2025
The Joint Research Centre’s public timeline gives Australian providers a practical window to act. Over the coming teaching cycles, RTOs can complete diagnostics, refactor a set of exemplar units, build trainer capability in framework interpretation and ethical AI use, and engage advisory committees to test whether learners can demonstrate the competences and levels stated in briefs. Consultation notices are open to international stakeholders, which means Australian voices can contribute to the detail rather than accepting it passively. When the final text arrives, well-prepared providers will be adjusting details rather than rebuilding from scratch, and auditors will see a clear arc from policy to practice.
The CAQA view, and the promise behind the mechanics
As auditors, consultants and educators, we see frameworks as tools that help people learn well and work well. DigComp gives a shared vocabulary and a progression ladder that make planning and evidence simpler to discuss across classrooms, boardrooms and audit rooms. Australia’s foundation frameworks give the local grounding that protects equity and participation. Together they allow RTOs to design learning that is flexible, accessible and credible. The task in front of us is to use the months before publication to get our own house in order, to write briefs that name and level competences, to support trainers with practical artefacts, and to invite employers into conversations where capability is described in plain terms. If we do that, learners will not just be ready for a new document, they will be ready for workplaces where judgment and responsibility matter as much as speed and scale. That is the point of DigComp 3.0, and it is why the work belongs on the agenda now.
