Why I applaud AILit, and why I’m asking for more
The AILit Framework is a genuine step-change. Co-developed by the European Commission, OECD and Code.org, it positions AI literacy as a lifelong foundation rather than a one-off lesson, and it organises the work around four powerful domains: Engaging with AI, Designing AI, Creating with AI, and Managing AI. For schools, that structure is exactly what was missing: a shared language, a cross-curricular approach, and a roadmap teachers can actually use.
But if we stop there, we leave out millions of learners and educators in vocational education and adult learning, where AI is already changing the work, the tools, and the rules. Apprentices, trainees, trainers and assessors don’t just need to understand AI; they need to deploy and govern it safely, navigate regulation, protect well-being, and translate literacy into employability and enterprise value. That is why I’m arguing constructively, and with deep respect for AILit’s ambition, for a broader, VET-ready extension.
Below, I set out ten additional domains that, alongside AILit’s four, make AI literacy truly comprehensive for lifelong and vocational contexts. I also highlight equity and implementation gaps, then finish with a practical blueprint that RTOs and systems can put to work immediately.
Ten domains that complete the picture for VET and lifelong learning
Ethical and Values Literacy
AI is not value-neutral. In VET, ethical trade-offs show up in assessment integrity, surveillance risks on worksites, and the fairness of automated screening or rostering. Learners need structured practice in moral reasoning, stakeholder mapping, and value alignment so they can justify choices, challenge harmful defaults, and uphold professional codes. For assessors, this domain anchors clear AI-use declarations, authentic assessment design, and proportionate responses to misconduct.
Legal and Policy Literacy
Vocational practice sits inside compliance ecosystems: privacy, WHS, consumer law, IP, sectoral codes, and emerging AI regulation. Literacy must cover rights, duties, contracts, consent, copyright, data retention, and audit trails. Apprentices should graduate able to spot a risky integration; managers should be able to read a vendor DPIA; trainers should know what counts as valid evidence when AI is in the loop.
Digital Wellbeing and Mental Health
AI can amplify overreliance, anxiety, attention fragmentation, and exposure to manipulation or misinformation. Literacy must include self-regulation strategies, evidence-based guidelines for screen-use hygiene, and escalation pathways when tools affect judgement, safety or confidence. In apprenticeship models, that means normalising reflective check-ins about tool use, not just technique.
Economic and Workforce Awareness
AI is redrawing work: task bundles are shifting, new roles are appearing, and productivity is moving from individual effort to human-AI orchestration. Learners need labour-market awareness, job redesign basics, and career resilience skills: how to map tasks to tools, measure benefits, and keep skills current. For RTOs, this domain links AI literacy with employability, enterprise value and local industry demand.
Cultural and Linguistic Adaptation
Bias and representation are not abstract. Tools trained on global data may miss local idiom, First Nations knowledge systems, or multilingual realities. Literacy here means contextualisation: learners test models for cultural fit, co-design prompt libraries with community input, and learn when to switch tools, or switch off, to avoid harm.
Environmental Impact Awareness
AI has a footprint (energy, water, e-waste) and also a role in sustainability (optimising logistics, reducing rework, monitoring assets). Vocational learners should be able to estimate footprint, compare tool choices, and design low-impact workflows. For trades and operations, that’s concrete: schedule inference locally vs. in the cloud, specify device lifecycles, and plan responsible decommissioning.
Data Literacy
Every credible AI workflow stands on data: purpose, provenance, quality, governance, and stewardship. This domain upgrades generic “data skills” to privacy-by-design, consent, de-identification, versioning, lineage, and access control. In assessment, it clarifies what counts as legitimate evidence and how to maintain traceability without over-collecting.
Emotional and Social Intelligence
Human work depends on empathy, negotiation, de-escalation, and clear communication: capabilities AI simulates but does not own. Literacy means practising human-AI collaboration etiquette, recognising when to hand decisions back to people, and communicating limits and uncertainties to clients, patients, or teammates.
Pedagogical and Teaching Literacy
Teachers, trainers and assessors need instructional design patterns for AI: aligning tools to outcomes, scaffolding promptcraft, designing orals/vivas and live demonstrations to protect authenticity, and writing rubrics that reward process evidence. This is the bridge between AILit’s ideas and week-to-week delivery in TAFE and RTOs.
Interconnectedness and Systems Thinking
AI is a systems phenomenon, touching supply chains, platforms, standards, and public institutions. Learners should map dependencies and risks (e.g., an agent that books transport, invoices clients, and updates safety logs) and design fail-safes, hand-offs, and accountability. For operations managers, this domain underpins business continuity and incident response.
What AILit gets right, and the gaps we must close
Equity and context inclusion
AILit’s intent is inclusive, but its expert base has limited representation from Africa, Southeast Asia, the Middle East, and Indigenous communities. Without sustained co-design, classroom scenarios and exemplars risk global-north defaults that don’t reflect linguistic, cultural or resource realities. For Australia, that means an explicit commitment to First Nations partnership, multilingual resources, and low-bandwidth options, designed in, not tacked on.
VET and adult-learning alignment
AILit’s strongest materials target primary and secondary schooling. VET needs work-as-learning scenarios: apprenticeship day books augmented with AI logs, safety cases where AI is part of the hazard and the control, and RPL processes where AI helps gather, but never decides, evidence. Adult learners require modular, just-in-time micro-learning that maps directly to tools and tasks in their workplace.
Implementation and assessment
Frontline educators ask for templates, not just principles. We need:
- Unit blueprints that show exactly where AI enters the task, what disclosure looks like, and how viva/observation secures authenticity.
- Assessment models that privilege workflow evidence (prompts, iteration logs, testing notes, decisions taken) alongside artefacts.
- Validated rubrics that score judgment, not just output polish, so learners can pass by demonstrating process mastery and professional reasoning.
Staff capacity and professional learning
Digital confidence varies widely. Trainers and assessors need sector-specific case studies, prompt libraries, safety checklists, and change-management support. PL must be ongoing and job-embedded: brief “learn-do-share” cycles, coaching, and moderated exemplars, backed by policies that enable responsible classroom use rather than drive it underground.
Policy and regulatory integration
If AI literacy isn’t aligned with standards, funding, reporting and procurement, adoption fragments. RTOs need clear crosswalks from literacy outcomes to compliance artefacts (assessment validation, academic integrity, privacy, and risk registers), and procurement criteria that enforce privacy, accessibility, transparency, and auditability from vendors.
A practical blueprint RTOs can implement this year
1) Map and extend the curriculum
Start with AILit’s four domains and overlay the ten above. Identify two to three high-leverage units per program where AI can authentically improve learning or workflow. Publish a shared glossary, disclosure norms, and a simple “trust but verify” checklist learners carry between classes and worksites.
2) Re-tool assessment for authenticity
Make process evidence non-negotiable: prompts, drafts, model checks, and decision rationales accompany submissions. Introduce short orals/vivas or live demonstrations for at least one assessment per unit. Use triangulation (artefact + process trail + observation) to reduce false positives and protect genuinely original student work.
3) Build a micro-credential stack for staff
Offer three bite-sized micro-credentials that stack for internal recognition:
- AI-Ready Educator: policy, disclosure, basic promptcraft, integrity by design.
- AI-Ready Assessor: assessment validation with AI in the loop, viva techniques, evidence standards.
- AI-Ready Program Lead: risk registers, procurement, data governance, and change leadership.
Each micro-credential includes exemplars from local industries.
4) Govern for safety and trust
Stand up an AI Governance Register: approved tools, permitted uses, data locations, known risks, mitigations, and review dates. Link it to your risk register and academic integrity processes. Require human-in-the-loop for every high-stakes decision. Publish plain-English guidelines for learners, employers and staff.
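A governance register can start life as simple structured data long before it becomes a formal system, which makes the review cycle easy to automate. The sketch below is a minimal, illustrative model of register entries with a check for overdue reviews; the field names and the example tool are assumptions for illustration, not a prescribed schema or a real product assessment.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegisterEntry:
    """One approved tool in a hypothetical AI Governance Register."""
    tool: str
    permitted_uses: list[str]
    data_location: str
    known_risks: list[str]
    mitigations: list[str]
    review_date: date
    human_in_the_loop: bool = True  # required for high-stakes decisions

def overdue_reviews(register: list[RegisterEntry], today: date) -> list[str]:
    """Return the tools whose scheduled review date has passed."""
    return [entry.tool for entry in register if entry.review_date < today]

# Hypothetical entry, illustrating the kind of detail worth recording
register = [
    RegisterEntry(
        tool="ExampleDraftAssistant",
        permitted_uses=["brainstorming", "formative feedback"],
        data_location="AU region, vendor cloud",
        known_risks=["hallucinated citations"],
        mitigations=["verification checklist", "disclosure statement"],
        review_date=date(2025, 6, 30),
    )
]

print(overdue_reviews(register, today=date(2025, 12, 1)))  # → ['ExampleDraftAssistant']
```

Linking a structure like this to the institutional risk register is then a matter of exporting entries, not re-keying them.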
5) Procure what you can defend
Adopt procurement criteria that mandate privacy-by-design, accessibility (WCAG 2.2), data minimisation, exportable logs, and explainability. Insist on model change notifications and the ability to reproduce decisions for audit. Where vendors can’t meet the bar, do not deploy in assessment-relevant contexts.
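Criteria like these are easiest to enforce when they are written down as an explicit go/no-go gate rather than left in a policy document. Below is a minimal sketch of such a gate; the criterion names mirror the paragraph above, but the boolean vendor-profile format is an assumption made for illustration.

```python
# Mandatory criteria from the blueprint above; names are illustrative.
REQUIRED = [
    "privacy_by_design",
    "wcag_2_2_accessible",
    "data_minimisation",
    "exportable_logs",
    "explainability",
    "model_change_notifications",
    "reproducible_decisions",
]

def assessment_safe(vendor_profile: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (deployable in assessment-relevant contexts, failed criteria)."""
    failed = [c for c in REQUIRED if not vendor_profile.get(c, False)]
    return (not failed, failed)

# Hypothetical vendor that meets every criterion except explainability
ok, gaps = assessment_safe({c: True for c in REQUIRED} | {"explainability": False})
print(ok, gaps)  # → False ['explainability']
```

The useful property of a gate like this is that a missing criterion defaults to a failure: a vendor that declines to answer is treated the same as one that answers no.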
6) Localise for equity
Co-design with First Nations partners and community organisations. Provide low-bandwidth pathways, language support, and culturally responsive examples. Equip learners with scripts and strategies for informed refusal; knowing when not to use AI is part of competence.
7) Measure what matters
Track practice, capability, and culture. In practice, look for routines (disclosure, verification, risk notes) appearing across classes. In capability, sample portfolios that show process mastery and improvement over time. In culture, monitor incident trends, staff confidence, and learner satisfaction. Share results and iterate.
What this adds to AILit’s four domains, without breaking them
The result is not a rival to AILit but a VET-ready extension that honours its architecture and answers the realities of adult learning and work. Engaging with AI gains ethical, cultural, environmental and economic depth, so learners move beyond surface familiarity to informed judgment about AI’s impacts and responsibilities.
Designing AI gains data stewardship and legal-policy awareness, so what students build respects individual rights, protects privacy, and stays inside regulatory and contextual boundaries: innovation balanced with accountability.
Creating with AI gains pedagogical discipline and refined human skills, turning co-creation into accountable collaboration in which judgment, communication, and problem-solving develop alongside technological proficiency.
Managing AI gains governance, wellbeing, and systems thinking, so risk management becomes a practised routine rather than a theoretical exercise.
The call to action
AI literacy will define employability, quality, and equity in the next decade. AILit gives schools and systems the spine they need; VET and lifelong learning need more muscle on that spine: ethics, law, wellbeing, workforce, culture, environment, data, human skills, pedagogy, and systems thinking.
If we build these domains into programs, assessments, procurement and professional learning now, AI literacy becomes exactly what the moment demands: a universal, lifelong capability that helps people navigate, critique, and shape a world where AI is everywhere, and where human judgment still matters most.
