Artificial Intelligence Literacy: Skills, Challenges, and Opportunities
This post reflects on the essential skills for AI literacy, highlighting the ethical challenges and opportunities generative AI (GAI) presents. From promoting critical thinking to fostering creativity, we discuss how AI tools can enhance personalized learning while addressing concerns such as data security, privacy, and algorithmic bias.
Felipe Aristimuño
3/26/2025 · 4 min read


Generative artificial intelligence (GAI) is transforming society and education worldwide, bringing both innovative opportunities and ethical challenges. Tools such as ChatGPT, Microsoft Copilot, and Google Gemini can generate diverse content, ranging from text and images to programming code and audio. The growing impact of GAI in educational environments has sparked critical reflection on the new skills required for AI literacy, as well as on its ethical and age-appropriate use by students.
In this context, different countries and international organizations have discussed how to integrate GAI into education effectively. In Australia, the "Australian Framework for Generative AI in Schools" emphasizes transparency, equity, and data security, and highlights the need to teach students how to recognize and avoid algorithmic biases. The framework also warns about the risk of algorithmic discrimination and underscores the importance of ensuring that GAI tools promote balanced exposure to diverse ideas and perspectives.
In Canada, the report "Responsible AI and Children" highlights the ethical risks of using GAI with younger audiences, stressing the need to protect children's privacy and rights, especially when considering cognitive differences across age groups. This concern is also evident in the United States, where states such as Georgia, California, and Colorado have created guidelines that emphasize protecting personal data and academic integrity, warning about the risks of inadvertently introducing sensitive information into AI systems.
In the United Kingdom, there is a strong call to empower teachers and students with a critical understanding of GAI, emphasizing the need to adapt pedagogical practices to include the ethical use of these tools. UNESCO, for its part, advocates for a human-centered approach, promoting values such as inclusion, equity, and cultural diversity. The European Union highlights data protection and security as key pillars for the responsible integration of GAI in education, particularly when dealing with systems classified as "high-risk."
Based on these international frameworks, we propose a benchmark for the development of AI literacy, which includes essential skills and challenges identified in these documents:
For students aged 13 to 18 (upper secondary education), understanding the basic principles of AI, including how algorithms, neural networks, and machine learning systems work, is essential. It is also important for students to learn how to use generative AI tools critically and creatively, developing skills such as prompt engineering to achieve more effective results. Critically evaluating generated content, identifying potential biases or inaccuracies, is also crucial, as is recognizing that these tools lack human consciousness and ethical judgment.
Ethics and responsibility in AI usage form another central aspect of AI literacy. Teaching young people to understand the risks related to privacy, security, and academic integrity is crucial to ensuring they use these technologies transparently and responsibly. Moreover, students should be aware of how AI impacts society, potentially reinforcing biases or promoting inclusion and diversity, depending on how it is implemented.
Among the most significant challenges for AI literacy is the ability of these tools to generate responses that appear “humanized,” which can confuse users and make it difficult to distinguish between AI and human intelligence. This ambiguity in interacting with AI-generated responses raises questions about whether the excessive use of these technologies could weaken students' critical thinking and creativity.
Considering that access to AI tools by minors under 18 is regulated through parental controls, it is essential to reflect on how to ensure that all students have equal access to these technologies. Some families may restrict access for ethical, security, or economic reasons, or out of fear or a lack of understanding of AI's potential. This scenario requires constant dialogue between schools and families so that such limitations do not result in new forms of exclusion.
Yet it is precisely because of GAI's potential benefits that ensuring equitable access in schools is so important. This technology can be a powerful ally in personalized learning, adapting content to each student's pace and style. GAI tools also encourage creativity and innovation, facilitating the creation of multimedia content and promoting new forms of artistic expression. Furthermore, GAI offers inclusive solutions that can be particularly beneficial for students with special educational needs. AI literacy is therefore essential for young people to understand this technology and use it critically and responsibly. By balancing the benefits with the challenges GAI presents, it is possible to ensure the safe, inclusive, and innovative integration of artificial intelligence in education.
References
Australian Government Department of Education. (2023). Australian Framework for Generative Artificial Intelligence (AI) in Schools.
Canada CIFAR. (2024). Responsible AI and Children: Insights, Implications, and Best Practices.
Innovation, Science and Economic Development Canada. (2022). Learning Together for Responsible Artificial Intelligence: Report of the Public Awareness Working Group.
Georgia Department of Education. (2025). Leveraging AI in the K-12 Setting: Ensuring the ethical, effective, and secure use of AI tools and systems in Georgia’s schools.
TeachAI. (2024). AI Guidance for Schools Toolkit.
California Department of Education. (2023). Artificial Intelligence: Learning with AI, Learning about AI.
Colorado Education Initiative. (2024). Colorado Roadmap for AI in K-12 Education: Guidance for Integrating AI into Teaching and Learning.
Washington Office of Superintendent of Public Instruction. (2024). Human-Centered AI Guidance for K–12 Public Schools.
Wisconsin Department of Public Instruction. (2024). AI Guidance for Enhancing K-12 and Library Education.
Delaware Department of Education. (2024). Generative AI in the Classroom Guidance.
European Commission. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for Educators.
Centre for Global Higher Education. (2023, October 19). Navigating opportunities and challenges of AI: Jisc's national centre for AI in tertiary education [Video]. YouTube.
UK Government. (2025, January 22). Generative artificial intelligence (AI) in education. GOV.UK.
UNESCO. (2023). Guidance for generative AI in education and research.
Australian Human Rights Commission. (2023, July 14). Utilising ethical AI in the Australian Education System: Submission to the Standing Committee on Employment, Education and Training.