
Open Access 03.05.2024

Primary school students’ perceptions of artificial intelligence – for good or bad

Author: Susanne Walan

Published in: International Journal of Technology and Design Education


Abstract

Since the end of 2022, global discussions on Artificial Intelligence (AI) have surged, influencing diverse societal groups, such as teachers, students and policymakers. This case study focuses on Swedish primary school students aged 11–12. The aim is to examine their cognitive and affective perceptions of AI and their current usage. Data, comprising a pre-test, focus group interviews, and post-lesson evaluation reports, were analysed using a fusion of Mitcham’s philosophical framework of technology with a behavioural component, and the four basic pillars of AI literacy. Results revealed students’ cognitive perceptions encompassing AI as both a machine and a concept with or without human attributes. Affective perceptions were mixed, with students expressing positive views on AI’s support in studies and practical tasks, alongside concerns about rapid development, job loss, privacy invasion, and potential harm. Regarding AI usage, students initially explored various AI tools, emphasising the need for regulations to slow down and contemplate consequences. This study provides insights into primary school students’ perceptions and use of AI, serving as a foundation for further exploration of AI literacy in educational contexts, and offers considerations for policymakers, who should listen to children’s voices.

Introduction

During the last year, a new socioscientific issue (SSI) has become one of the most discussed topics in society and in all kinds of organisations, not least in education. Although artificial intelligence (AI) has existed for quite some time, the release of ChatGPT to the public in November 2022 increased awareness of AI. In a short period, ChatGPT became a tool used worldwide. According to different websites (e.g., Shewale, 2023), more than 180 million users had utilised ChatGPT, provided by OpenAI, between its release and December 2023.
In the media, at least in Sweden, there have been reports almost daily during the last year about the technological revolution of AI, with comments about its benefits as well as its expected dangers. From an international perspective, it is an important issue, and leaders in society have voiced the need for regulations on AI development. For instance, the prime minister of the UK invited leaders from all over the world to a safety summit about AI in November 2023 to discuss international regulations (gov.uk, 2023). During the summit, a first agreement was signed by representatives from companies and governments from the 28 participating countries. More recently, the European Union has decided on an AI act to regulate the use of AI (European Council, 2023).
Many questions arise about safety and worries about jobs being lost, but also about how AI can support us in many ways, maybe even helping us to find solutions to the climate crisis. However, so far, there seem to be few, if any, studies reporting on how young people perceive AI. They are the ones who will live in the future, where AI is likely to have an even greater impact on society than today. The UN Convention on the Rights of the Child was adopted by the UN General Assembly at the end of 1989 and entered into force in September 1990. The Convention on the Rights of the Child (United Nations, 1989) is a legally binding international agreement stating that children are individuals with their own rights, not the possessions of parents or other adults. It contains 54 articles, all of which are equally important and form a whole. However, four basic principles must always be considered when dealing with matters concerning children:
Article 2) All children have the same rights and equal value.
Article 3) The best interests of the child must be taken into account in all decisions concerning children.
Article 6) All children have the right to life and development.
Article 12) All children have the right to express their opinion and have it respected.
It could be argued that, for example, Article 3 is of interest when making decisions about AI. UNICEF and the World Economic Forum also claim that AI will impact children in many ways, and they ask for partners to build solutions that uphold child rights and take into account opportunities as well as risks in the future AI age (UNICEF, 2023). As early as 2001, Shier argued that children should take part in decision-making, based on the Convention on the Rights of the Child. He proposed a model in different steps: first, that children are listened to; second, that they are supported in expressing their views; third, that their views are taken into account; fourth, that they are involved in decision-making; and finally, that they share power and responsibility for decision-making.
Hence, to consider children and to listen to their voices about AI, I have in this study focused on how young people perceive AI, and more specifically, what Swedish primary school students aged 11–12 know and think about AI. Since public use of AI has also increased during the last year, it was also of interest to find out whether young people already use AI at primary school level, and if so, how. The following research questions were posed:
What are primary school students’ cognitive and affective perceptions of AI?
If primary school students already use AI, how do they use it?

Background – AI history in short, from launch to being part of education

Even though AI is on the agenda all over the world, it is not necessarily well understood; therefore, a brief overview of what it is and a short history of its development are presented as follows.
AI was introduced already during the 1950s, and one proposed definition was:
every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. (McCarthy et al., 1955, p. 2)
Other examples of definitions of AI presented in research include its characterisation as a specialised field within computer science. This field is dedicated to the development of smart machines capable of executing tasks that generally necessitate human intellect, including but not limited to visual understanding, voice recognition, decision-making processes, and translating languages (Russell & Norvig, 2009; Marcus & Davis, 2019). Machine Learning (ML), on the other hand, is a specific area within AI that emphasises the creation of algorithms and statistical models that allow machines to progressively enhance their performance on a particular task by learning from data, without the need for explicit programming (Brauner, Hick, Philipsen & Ziefle, 2023).
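To make this distinction concrete, the following minimal sketch (added here purely for illustration and not drawn from any of the cited works; all data and parameter values are invented) shows the core idea of ML: a tiny linear classifier whose decision rule is never written by hand but is instead adjusted from labelled examples.

```python
# Minimal, illustrative sketch of machine learning: the decision rule is
# learned from labelled examples rather than explicitly programmed.

# Invented training data: (feature1, feature2) -> class 0 or 1.
training_data = [
    ((1.0, 1.5), 0), ((0.5, 1.0), 0), ((1.2, 0.8), 0),
    ((3.0, 3.5), 1), ((3.5, 2.8), 1), ((2.8, 3.2), 1),
]

# A simple linear model: predict class 1 if w1*x1 + w2*x2 + b > 0.
w1, w2, b = 0.0, 0.0, 0.0

# Perceptron-style learning: nudge the weights whenever a prediction is wrong.
for _ in range(20):  # a few passes over this small data set are enough
    for (x1, x2), label in training_data:
        predicted = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        error = label - predicted  # -1, 0 or +1
        w1 += 0.1 * error * x1
        w2 += 0.1 * error * x2
        b += 0.1 * error

# The rule separating the two classes was never written by hand; it emerged
# from the data. Classify a new point near the second cluster:
print(1 if (w1 * 3.2 + w2 * 3.0 + b) > 0 else 0)  # prints 1
```

The same principle, scaled up to vastly more parameters and examples, underlies the AI tools discussed throughout this article.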
A well-known example from the early days of AI is the chatbot ELIZA, which was created during the 1960s (Potts et al., 2021). This chatbot could converse with humans and has been described as the first program able to pass the Turing test, signifying that it could engage in conversation in an intelligent and natural way (Haenlein & Kaplan, 2019). However, much development has taken place since then. Another well-known example is from 2015, when Google’s AI managed to defeat a professional champion in the board game Go (Haenlein & Kaplan, 2019). Nowadays, AI is used in many fields such as voice assistants, text and image generation, self-driving cars, human-robot interactions, healthcare, etc. (e.g., Corea, 2019; Kulida & Lebedev, 2020; Onnasch & Roesler, 2020; Su & Yang, 2022).
The launch of ChatGPT in late 2022 caused a lot of debate in the education sector. Globally, the initial apprehension was that students might exploit ChatGPT and similar AI tools to cheat on their assignments, thereby devaluing the significance of learning evaluation, certification, and qualifications (Anders, 2023). Some educational institutions prohibited the use of ChatGPT, while others cautiously embraced the new technology (Tlili et al., 2023). Numerous schools and universities, for example, adopted a forward-thinking stance, asserting that instead of trying to ban such tools, students and staff should be guided to use AI tools effectively, ethically, and transparently (Russell Group, 2023). UNESCO (2023) has presented its Guidance for generative AI in education and research to support educators, students, and researchers in dealing with access to this kind of AI in education. Furthermore, the guidance suggests the development of suitable rules and policies and recommends crucial measures for government bodies to control the application of generative AI. It also introduces models and specific instances for policy creation and instructional planning that allow the ethical and efficient utilisation of this technology in education. Lastly, it urges the global community to ponder the deep, long-term effects of generative AI on our comprehension of knowledge and the determination of educational content, techniques, and results, as well as our approach to evaluating and authenticating learning (UNESCO, 2023).
From a science education perspective, the use of AI was presented in the review by Jia et al. (2024). They concluded that AI has played an important role in science teaching and learning, especially in the early stages of education. However, despite the thoroughness of their review, no articles were included that had actually investigated primary school students’ perceptions of AI from cognitive and affective perspectives. Rather, the studies reported on the use of AI to stimulate learning and on how it affected attitudes towards technology.
In addition to the findings from the review by Jia et al. (2024), it has been argued that children need to learn about AI. Yang (2022) argued that children should learn about AI already from the early years and suggested a curriculum design including why, what and how this could be implemented. The same idea was highlighted by Holmes et al. (2019). They discussed AI education and argued that it should be classified into either Learning with AI or Learning about AI. The latter is the focus of this study, which tries to find out what primary school students know about AI, but also their attitudes towards AI.

AI literacy in education

What kind of knowledge do students need to understand AI? Based on a review of publications about AI in education and literacy, Ng et al. (2021) concluded that four parts should serve as the basis for AI literacy in education, namely to:
  • know about and understand how AI works.
  • be able to use and apply AI.
  • evaluate and create AI.
  • consider AI and ethics.
Based on this definition of AI literacy, the previous arguments about the need to include AI in education, not only as tools, but also for students to learn about AI, are reasonable. This is of importance not only for young students, but also for the public, as will be presented in the following section.

Public perceptions of AI

As indicated in the introduction, several questions have been raised about the use of AI, and some think that besides the opportunities, there are also risks. Some examples of public perceptions of AI are presented as follows. The World Economic Forum (2022) reported that the areas where people think AI improves their lives are mainly education, entertainment and transportation. When it comes to public perceptions of AI as dangerous, researchers (Hick & Ziefle, 2022) have argued that public perceptions of AI are in some respects influenced by science fiction movies with intelligent robots that take over the world. Less dramatic, but still perceived as problematic, is the fear of replacement and the risk of people losing their jobs (Smith & Anderson, 2014). This is, of course, a real possibility, but it is also argued that new jobs will be created (World Economic Forum, 2023). In addition, Brauner et al. (2023) found in their study that people see both benefits and possible dangers with AI. The participants in their study were not worried about their future on the labour market. Another finding in their study was that people think it is good that AI is not influenced by emotions and hence more trustworthy. This was also found by Cismariu and Gherhes (2019) and Liu and Tao (2022). Finally, Brauner et al. (2023) argued that education about AI is necessary for the general public, to enable people to evaluate the benefits and barriers of AI.

Theoretical framework

Mitcham’s philosophical framework of technology (1994) has been used by several researchers in technology education (e.g., Ankiewicz, 2019; Blom & Abrie, 2021; Su & Ding, 2022; Svenningsson, 2020). This framework presents technology in four different manifestations:
1. Objects: Technology as material objects, ranging from kitchenware to computers.
2. Knowledge: This includes recipes, rules, theories, and intuitive “know-how”.
3. Activities: This involves the design, construction, and use of technological objects.
4. Volition: This pertains to knowing how to use technology and understanding its consequences.
The studies by Blom and Abrie (2021), Su and Ding (2022), and Svenningsson (2020) all used Mitcham’s typology of technology to analyse students’ perceptions of technology. They found that students often have a limited understanding of technology, primarily associating it with objects and activities. This understanding often overlooks the aspects of knowledge and volition in technology. However, there are variations across different contexts. For instance, while South African and Swedish students frequently associated technology with modern electrical objects, Chinese students described technology from various aspects, including its features, production, function, operation, and use. Despite these limited perceptions, the studies suggest that students have the potential to describe technology more comprehensively using all four aspects of Mitcham’s typology. This indicates a need for educational interventions to broaden students’ understanding of technology beyond just objects and activities.
One example of the use of Mitcham’s framework has been presented by Ankiewicz (2019). He worked on developing Mitcham’s framework further and argued that a behavioural component, covering students’ attitudes towards technology, needed to be added. In this study, I will use the developed model of Mitcham’s framework presented by Ankiewicz (2019), even though this is a qualitative study and the framework has previously mostly been used in quantitative research settings. Thereby, this study can serve as a new way of using the framework compared to how it has been used before.
To the best of my knowledge, the analysis of primary students’ perceptions of Artificial Intelligence (AI) is not a widely explored area, and there are no existing frameworks specifically designed for this purpose. One potential approach could be to employ the developed model of Mitcham’s framework as presented by Ankiewicz (2019). Another approach could be to use the concept of digital literacy, which has been extensively defined and utilised (Audrin & Audrin, 2022; Tinmaz, Lee, & Fanea-Ivanovici, 2022). However, the application of digital literacy as a theoretical framework presents challenges due to the multitude of definitions and the lack of explicit inclusion of affective aspects of individuals’ perceptions. An alternative strategy could be to adopt the AI literacy framework proposed by Ng et al. (2021), which is based on four fundamental components. It might also be feasible to integrate the model introduced by Ankiewicz (2019) with the foundational elements of AI literacy as outlined by Ng et al. (2021). Consequently, my intention is to utilise the model depicted in Fig. 1 for data analysis and discussion of the results in this study.

Method

Research context

In this case study, collaboration was established with a primary school where the science and technology teachers teaching grades five and six (students aged 11–12 years) were interested in working with AI as an SSI theme during some weeks from March to May 2023. The reasons for this interest were all the news about ChatGPT and the teachers’ own curiosity about learning about AI, but also an interest in exploring how AI could be taught to their students. The school is a compulsory school with students aged 6–12 years old. It is situated in a municipality in the middle of Sweden with about 12,000 inhabitants, and the school has about 430 students. Some of the teachers had previously been involved in research projects with a nearby university, and based on these already established contacts, the idea of conducting a case study about students’ perceptions of AI was agreed between the teachers and the researcher (author). Before starting any activities with the students, all ethical concerns were taken into account. Hence, information letters and consent forms were sent to the students and their parents. All of the students were allowed and willing to participate in the study. Students were informed that all of them would take part in all activities, but that it was not necessary to be involved in any data collection. Furthermore, information was also provided that the participants would be kept anonymous, that data would be safely stored, and that it was possible to withdraw consent to participate at any time during the study. All the ethical steps taken were based on the ethical guidelines for scientific research recommended by the Swedish Research Council (2017).
The next step was to find out what the students already knew and thought about AI. One of the teachers designed a pre-test with only five questions to get an idea of the students’ general starting point. The reason for making a test with only a few questions was that this could be enough to find out what the students already knew and thought about AI. The questions were about whether they had ever used any AI, their positive and negative thoughts about AI, and their explanations of what AI is, both in written text and as a drawing illustrating their understanding of AI. Thereafter, the teachers started activities with the students. The activities were inspired by lesson plans created by researchers at Mid Sweden University for the purpose of teaching students in this age group about AI. The lesson plans can be found on the website https://www.miun.se/mot-mittuniversitetet/samverkan/run/barnensuniversitet/ai/.
Unfortunately, this website is only accessible in Swedish. Therefore, a brief summary of the lesson plans is presented here in Table 1.
Table 1
Overview of lesson plans about AI

Lesson 1: Presentation of what AI is. Examples: recommendations on YouTube, TikTok and Instagram; virtual assistants such as Siri, Alexa, Google Assistant and ChatGPT; self-driving cars; face recognition; and Google Translate. Discussions about the good and bad sides of the examples.
Lesson 2: How a computer communicates. AI is based on algorithms. What an algorithm is. Machine learning. (A minimal illustration of these ideas is sketched after the lesson description below.)
Lesson 3: Ethical dilemmas.
Lesson 4: Human, machine or in between. Biohacking.
All of the lessons included different kinds of practical exercises, and the content from the lesson plans presented on the website was separated into several lessons. In total, there were 10 lessons, each lasting 40 min. After these lessons, the primary school students used ChatGPT to create practice questions ahead of tests they were going to have in different school subjects. The final activity for the students was a trip to a nearby university, where they spent one day on activities related to AI. For half of the day, they worked with a combination of ChatGPT and Dall-E to create stories. Here, they were trained in the importance of writing appropriate prompts. The students also met a person working at the university who is an expert in programming with a special interest in AI. He held a short lecture about AI, lasting about 15 min. For the other half of the day, the students worked on an art-based activity where, in groups of three to four, they were asked to create a collage presenting what kind of AI they would like to have in the future.
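As an illustration of the content of Lessons 1 and 2 (recommendation systems rest on algorithms that adapt to data), the following toy recommender is sketched here. It is purely hypothetical: the lessons themselves involved no programming, and all video titles, tags and the viewing history are invented for the example.

```python
# Toy recommendation algorithm (hypothetical example, not from the lessons):
# build a taste profile by counting the tags of watched videos, then rank
# unseen videos by how well their tags match that profile.
from collections import Counter

# Invented catalogue: video title -> set of tags.
videos = {
    "Football tricks": {"sport", "football"},
    "Goal compilation": {"sport", "football", "goals"},
    "Penalty shootout": {"sport", "goals"},
    "Cute kittens": {"animals", "cats"},
    "Dog fails": {"animals", "dogs", "funny"},
}

watched = ["Football tricks", "Goal compilation"]  # invented viewing history

# "Learning" step: the taste profile is derived from data, not hand-coded.
taste = Counter(tag for title in watched for tag in videos[title])

# Scoring step: a video scores higher the more its tags were watched before.
def score(title: str) -> int:
    return sum(taste[tag] for tag in videos[title])

unseen = [title for title in videos if title not in watched]
print(max(unseen, key=score))  # prints "Penalty shootout", a sport video
```

Real recommender systems use far richer signals and statistical models, but the principle the lessons introduced, an algorithm whose output depends on accumulated user data, is the same.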
At the end of the project, focus group interviews were held with 12 of the participating primary school students (one girl and one boy randomly picked from each of the classes). The focus groups lasted about 40 min each. The interviews were semi-structured, with the questions found in Appendix I.
In addition, at the end of the project, the teachers asked the students to write short evaluation reports, and 30 reports were sent to the researcher (author). In the reports, the students were requested to write their responses to two questions: What have you learnt about AI? and How do you feel about AI? The reason that only 30 reports were collected was that it was the end of the semester, and with many other things going on, such as national exams and the upcoming summer holiday, reminding students to write these evaluations was not a top priority.
The researcher also discussed with the teachers the possibility of giving the students the same test again as at the beginning, to compare whether there had been any changes in the students’ knowledge about and attitudes towards AI. However, given the many tests going on and the stress at the end of the semester, we agreed that the interviews and the evaluation reports that could be collected were sufficient as post-project data.

Participants, data collection and analysis

The participants of this study have already been mentioned in the research context. To clarify further, a total of 60 primary school students aged 11–12 years participated in the data collection. However, evaluation reports were only collected from 30 of these students due to practical issues, and 12 of the students participated in the focus group interviews that took place after the activities. The interviews were audio-recorded and transcribed. Quotes from students in the focus group interviews will be presented in the results section as “I” for interview, followed by a number 1–12. Quotes from the evaluation reports will be presented as “E” for evaluation report, followed by a number, hence E1–E30.
Data were analysed using two different approaches. The data from the pre-tests were mainly analysed using descriptive statistics, while data from the focus group interviews and the evaluation reports were analysed through the model presented in Fig. 1, hence using the fusion of the Ankiewicz (2019) model and the foundational elements of AI literacy as outlined by Ng et al. (2021).
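No analysis scripts are described in the study; the following hedged sketch only illustrates the kind of descriptive statistics used, computing percentages from coded pre-test answers. The answer labels and counts are invented stand-ins (chosen so that the percentages resemble those reported in the Results section), not published raw data.

```python
# Illustrative descriptive statistics on invented, coded pre-test answers.
from collections import Counter

# Hypothetical coded answers to "Have you ever used AI?" for 60 students.
answers = ["yes"] * 41 + ["no"] * 18 + ["don't know"] * 1

counts = Counter(answers)
total = len(answers)
for answer, n in counts.most_common():
    print(f"{answer}: {n} of {total} students ({100 * n / total:.0f}%)")
# yes: 41 of 60 students (68%)
# no: 18 of 60 students (30%)
# don't know: 1 of 60 students (2%)
```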
The data in this study consist of a pre-test, interviews, and evaluation reports. However, no research question was posed that compares the students before and after the activities. The purpose is not to evaluate whether and what students learn from the lessons but rather to obtain a richer picture of their perceptions of and experiences with AI. Data from the activity where the students worked with collages at the university are not included in this paper, since that particular activity focused more on the students’ ideas about the kind of support they wished for from AI in the future.

Results

Primary school students’ cognitive perceptions of AI

Data from the pre-test, presenting how the primary school students responded to the question of where it is possible to find AI in society, showed that 13% responded that they did not know. The rest of the students listed that AI can be found on the Internet, in cars, phones, social media, hospitals, and actually almost everywhere. The pre-test also asked the students to draw a picture showing what AI looks like. Even though 13% had responded that they did not know where to find AI in society, all of the students made drawings of either a laptop or a robot. Some students drew both. Two of the students included a picture of a brain in their drawings, a brain connected to a laptop. Examples of drawings are presented in Fig. 2.
Data from the focus group interviews showed that the students had similar cognitive perceptions of what AI is as in the pre-test. The students mentioned AI as an information tool that could be found both through Google and ChatGPT. They also mentioned Snapchat and self-driving cars, referring in particular to the brand Tesla. The ideas of AI being a robot, or of it being found in computers, were also presented during the interviews. One example given was AI robots being of service to people, for instance by doing the shopping if you are not able to go to the shop yourself because of illness. Some examples of students’ comments during the interviews on what AI is:
AI is not a real person that sits there and write when you chat. It is a robot. (I2)
Well, AI is like Google, Google translate and Snapchat has an AI, but it is not so good. (I7)
AI, it’s in self-driven cars, Musk you know, Tesla. (I8)
In addition, there were comments that referred to AI as a brain, a digital brain in computers, that is able to think for itself. One comment was that AI is more like a human; it can make up things on its own. These perceptions were more emphasised during the interviews than in the pre-test. The students also talked about AI as not having any feelings, in contrast to humans. Some examples from the students’ comments:
It’s [AI] like a human, it can make things up, on its own. (I5)
We have learnt a lot about how AI works. They [AIs] are very clever, but they don’t have any feelings. They don’t think about consequences. (I3)
It is like a brain, a digital one, inside a computer. It can think by itself. (I9)
The evaluation reports did not provide much information about the students’ perceptions of what AI is. Instead, the students mostly wrote things like I have learnt a lot about AI. Still, a few quotes will be presented as examples:
I have learnt that AI is much more than a robot. (E15)
I have learnt about programming and ChatGPT. (E20)
Summarising the students’ cognitive perceptions of AI, they described it both as a machine (such as robots, computers, phones and self-driving cars) and in terms of its functionalities. Additionally, there were notions of AI having or lacking human attributes. Some viewed AI as humanlike, suggesting it can think on its own or portraying it as a brain. On the other hand, there were comments about AI differing from humans, particularly in the perception that AI lacks any feelings.

Primary school students’ affective perceptions of AI

In the pre-test, students were prompted to articulate both positive and negative perspectives on AI. They were also required to express whether they predominantly considered themselves positive or negative towards AI. Out of the 60 students, approximately 16% indicated uncertainty regarding what they found positive about it. The remaining students provided positive comments, highlighting AI’s capacity to assist in text writing, its accessibility, utility in perilous situations, and its potential for facilitating learning.
In the focus group interviews, the students talked about positive aspects of AI based on how it can be used. The things mentioned were support in their studies and AI robots helping with practical things such as shopping. Two examples of comments:
If you are ill, and cannot go and buy food, an AI robot could do it for you. That’s good. (I6)
It is positive that it can help us in our studies, to practice before exams. (I7)
A comment from the evaluation reports was that:
It has been fun! I find it interesting and cool with AI. (E25)
All 60 students responded regarding negative aspects of AI in the pre-test. None of them wrote that they did not know, or that they did not think of any negative effects. The risks mentioned were, for instance, possibilities for bad use, that people could become lazy, and that AI could make mistakes.
During the interviews, students discussed the risks more than the opportunities, and they mentioned similar negative aspects as in the pre-test. However, they also added that they were afraid and found it a little bit scary that development is going too fast, and that AI could spy on people and even kill. Furthermore, they mentioned the risk of people losing their jobs and that AI robots could develop and become mean. The last comment related to something they had seen in movies. Some examples of comments:
It feels like it is going a little bit too fast and there is a risk that we will become lazy. (I2)
It is bad if it (AI) starts to spy on people. If it knows what you are thinking. Like for instance, maybe you plan for a birthday gift, you want to keep that as a secret. You want to keep your privacy. (I7)
First, you think that it could be good, like if an AI robot take care of your old grandma. I don’t want that, it’s not personal, I want to visit and call my grandma myself. Otherwise, I would get a bad conscience. (I5)
I have seen movies when AI, or robots take over the world. What if that really happens? If you ask an AI to save the environment and take away the cause of the problems, then it would probably kill us all. (I4)
The evaluation reports mainly contained comments from the students that they found the development of AI scary, and similar comments as during the interviews were found. One example:
It has been interesting to learn more about AI. Interesting, but also scary. It seems as it is going very fast and people can lose their jobs and we don’t really know what is going to happen, and what if it becomes smarter than people? (E13)
Still, even though it seems as if the students were mainly negative about AI, when explicitly asked whether they were more in favour of or more against AI, 75% of the students reported in the pre-test that they were positive. Out of the 60 students, 13% did not know, and 12% wrote that they were negative. During the interviews, the students were asked the same question, and in one of the groups, despite the risks they had talked about, they were positive. In the other group, they reported that they felt more scared and were therefore negative, mostly about the speed of development.
Summarising the students’ affective perceptions of AI, they expressed a mix of positive and negative sentiments. They discussed positive aspects such as AI’s support in studies and practical tasks, but also expressed concerns about AI’s negative impacts, including fears of rapid development, job loss, privacy invasion, and potential harm.

Primary school students’ use of AI and their recommendations for the future

In the pre-test, 68% of the students reported having used some kind of AI, 2% did not know, and the remaining 30% responded negatively to this question. Throughout the project, students utilised ChatGPT in lessons about AI at school and when crafting stories at the university. In the story creation session, they also employed the AI tool Dall-E to generate pictures. As previously mentioned, many students found it interesting and enjoyable to learn about AI and experiment with various tools. One student raised concerns about others’ behaviour, specifically regarding the risk of using ChatGPT for cheating in school. However, students also expressed apprehensions about the future and called for regulations, which was highlighted in interviews and some evaluation reports. Additionally, students argued that they have limited influence on decisions. Two comments exemplify this sentiment:
Since students can cheat, they have started to forbid the use of ChatGPT, at least at the school where my brother is studying. (I9)
There is not much we can do. There was some researcher who did not want to work with this anymore. He thought that things were going too fast. They should move slower. They should think about consequences before they become exalted over what they can do. They should take it easy and calculate on risks. But, what can we do? Nothing. (I4)
Summarising aspects related to students’ use of AI, they were initially exploring various AI tools. Notably, students discussed future behaviour and the necessity for regulations, primarily to slow down and contemplate consequences.

Discussion

In this study, primary school students’ perceptions of AI have been presented, both cognitive and affective, as well as how they have started to use some tools, primarily ChatGPT, hence behavioural aspects. The theoretical framework, based on Mitcham’s philosophical framework of technology (1994) with the modification made in Ankiewicz’s expanded model (2019), set the foundation for analysing students’ perceptions. In addition, the AI literacy components presented by Ng et al. (2021) have been included as part of the analyses, which will be elaborated on further in this discussion section.
In terms of cognition, primary school students displayed an awareness of AI, perceiving it both as a machine (including robots, computers, phones, and self-driving cars) and as a tool applicable in various situations. This aligns not only with the technological aspects outlined in Mitcham’s framework (1994) but also corresponds to the adjusted model proposed by Ankiewicz (2019). When viewed through the lens of AI literacy (Ng et al., 2021), the cognitive aspect is deemed synonymous with comprehending how AI functions. While students in this study indicated learning about AI, it cannot be definitively asserted that they have grasped its operational mechanisms, as such insights are not explicitly evident in the collected data. Still, it might be argued that steps have been taken for students to start learning about AI, which previous studies have suggested is an important aspect of education (Holmes et al., 2019; Yang, 2022).
In terms of the affective perceptions, a nuanced perspective was revealed, with students expressing both positive and negative sentiments towards AI. Positive aspects included AI’s support in studies and practical tasks, aligning with public perceptions reported by the World Economic Forum (2022). The students’ negative perceptions encompassed concerns about rapid development, job loss, privacy invasion and potential harm. These perceptions were also in line with findings reported in previous studies (Hick & Ziefle, 2022; Smith & Anderson, 2014). For instance, some students referred to science fiction movies and wondered whether this was something that could really happen in the future, with AI taking over. Overall, the results showed the emotional complexity and the multifaceted nature of students’ affective perceptions of AI.
When analysing the affective perceptions through the lens of AI literacy (Ng et al., 2021), I make connections with the ethical perspective proposed by Ng and colleagues. However, the ethical perspective is also connected to the cognitive domain, just as the affective domain is affected by the cognitive; the dimensions are intertwined. Lack of knowledge may also cause worries, and this can influence decisions on ethical aspects.
Therefore, as argued by, for instance, Brauner et al. (2023), education about AI is necessary to enable people to evaluate the benefits and barriers of AI. To be able to evaluate and create AI, even more education is needed, and this was not part of the lessons the primary school students in this project received.
However, the students were able to use some AI tools, which I interpreted both as part of the behavioural component in Ankiewicz’s model (2019) and as being able to use and apply AI, as suggested in the basic foundations of AI literacy (Ng et al., 2021). In addition to their actual use, the students in this study also made suggestions for policymakers and AI developers, hence suggestions for other people’s behaviour, as the students called for regulations, emphasising the importance of responsible AI use. In this respect, steps have started to be taken, for instance by the European Union (2023). In the future, policymakers and AI developers should perhaps also listen to the voices of children and take into account the Convention on the Rights of the Child, recommendations from previous research (Shier, 2001), and UNICEF (2023). AI will most certainly impact children in many ways.

Limitations and conclusions

This study took place in Sweden, with only a small number of primary school students. It could be argued that the data collection is missing a post-test to elaborate on what the students learned during the project. However, as stated earlier, practical issues made this impossible to conduct. Still, the idea was not to evaluate what kind of knowledge the students developed during the project, but to use all collected data to create an overall picture of the students’ perceptions of AI. As a researcher, I did not participate during the lessons, except for the ones taking place at the university. This is a limitation, since I cannot say whether, or how, the students were influenced by their teachers. In a future study, this limitation should be considered, and researchers should also observe the whole process to be able to identify factors that may influence the students. There are other factors to consider as well: Are the children talking about AI at home? What are their parents saying? Still, the study contributes valuable insights into primary school students’ perceptions and use of AI, providing a basis for further exploration of AI literacy in educational contexts.

Acknowledgements

Thank you to the primary school students who volunteered to be part of this study. Acknowledgements also to the teachers who collaborated and worked with lessons about AI as well as with AI during the project.

Declarations

Conflict of interest

The author declares that there is no conflict of interest.

Ethical approval and informed consent

All procedures performed with human subjects were in accordance with the ethical standards of the Swedish Research Council (SRC). Informed consent was obtained from all participants in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices

Appendix I

Questions for the focus group interviews

1. I know that you have worked with a project about AI, can you please tell me about it? What have you been doing?
2. What have you learnt about AI? What is it and how does it work?
3. How did you feel about working with the project?
4. How did you feel about AI?
5. Can you tell me more about things you find positive with AI?
6. Can you tell me more about things you find negative with AI?
7. What do you think about AI and the future? How should we use it? Or, should we avoid using it?
8. Do you use any kind of AI yourself, if so, what do you use and for what purpose?
9. Anything else you would like to tell me about the project or AI?
References
Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence we can trust. Pantheon Books.
Mitcham, C. (1994). Thinking through Technology. The University of Chicago Press.
Ng, D. T. K., Leung, J. K. L., Chu, K. W. S., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. https://doi.org/10.1002/pra2.487
Potts, C., Ennis, E., Bond, R., Mulvenna, M., McTear, M., Boyd, K., Broderick, T., Malcolm, M., Kuosmanen, L., Nieminen, H., Vartiainen, A.-K., Kostenius, C., Cahill, B., Vakaloudis, A., McConvey, G., & O’Neill, S. (2021). Chatbots to support mental wellbeing of people living in rural areas: Can user groups contribute to co-design? Journal of Technology in Behavioral Science. https://doi.org/10.1007/s41347-021-00222-6
Russell, S., & Norvig, P. (2009). Artificial Intelligence: A modern approach (3rd ed.). Prentice Hall.
Tlili, A., Shehata, B., Agyemang Adarkwah, M., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(15), 1–24. https://doi.org/10.1186/s40561-023-00237-x
Metadata
Title: Primary school students’ perceptions of artificial intelligence – for good or bad
Author: Susanne Walan
Publication date: 03.05.2024
Publisher: Springer Netherlands
Published in: International Journal of Technology and Design Education
Print ISSN: 0957-7572
Electronic ISSN: 1573-1804
DOI: https://doi.org/10.1007/s10798-024-09898-2
