
How should education change to address, incorporate, or challenge today’s AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many commentators are unaware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.


Linking the terms “AI” and “education” invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for today’s systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.

Nabeel Gillani, Rebecca Eynon, Catherine Chiabaut, and Kelsey Finkel, “Unpacking the ‘Black Box’ of AI in Education,” Educational Technology & Society 26, no. 1 (2023): 99–111.

Whether we’re aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.

Jürgen Rudolph, Samson Tan, and Shannon Tan, “ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?” The Journal of Applied Learning and Teaching 6, no. 1 (January 24, 2023).

Jürgen Rudolph et al. give a practically oriented overview of ChatGPT’s implications for higher education. They explain the statistical nature of large language models as they recount the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. Using examples and screenshots, they illustrate ways ChatGPT can be used. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 610–623.

Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems’ outputs can be used for malicious purposes even though it doesn’t reflect real understanding.

The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.

Erik Brynjolfsson, “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” Daedalus 151, no. 2 (2022): 272–287.

Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation, which is more likely to eliminate lower-level jobs and widen inequality, and toward augmentation. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for them? And how can we encourage students to use AI to extend their thinking and learning rather than to skip learning?

Kevin Scott, “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale,” Daedalus 151, no. 2 (2022): 75–84.

Brynjolfsson’s focus on AI as “augmentation” converges with Microsoft computer scientist Kevin Scott’s focus on “cognitive assistance.” Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He’s intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI’s GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.

Laura D. Tyson and John Zysman, “Automation, AI & Work,” Daedalus 151, no. 2 (2022): 256–271.

How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of “good” jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for “an inclusive AI era.”

Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer (RAND Corporation, 2022).

Educators’ considerations of academic integrity and AI-generated text can draw on parallel discussions of authenticity and the labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images, as well as generated text, much more difficult to detect as such. Here, Todd Helmus considers the consequences for political systems and individuals as he reviews the ways in which these media can be, and have been, used to promote disinformation. He examines ways to identify deepfakes and to authenticate the provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula that teach students about the risks of deepfakes and unlabeled AI content.

William Hasselberger, “Can Machines Have Common Sense?” The New Atlantis 65 (2021): 94–109.

Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI’s ability to mimic human intelligence, we devalue the human and overlook human capacities that are integral to everyday decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He also offers a historical overview of debates around the limits of artificial intelligence and their implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.

Gwo-Jen Hwang and Nian-Shing Chen, “Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions,” Educational Technology & Society 26, no. 2 (2023).

Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to “teach” ChatGPT about a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don’t touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of generative AI integration on learning outcomes.

Lauren Goodlad and Samuel Baker, “Now the Humanities Can Disrupt ‘AI’,” Public Books (February 20, 2023).

Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to “embrace” AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models in mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing: it helps students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about earlier waves of hype, around MOOCs for example, suggests that educators are unlikely to be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take our place as experts who can help shape the future role of machines in human thought and communication.

Kathryn Conrad, “Sneak Preview: A Blueprint for an AI Bill of Rights for Education,” Critical AI 2.1 (July 17, 2023).

How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad’s manifesto builds on and extends the Biden administration’s Office of Science and Technology Policy 2022 “Blueprint for an AI Bill of Rights.” Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternative human evaluation. Students deserve detailed instructor guidance on policies around AI use, and they should be able to seek clarification without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students’ and educators’ legal rights must be respected in any educational application of automated generative systems.



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

Educational Technology & Society, Vol. 26, No. 1 (January 2023), pp. 99–111
International Forum of Educational Technology & Society, National Taiwan Normal University, Taiwan

Daedalus, Vol. 151, No. 2, AI & Society (Spring 2022), pp. 272–287
The MIT Press on behalf of the American Academy of Arts & Sciences

Daedalus, Vol. 151, No. 2, AI & Society (Spring 2022), pp. 75–84
The MIT Press on behalf of the American Academy of Arts & Sciences

Daedalus, Vol. 151, No. 2, AI & Society (Spring 2022), pp. 256–271
The MIT Press on behalf of the American Academy of Arts & Sciences

Artificial Intelligence, Deepfakes, and Disinformation: A Primer (July 2022)
RAND Corporation

The New Atlantis, No. 65 (Summer 2021), pp. 94–109
Center for the Study of Technology and Society

Educational Technology & Society, Vol. 26, No. 2 (April 2023)
International Forum of Educational Technology & Society, National Taiwan Normal University, Taiwan