*Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia*
27 June 2025
https://openletter.earth/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e/
Dear Universities of The Netherlands, Dutch Universities of Applied
Sciences, and Respective Executive Boards,
With this letter we take a principled stand against the proliferation of
so-called 'AI' technologies in universities. As an educational
institution, we cannot condone the uncritical use of AI by students,
faculty, or leadership. We also call for reconsidering any direct
financial relationships between Dutch universities and AI companies. The
unfettered introduction of AI technology contravenes the spirit of the
EU AI Act. It undermines our basic pedagogical values and
the principles of scientific integrity. It prevents us from maintaining
our standards of independence and transparency. And most concerning, AI
use has been shown to hinder learning and deskill critical thought.
As academics, and especially as university-level educators, we have the
responsibility to educate our students, not to rubber stamp degrees
without any relationship to university-level skills. Our duty as
educators is the cultivation of critical thinking and intellectual
honesty; our role is neither to police nor to promote cheating, nor to
normalise our students' and mentees' avoidance of deep thought.
Universities are about engaging deeply with the subject matter. The goal
of academic training is not to solve problems as efficiently and quickly
as possible, but to develop skills for identifying and dealing with
novel problems, which have never been solved before. We expect students
to be given space and time to form their own deeply considered opinions
informed by our expertise and nurtured by our educational spaces. Such
spaces must be protected from industry advertising, and our funding must
not be misspent on profit-making companies, which offer little in return
and actively deskill our students. Even the term 'Artificial
Intelligence' itself (which scientifically refers to a field of academic
study) is widely misused, with its conceptual ambiguity co-opted to advance
industry agendas and undermine scholarly discussions. It is our task to
demystify and to challenge 'AI' in our teaching, research and in our
engagement with society.
We must protect and cultivate the ecosystem of human knowledge. AI
models can mimic the appearance of scholarly work, but they are (by
construction) unconcerned with truth—the result is a torrential
outpouring of unchecked but convincing-sounding "information". At best,
such output is accidentally true, but generally citationless, divorced
from human reasoning and the web of scholarship that it steals from. At
worst, it is confidently wrong. Both outcomes are dangerous to the
ecosystem.
Overhyped 'AI' technologies, such as chatbots, large language models,
and related products, are just that: products that the technology
industry, just like the tobacco and petroleum industries, pumps out for
profit and in contradiction to the values of ecological sustainability,
human dignity, pedagogical safeguarding, data privacy, scientific
integrity, and democracy. These 'AI' products are materially and
psychologically detrimental to our students' ability to write and think
for themselves, existing instead for the benefit of investors and
multinational companies. As a marketing strategy to introduce such tools
in the classroom, companies falsely claim that students are lazy or lack
writing skills. We condemn those claims and reassert students’ agency
vis-à-vis corporate control.
We have been here before with tobacco, petroleum, and many other harmful
industries that do not have our interests at heart and that are
indifferent to the academic progress of our students and to the
integrity of our scholarly processes.
We call upon you to:
• *Resist the introduction of AI in our own software systems*, from
Microsoft to OpenAI to Apple. It is not in our interests to let our
processes be corrupted and give away our data to be used to train models
that are not only useless to us, but also harmful.
• *Ban AI use in the classroom* for student assignments, in the same way
we ban essay mills and other forms of plagiarism. Students must be
protected from de-skilling and allowed space and time to perform their
assignments themselves.
• *Cease normalising the AI hype* and the lies which are prevalent in
the technology industry's framing of these technologies. The
technologies do not have the advertised capacities and their adoption
puts students and academics at risk of violating ethical, legal,
scholarly, and scientific standards of reliability, sustainability, and
safety.
• *Fortify our academic freedom* as university staff to enforce these
principles and standards in our classrooms and our research as well as
on the computer systems we are obliged to use as part of our work. We as
academics have the right to our own spaces.
• *Sustain critical thinking on AI* and promote critical engagement with
technology on a firm academic footing. Scholarly discussion must be free
from the conflicts of interest caused by industry funding, and reasoned
resistance must always be an option.
Yours sincerely,
*Olivia Guest, Assistant Professor of Computational Cognitive Science*,
Cognitive Science & Artificial Intelligence Department and Donders
Centre for Cognition, Radboud University Nijmegen
*Iris van Rooij, Professor of Computational Cognitive Science*,
Cognitive Science & Artificial Intelligence Department and Donders
Centre for Cognition, Radboud University Nijmegen
*Marcela Suarez Estrada, Lecturer in Critical Intersectional
Perspectives on Artificial Intelligence*, School of Artificial
Intelligence, Radboud University Nijmegen
*Lucy Avraamidou, Professor of Science Education*, Faculty of Science
and Engineering, University of Groningen
*Barbara Müller, Associate Professor of Human-Machine Interaction*,
Faculty of Social Sciences, Radboud University Nijmegen
*Marjan Smeulders, Researcher in Microbiology and Teacher Ambassador for
the Teaching and Learning Centre*, Faculty of Science, Radboud University
Nijmegen
*Arnoud Oude Groote Beverborg, Lecturer of Pedagogy*, Faculty of Social
Sciences, Radboud University Nijmegen
*Ronald de Haan, Assistant Professor in Theoretical Computer Science*,
Faculty of Science, University of Amsterdam
*Mirko Tobias Schäfer, Associate Professor of AI, Data & Society*,
Faculty of Science, Utrecht University
*Mark Dingemanse, Associate Professor & Section leader AI, Language and
Communication Technology*, Faculty of Arts, Radboud University Nijmegen
*Frans-Willem Korsten, Professor in Literature, Culture, and Law*,
Leiden University for the Arts in Society
*Mark Blokpoel, Assistant Professor of Computational Cognitive Science*,
Cognitive Science & Artificial Intelligence Department and Donders
Centre for Cognition, Radboud University Nijmegen
*Juliette Alenda-Demoutiez, Assistant Professor, Economic Theory and
Policy*, Faculty of Management, Radboud University Nijmegen
*Federica Russo, Professor of Philosophy and Ethics of Techno-Science* &
Westerdijk Chair, Freudenthal Institute, Utrecht University
*Felienne Hermans, Professor in Computer Science Education*, Vrije
Universiteit Amsterdam
*Francien Dechesne, Associate Professor of Ethics and Digital
Technologies*, eLaw Center for Law and Digital Technologies, Leiden
University
*Jaap-Henk Hoepman, Professor in Computer Science*, Radboud University /
Karlstad University
*Jelle van Dijk, Associate Professor Embodied Interaction Design*,
Faculty of Engineering Technology, University of Twente
*Andrea Reyes Elizondo, Researcher & PhD Candidate*, Faculties of Social
Sciences & Humanities, Leiden University
*Djoerd Hiemstra, Professor in Computer Science*, Radboud University
*Liesbet van Zoonen, Professor of Cultural Sociology*, Erasmus
University Rotterdam
*Emily Sandford, Postdoctoral Researcher*, Leiden Observatory, Leiden
University
*M. Birna van Riemsdijk, Associate Professor Intimate Computing*,
Human-Media Interaction, University of Twente
*Maaike Harbers, Professor of Artificial Intelligence & Society*,
Rotterdam University of Applied Sciences
*Marieke Peeters, Senior Researcher Responsible Applied Artificial
Intelligence and Human-AI Interaction*, Research Group on Artificial
Intelligence, HU University of Applied Sciences Utrecht
*Marieke Woensdregt, Assistant Professor of Computational Cognitive
Science*, Cognitive Science & Artificial Intelligence Department and
Donders Centre for Cognition, Radboud University Nijmegen
*Edwin van Meerkerk, Professor of Cultural Education*, Radboud Institute
for Culture and Heritage, Faculty of Arts, Radboud University Nijmegen
*Sietske Tacoma, Senior Researcher Responsible Applied Artificial
Intelligence*, Research Group on Artificial Intelligence, HU University
of Applied Sciences Utrecht
*Nolen Gertz, Associate Professor of Applied Philosophy*, Chair of
Interdisciplinary Sciences Examination Board, University of Twente
*Ileana Camerino, Lecturer of Academic Skills*, School of Artificial
Intelligence, Radboud University Nijmegen
*Annelies Kleinherenbrink, Assistant Professor for Gender and Diversity
in AI*, Cognitive Science & Artificial Intelligence Department and
Gender & Diversity, Radboud University Nijmegen
*Resources*
Avraamidou, L. (2024). Can we disrupt the momentum of the AI
colonization of science education? Journal of Research in Science
Teaching, 61(10), 2570–2574. https://doi.org/10.1002/tea.21961
Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6),
775–779. https://doi.org/10.1016/0005-1098(83)90046-8
Bender, E. M., & Hanna, A. (2025). The AI Con: How to Fight Big Tech’s
Hype and Create the Future We Want. Harper. https://thecon.ai/
Bender, E. M. (2024). Resisting dehumanization in the age of “AI”.
Current Directions in Psychological Science, 33(2), 114–120.
https://doi.org/10.1177/09637214231217286
Bender, E. M., & Shah, C. (2022, December 13). All-knowing machines are
a fantasy. IAI News.
https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334
Birhane, A. (2020). Algorithmic Colonization of Africa. SCRIPT-Ed,
17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
Birhane, A. & Guest, O. (2021). Towards Decolonising Computational
Sciences. Kvinder, Køn & Forskning, 2, 60–73.
https://doi.org/10.7146/kkf.v29i2.124899
Broussard, M. (2018). Artificial unintelligence: How computers
misunderstand the world. MIT Press.
https://mitpress.mit.edu/9780262537018/artificial-unintelligence/
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary
costs of artificial intelligence. Yale University Press.
https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/
Dingemanse, M. (2024). Generative AI and Research Integrity. Guidance
commissioned and adopted by Radboud University Faculty of Arts.
https://doi.org/10.31219/osf.io/2c48n
Erscoi, L., Kleinherenbrink, A. V., & Guest, O. (2023, February 11).
Pygmalion Displacement: When Humanising AI Dehumanises Women.
https://doi.org/10.31235/osf.io/jqxb6
Fergusson, G., Schroeder, C., Winters, B., & Zhou, E. (2023). Generating
Harm. Generative AI’s Impact & Paths Forward. EPIC (Electronic Privacy
Information Center).
https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/
Fernandez, A. L. (2025). Resisting AI Mania in Schools - Part I. Nobody
Wants This (Substack blog).
https://annelutzfernandez.substack.com/p/resisting-ai-mania-in-schools-part
Forbes, S. H., & Guest, O. (2025). To Improve Literacy, Improve Equality
in Education, Not Large Language Models. Cognitive Science, 49(4),
e70058. https://doi.org/10.1111/cogs.70058
Gebru, T., & Torres, É. P. (2024). The TESCREAL bundle: Eugenics and
the promise of utopia through artificial general intelligence. First
Monday, 29(4). https://doi.org/10.5210/fm.v29i4.13636
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading
and the future of critical thinking. Societies, 15(1), 6.
https://doi.org/10.3390/soc15010006
Gertz, N. (2023). More than Machines? Jacques Ellul on AI's Real Threat
to Humanity. Commonweal Magazine.
https://www.commonwealmagazine.org/jacques-ellul-gertz-artificial-intelligence-AI-WGA-technology
Gertz, N. (2024). Nihilism and Technology: Updated Edition. Rowman &
Littlefield.
https://rowman.com/ISBN/9781538193266/Nihilism-and-Technology-Updated-Edition
Gibney, E. (2024). Not all ‘open source’ AI models are actually open:
here's a ranking. Nature. https://www.nature.com/articles/d41586-024-02012-5
Jackson, L., & Williams, R. (2024). How disabled people get exploited to
build the technology of war. The New Republic.
https://newrepublic.com/article/179391/wheelchair-warfare-pipeline-disability-technology
Kalluri, P. (2020). Don’t ask if artificial intelligence is good or
fair, ask how it shifts power. Nature, 583(7815), 169.
https://www.nature.com/articles/d41586-020-02003-2
Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source
generative AI: open-washing and the EU AI Act. In The 2024 ACM
Conference on Fairness, Accountability, and Transparency (FAccT ’24).
Rio de Janeiro, Brazil: ACM. https://dl.acm.org/doi/10.1145/3630106.3659005
Lutz Fernandez, A. (2025). Help Sheet: Resisting AI Mania in Schools.
https://drive.google.com/file/d/1e_kgaBs8yZL2pa_RVAemmfhow0x5dstk/view
McQuillan, D. (2022). Resisting AI: An anti-fascist approach to
artificial intelligence. Policy Press.
https://bristoluniversitypress.co.uk/resisting-ai
McQuillan, D. (2025). The role of the university is to resist AI.
Seminar, June 11, Goldsmiths Centre for Philosophy and Critical Thought.
https://danmcquillan.org/cpct_seminar.html
Mejías, U. A., & Couldry, N. (2024). Data grab: The new colonialism of
big tech and how to fight back. The University of Chicago Press.
https://press.uchicago.edu/ucp/books/book/chicago/D/bo216184200.html
Monett, D., & Paquet, G. (2025). Against the Commodification of
Education—If harms then not AI. Journal of Open, Distance, and Digital
Education, 2(1). https://doi.org/10.25619/WAZGW457
Monett, D., & Grigorescu, B. (2024). Deconstructing the AI Myth:
Fallacies and Harms of Algorithmification. Proceedings of the 23rd
European Conference on e-Learning, ECEL 2024, 23(1), 242-248.
https://doi.org/10.34190/ecel.23.1.2759
Muldoon, J., Graham, M., & Cant, C. (2024). Feeding the machine: The
hidden human labor powering A.I. Bloomsbury Publishing.
https://www.bloomsbury.com/us/feeding-the-machine-9781639734979/
Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What Artificial
Intelligence Can Do, What It Can't, and How to Tell the Difference.
Princeton University Press.
O’Neil, C. (2017). Weapons of Math Destruction: How big data increases
inequality and threatens democracy. Crown.
https://dl.acm.org/doi/10.5555/3002861
Oakley, B., Johnston, M., Chen, K. Z., Jung, E., & Sejnowski, T. J.
(2025). The Memory Paradox: Why Our Brains Need Knowledge in an Age of
AI. https://doi.org/10.2139/ssrn.5250447
Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on
less than $2 per hour to make ChatGPT less toxic. Time.
https://time.com/6247678/openai-chatgpt-kenya-workers/
RCSC author collective. (2023). A Sustainability Manifesto for Higher
Education. https://www.earthsystemgovernance.org/2023radboud/
https://repository.ubn.ru.nl/handle/2066/301240
Ricaurte, P. (2022). Ethics for the majority world: AI and the question
of violence at scale. Media, Culture & Society, 44(4), 726–745.
https://doi.org/10.1177/01634437221099612
Sano-Franchini, J., McIntyre, M., & Fernandes, M. (2023). Refusing GenAI
in Writing Studies: A Quickstart Guide. https://refusinggenai.wordpress.com/
Shah, C., & Bender, E. M. (2022). Situating search. In Proceedings of
the 2022 Conference on Human Information Interaction and Retrieval
(CHIIR ’22) (pp. 221–232). Association for Computing Machinery.
https://doi.org/10.1145/3498366.3505816
Suarez, M., Müller, B. C. N., Guest, O., & van Rooij, I. (2025).
Critical AI Literacy: Beyond hegemonic perspectives on sustainability.
Sustainability Dispatch, Radboud Centre for Sustainability Challenges.
https://doi.org/10.5281/zenodo.15677840
https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic
van Dijk, J. (2020). Post-human Interaction Design, Yes, but Cautiously.
In: ACM Designing Interactive Systems Conference (DIS' 20). Association
for Computing Machinery, New York, NY, USA, 257–261.
https://doi.org/10.1145/3393914.3395886
van der Gun, L., & Guest, O. (2024). Artificial Intelligence: Panacea or
Non-Intentional Dehumanisation? Journal of Human-Technology Relations,
2. https://doi.org/10.59490/jhtr.2024.2.7272
van Rooij, I., & Guest, O. (2024, September). Don’t believe the hype:
AGI is far from inevitable. Press release, Radboud University.
https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable
van Rooij, I. (2022, December). Against automated plagiarism [Blog].
Iris van Rooij.
https://irisvanrooijcogsci.com/2022/12/29/against-automated-plagiarism/
van Rooij, I., & Guest, O. (2025). Combining Psychology with Artificial
Intelligence: What could possibly go wrong? PsyArXiv.
https://doi.org/10.31234/osf.io/aue4m_v1
van Rooij, I., Guest, O., Adolfi, F., De Haan, R., Kolokolova, A., &
Rich, P. (2024). Reclaiming AI as a Theoretical Tool for Cognitive
Science. Computational Brain & Behavior, 7(4), 616–636.
https://doi.org/10.1007/s42113-024-00217-5
Williams, D. P. (2024). Scholars are Failing the GPT Review Process.
Historical Studies in the Natural Sciences, 54(5), 625-629.
https://doi.org/10.1525/hsns.2024.54.5.625