Abstract
Artificial Intelligence (AI)-driven language models (chatbots) are progressively accelerating the collection and translation of environmental evidence that could be used to inform planetary conservation plans and strategies. Yet, the consequences of chatbot-generated conservation content have never been globally assessed. Drawing on distributive, recognition, procedural, and epistemic dimensions of environmental justice, we queried ChatGPT and analysed 30,000 of its responses on ecological restoration expertise, stakeholder engagement, and techniques. Our results show that more than two-thirds of the chatbot’s answers rely on the expertise of male academics working at universities in the United States, while largely ignoring evidence from low- and lower-middle-income countries (7%) and Indigenous and community restoration experiences (2%). A focus on planting and reforestation techniques (69%) underpins optimistic environmental outcomes (60%), neglecting holistic technical approaches that consider non-forest ecosystems (25%) and non-tree species (8%). This analysis highlights how biases in AI-driven knowledge production can reinforce Western science, overlooking diverse sources of expertise and perspectives on conservation research and practice. In the fast-paced domain of generative AI, safeguard mechanisms are needed to ensure that these expanding chatbot developments can incorporate just principles in addressing the pace and scale of the worldwide environmental crisis.
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-8 |
| Number of pages | 8 |
| Journal | Humanities and Social Sciences Communications |
| Volume | 11 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Dec 2024 |
Bibliographical note
Funding Information: This study was funded by CSIRO’s Valuing Sustainability Future Science Platform (VS FSP).
Publisher Copyright:
© The Author(s) 2024.