Abstract
ChatGPT is regarded as both a risk and an opportunity for academia. A contemporary concern is whether it can act as an agent for students completing academic assessments. This study examines the extent to which ChatGPT can act as a human agent for students in the multiple-choice question assessments of two financial accounting course units. Each assessment comprised five numerical and five narrative multiple-choice questions, giving ten questions for the Introductory Financial Accounting course unit and ten for the Advanced Financial Accounting course unit. ChatGPT was presented with one question at a time and asked to provide a solution. In Introductory Financial Accounting, ChatGPT produced incorrect answers when it misread the underlying assumptions embedded in the questions. In Advanced Financial Accounting, its incorrect answers arose from the complexity of the tasks the questions contained. ChatGPT demonstrated similar competence in solving numerical and narrative questions. Its correct answers placed it in the 80th percentile on the Introductory Financial Accounting assessment and the 50th percentile on the Advanced Financial Accounting assessment. ChatGPT-4 showed improved performance, reaching the 90th percentile for Introductory Financial Accounting and the 70th percentile for Advanced Financial Accounting. The findings indicate that, with ChatGPT in the educational ecosystem, the knowledge construct requires reflective thinking, and what is treated as assumed and assessable knowledge must be revisited.
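The paper does not publish its prompting scripts; the sketch below is only a minimal illustration, assuming the OpenAI Python client and placeholder questions, of how multiple-choice questions might be submitted to ChatGPT one at a time and the responses collected for later marking. Model name, question texts, and the grading step are all assumptions, not the authors' materials.

```python
# Minimal sketch (not the authors' code): submit multiple-choice questions
# to ChatGPT one at a time and collect the responses for manual marking.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder questions; the study used five numerical and five narrative
# MCQs per course unit (ten for each of the two course units).
questions = [
    "Q1 (numerical): A firm purchases inventory for $500 on credit ... Which option is correct? A) ... B) ... C) ... D) ...",
    "Q2 (narrative): Which qualitative characteristic of financial information ...? A) ... B) ... C) ... D) ...",
]

responses = []
for question in questions:
    # One question per request, mirroring the study's one-at-a-time protocol.
    completion = client.chat.completions.create(
        model="gpt-4",  # placeholder; the study compared ChatGPT with ChatGPT-4
        messages=[
            {
                "role": "user",
                "content": f"Solve this multiple-choice question and state the correct option:\n{question}",
            }
        ],
    )
    responses.append(completion.choices[0].message.content)

# The collected responses would then be marked against the answer key
# to compute the scores reported as percentiles in the study.
for question, answer in zip(questions, responses):
    print(question[:40], "->", answer[:80])
```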
| Original language | English |
|---|---|
| Article number | 100213 |
| Pages (from-to) | 1-10 |
| Number of pages | 10 |
| Journal | Journal of Open Innovation: Technology, Market, and Complexity |
| Volume | 10 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Mar 2024 |
Bibliographical note
Publisher Copyright: © 2024 The Authors