ChatGPT Fails Accounting, Tax and Financial Exams
Students beat ChatGPT in an accounting exam
Researchers found that students outperformed ChatGPT, the chatbot developed by OpenAI. Even so, they praised its performance and emphasised its potential as a ground-breaking technology that could revolutionise how people teach and learn.
A study led by Brigham Young University, with contributors from 186 other universities, set out to assess how well OpenAI's technology performed on accounting exams. The findings, recently published in Issues in Accounting Education, showed that students earned an average score of 76.7 percent, while ChatGPT scored 46.4 percent.
ChatGPT outperformed the student average on 11.3 percent of the questions, the report says, notably in auditing and accounting information systems (AIS). The bot struggled with tax, financial, and managerial assessments, however, which the researchers speculate may be because it had difficulty with the required mathematical operations.
The bot, which uses machine learning to generate natural-language text, scored better on true/false and multiple-choice questions (69.5 percent correct) but less well on short-answer questions (ranging from 28.7 to 39.1 percent).
Researchers also observed that ChatGPT struggled with higher-order questions, occasionally supplied inaccurate answers backed by plausible-sounding written justifications, and sometimes gave different answers to the same question.
David Wood, a BYU accounting professor, recruited a number of professors to pit ChatGPT against college accounting students. With the help of 327 co-authors from 186 universities in 14 countries, they submitted more than 25,000 classroom accounting exam questions.
The study also drew on 2,268 textbook test-bank questions given to BYU undergraduate students, spanning a range of accounting topics and question formats.
Source: Economic Times