EXPLAINABLE AI AND AWE: Balancing Tensions between Transparency and Predictive Accuracy

David Boulanger, Vivekanandan Suresh Kumar

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

1 Citation (Scopus)

Abstract

Automated writing evaluation (AWE) is a complex research topic with a long way to go before it becomes a mature and trustworthy technology. Research tends to report the performance of AWE systems by measuring their predictive accuracy without analyzing what the algorithm actually learns. This lack of transparency is partly due to the high complexity and dimensionality of AWE, and the use of deep learning to cope with them has only exacerbated the problem. Interestingly, the rise of explainable artificial intelligence (XAI) makes it possible to look retrospectively at the way AWE systems are developed. However, XAI is still in its infancy and faces many limitations. Consequently, it is crucial to understand these limitations and how to correctly interpret black-box scoring models. This chapter investigates the extent to which XAI can help determine the most generalizable model and the extent to which its explanations help fix learned fallacies. The chapter presents several applications of, and cautions about, XAI in the AWE field, drawing the conclusions that (1) the explanation models of the random forests had much higher descriptive accuracy than those of the gradient-boosted trees, two inherently opaque architectures, and (2) the simplest AWE models produced the most stable, consistent, and interpretable explanations.
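The abstract contrasts explanation models derived from random forests and gradient-boosted trees. The sketch below is an illustration only, not the chapter's pipeline or data: it applies SHAP's TreeExplainer to both architectures on synthetic features standing in for writing indices, and summarizes per-feature contributions as a crude global explanation. All feature and score definitions here are hypothetical assumptions.

# Minimal sketch (assumed setup, not the chapter's method): explaining two
# tree-based scoring models with SHAP on synthetic "writing feature" data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical writing features (e.g., length, lexical diversity, error rate).
X = rng.normal(size=(500, 3))
# Hypothetical holistic scores driven mostly by the first two features.
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)

for model in (RandomForestRegressor(n_estimators=200, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X, y)
    explainer = shap.TreeExplainer(model)          # tree-specific explainer
    shap_values = explainer.shap_values(X[:100])   # per-essay feature contributions
    # Mean absolute SHAP value per feature as a simple global importance summary.
    print(type(model).__name__, np.abs(shap_values).mean(axis=0))

In practice, the descriptive accuracy of such explanations (how faithfully they reproduce the underlying model's behavior) would be assessed separately for each architecture, which is the kind of comparison the chapter reports for random forests versus gradient-boosted trees.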

Original language: English
Title of host publication: The Routledge International Handbook of Automated Essay Evaluation
Pages: 445-468
Number of pages: 24
ISBN (Electronic): 9781040033241
DOIs
Publication status: Published - 1 Jan. 2024
