Automated scoring of course forum posts in online learning

Project: Grants (e.g. Tri-Agency) Full Application

Project Details

Description

The purpose of this research is to create a system that automatically scores students' course discussion posts, assessing their grasp of the course material. Manually evaluating large volumes of forum posts is time-consuming and can yield inconsistent results; automatic scoring could significantly reduce both the inconsistency of post scores and the workload for instructors. While tasks such as automated essay scoring and short-answer grading focus on writing quality or correctness, our proposed automated post scoring system aims to evaluate online course discussion posts along three dimensions: writing quality, relevance to the discussion topics, and the student's level of cognitive engagement with the material. Our primary research question is: how effective are measures of post relevance, writing quality, and cognitive engagement in automating the scoring of course discussion posts? To assess relevance, we will analyze the semantic consistency between posts and their topics. To detect a student's level of cognitive engagement in forum posts, we will use the Interactive, Constructive, Active, and Passive (ICAP) framework. Writing quality will be evaluated using a hierarchical text model. Our experiments will use Stanford's MOOC dataset. This 16-week research project aims to design, implement, and deploy a system for automatic scoring of course forum posts.
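To illustrate the relevance measure described above, the following is a minimal sketch of scoring a post's semantic consistency with its topic. It uses a simple bag-of-words cosine similarity as a stand-in; the project's actual measure is not specified here, and the function name and example texts are illustrative assumptions.

```python
import math
import re
from collections import Counter


def _tokens(text: str) -> Counter:
    # Deliberately simple tokenizer for illustration: lowercase word tokens.
    return Counter(re.findall(r"[a-z]+", text.lower()))


def relevance_score(post: str, topic: str) -> float:
    """Cosine similarity between bag-of-words vectors of a post and its topic.

    A stand-in for a richer semantic-consistency measure; returns a value
    in [0, 1], where higher means more overlap with the topic prompt.
    """
    a, b = _tokens(post), _tokens(topic)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

In practice a system like the one proposed would likely replace the bag-of-words vectors with sentence embeddings, but the ranking idea is the same: an on-topic post scores higher against the topic prompt than an off-topic one.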
Status: Finished
Effective start/end date: 1/09/24 – 31/12/24
