This paper reports the descriptive results of an experiment comparing automatically detected reflective and non-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts for reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved that had been annotated as either reflective or non-reflective. An online survey was then used to gather human judgements for these texts. These two data sets were used to compare the quality of the reflection detection algorithm with the human judgements. The analysis indicates the expected difference between reflective and non-reflective texts.
|Number of pages
|CEUR Workshop Proceedings
|Published - 2012
|2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning, ARTEL 2012 - In Conjunction with the 7th European Conference on Technology Enhanced Learning, EC-TEL 2012 - Saarbrücken, Germany
Duration: 18 Sep. 2012 → 18 Sep. 2012
- Thinking skills analytics