Comparing automatically detected reflective texts with human judgements

Thomas Daniel Ullmann, Fridolin Wild, Peter Scott

    Research output: Contribution to journal › Conference article › peer-review

    11 Citations (Scopus)

    Abstract

    This paper reports the descriptive results of an experiment comparing automatically detected reflective and not-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts for reflection, parameterised with authoritative texts. From a large blog corpus, 149 texts were retrieved, each annotated as either reflective or not-reflective. An online survey was then used to gather human judgements for these texts. These two data sets were used to compare the output of the reflection detection algorithm with the human judgements. The analysis indicates the expected difference between reflective and not-reflective texts.

    Original language: English
    Pages (from-to): 101-116
    Number of pages: 16
    Journal: CEUR Workshop Proceedings
    Volume: 931
    Publication status: Published - 2012
    Event: 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning, ARTEL 2012 - In Conjunction with the 7th European Conference on Technology Enhanced Learning, EC-TEL 2012 - Saarbrücken, Germany
    Duration: 18 Sep. 2012 - 18 Sep. 2012

    Keywords

    • Detection
    • Reflection
    • Thinking skills analytics

