Monday, July 6, 2015

Min Yuan's Dissertation Defense, July 29

Time & Location: Wednesday, July 29 at 10:00 am in the ITLS department conference room.

Title: DOES AUDIENCE MATTER? A STUDY OF HOW PEOPLE APPLY AND PERCEIVE QUALITY RUBRICS WHEN EVALUATING OPEN EDUCATIONAL RESOURCES

Abstract
This dissertation study investigated how people applied quality rubrics when evaluating Open Educational Resources (OER), how they perceived the utility of these rubrics, and whether teachers and non-teachers differed in their application and perceptions. Participants were 22 teachers and 22 non-teachers, who evaluated 20 OER using three quality rubrics (comprising a total of 17 quality indicators) and reported their perceptions of the utility of the three rubrics and their indicators. In addition, five teachers and five non-teachers participated in interviews in which their perceptions were explored in more depth.
In terms of rubric application, results showed that different rubrics (and indicators) had different application characteristics: some indicators were more difficult to use, and some resulted in higher ratings of OER. A factor analysis and a hierarchical clustering analysis showed that indicators that were theoretically related tended to cluster together. A reliability analysis using the intra-class correlation coefficient (ICC) showed that, despite differences in participants' agreement on individual indicators, 16 of the 17 indicators yielded good inter-rater reliability. When comparing teachers' and non-teachers' application, teachers' overall ratings of OER were higher than non-teachers', but non-teachers' ratings had higher ICC values than teachers'.
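For readers unfamiliar with the ICC, the sketch below shows one common way inter-rater reliability of this kind is computed. It is not taken from the dissertation: the data, column names (rater, resource, score), and the choice of the pingouin Python library are illustrative assumptions only.

```python
# Minimal illustrative sketch of an inter-rater reliability analysis with the
# intra-class correlation coefficient (ICC). Data and column names are
# hypothetical, not from the dissertation.
import pandas as pd
import pingouin as pg

# Long-format ratings: each rater scores each resource on one indicator
ratings = pd.DataFrame({
    "rater":    ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
    "resource": ["oer1", "oer2", "oer3", "oer4"] * 3,
    "score":    [4, 3, 5, 2,
                 4, 2, 5, 3,
                 3, 3, 4, 2],
})

# pingouin reports several ICC forms (ICC1 through ICC3k), so the model
# appropriate to the design (e.g., two-way random effects) can be chosen
icc = pg.intraclass_corr(data=ratings, targets="resource",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```

In such an analysis, the ICC value for each indicator would be compared against conventional benchmarks to judge whether agreement among raters counts as good reliability.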
In terms of rubric perception, participants gave positive ratings to the utility of all 17 quality indicators, and there was no significant difference between teachers’ and non-teachers’ ratings. However, the interviews revealed that teachers preferred more flexible rubrics, while non-teachers preferred rubrics with detailed guidelines. In addition, teachers emphasized the importance of the “alignment with standards” indicator, while non-teachers emphasized the importance of the “motivation” indicator.
Findings from this study add to the understanding of how people apply quality rubrics and perceive their utility by providing data about the reliability and validity of these rubrics. The findings also suggest that one audience's application and perception of rubrics may not simply generalize to another audience. Furthermore, this study offers suggestions for rubric developers: they should more fully consider users' prior knowledge and provide rubrics tailored to different audiences.