Participatory online learning environments offer tantalizing opportunities for evaluation, but how? We will discuss examples, opportunities, and challenges.
Current aspirations for the outcomes of formal K-12 education have progressed beyond an emphasis on content acquisition by students. As interest in developing higher-order thinking skills grows, so does interest in measuring the development of these skills in students. Problem solving, communication, and collaboration are not competencies that can be evaluated by multiple-choice tests; observing students in the midst of their skill performances offers a better opportunity for meaningful evaluation. Established methodologies (case studies, among others) exist; however, scaling them remains a largely unmet challenge. Because of its anytime, anywhere, anyone mission and nature, Open Education has a particularly critical need for new evaluation methods, both for learners to mark their own progress and for providers to evaluate and improve their Open Educational Resources and programs.
Because of their online, digital nature, open education programs (programs employing and supporting participatory use of Open Educational Resources) offer tantalizing opportunities to measure higher-order skill development through analysis of learners' online participation and contributions. In particular, the real-time data logs of learner and educator activity maintained by online learning platforms offer exciting new opportunities for continuous formative evaluation of higher-order thinking performances. Because these logs reside in scalable data warehouses, these data have exciting potential to provide teachers with just-in-time feedback on particular students, and aggregated data could evaluate skill growth across learning communities. These opportunities are paired with significant challenges: organizing data for efficient human evaluation, developing computational algorithms that can meaningfully evaluate student work and inform learning and practice, tracking students across multiple platforms in light of student privacy and protection laws and guidelines, and many others.
We will offer two *brief* examples and then facilitate a discussion of the possibilities and challenges of leveraging open education platforms to more meaningfully evaluate student learning. First, we will hear about an ongoing study by researchers at the Harvard Graduate School of Education, who are investigating learning as exhibited in classroom wikis. Project Manager Justin Reich will briefly share their approach to evaluating expert thinking, complex communication, and new media literacy skills within these very diverse learning environments. Second, we will hear from Vital Signs (www.vitalsignsme.org), an open education/citizen science program of the Gulf of Maine Research Institute. Program Manager Sarah Kirn will share their progress in creating an evaluation approach that leverages the rich dataset of online student contributions (text, images, drawings, comments) to evaluate the development of higher-order thinking skills in participants.
Our discussion will engage session attendees on common challenges, opportunities, and possible approaches for using participation data and participant contributions to evaluate learning and the development of higher-order skills.