This week’s assignment has had me spending A LOT of time staring at my notes, trying to think through the challenges of assessing learning in online, asynchronous modules. Even if this were an online course, I would still have opportunities to interact with students, and that opens up so many more possibilities for active learning assignments and quality assessment tools. When you’re designing stand-alone online learning modules, the challenges are both conceptual AND technological. I could probably dream up a lot of interesting assessments, but can they be implemented?
That said, here are my thoughts on Fink’s Procedures for Educative Assessment:
As I noted last week, for the purposes of this class I am working on an introductory research skills module aimed at freshmen, covering the library catalogs and journal databases. Here are two forward-looking assessment situations:
1. A student needs to locate a specific book chapter that is listed as required reading on a class syllabus
2. A student needs to locate three peer-reviewed journal articles on a specific topic for a class assignment
One of my learning goals relates specifically to situation 2: evaluate three major databases and choose the appropriate one for starting journal article searches. I’m having a harder time coming up with criteria that are more than just binary. For example, one criterion for meeting this goal would be: the student selects the correct starting database for the discipline in the assignment (e.g., Web of Science for a hard-science topic). Another might be: the student selects Web of Science (WoS) or Academic Search Premier (ASP) if the assignment requires the most recent research (i.e., the last 3-5 years). A third would be: the student selects at least one additional database for the search and compares the results.
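Just to make concrete what I mean by “binary,” here is a rough sketch (in Python, purely for illustration; the database names, discipline mapping, and function are all made up by me, not taken from any real tutorial tool) of what auto-checking these three criteria behind the scenes of a module might look like:

```python
# Illustrative sketch only: auto-checking the three database-selection criteria.
# The discipline-to-database mapping and the scenario fields are hypothetical.

RECOMMENDED_STARTING_DB = {
    "hard science": "Web of Science",
    "social science": "Academic Search Premier",
    # ... other disciplines would be filled in for a real module
}

def score_database_choice(scenario, choices):
    """Return simple pass/fail results for each criterion.

    scenario: dict with 'discipline' and 'needs_recent_research' (bool)
    choices:  list of database names the learner selected, in order
    """
    results = {}

    # Criterion 1: correct starting database for the discipline
    expected = RECOMMENDED_STARTING_DB.get(scenario["discipline"])
    results["correct_starting_db"] = bool(choices) and choices[0] == expected

    # Criterion 2: WoS or ASP when the assignment requires recent research
    if scenario["needs_recent_research"]:
        results["recent_research_db"] = bool(choices) and choices[0] in (
            "Web of Science", "Academic Search Premier")

    # Criterion 3: at least one additional database selected for comparison
    results["compares_additional_db"] = len(choices) >= 2
    return results

# Example: a hard-science assignment that needs recent research
print(score_database_choice(
    {"discipline": "hard science", "needs_recent_research": True},
    ["Web of Science", "Academic Search Premier"]))
```

Each check still comes out as a simple pass/fail, which is exactly the limitation I’m wrestling with.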
I’m going to talk about self-assessment, FIDeLity feedback, and the active learning stuff together, because I think this is really the tricky part of designing online learning modules. Let’s face it, most online tutorials are still fairly passive: a series of text-heavy pages, maybe a narrated PowerPoint, or screencasts. We’ve been seeing more going on with tools like pop-up quizzes, which helps, but the simple fact is that it is much harder to engage students in active experiences, and relatedly, hard to give immediate feedback (or any feedback at all) when there is no instructor and the modules are accessed asynchronously. In most cases, you are back to delivering information to learners, so there is no opportunity to actually see what they are doing, and therefore it is difficult to assess their learning. The actual proof of learning might come when they turn their assignments in to their classes, and the librarian might never find out whether an online module was helpful to students or not.
However, I don’t think all hope is lost. There does seem to be technology that makes it easier to embed quizzes into screencasts, so in addition to some simple multiple-choice assessment of knowledge, I thought one type of self-assessment might be to ask students to rate their confidence in carrying out the task on a Likert-type scale. I also think the use of simulations may help in online library instruction. For example, I recently heard about the Guide on the Side tool from the University of Arizona (where our course leaders are!), and it looks very promising (does it work with resources that are behind a login, like databases?). In the case of my specific learning goal, if I had a tool that would let learners run simulated searches with feedback, like some of the samples on the Guide on the Side page, that would be more of an active experience and would hopefully allow for more accurate assessment.

I am also interested in exploring the possibilities of games for designing online instruction in libraries, but it is not a subject I know a lot about, so if anybody here has ever used games or other simulations, I would love to hear about it! For those of us working on asynchronous online modules, I think the real challenge lies in what tools are available at your particular institution, because unfortunately that is going to limit the kinds of instructional and assessment activities you can design. The other constraint that goes with that is that all the feedback has to be pre-programmed in, so there is no flexibility for individualization or spontaneity.
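To show what “pre-programmed feedback” ends up meaning in practice, here is a tiny, purely hypothetical sketch of a single embedded quiz item plus a confidence self-rating. None of this reflects how Guide on the Side or Captivate actually works; the question, feedback text, and function are placeholders I invented:

```python
# Hypothetical sketch: canned feedback for one embedded quiz item, plus a
# Likert-type confidence self-assessment. A real authoring tool would have
# its own mechanism; this just shows that every response is written in advance.

QUESTION = "Which database would you start with for a recent neuroscience topic?"

CANNED_FEEDBACK = {
    "Web of Science": "Good choice - strong coverage of recent science research.",
    "Academic Search Premier": "Workable, but Web of Science covers this area in more depth.",
    "Library catalog": "The catalog finds books and journals, not individual articles - try a database.",
}

CONFIDENCE_PROMPT = "How confident are you that you could do this on your own? (1 = not at all, 5 = very)"

def respond(answer, confidence):
    """Return the pre-written feedback, plus a note based on self-rated confidence."""
    feedback = CANNED_FEEDBACK.get(
        answer, "Hmm, that isn't one of the databases covered in this module.")
    if confidence <= 2:
        feedback += " You might want to replay the search demo before moving on."
    return feedback

print(respond("Library catalog", 2))
```

Every possible response has to be anticipated and written ahead of time, which is exactly why there is no room for individualization or spontaneity.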
Speaking of feedback, I want to talk a little bit about the FIDeLity model. As I already noted, frequent and immediate feedback are big hurdles in asynchronous online learning, particularly in self-contained modules. Something that does not get enough attention, however, but that I think is really important, is how emotion is communicated online, particularly in text-based environments. I’m coming at this as somebody with a background in online communication research, and based on some recent research I’ve seen, as well as a decade-plus of observing and studying online conversation, my current working hypothesis is that people are generally very bad at accurately decoding the emotional intent of text-based communication. For online learning, I think the implication is that your words may be perceived as harsher when they are delivered through text. Even feedback that is meant to sound positive may, at best, be perceived as neutral. So if a goal is to design online learning that contains feedback delivered with empathy, then I think we have to come up with ways to move beyond text and incorporate audio and/or video feedback in our modules. Of course, that adds to the workload and the technical challenges, which is also an inescapable part of online instructional design. It’s not something I’ve seen much of, so I’d be interested if anybody knows of good examples.
I’m not sure if I’ve quite answered all the questions, but I’d love to hear what people think, particularly if you are also trying to create stand-alone online learning content.
Hi, Kris!
I’m also working on an online info lit course, and we looked into Guide on the Side and Adobe Captivate when we began our online courses. We really liked Guide on the Side, but because our school had a good deal with Adobe we actually ended up using Captivate. It offers Demo, Training, and Assessment modes (or a combination of the three): students watch you do something, then try it themselves with your instructions, then do the activity without the instructions. It’s been really helpful in many of the online courses we have on campus.
I completely agree with the struggle of not getting emotions across correctly in online courses. I’m a pretty active instructor, and for me, I think it’s going to be difficult to tell whether students are getting things as we go along or whether they’re just getting lucky. For example, in a physical class it’s very easy to tell whether students have read the assigned articles or watched the videos they were supposed to before class started. I can watch some students in the library pulling out random sentences for discussion boards in other online courses, and my worry is that this will just repeat in my own course.
Ashley
Hi Kris,
I love your post. It really made all the content from this week make so much more sense for me! I’ve been so busy with work lately, I’ve only been able to give this course a fraction of the attention it needs (excuses, excuses…). What I did realize this week was that I really need to scale back on my learning goals and expectations for what students can reasonably learn in a one-shot online module.
I don’t have a background in communication research like you (sounds really interesting!), but just from my own experience I definitely see how difficult it is to convey emotion through text. I think incorporating audio or video is a great idea for providing feedback. I might steal that idea for my online modules. :)
Aissa
These are really fantastic comments about online/asynchronous learning, and they are giving me a lot to think about and consider for my own online course. I really like the idea of providing audio or video feedback to students as a way of making feedback sound more positive and encouraging.
As I was thinking about your criteria for situation 2, it occurred to me to ask why your criteria are about databases rather than about peer review. Why not have students write short answers in which they show their ability to identify peer-reviewed articles? After going over peer review, what it means, and a list of databases where peer-reviewed articles can be found, they could explain why they think a particular article is peer-reviewed, and be rated excellent, good, acceptable, poor, etc., depending on how many they correctly identify. That seems a bit less binary, but then again, this might not be your focus for the module.
Thanks everyone for the feedback! Thane, I wish I could actually implement something like what you suggest – I think it would be great. Unfortunately, I’m working on stand-alone, tutorial-type modules rather than an actual course with enrolled students, so there is no way to solicit open-ended responses. Basically, we currently have this resource: http://hcl.harvard.edu/research/toolkit/, and I’m trying to think about how to redesign it as a more interactive module.