More than Surveys: Using Qualtrics as a Teaching Tool

In addition to its role as a research tool, Qualtrics can be used as a teaching tool — beyond its obvious utility in a marketing research course.  For example, at the Marriott School of Management (Brigham Young University) I use Qualtrics in my theory-driven consumer behavior MBA elective.  In essence, I give two assignments in which students create stimuli designed for consumers.  I use Qualtrics to show the stimuli to consumers and collect their evaluations.  The results give students feedback from actual consumers, yielding a more “hands-on” assignment than traditional instructor feedback alone can achieve. Qualtrics makes three appearances in my course.

Comprehension Assignment

In its first appearance, Qualtrics is part of a hands-on assignment regarding the comprehension stage of information processing.  The learning objectives of the assignment are for students to improve their ability to (1) simplify messages to maximize comprehension among consumers, and (2) anticipate and minimize unintended interpretations among consumers.  I do not discuss these learning objectives with students in advance.  Instead, I give students the following instructions:

This is an assignment in the sense that your group’s results will add to your class participation score, and it is a contest in the sense that the winning group will receive a $20 gift card for the BYU food court.

In essence, your group will write a set of instructions for accomplishing a certain task.  The task is described below.  The group that instructs the most people to accurately complete the task will receive the prize.

Here are the details.  For each group, I will recruit and pay 50 people from the United States (via Amazon’s Mechanical Turk) to participate in this exercise.  I will give each participant the instructions written by your group.  Your group’s 50 participants will not see the instructions from any other group.  I will collect each participant’s response, and I will evaluate its accuracy.  The group that yields the highest rate of accuracy among its 50 participants will win the contest.  Participants will also be asked a few questions that evaluate the clarity of your instructions; their responses will be used in case of a tie.

A successful response to the task is the following:

“ReaganObamaClintonFordCarterBushBushNixon” (not including the quotes).

Any deviation from this correct response (e.g., using all lower-case letters) will be scored as an inaccurate response.

In the instructions you are writing, you are not allowed to simply show participants the text they should type.  Instead, you must walk the participants through a process to create the text.  My process for generating the text was to list every U.S. president after LBJ and sort them by the third character in their name.  You can describe any process you like, except that you may not write the words “Obama,” “Reagan,” etc.  Also, you may not include an e-mail address through which participants can ask you questions; your instructions need to stand on their own.

If you like, you may include website links in your instructions as long as they’re not websites that are created by you.  For example, a link to a website that lists all the U.S. Presidents would be fine.

Your instructions should be text only (i.e., no photos, no video, no audio).  Simple HTML (e.g., to create bold or colored letters) is acceptable, but not necessary.  Other than these guidelines, you may use any format you like for the instructions.

You don’t need to write any “Welcome!” or “Thank you!” messages.  I will take care of that.  The last line of your instructions should be the following:  Type the list of characters you have created into the box below and click “Continue.”

You can assume the participants are at least 18 years old, fluent in English, and living in the United States.  You can also assume they have internet access and some familiarity with computers.

In pretesting, I found that study participants had difficulty comprehending the task (as evidenced by their incorrect responses) unless the instructions were written with great care.  My experience in the classroom has been that the task is difficult enough that crafting a set of instructions that maximizes participants’ comprehension is genuinely challenging for students.
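For readers curious about the mechanics, the text-generation process described in the instructions (listing the post-LBJ presidents and sorting them by the third character of each surname) can be sketched in Python; the chronological list and the sort key come from the assignment text, while the variable names are my own.

```python
# Presidents after LBJ, in chronological order of first taking office.
presidents = ["Nixon", "Ford", "Carter", "Reagan",
              "Bush", "Clinton", "Bush", "Obama"]

# Sort by the third character of each surname (Python's sort is stable,
# so names tied on that character keep their chronological order),
# then concatenate the sorted names into one string.
target = "".join(sorted(presidents, key=lambda name: name[2]))

print(target)  # ReaganObamaClintonFordCarterBushBushNixon
```

Note that the sort's stability matters: "Reagan" and "Obama" tie on the letter "a", as do the two Bushes on "s" and "Ford"/"Carter" on "r", so the chronological input order resolves the ties to produce the exact answer string given in the assignment.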

In essence, I use Qualtrics to create a between-subjects experiment.  In addition to conditions created with the sets of instructions submitted by students, I add a condition of my own: a “low-water mark,” which is a bare-bones set of instructions that I created with hardly any effort toward maximizing comprehension.

After collecting the data using Qualtrics, I discuss the results with the class.  I lay some groundwork for the third appearance of Qualtrics (much later in my course) by explaining Mechanical Turk in some detail, including how one can pay people to perform tasks, how surprisingly inexpensive it is, and how easy the process is to perform.  (I also mention my opinion that Mechanical Turk in particular [and crowd-sourcing in general] is a remarkable resource that could be used in a variety of business models, and its potential has not yet been fully realized.)  I then report the results of the Qualtrics study, including the overall success rate for each group.  Of course, during class the group with the highest success rate is awarded their prize.  More importantly, by comparing their group’s success rate to the “low-water mark,”  students can see the influence of their efforts toward increasing comprehension, which leads to a discussion of learning point #1.  During class I also discuss common incorrect responses to the task, and I point out groups that effectively prevented those incorrect responses by “trapping” miscomprehensions with double-check instructions, all of which leads to a discussion of learning point #2.

Affect Assignment

For its second appearance in my course, Qualtrics is part of a hands-on assignment about consumers’ affective responses to advertisements.  The setup of this assignment is similar to the Comprehension Assignment: students submit ad elements, I create a Qualtrics experiment with their ad elements as stimuli, I use Mechanical Turk to recruit participants, and I discuss the study’s outcomes with students and tie the results to the assignment’s learning objectives.  To lay more groundwork for the third appearance of Qualtrics later in the course, I ensure that the students have all the details necessary to re‑create my Qualtrics questionnaire if they like.

Experimentation Discussion

The third appearance of Qualtrics in my course is in the context of a discussion about experimentation.  The discussion point is that although consumer behavior theory suggests potential managerial actions and makes predictions, outcomes among consumers cannot be predicted with certainty, so experimentation—testing a variety of managerial actions—is often of great value.  To emphasize this point, I explain how the Affect Assignment could have been “beaten” with a high likelihood of winning full assignment points and the $20 prize.  I explain that students could have pre-tested a variety of submissions to the Affect Assignment using the same methods I would eventually use to score the assignment.  I remind the students that during their marketing research class they learned how to use Qualtrics, and I remind them that earlier in the course I had explained how easy and inexpensive it is to use Mechanical Turk.  I explain that rather than simply using their judgment to winnow their many ideas down to a single one for submission to the Affect Assignment, they could have pre-tested all their ideas (analogous to testing a variety of managerial actions), and submitted the one that performed the best.  The learning point is that judgment is good, but when experimentation is practical it is often superior to judgment alone.

In sum, I have found Qualtrics to be a useful tool for teaching.  For some assignments, Qualtrics allows me to go beyond instructor evaluation alone, giving students the opportunity to see the responses of “real live people” to their work.

– Eric DeRosia