Physics tells us you can't measure something without altering the thing being measured.
A courtroom judge also tells the lawyer to stop asking leading questions. Of course, the witness hears this ruling. Does it matter when the judge says, "The witness will please disregard the question"?
Likewise, to what degree do you influence your interviewee just by the way you ask your questions?
If you say, "tell me how you feel about these colors" have you biased your listener?
You bet. You just told them that colors matter more to YOU than whatever element on the page mattered most to THEM. Suddenly, their priorities (and thoughts) shift.
This kind of "biasing" of interviews remains a significant problem when trying to probe into the fundamental emotional appeals of your site. If you seek a fresh perspective on branding, or product features, or site themes, how can you avoid such biasing?
How do you ferret out the "truth" from that user? That is the question. Speaking of questions, here's some research showing that your approach to questionnaires has its own problems.
Three researchers looked at whether people responded differently when questions in a Likert format had the "Strongly Agree" component first versus last.
Here's a sample question, out of 10 questions presented to 104 undergraduate students (Strongly Agree on the left).
[My] college has an excellent reputation.
Another 104 students got this version (Strongly Agree on the right).
[My] college has an excellent reputation.
What is your prediction? Will Strongly Agree on the left tend to get greater agreement than when it's on the right?
Have you ever wondered if it made a difference?
What should you do about this?
Taking all 10 items into account, our three researchers found that the two approaches gave different overall scores. Three of the questions in particular contributed to the difference: for those three, "Strongly Agree" on the left received greater agreement than "Strongly Agree" on the right.
But note this important qualification: all three of those questions were "favorably worded" (three of the five favorably worded items).
The researchers had carefully avoided the "automatic response" problem typical for questionnaires. Sometimes participants automatically circle everything on the right or left. Perhaps you've done this, yourself.
Therefore, five of their ten questions were worded positively, like "College courses are useful". The other five were negatively worded, like "College faculty is extremely unqualified." The questions were mixed more or less at random.
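Mixing favorably and unfavorably worded items means the negative items must be reverse-scored before any averaging. A minimal sketch of that step (the item names here are hypothetical, not from the study):

```python
# Reverse-score negatively worded Likert items (1..5 scale) so that a
# higher number always means a more favorable response, then average.
NEGATIVE_ITEMS = {"faculty_unqualified", "courses_boring"}  # hypothetical names

def reverse_score(value, scale_min=1, scale_max=5):
    """Map 1<->5, 2<->4, and leave 3 unchanged on a 1..5 scale."""
    return scale_max + scale_min - value

def respondent_mean(responses):
    """responses: dict of item name -> raw 1..5 rating for one respondent."""
    adjusted = [
        reverse_score(v) if item in NEGATIVE_ITEMS else v
        for item, v in responses.items()
    ]
    return sum(adjusted) / len(adjusted)
```

A respondent who strongly agrees that "college courses are useful" (5) and strongly disagrees that "college faculty is extremely unqualified" (1) scores 5.0 after reverse-scoring, as consistency requires.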
The authors suggest that in this study, the students liked their college.
"Thus, their responses to unfavorably worded items required some degree of active participation considering the effort necessary to overcome any yea-saying response tendency. Since the responses to the unfavorably worded items did not require this kind of cognitive effort, the bias towards the left side of the scale would be more in evidence."
The authors state their findings were in agreement with other research.
Our usability recommendation: Beware of "observer effects" with Likert scale questionnaires.
Putting "Strongly Agree" on the left may unduly enhance scores when the question is favorably slanted and participants are favorably disposed to the topic.
Psychologists have long been aware of the need to "let the patient speak their piece". Carl Rogers invented "unconditional positive regard" as a means for cultivating a non-judgmental relationship with his psychotherapy patients. That's one way to avoid the observer effect.
However, a more systematic approach to discovering the "mental constructs" used by people for interpreting their world can be found in the "Repertory Grid Interview" developed by George Kelly.
Many usability studies have adopted the Repertory Grid Interview (RGI) technique. A 2009 study by Veronica Hinkle is easily found on the web, as are other references to the RGI method.
RGI provides a straightforward model that helps you minimize "observer effects" as you interview participants about their goals, mental models, and even their PET (Persuasion, Emotion and Trust) expectations.
Marketers regularly use RGI to understand attitudes about product branding, functions, usage, and even product concepts.
Here are the steps for discovering what your end-users feel is important about your website and the offerings it presents. I report the study author's story below.
Assume you provide the UX for a high-end cooking and specialty food website. You're driven to enhance your competitive position by discovering which design elements affect the "first impression" the most. You locate five other home pages to compare with your own.
You want to avoid biasing participant responses with the same old drivel. You want to discover something new, exciting, and innovative for your team!
The research involved eight participants individually going through the following process. Reflecting their interest and involvement, seven agreed that they liked to cook; one was neutral. Here are the sites.
Phase 1. ("Construct elicitation")
1. Veronica created 6 sets, each with three home pages ("triads"), that systematically compared all six pages. (NOTE: your own study could include your own 6 alternative designs, or some other useful set of comparisons.)
Here is a table to illustrate the "combinations". Present them randomly across participants; only one such sequence is indicated in the left column.
| Sequence | Site A | Site B | Site C | Site D | Site E | Site F |
|----------|--------|--------|--------|--------|--------|--------|

(Each row marks the three sites shown together in that triad.)
2. Each participant examined three of the website home pages at once. The author asked: "Tell me what two of these pages have in common, and give me a name for that." Participants received a briefing on the interview goals and an example of this process before starting.
During the process, Veronica wrote down their answers. They might have said, "Sites A and B both have pictures of people at the top of the page." This reflects the construct used by the participant when forming their first impression.
3. Veronica then asked: "For the other, third website, tell me what it does not have in common with the first two."
She wrote down their answer. They may have said "This site has a picture of a laptop computer." This reflects the contrast used during the first impression.
She repeated steps 2 and 3 for all six sets of pages.
4. If the participant previously gave the same construct, Veronica elicited a "deeper" response using laddering techniques that probe for the reasons underlying the emotional appeal of that construct. It was also acceptable if a participant could not think of a construct.
5. Veronica listed the "constructs" she heard. Each construct received a name: e.g., "Picture of people" versus "Picture of computers". This process minimizes bias because the participants generate the constructs themselves.
6. Participants wrote their various construct and contrast pairs on index cards. Each card also had a horizontal rating scale ranging from 1 to 5 where 1 = matches construct and 5 = matches contrast.
The construct count ranged from 10 to 19 per interview, with an average of 14.
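The triad scheme in step 1 of Phase 1 can be sketched as code. This is one plausible construction, not the study's actual combinations: a cyclic design over six sites that yields six triads in which each site appears exactly three times.

```python
def triads(sites):
    """Cyclic design: triad i pairs sites at offsets {i, i+1, i+3} (mod n).
    For n = 6 sites this yields 6 triads, each site appearing exactly 3 times."""
    n = len(sites)
    return [
        (sites[i], sites[(i + 1) % n], sites[(i + 3) % n])
        for i in range(n)
    ]

sets = triads(["A", "B", "C", "D", "E", "F"])
```

To present the triads in random order across participants, as the table's note suggests, shuffle `sets` (e.g., with `random.shuffle`) once per participant before the session.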
Phase 2. ("Rating phase.")
1. At the end of the interview, the participant rated all six web pages using their own construct/contrast pairs. For each construct/contrast pair they entered a score from 1 to 5 where 1 represented the construct and 5 represented the contrast.
2. Veronica converted the participant's scores into a computational score as follows.
Since zero falls midway between the Construct and the Contrast, it reflects a neutral point. Other scores reflect a departure either toward the Construct (a negative score) or toward the Contrast (a positive score).
| Construct/Contrast Scale | Score Conversion |
|--------------------------|------------------|
| 1 (matches construct)    | -2               |
| 2                        | -1               |
| 3                        | 0                |
| 4                        | +1               |
| 5 (matches contrast)     | +2               |
3. Veronica averaged the participant scores by "meta-category" to accommodate the differences in constructs between participants. Each meta-category groups the related participant constructs contributing to the overall first impression; its score is the average of each participant's scores across their constructs in that category.
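The conversion and meta-category averaging in steps 2 and 3 can be sketched as follows (the site and category names are illustrative, not the study's data):

```python
from collections import defaultdict

def convert(rating):
    """Map the 1..5 construct/contrast rating to -2..+2,
    so the midpoint 3 becomes a neutral 0."""
    return rating - 3

def meta_category_means(ratings):
    """ratings: list of (site, meta_category, raw 1..5 rating) tuples,
    pooled across participants after their constructs have been grouped
    into meta-categories. Returns {site: {meta_category: mean score}}."""
    pooled = defaultdict(lambda: defaultdict(list))
    for site, category, raw in ratings:
        pooled[site][category].append(convert(raw))
    return {
        site: {cat: sum(vals) / len(vals) for cat, vals in cats.items()}
        for site, cats in pooled.items()
    }
```

A strongly negative mean means the group's constructs characterized that site within the meta-category; a strongly positive mean means the contrasts did.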
Aesthetics received the most extreme scores and thus was most influential in creating the first impression. Williams-Sonoma did best across the meta-categories as well as across individual participant scores.
| Site           | Aesthetics | …    | …    | …    | …    | Overall (rank) |
|----------------|-----------:|-----:|-----:|-----:|-----:|---------------:|
| Dean & DeLuca  | -2.5       | 0.20 | -1.0 | 3.3  | -0.5 | 2.3 (4th)      |
| Crate & Barrel | 3.2        | 2.6  | 2.0  | -3.7 | -0.8 | 6.1 (2nd)      |
4. After the scores were converted to -2, -1, 0, 1, 2, Veronica then summed the scores within a meta-category and took the mean for a given store. Thus, a -2.5 (for Dean & DeLuca) means the "construct" characterized that store for the group of 8 participants within the Aesthetics meta-category. The constructs for Aesthetics included "colors, pictures, font styles, page layout, and seasonality". A positive score, like 3.8 for Williams-Sonoma means participants favored the "contrast" within the Aesthetics meta-category.
Participant comments illustrate this Aesthetic construct: "colors and font professionally done and pleasing, seasonal themes are predominant, colors blend better and are softer, pictures interest you in the products."
5. Veronica indicates the most important meta-categories appear to be aesthetics/layout, promotional products, and ease of browsing and shopping.
6. In summary, the interviews lasted from 30 to 60 minutes, averaging 40 minutes. The researcher suggested having participants rank the websites in order of preference. This would corroborate the averages given by the meta-category scores and the home page scores.
Do you feel this process could help you avoid the bias you might introduce into an interview?
You would not be pointing to any elements of your own design to start the conversation.
Better yet, you will have put any web page of special interest to YOU in a context that lets your PARTICIPANT decide what's interesting.
This is the achievement of the Repertory Grid Technique.
Veronica made these additional observations...
"This in-depth information about important features allows researchers to create benchmarking goals for the first impressions of a home page."
Ah, tell me, just how do you feel about that?
What name or "construct" would you use to label that feeling?
And how does this contrast with asking direct questions, or even administering a Likert questionnaire?
Friedman, H. H., Herskovitz, P. J., and Pollack, S. (1993). The Biasing Effects of Scale-Checking Styles on Response to a Likert Scale. Proceedings of the Survey Research Methods Section, American Statistical Association, pp. 792-794.
Hinkle, Veronica (2009). Using Repertory Grid Interviews to Capture First Impressions of Home Pages. Usability News, 11 (2). (See references in this article for other usability studies using Repertory Grid Technique.)