Practical Usability Evaluation
Gary Perlman
OCLC Online Computer Library Center
Dublin, Ohio 43017 USA
perlman@acm.org
Contents:
Practical Usability Evaluation is an introduction to cost-effective, low-skill, low-investment methods of usability assessment. The methods include:
- Inspection Methods (e.g., heuristic evaluation),
- Observational Skills and Video (including user testing with think-aloud protocols),
- Program Instrumentation, and
- Questionnaires.
The tutorial features many step-by-step procedures to aid in evaluation plan design.
Categories:
- [H.5.2] User Interfaces: Evaluation/methodology
- [D.2.2] Software Engineering: Tools and Techniques, User Interfaces
- [H.1.2] Information Systems: User/Machine Systems, Human Factors
Despite the great effort involved in developing user interfaces and the large potential costs of bad ones, many user interfaces are not evaluated for their usability or acceptability to users, thus risking failure in the field. Bad user interfaces can contribute to human error, possibly resulting in personal and financial damages. The ACM SIGCHI 1992 report Curricula for Human-Computer Interaction notes the importance of usability evaluation: "someone will evaluate usability even if not the developer, and, in some cases, not evaluating constitutes professional misconduct." Even though not all members of a software development team may be directly involved in evaluating usability, it is useful for all to have an appreciation of the methods for evaluating usability, particularly practical methods that can be applied with limited skills and resources. The methods covered here (Inspections, Observation, Instrumentation, and Questionnaires) can all be used by evaluators with a wide range of skills and with minimal equipment requirements.
The course will focus on four methods for evaluating systems:
- Inspection Methods, such as heuristic evaluation and evaluation checklists, help developers/evaluators focus attention on potential problem areas;
- Observational Skills and Video help find major usability problems early in system development (e.g., with prototypes);
- Program Instrumentation records the frequencies and times of actions users take in systems, information useful for isolating high and low usage and effort (a logging sketch follows this list);
- Questionnaires help gather structured feedback on system usability and let users help evaluators generate solutions (a scoring sketch appears below).
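To make the Program Instrumentation method concrete, here is a minimal logging sketch in Python. It is an illustration only, not tooling from the course: the names (UsageLog, summarize) and the tab-separated log format are assumptions made for the example.

    import time
    from collections import Counter

    class UsageLog:
        """Append-only log of timestamped user actions (illustrative)."""
        def __init__(self, path):
            self.path = path

        def record(self, action):
            # One tab-separated line per action; a real system would also
            # capture context such as the current screen or error state.
            with open(self.path, "a") as f:
                f.write("%.3f\t%s\n" % (time.time(), action))

    def summarize(path):
        # Reduce the log to the two measures named above: how often
        # each action occurs, and the times between successive actions.
        events = []
        with open(path) as f:
            for line in f:
                stamp, action = line.rstrip("\n").split("\t", 1)
                events.append((float(stamp), action))
        counts = Counter(action for _, action in events)
        gaps = [later[0] - earlier[0]
                for earlier, later in zip(events, events[1:])]
        return counts, gaps

Frequent actions preceded by long pauses are candidates for streamlining; actions that never appear in the log may point to features users cannot find.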
All the methods can be used by a broad base of
evaluators, minimizing skill and equipment
requirements. Exercises are assigned to provide
experience gathering and interpreting each kind of
evaluation information.
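Questionnaire data is just as easy to reduce by machine. The sketch below computes per-item means and standard deviations for rating-scale responses of the kind gathered by instruments like the QUIS (Chin, Diehl, and Norman, cited in the bibliography); the items and ratings here are invented for illustration.

    from statistics import mean, stdev

    # Invented ratings from five respondents on a 1 (bad) to 9 (good) scale.
    responses = {
        "overall reaction": [7, 8, 6, 9, 7],
        "ease of learning": [5, 6, 4, 6, 5],
        "error messages":   [3, 4, 2, 4, 3],
    }

    for item, ratings in responses.items():
        print("%-18s mean=%.1f sd=%.1f n=%d"
              % (item, mean(ratings), stdev(ratings), len(ratings)))

Low-scoring items (here, the invented error-messages item) show where to follow up with open-ended questions or observation.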
Objectives:
- To introduce cost-effective methods of evaluating interactive systems, particularly early in the development process, when redesign is least expensive.
- To provide participants with enough experience during the course that they can apply the methods on their own.
Audience:
- Managers interested in increasing their knowledge of usability evaluation so they can increase usability testing in their operations.
- Software Engineers interested in practical methods they can use to evaluate usability during the development process, not just as an afterthought.
- Human Factors Specialists who want to learn more about the methods covered.
Outline:
- Introduction to General Concepts
  - User Interface Acceptability
  - Problems of Interactive System Development
  - Goals for Interactive System Evaluation
  - Cost-Benefit Analysis of Usability Testing
  - Evaluating before Costly Implementation
- System Evaluation Steps
  - Which and How Many Users to Test?
  - Benchmark Tasks
  - Before the User Arrives
  - When the User Arrives
  - Data Collection
  - After the User Is Done
  - Interpreting Data
- Practical Evaluation Methods
  - Choosing an Evaluation Method
  - Inspection Methods
  - Observation and Video
  - Program Instrumentation
  - Questionnaires
- Summary and Conclusions
- Reference Materials
  - Societies and Conferences
  - Journals and Publishers
  - Usability Evaluation Bibliography
Bibliography:
- AERA, APA, and NCME (1985) Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.
- American Psychological Association (1982)
Ethical Principles in the Conduct of Research with Human Participants.
Washington, DC: APA.
- Bias, R. G. and Mayhew, D. J. (Eds.) (1994) Cost-Justifying Usability. Boston: Academic Press.
- Card, S. K., Moran, T. P. and Newell, A. (1983)
The Psychology of Human-Computer Interaction.
Hillsdale, NJ: Lawrence Erlbaum Associates.
- Chin, J. P., Diehl, V. A. and Norman, K. L. (1988)
Development of an Instrument Measuring User Satisfaction of the
Human-Computer Interface.
ACM CHI'88 Proceedings,
213-218.
- Dumas, J. S. and Redish, J. C. (1993) A Practical Guide to Usability Testing. Norwood, NJ: Ablex.
- Gould, J. D. and Lewis, C. (1985)
Designing for Usability: Key Principles and What Designers Think.
Communications of the ACM, 28:3,
300-311.
- Gould, J. D. (1988)
How to Design Usable Systems.
In M. Helander (Ed.)
Handbook of Human-Computer Interaction.
New York: North-Holland. 757-789.
- Grissom, S. B. and Perlman, G. (1995)
StEP(3D): A Standardized Evaluation Plan for Three-Dimensional Interaction Techniques.
International Journal of Human-Computer Studies, 43:1,
15-41.
- Hix, D. and Hartson, H. R. (1993) Developing User Interfaces: Ensuring Usability through Product and Process. New York: Wiley.
- Jeffries, R. et al. (1991) User Interface Evaluation in the Real World: A Comparison of Four Techniques. ACM CHI'91 Proceedings, 119-124.
- Karat, J. (1988)
Software Evaluation Methodologies.
Chapter 41 in M. Helander (Ed.)
Handbook of Human-Computer Interaction.
New York: North-Holland, 891-903.
- Landauer, T. K. (1988) Research Methods in Human-Computer Interaction. Chapter 42 in M. Helander (Ed.) Handbook of Human-Computer Interaction. New York: North-Holland, 905-928.
- Lewis, C. and Rieman, J. (1993) Task-Centered User Interface Design. (Text at ftp.cs.colorado.edu in /pub/cs/distribs/clewis/HCI-Design-Book.)
- Nielsen, J. (1993) Usability Engineering. Boston: Academic Press.
- Nielsen, J. and Landauer, T. K. (1993) A Mathematical Model of the Finding of Usability Problems. Proceedings of INTERCHI'93, 206-213.
- Nielsen, J. and Mack, R. L. (Eds.) (1994)
Usability Inspection Methods.
New York: John Wiley and Sons.
- Norman, D. A. (1990) The Design of Everyday Things. New York: Doubleday.
- Preece, J. (Ed.) (1993) A Guide to Usability: Human Factors in Computing. Wokingham, England: Addison-Wesley.
- Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. and
Carey, T. (1994)
Human-Computer Interaction.
Wokingham, England: Addison-Wesley.
- Ravden, S. J. and Johnson, G. I. (1989) Evaluating Usability of Human-Computer Interfaces: A Practical Method. Chichester, England: Wiley.
- Smith, S. L. and Mosier, J. N. (1984)
A Design Evaluation Checklist for User-System Interface Software.
Bedford, Mass.: MITRE TR-10090.
- Solso, R. L. and Johnson, H. H. (1989)
An Introduction to Experimental Design in Psychology: A Case Approach
(Fourth Edition). New York: Harper and Row.
- Whiteside, J., Bennett, J. and Holtzblatt, K. (1988)
Usability Engineering: Our Experience and Evolution.
Chapter 36 in M. Helander (Ed.)
Handbook of Human-Computer Interaction.
New York: North-Holland, 791-817.
- Wiklund, M. E. (Ed.) (1994)
Usability in Practice: How Companies Develop User-Friendly Products.
Boston: Academic Press.
- Wright, P. C. and Monk, A. F. (1991)
A Cost-Effective Evaluation Method for Use by Designers.
International Journal of Man-Machine Studies, 35:6,
891-912.