Practical Usability Evaluation

Gary Perlman
OCLC Online Computer Library Center
Dublin, Ohio 43017 USA
perlman@acm.org

ABSTRACT

Practical Usability Evaluation is an introduction to cost-effective, low-skill, low-investment methods of usability assessment. The methods include
  1. Inspection Methods (e.g., heuristic evaluation),
  2. Observational Skills and Video (including user testing with think-aloud protocols),
  3. Program Instrumentation, and
  4. Questionnaires.
The tutorial features many step-by-step procedures to aid in evaluation plan design.

KEYWORDS

[H.5.2] User interface, Evaluation/methodology, [D.2.2] Software engineering, Tools and techniques, User interfaces, [H.1.2] Information systems, User/machine systems, Human factors.

INTRODUCTION

Despite the great effort involved in developing user interfaces and the large potential costs of bad ones, many user interfaces are never evaluated for their usability or acceptability to users, thus risking failure in the field. Bad user interfaces can contribute to human error, possibly resulting in personal and financial damages. The ACM SIGCHI 1992 report on Curricula in HCI cites the importance of usability evaluation: "someone will evaluate usability even if not the developer, and, in some cases, not evaluating constitutes professional misconduct." Even though not all members of a software development team may be involved directly in the evaluation of usability, it is useful for all to have an appreciation of the methods for evaluating usability, particularly practical methods that can be applied with limited skills and resources. The methods covered here (Inspections, Observation, Instrumentation, and Questionnaires) can all be used by evaluators with a wide range of skills and with minimal equipment requirements.

CONTENT

The course will focus on four methods for evaluating systems:
  1. Inspection Methods, such as heuristic evaluation and evaluation checklists, help developers/evaluators focus attention on potential problem areas.
  2. Observational Skills and Video help find major usability problems early in system development (e.g., with prototypes).
  3. Program Instrumentation records the frequencies and times of actions users take in systems, information useful for isolating areas of high and low usage and effort.
  4. Questionnaires help gather structured feedback on system usability and enable users to help evaluators generate solutions.
All the methods can be used by a broad base of evaluators, minimizing skill and equipment requirements. Exercises are assigned to provide experience in gathering and interpreting each kind of evaluation information.
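To make the instrumentation method concrete, the sketch below shows one minimal way such logging might be implemented. The `ActionLog` class and its methods are illustrative assumptions, not part of the tutorial materials: each user action is recorded with a timestamp, and simple summaries expose action frequencies and the average time between actions.

```python
import time

# Minimal sketch of program instrumentation (an assumed design, not the
# tutorial's): log each user action with a timestamp, then summarize
# frequencies and inter-action times to spot high/low usage and effort.

class ActionLog:
    def __init__(self):
        self.events = []  # list of (timestamp, action) pairs

    def record(self, action, timestamp=None):
        """Append one user action; timestamp defaults to the current time."""
        self.events.append((time.time() if timestamp is None else timestamp,
                            action))

    def frequencies(self):
        """Count how often each action occurred."""
        counts = {}
        for _, action in self.events:
            counts[action] = counts.get(action, 0) + 1
        return counts

    def mean_interval(self):
        """Average time between consecutive actions (a rough effort measure)."""
        if len(self.events) < 2:
            return None
        times = [t for t, _ in self.events]
        gaps = [b - a for a, b in zip(times, times[1:])]
        return sum(gaps) / len(gaps)

# Example session replayed with fixed timestamps (seconds):
log = ActionLog()
for t, action in [(0.0, "open"), (1.5, "search"), (2.0, "search"), (6.0, "help")]:
    log.record(action, timestamp=t)

print(log.frequencies())    # {'open': 1, 'search': 2, 'help': 1}
print(log.mean_interval())  # 2.0
```

In practice the `record` calls would be hooked into the application's event-dispatch code; the long gap before "help" in the example is the kind of pattern an evaluator would follow up on.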

OBJECTIVES


AUDIENCE


TOPICS


CORE BIBLIOGRAPHY

  1. AERA, APA, and NCME (1985) Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association.
  2. American Psychological Association (1982) Ethical Principles in the Conduct of Research with Human Participants. Washington, DC: APA.
  3. Bias, R. G. and Mayhew, D. J. (Eds.) (1994) Cost-Justifying Usability, Boston, Mass.: Academic Press.
  4. Card, S. K., Moran, T. P. and Newell, A. (1983) The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
  5. Chin, J. P., Diehl, V. A. and Norman, K. L. (1988) Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. ACM CHI'88 Proceedings, 213-218.
  6. Dumas, J. S. and Redish, J. C. (1993) A Practical Guide to Usability Testing. Norwood, NJ: Ablex.
  7. Gould, J. D. and Lewis, C. (1985) Designing for Usability: Key Principles and What Designers Think. Comm. of the ACM, 28:3, 300-311.
  8. Gould, J. D. (1988) How to Design Usable Systems. In M. Helander (Ed.) Handbook of Human-Computer Interaction. New York: North-Holland. 757-789.
  9. Grissom, S. B. & Perlman, G. (1995) StEP(3D): A Standardized Evaluation Plan for Three-Dimensional Interaction Techniques. International Journal of Human-Computer Studies, 43:1, 15-41.
  10. Hix, D. and Hartson, H. R. (1993) Developing User Interfaces: Ensuring Usability through Product and Process. New York: Wiley.
  11. Jeffries, R. et al. (1991) User Interface Evaluation in the Real World: A Comparison of Four Techniques. ACM CHI'91, 119-124.
  12. Karat, J. (1988) Software Evaluation Methodologies. Chapter 41 in M. Helander (Ed.) Handbook of Human-Computer Interaction. New York: North-Holland, 891-903.
  13. Lewis, C. and Rieman, J. (1993) Task-Centered User Interface Design. (text at ftp.cs.colorado.edu in /pub/cs/distribs/clewis/HCI-Design-Book).
  14. Landauer, T. K. (1988) Research Methods in Human-Computer Interaction. Chapter 42 in M. Helander (Ed.) Handbook of Human-Computer Interaction. New York: North-Holland, 905-928.
  15. Nielsen, J. (1993) Usability Engineering, Boston: Academic Press.
  16. Nielsen, J. and Landauer, T. K. (1993) A Mathematical Model of the Finding of Usability Problems. Proceedings of INTERCHI'93. 206-213.
  17. Nielsen, J. and Mack, R. L. (Eds.) (1994) Usability Inspection Methods. New York: John Wiley and Sons.
  18. Norman, D. A. (1990) The Design of Everyday Things. Doubleday.
  19. Preece, J. (Ed.) (1993) A Guide to Usability: Human Factors in Computing. Wokingham, England: Addison-Wesley.
  20. Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. and Carey, T. (1994) Human-Computer Interaction. Wokingham, UK: Addison-Wesley.
  21. Ravden, S. J. and Johnson, G. I. (1989) Evaluating Usability of Human-Computer Interfaces: A Practical Method. Chichester, England: Wiley.
  22. Smith, S. L. and Mosier, J. N. (1984) A Design Evaluation Checklist for User-System Interface Software. Bedford, Mass.: MITRE TR-10090.
  23. Solso, R. L. and Johnson, H. H. (1989) An Introduction to Experimental Design in Psychology: A Case Approach (Fourth Edition). New York: Harper and Row, Publishers.
  24. Whiteside, J., Bennett, J. and Holtzblatt, K. (1988) Usability Engineering: Our Experience and Evolution. Chapter 36 in M. Helander (Ed.) Handbook of Human-Computer Interaction. New York: North-Holland, 791-817.
  25. Wiklund, M. E. (Ed.) (1994) Usability in Practice: How Companies Develop User-Friendly Products. Boston: Academic Press.
  26. Wright, P. C. and Monk, A. F. (1991) A Cost-Effective Evaluation Method for Use by Designers. International Journal of Man-Machine Studies, 35:6, 891-912.