Building today’s systems requires a more intimate understanding of users’ work than ever before. Computers are smaller and more common and interfaces are more powerful. Today, many users of computers neither know nor wish to learn how the computer operates. They merely wish to get their jobs done.

In addition, vendors are under increasing pressure to develop innovative products quickly. To be innovative means to address important needs in new ways. Since existing products cannot act as a model, guidance must come from users themselves.

As the industry has recognized these challenges, practitioners are looking for new ways to involve the customer closely in design. This has resulted in such approaches as Joint Application Design [Wood 89], user-centered requirements analysis [CMartin 88], user-centered design [Norman 86], and many participatory design techniques [Greenbaum 91, Schuler 93], including our own Contextual Design [Beyer 93].

These customer-centered design approaches make the customer, and the understanding of the customer, the center of design activities. The two primary questions such approaches address are:

  • How do I understand the customer?
  • How do I ensure this understanding is reflected in my system?

Understanding the customer is hard. Design teams need extensive, detailed information about customers and how they work to build systems that support them well. The first requirement on any customer-driven process is to build awareness of the customer into the design team, and continue providing customer feedback throughout the life cycle.

But even given customer data, we have found that it is still hard to build a system in response. It requires a series of conceptual leaps to go from facts about the customer to a system design. How can we turn facts into a system we know will be useful?

Finally, no system is built by a single individual, but the quality of the system is the result of individual actions. How do teams develop the same understanding of the customer, and the same vision for the system? How can we manage the interplay between people to that end?

Contextual Design is our approach to bringing customer data into design through a well-defined sequence of activities. Whiteside, Bennett, and Holtzblatt laid the foundations for Contextual Design in 1988 [Whiteside 88]. We have since used and extended this process in developing both hardware and software products, in small groups and large, at multiple companies.

Here, we summarize our experience with customer-centered design. We describe the steps that we have refined through our work on design problems. We describe the reasons for each step, and draw out the implications for managing the design process.

Understanding the Customer

Our first concern is to bring valid, useful data about how people work into the engineering process. The system we provide will support and constrain how people work [Holtzblatt 93]. We need to understand work in enough detail to know what the system must do to support work well, and what innovations will streamline the work.

Finding out about work is hard. Not only are developers building for users doing unfamiliar work, but users themselves have difficulty saying what they do. People are adaptable and resourceful creatures: they invent a thousand work-arounds and quick fixes to problems, and then forget that they invented the work-around. Even the detail of everyday work becomes second nature and invisible.

The users cannot say what they really do because it is unconscious: they do not reflect on it and cannot describe it. The defined policy for an organization is no longer representative because it no longer reflects what is really going on.

Contextual Inquiry

How can we get detailed information about how people work when they cannot articulate it on their own? Holtzblatt’s approach [I] was to adapt ethnographic research methods to fit the time and resource constraints of engineering. The result was the first step of our process, Contextual Inquiry.

Contextual Inquiry provides techniques to get data from users in context: while they work at real tasks in their workplace. In a contextual interview the interviewer observes the user at work and can interrupt at any time and ask questions as an outsider: “What are you doing now?” “Isn’t there a policy for this?” “Is that what you expected to happen?”

Confronted in the moment of doing the work, users can enter into a conversation about what is happening, why, and the implications for any supporting system. The user and interviewer discover together what was previously implicit in the user’s mind. Talking about work as it happens, artifacts created previously, and specific past projects reveals the user’s job beyond the work done on that day.

A contextual interview usually takes from two to three hours. Typically several members of a design team interview several customers at the same site simultaneously. We get a view across a whole organization in half a day.

We recommend that a product’s designers conduct interviews. Great product ideas derive from a marriage of a detailed understanding of a customer need with an in-depth understanding of technology. In our experience, the best products happen when the product’s designers are involved in collecting and interpreting customer data. This field data-gathering technique has been extremely successful at collecting detailed data about work practice that is hard to elicit any other way.

  • Gather data through interviews with your customers in their workplace while they work.
  • Put the people making design decisions in front of the user.

Involving the Customer

What is the best way to involve the customer in the design process? We certainly want to build the best system for them that we can. But we also want to make good use of both development time and the customers’ own time.

As outlined by Muller in [Muller 93], customer-centered techniques tend either towards having the designer participate in the user’s world, or having the user participate in design activities. We find both approaches useful, but want to ensure the user is as effective as possible in both roles.

When we participate in the user’s world, we want them to show us their world so well that we know it. We want our foot to be sore where their shoe pinches.

Working with users in their workplace helps them in this. Whenever they are working on or describing their real problem, users are much more eloquent than when talking in generalities. The impact of the real situation is much greater.

Conversely, when the user participates in design activities, we want to make them strong participants in the design process.[II]

In our experience, customers are at a disadvantage when brought into a design meeting. The user’s unique contribution is their real work experience. Taken out of their work context, they are much less able to represent that experience [Whiteside 88].

Worse, in making them a representative of the user community, we ask them to discount their own actual experience. Instead of allowing them to stand for “what I need,” we ask them to stand for “what all users would want.” They become just one more designer among designers.

Customers are at a disadvantage when building data models or other specialized models with the design team. This requires that they learn an unfamiliar language and translate their experience into this unfamiliar language. Even if we work from the user’s artifacts, the language represents an abstraction of what they do. The user must translate it back into specific instances to understand what it means [Ehn 91a, Holmqvist 91].

Customers are at a disadvantage when brought into our laboratory and asked to work on an unfamiliar problem. Once again we take them away from the context that ties them to reality. We ask them to imagine what their work is like without any of the reminders they use daily to do their work.

Instead, in Contextual Design we build on our users’ strengths by doing all our work with them in their own context, on their own problem, or as close to this ideal as we can get.

If we wish to validate a model of how they work, we do not present the finished model and walk them through it. Instead, during an interview about their own work practice, we respond to their description of their work by drawing a picture. This picture is one of our models, and it responds immediately to what they just said about their own experience. It is a conversation aid, not something to be learned.

If we wish to co-design with a user, we take a previously developed prototype to their workplace (as discussed below). We invite them to work through their immediate work problem using the prototype. Users respond directly to the prototype as though it were real and give much better feedback than would be possible in a meeting room [Knox 89].

Even when we must use a laboratory for practical reasons, we recommend that users bring in their own work and try to do it in the lab. Even if we lose the context provided by their workplace, they are familiar with their own problem and it helps them reconstruct the missing context.

  • Use your users well. Let their own context strengthen them.

Affinity Diagrams

As an interviewing process, Contextual Inquiry successfully extracts data about customers’ work in detail. However, one developer talking to one user is insufficient:

  • The whole team needs to understand what happened with the customer;
  • The whole team, including the interviewer, must understand the implications for the design;
  • Different people have different perspectives and will see different implications in the data;
  • Data from multiple users must be brought together;
  • A working team typically has many demands on its time; not everyone will be able to go on every visit.

To bring the team together, share the data, and develop interpretations which the team buys into, Contextual Design includes an affinity diagramming process [Brassard 89].

The team, or a subset, sits down together and goes over the transcript or notes of each interview, writing facts about the user, interpretations, design ideas, and questions on Post-It™ notes. After the first round of interviewing is complete (usually 5-8 interviews or 400-600 notes), the team organizes the notes into clusters on a wall. These clusters are named and collected into higher-level groupings. (This entire process is fully described by Holtzblatt and Jones [Holtzblatt 93].)

An effective affinity avoids using standard categories to cluster notes. We ban terms like “usability” or “quality.” This forces the team to think deeply and creatively about the data, and forces the name of the group to represent what is really there.

For example, an affinity we built to understand object search mechanisms has a top-level note labeled “The user’s purpose.” The cluster names beneath it tell what the user’s purpose in searching the object system might be: “Find a particular object,” “Understand the structure of the system,” and “Reuse existing objects.” Under each of these headings are the clusters of individual notes defining the category. Later, we could read the affinity, understand these as the user’s three primary motives, and ensure our design supported each well.
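The affinity is, structurally, a tree: individual notes at the leaves, team-invented names at the inner nodes. As a sketch only (this data structure is our illustration, not part of the method; the group names come from the search-mechanism example above, but the note texts are invented), it might look like:

```python
# A minimal sketch of an affinity diagram as a tree: leaf groups hold
# individual Post-It notes; inner nodes hold the names the team invents.
# Group names follow the search-mechanism example; note texts are invented.

class Affinity:
    def __init__(self, name, children=None, notes=None):
        self.name = name                # label the team wrote for this group
        self.children = children or []  # lower-level groupings
        self.notes = notes or []        # individual notes (at the leaves)

    def walk(self, depth=0):
        """Yield (depth, name) pairs -- 'walking the affinity' top-down."""
        yield depth, self.name
        for child in self.children:
            yield from child.walk(depth + 1)

purpose = Affinity("The user's purpose", children=[
    Affinity("Find a particular object",
             notes=["U3: scrolled the browser looking for one class"]),
    Affinity("Understand the structure of the system",
             notes=["U1: drew the inheritance tree on paper first"]),
    Affinity("Reuse existing objects",
             notes=["U5: copied an old class rather than write a new one"]),
])

for depth, name in purpose.walk():
    print("  " * depth + name)
```

The point of the structure is the same as the point of the wall: every high-level insight (an inner node's name) stays tied, through the tree, to the individual customer notes that justify it.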

Group interpretation allows other members of the team to be brought back into the conversation. On real teams, it is rare that everyone can be in every meeting. By participating in interpretation sessions or in building the affinity, team members can be brought back into contact with the customer and can also provide their own unique perspectives on the data.

When done, we walk the affinity saying what each part is about and brainstorming design ideas for that part. These ideas can be attached directly to the affinity itself. Later, when we pick up these ideas to develop, they will be directly tied to the customer data which sparked them.

An affinity captures our insight into the customers’ work. The cluster names represent this insight and tie it back to data from individual customers through the individual notes in each cluster. The affinity organizes data across multiple customers and shows where the data is weak.

  • Interpret customer data together, as a team.

The Think Tank

We prefer to dedicate a room to the team design effort. We are writing down an enormous amount of information about the customer. The affinity diagram and work models (described below) represent everything the team has discovered, structured for easy understanding. Keeping them on the wall means that the team is literally surrounded by their data about their customer.

Given the opportunity, the team will continually return to this data throughout the design process. It is common in our meetings for a team member to gesture or walk over to a part of their affinity to support a design idea. It is hard to achieve this kind of fidelity to the customer when the data about the customer is tucked away, out of sight.

The room also acts as a living record of the design process. A team member or manager who wants to catch up can browse the walls on their own, or another team member can use the walls to tell them what has happened. One manager told us he prefers to use the room to find out how the team is doing; he found it more immediate and more real than a status report or presentation.

  • If you want your team to be creative, give them a room.

Work Modeling

The affinity organizes our data in a way which is easy to understand, and captures all the detail well. But to understand the structure customers put on their work, we also draw diagrams.

These diagrams, work models, show the work of a single person or of an organization. They explicitly represent roles, flow of communication and information, work tasks, steps, motivation, and strategy of the work. Where there are problems in the work, they are shown directly on the model. Unlike a list of findings, requirements, or wishes, work models show how all aspects of work relate to each other.

We find four types of work models to be generally useful:

Context Models (figure 1) show how organizational culture, policies, and procedures constrain and create expectations about how people work and what they produce. Context work models represent standards, procedures, policies, directives, expectations, deliverables and other constraints.

The context model shows what part of the work can be changed by introducing new technology, and what changes affect people or organizations who are not customers. Changing their work is always more difficult. Where an organization has standard procedures, we can design the system to support them directly, automating where possible.

Physical Models (figure 2) represent the physical environment as it impacts the work. To the extent they can, people structure their environment to support the work; then they work around any problems put in their way by limitations in the physical layout, location, hardware configuration, or technology. Physical work models show the physical space and systems that affect the work.

Physical models reveal whether the work is split between locations and whether the system could simplify the work through direct communication. They reveal whether the work involves moving around, and whether the system must also move or must provide artifacts that move. And they show the range of hardware, software, and network platforms that the system must support.

Flow Models (figure 3) represent the important roles people take on. A role is a set of responsibilities and associated tasks for the purpose of accomplishing a part of the work. Roles may be formal or informal, growing out of the work itself. One person usually fills several roles, and roles can be filled by several people. Each role represents a different type of customer of our system. When a user interacts with a system, they are trying to meet the responsibility their role defines. The flow model shows what is needed and what is supplied in filling a role. Flow models also show the communication and coordination between roles, and the flow of artifacts between roles.

The flow model shows communication across a whole work domain, not only among current users of a system. This reveals new, unrecognized roles that could be supported by a system. It also shows the needs of people who will never be direct users, but depend on the system for information. With this knowledge, the team can build a system which better supports them.

Sequence Models (figure 4) represent the detailed steps users take, in order, to accomplish a task. The sequence model shows the steps a system must support directly, and where combining or eliminating steps would streamline the work.



As a language, flow work models say: think about roles. Define what their responsibilities are. Define how each role communicates with others, and what they communicate. You must know these things to understand work.

Someone building a flow model cannot help but ask questions about roles, their responsibilities, and how they communicate. The modeling language itself guides the designer in what to pay attention to. Conversely, anything the language cannot express is easy to ignore.
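What the flow-model language makes explicit can be sketched as a small graph: roles carry responsibilities, and directed flows between roles carry communication or artifacts. The role names, responsibilities, and artifacts below are hypothetical examples of ours, not drawn from any model in the paper:

```python
# A minimal flow-model sketch: roles with responsibilities, plus directed
# flows of artifacts or communication between roles. All names below are
# hypothetical illustrations, not taken from an actual work model.

roles = {
    "Presenter": {"responsibilities": ["create slides", "deliver talk"]},
    "Reviewer":  {"responsibilities": ["comment on drafts"]},
}

# (from_role, to_role, what flows between them)
flows = [
    ("Presenter", "Reviewer", "draft slides"),
    ("Reviewer", "Presenter", "comments"),
]

def flows_into(role):
    """What a role receives -- what is 'supplied in filling a role'."""
    return [(src, what) for src, dst, what in flows if dst == role]

def flows_out_of(role):
    """What a role provides to others."""
    return [(dst, what) for src, dst, what in flows if src == role]

print(flows_into("Presenter"))   # the Reviewer supplies comments
```

Because the language contains nothing but roles, responsibilities, and flows, anyone filling in this structure is forced to ask exactly the questions the model is meant to raise; anything the language cannot express stays out of the conversation.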

Other languages, such as data flow diagrams or object models, exist to support other conversations about systems. They make explicit the concepts needed to support these other conversations. For example, data flow diagrams talk about the flow and transformation of data. These other languages do not support or guide the conversations we want to have about work.

This is why we introduce new modeling languages, despite the large number that exist. Thinking about work is difficult; thinking about how a system supports work is difficult. The languages we introduce in work models and in the user environment design below tell the designer what to pay attention to at each point in the process. No existing language does this for us.

We do not find that introducing new modeling languages confuses design teams. Our languages are simple; teams doing design work pick them up in a few minutes. We find it more powerful to introduce these languages than to make a mapping from an existing language to the concepts we are trying to express.

  • Let modeling languages help you. When you must, invent new ones to say exactly what you need to say.

Work Re-design

Working with specific customers gives the team an understanding of the work of those customers. However, we want an innovative design which transforms work in new ways, and which is useful to all our customers. How do we invent such a transformation? How can we ensure we have transformed the work usefully?

This is a new conversation. Up to now we have been talking about the work as it is; now we talk about the work as it will be, when our new system is in place.

This is not a conversation you can avoid. Every system changes the work of its users. It is best to think explicitly about the effect you want your system to have, and to design for it.

We make this conversation explicit through abstract work models (figure 5). We gather all the same kinds of models together: all the flow models, all the physical models, all the context models, and the sequence models which address each task. Then we build new models of each type, removing the particular details of each customer’s work and revealing its underlying structure. These abstract work models show the aspects of work that our system will support; anything the team chooses not to represent will not be supported by the system. This abstraction allows us to meet the needs of a whole market by building on what we discovered from individuals.

Our best ideas for how to improve the work often come from seeing how a particularly thoughtful person or group has solved their own problems. We build this solution into our abstract work models and our system, so all customers can take advantage of it.

Once we have consolidated the models, we study them for problems and inefficiencies. We develop an abstract work model that brings together data from all customers, keeping good ideas, fixing problems, and using technology to combine and remove steps. When done, we have a statement of how our users will work, if we can implement the system to support it.

We validate our re-design of the work by checking it against the data from customers we have visited and through Contextual Inquiry with new customers. When interviewing new customers, we look for aspects of their work our re-designed work model cannot account for. These refine and extend the re-designed work model.

Making the work re-design conversation explicit ensures we do not do silly things unintentionally. For example, in creating a presentation, ideas move from slides to handout notes and back again as the creator tries different approaches to presenting the ideas. So a presentation system should support modifying slides and notes in parallel. Providing a notes facility which does not allow the slide to be changed, as some commercial systems do, is not enough.

We verify any design idea against the re-designed work model to ensure that it fits into the users’ job well. We use it to see that the new work practice our system will support hangs together. We anticipate new problems our changes may cause, and prevent them.

Taken together, abstract work models are a coherent statement of who our customer is. We use this statement throughout the rest of the design process.

  • Design the way you want to change your user’s work on purpose, or you will do it by accident.
  • Your customers are your best source of ideas. Steal from them where you can.


Prototyping

We test the design with paper prototypes, inspired by Kyng [Kyng 88, Ehn 91]. These are rough mockups of the user interface drawn on Post-It notes and paper. We take the prototypes to the users’ workplace and ask them to pretend it is a system and to work with it. They are trying out their real work using the prototype, so they can react as they would to a real product. We observe and probe in the same way as a contextual interview.

We do not have to tell our users what level of detail to respond to; the roughness of the prototype does that. If we present a prototype running on a computer, they respond to details of the look and the layout. If we present a hand-drawn prototype on paper, they respond to the structure and function in the system.

We start with very rough prototypes and encourage users to explore, trying to accomplish a task of their own. When they ask if the system does something we design on the spot: “Yes. How would you expect it to work? Show me.” The user sees that the design is incomplete and open to change, and is drawn into the design conversation. (This requires designers to run the interview, to respond appropriately and to design with the user.)

This kind of rough prototyping tests our user environment design. We can see whether the structure and function we provide is useful. Users can respond to the prototype without learning the user environment language.

The user environment design successfully predicts how users will react to a given interface. Where an interface is unfaithful to the design we have found that users reject it. For example, one interface we tested merged focus areas in the user environment design. The users’ comments indicated they were rejecting it because the interface mixed unrelated work, just as predicted by the user environment design: “I don’t want to know all that-take it away!”

As the user environment design stabilizes, we start to care more about the user interface. We build more careful prototypes, and in our customer interviews ask our users to live with the limitations of the system we designed. Finally, it becomes useful to build and test running prototypes that can evolve into the real system.

  • Structure your system first. Then make it real in the user interface.

Iteration with Customers

Customer iteration is a powerful team design technique. When we can produce an idea, develop it, prototype it, test it with customers, and validate, modify or discard it within forty-eight hours, we can stabilize a design very quickly.

When team members advocate different design solutions and the best is not clear from the customer data, it is often more efficient to prototype and test the alternatives than to try to reach consensus in the team. Team members let go of ideas more easily when they see users react badly to them than when another team member rejects them.

We use customer iteration from work modeling through system delivery. We visit new customers after building abstract work models to ensure the abstraction holds for them. We build rough prototypes of user environment designs to test that our system structure works. And we prototype the user interface and early system versions to ensure we are being true to our design and have not broken it in the implementation.

Furthermore, the development process itself is iterative (as recognized by Boehm [Boehm 86], Booch [Booch 86], and others). The insights we gain from working with users on prototypes cause us to modify our understanding of their work and our re-designed work models. We get quickly to an initial system design for a small part of the problem, but return to earlier steps to incorporate new information and to expand the system to new areas. The quick design of a part of the system gives the team a sense of accomplishment.

  • Iterate with your customers. Iterate, iterate, iterate.

Conclusion

One participant in our design process said to us afterward, “It was cool, but it was also structured. I always knew what to do.” Along with producing good results, this should be the test of any design process: it should make people feel that they can be creative and move rapidly, but also that, at every point, they know what to do.

Too often, a methodology feels like a straitjacket. Structure need not conflict with creativity: in providing a clear path forward, the right structure should set people free to be creative. Too often, this does not happen. When we combine customer-centered design with creative team processes, it does not have to.


[I] Contextual Inquiry was developed by Karen Holtzblatt in 1986. Sandy Jones assisted in developing the first course on Contextual Inquiry in 1988. Since then, Holtzblatt and Beyer have built on Contextual Inquiry to address the full design process.

[II] We are indebted to the work of Pelle Ehn, Kim Madsen, and others at Aarhus University for inspiring our approach.


[Beyer 93] H. Beyer and K. Holtzblatt, “Contextual Design: Toward a Customer-Centered Development Process,” Software Development ’93 Spring Proceedings, February 1993, Santa Clara, California.

[Boehm 86] B. Boehm, “A Spiral Model of Software Development and Enhancement,” IEEE Computer, 21(5), 61-72, 1986.

[Booch 86] G. Booch, “Object-Oriented Development,” IEEE Transactions on Software Engineering, SE-12, 1986.

[Brassard 89] M. Brassard, Memory Jogger Plus, GOAL/QPC, Methuen, MA, 1989.

[Carter 91] J. Carter Jr., “Combining Task Analysis with Software Engineering for Designing Interactive Systems” in Taking Software Design Seriously. John Karat (Ed.), p. 209. Academic Press, NY, 1991.

[Ehn 91] P. Ehn and M. Kyng, “Cardboard Computers: Mocking-it-up or Hands-on the Future,” in Design at Work, J. Greenbaum and M. Kyng (Eds.), p. 169. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[Ehn 91a] P. Ehn and D. Sjögren, “From System Descriptions to Scripts for Action,” in Design at Work, J. Greenbaum and M. Kyng (Eds.), p. 241. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[Greenbaum 91] J. Greenbaum and M. Kyng (Eds.), Design at Work: Cooperative Design of Computer Systems. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[Holmqvist 91] B. Holmqvist and P. B. Andersen, “Language, Perspectives and Design,” in Design at Work, J. Greenbaum and M. Kyng (Eds.), p. 155. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[Holtzblatt 93] K. Holtzblatt and S. Jones, “Contextual Inquiry: A Participatory Technique for System Design,” in Participatory Design: Principles and Practices, A. Namioka and D. Schuler (Eds.), Hillsdale, NJ: Lawrence Erlbaum Associates, 1993.

[Keller 92] M. Keller and K. Shumate, Software Specification and Design, John Wiley and Sons, New York, 1992.

[Kensing 91] F. Kensing and K. H. Madsen, “Generating Visions: Future Workshops and Metaphorical Design,” in Design at Work, J. Greenbaum and M. Kyng (Eds.), p. 155. Hillsdale, NJ: Lawrence Erlbaum Associates, 1991.

[Knox 89] S. Knox, W. Bailey, and E. Lynch, “Directed Dialog Protocols: Verbal Data for User Interface Design,” in Human Factors in Computing Systems CHI ’89 Conference Proceedings, May 1989, Austin, Texas, p. 283.

[Kyng 88] M. Kyng, “Designing for a Dollar a Day,” in Proceedings of CSCW’88: Conference of Computer-Supported Cooperative Work (pp. 178-188). Portland OR. New York: Association for Computing Machinery.

[CMartin 88] C. Martin, User-Centered Requirements Analysis. Prentice-Hall, Englewood Cliffs, N.J., 1988.

[JMartin 92] J. Martin and J. Odell, Object-Oriented Analysis and Design, Englewood Cliffs, NJ: Prentice Hall, 1992, p121.

[Muller 93] M. Muller, D. Wildman, and E. White, “Taxonomy of PD Practices: A Brief Practitioner’s Guide,” in Communications of the ACM, V36 N4, June 1993.

[Norman 86] D. A. Norman and S. W. Draper (Eds), User Centered System Design. New Jersey: Lawrence Erlbaum Associates, 1986.

[Pugh 91] S. Pugh, Total Design, Addison-Wesley Publishing Limited, 1991.

[Schuler 93] D. Schuler and A. Namioka (Eds.), Participatory Design: Principles and Practices. Hillsdale, NJ: Lawrence Erlbaum Associates, 1993.

[Seaton 92] P. Seaton and T. Stewart, “Evolving Task Oriented Systems,” Human Factors in Computing Systems CHI ’92 Conference Proceedings, May 1992, Monterey, California.

[Whiteside 88] J. Whiteside, J. Bennett, and K. Holtzblatt, “Usability Engineering: Our Experience and Evolution,” Handbook of Human Computer Interaction, M. Helander (Ed.). New York: North Holland, 1988.

[Wood 89] J. Wood and D. Silver, Joint Application Design, John Wiley and Sons, New York, 1989.