The cost of missed or incorrect requirements is well documented within the software industry. A number of studies have shown nearly exponential cost-to-fix growth as projects move from requirements to design, code, and test. So it’s no wonder the industry is always looking for better ways to reduce requirements errors through initiatives such as requirements management systems and Agile development. But how effectively do even the most advanced approaches deal with fundamental requirements errors, the ones that arise from having core customer assumptions wrong? I think of these as blind spots where, whether through ignorance, distraction, or inertia, companies seem unable to perceive or correct their own basic misunderstandings about customers and what really matters to them. Here’s an example from my own experience:
I once consulted for a startup back in the early dot-com boom days. They were developing a product that would ensure referring physicians placed only medical orders that wouldn’t be denied by insurance. Insurance companies reimburse only when the ordered test or procedure codes map to qualified symptom codes; they won’t, for example, pay for a knee x-ray if the symptom is shoulder pain. The company founders envisioned a medical ordering system featuring a human-body UI: the user would simply mouse over a part of the body to identify symptoms, and the system would present only qualified tests or procedures to order. The founders assumed they understood how orders are made and that referring physicians would be the primary users of their new product. A lot of engineering effort went into making this new design visually striking and easy to use. It was innovative, it was cool-looking…and, unfortunately, it missed the mark.
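The screening rule at the heart of the product can be pictured as a simple lookup from a symptom code to the set of procedure codes an insurer will reimburse for it. Here is a minimal sketch of that idea in Python; the code values and the table contents are invented for illustration and are not real ICD or CPT data.

```python
# Hypothetical mapping from symptom codes to the procedure codes an
# insurer considers qualified for reimbursement. Real systems would
# load this from payer rules; these entries are made up for illustration.
QUALIFIED_PROCEDURES = {
    "SYM-KNEE-PAIN": {"XRAY-KNEE", "MRI-KNEE"},
    "SYM-SHOULDER-PAIN": {"XRAY-SHOULDER", "MRI-SHOULDER"},
}

def is_reimbursable(symptom_code: str, procedure_code: str) -> bool:
    """Return True if the procedure code maps to the given symptom code."""
    return procedure_code in QUALIFIED_PROCEDURES.get(symptom_code, set())

def qualified_for(symptom_code: str) -> set[str]:
    """List the procedures the insurer would pay for, given a symptom."""
    return QUALIFIED_PROCEDURES.get(symptom_code, set())

# A knee x-ray is reimbursable for knee pain, but not for shoulder pain:
print(is_reimbursable("SYM-KNEE-PAIN", "XRAY-KNEE"))      # True
print(is_reimbursable("SYM-SHOULDER-PAIN", "XRAY-KNEE"))  # False
```

Note that the value here is entirely in the lookup, not in how the symptom is selected; the same check works whether the symptom code comes from a graphical body model or a plain form field.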
To explain why, I’ll start with what we found out about the work of creating medical orders. I worked with an employee trained in Contextual Design to conduct contextual inquiry interviews at a number of healthcare offices and institutions. Though physicians prescribed orders, such as CT exams, we found that it was the office assistant (OA) who typically did the actual work of making them happen. Once a physician informed the OA of what to order, verbally or in writing, the OA would determine the proper sequence if there were multiple orders, figure out the patient’s availability, then contact the referred specialist services to obtain authorization and schedule an appointment. Most orders were routine and predictable, so it was easy for the OA to know the correct order codes to enter from memory or from a cheat sheet. Sometimes, for more complex tests or procedures, the OA would consult with the service’s scheduler, who might recommend a change to the physician’s original order. In these cases the OA would have to track down the physician, who was probably off to his or her next exam, to approve the change.
Once we understood the medical ordering context, we tested paper mockups of the company’s product design with OAs, since we knew they would be the ones who really dealt with orders. And what was the very first thing they did? They immediately wanted to get rid of the human body UI—the one the engineers had spent so much time perfecting. OAs liked their existing form-based UIs, where they could quickly and unambiguously tab to fields and type in the necessary information. They did like the idea of a feature that could identify procedure codes insurance would pay for when a symptom code is entered, but they clearly rejected the idea of hunting around for symptoms and procedures on a graphical model. Even more importantly, the engineers’ mockup didn’t include support for the largest part of the OA’s work: scheduling the order. To schedule an order, the OA would typically have to find and provide answers to screening questions from the referred service, verify insurance coverage, coordinate multiple calendars to find appointment times, and provide the patient with prep instructions. Scheduling could be time-consuming and full of problems, such as needing information from the physician or patient when they were unavailable.
So all the startup needed to do was make changes to the design based on user feedback, right? Well, it wasn’t going to be that easy. First, they were so invested in the human body UI paradigm—in fact it was core to the original vision on which the company was founded—that the idea of giving it up was not considered even in the face of clear evidence that they should. For example, the VP of Engineering had this to say after seeing, firsthand, OAs rejecting the prototype outright: “They don’t get it.” Second, they stuck with a product scope that only addressed entering the order in isolation, deciding against recasting their business to provide an overall scheduling solution or to integrate cleanly with existing scheduling systems. So in the end, not only was their offering user-unfriendly for OAs, it failed to effectively support the core work of appointment scheduling. After launch, the company never really got off the ground and eventually folded due to very low sales from lack of customer interest.
Their confidence in the original business plan and product concept was so great that it had blinded them to customer realities. The train had already left the station before they really knew customer needs. They failed to distinguish the core value of their innovation—screening symptom codes for reimbursable procedures—from superficialities such as a “cool” UI.
Fortunately, blind spots can be avoided by obtaining customer work practice data and concept validation early on, prior to locking in business and product plans. I have the greatest hope for companies that are not only brimming with new product ideas but that also have the humility to recognize they may not know everything about their customers. They show a willingness to seek out deeper customer understanding and adapt when it challenges existing assumptions.
Does your organization have any blind spots?