User Acceptance Testing. It’s a phase of testing that’s bothered me for a while. Not because I think it’s a waste of time; in fact, quite the opposite. It’s bothered me because I think it hasn’t been given the care, attention and expertise it deserves.
The intent of UAT has always been a noble one. We wanted our users to accept our product and accept what we had built. The way we went about proving that acceptance though was deeply flawed. The process looked a little like this:
Step 1: Find some users or, if none are available, do it yourself
Step 2: Write a script for the user which outlines exactly how they should test the system step by step in predefined workflows
Step 3: Get the user to complete each step, putting a ‘Pass’ mark next to each one
Step 4: Write any questions or confusion from the users down as ‘training issues’ (If you’re testing it yourself, you probably won’t have any questions because you’ve already tested it in the system and integration test phases and you know it works. Besides, you’re not the one who has to use it on a daily basis anyway)
The problem with this approach is that it seeks to validate the product even if it invalidates the experience of the person using it. At this point it’s not really about your user accepting your product; it’s about your product functionally working when put in front of an end user.
How do we solve this problem?
Firstly, we need to understand our users: their needs, their activities and the context in which they perform those activities. By doing this we change the meaning of UAT into a list of things our users will accept, things that will delight our customers once we’re done.

Once the feature goes live, User Confirmation Testing can commence. Confirmation testing allows us to measure the degree to which we were successful against our initial assumptions, hypotheses and insights. Both qualitative measures, such as customer feedback, and quantitative measures, through the use of data analytics, can help us to identify what our users have accepted and what they have rejected. This gives us the opportunity to research, build, monitor, iterate and improve.
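As a rough illustration of the quantitative side of confirmation testing, one simple measure is how many distinct users actually adopted a new feature, derived from analytics events. This is a minimal sketch, not a real analytics integration: the event log and its `user_id` and `action` fields are invented for the example.

```python
# Hypothetical analytics events; the field names ("user_id", "action")
# and action values are illustrative assumptions, not a real schema.
events = [
    {"user_id": "u1", "action": "used_new_feature"},
    {"user_id": "u2", "action": "used_old_workflow"},
    {"user_id": "u1", "action": "used_new_feature"},
    {"user_id": "u3", "action": "used_new_feature"},
    {"user_id": "u4", "action": "used_old_workflow"},
]

def adoption_rate(events):
    """Fraction of distinct users who touched the new feature at least once."""
    users = {e["user_id"] for e in events}
    adopters = {e["user_id"] for e in events if e["action"] == "used_new_feature"}
    return len(adopters) / len(users) if users else 0.0

print(f"Adoption rate: {adoption_rate(events):.0%}")  # 2 of 4 users → 50%
```

A low rate doesn’t tell you *why* users rejected the feature, which is where the qualitative feedback comes in; the two measures work together.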
UAT done in accordance with this new meaning is hard. It involves a lot of effort, time, understanding and empathy. Not all of the “defects” we find during the first UAT phase will be ironed out, so confirmation testing shows us where we’ve been successful and where we need to iterate and improve. This pushes testing to the start of the development lifecycle and helps to prevent usability flaws in our finished product. Perhaps most importantly, it provides the opportunity to test before the code is written, as the code is being written and after the code has been deployed, adding value at each of these stages.
At a time when people have so much choice and greater expectations that software should be engaging and intuitive, understanding who we are building software for can help us to meet those expectations. It is no longer good enough to be happy with “workarounds” or “at least they can still <insert anything unrelated to the thing they can’t, but really want to do>” attitudes towards software development. Focusing on what our customers value and delivering the things that will really make a difference give us a head start on any competitors in our market.
Every time a new idea for a feature comes into a team we can all strive to understand who we are building it for and what problems this change will solve. If we can’t answer these two questions, it’s time to put the deerstalker on and seek out the answers!