Science and Software

Lately I’ve been thinking more about the approaches and methodologies we use to develop software. With the introduction of terms such as TDD (Test Driven Development), ATDD (Acceptance Test Driven Development), BDD (Behaviour Driven Development), Specification by Example, User Stories and User Journeys, it’s easy to see why teams may struggle to understand and use these approaches. People can get hung up on definitions and forget that these approaches all have one thing in common: the intent to build a shared understanding between team members and stakeholders of different disciplines, knowledge and skill levels. Each of these approaches has a purpose and, if understood and implemented well, can be of real benefit to a team.

Overarching these approaches, we also have the two software methodology juggernauts of Agile and Waterfall. Agile is often seen as the new kid on the block and Waterfall as an outdated relic of the past, but these views are largely perception. Agile has, depending on who you talk to, been around since the 1970s, and Waterfall is still being used with success today. I’ve worked on software under both of these methodologies, along with hybrids labelled ‘Watergile’, ‘Water-scrum-fall’ and other such terms. I’ve seen a good percentage of time that should be spent developing software spent instead on clarifying definitions of terminology, which at times can feel a little dogmatic. Jared Spool studied the habits of highly effective teams and noted that the more effective ones favoured tricks and techniques over methodology and dogma. In his transcript of ‘Anatomy of a Design Decision’ he uses the analogy of a recipe as an artefact of a process, not something that is required in order to cook.

Anatomy of a Design Decision – Jared Spool


Thinking about the things I’ve experienced and researched, I kept coming back to a process (or method) I learnt in school, one that has been around for hundreds of years: the scientific method. Regardless of the approaches, tools, techniques, tips or mindsets used in software development, the scientific method can be applied to the software development life cycle (SDLC) as an aid to understanding the purpose of each stage. I’m not using explicit definitions of known SDLC models, but rather the phases I’ve seen teams go through (or strive to go through) during product and feature delivery. I aim to be methodology agnostic.


1. OBSERVATION | RESEARCH

Before starting any project we need to know ‘What problem are we trying to solve?’ and ‘Who are we solving it for?’. Once we know the answers to these questions, we are able to observe and collect information about the problem and our intended users. Our research can include data analysis, market or competitor research, customer feedback and user interviews. At this stage it’s important to gather both quantitative and qualitative data, and to use multiple sources to help negate the impact of bias and assumption. If we collect information from multiple sources we can form better hypotheses. We may also observe that our initial thoughts about the problem or our users need to be modified for greater impact or success.


2. HYPOTHESIS | DESIGN

If we have a balanced mix of research, observation and information, it puts us in a really good place to make a hypothesis about what we’re building and how to build it. This can be thought of as design. Making a hypothesis about software means thinking ‘If I change X, then I expect Y to happen’. X is the visible change, and Y is the consequence of that change. For example, ‘If I change this link to a button, then I expect more users to interact with it’. From this point, I have something to measure in order to know how successful that change has been. Design could be thought of as the visual representation of my hypothesis, whether that be UI, code, architecture or infrastructure.
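As a rough sketch of how a hypothesis can be written down so that it is measurable, here is a small example in Python. Everything in it (the Hypothesis class, the 4% baseline, the 6% target) is an illustrative assumption, not part of any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str        # the X: what we are changing
    expectation: str   # the Y: what we expect to happen
    metric: str        # what we will measure to judge success
    baseline: float    # the metric's value before the change
    target: float      # the value we hope to see after the change

# Hypothetical example: the link-to-button change described above
link_to_button = Hypothesis(
    change="Replace the 'Sign up' link with a button",
    expectation="More users interact with it",
    metric="click-through rate on the sign-up element",
    baseline=0.04,   # assume 4% of visitors currently click the link
    target=0.06,     # we hope for 6% with the button
)
```

Writing the baseline and target down before building anything gives the analysis phase something concrete to compare against later.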


3. EXPERIMENT | BUILD & TEST

During the experimentation phase, we start to build and test the change we intend to make. Testing also allows us to modify our build as we discover information about the change that we didn’t know before. We may also choose to build more than one variant if we want to measure the success of an A/B test, as in the sketch below. This phase is not to be underestimated in terms of the amount of information it can provide, and it should be considered dynamic rather than static. The more rigorous and defined our observation (research) and hypothesis (design), the clearer the purpose of the experiment (building and testing) should be.
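To make the A/B idea concrete, here is a minimal sketch of one common technique, deterministic bucketing: hashing a stable user id so that each user always sees the same variant without storing any state. The experiment name, variant names and 50/50 split are assumptions for illustration:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "link-vs-button") -> str:
    # Hash the experiment name together with the user id so the same user
    # can land in different buckets across different experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to 0-99
    return "button" if bucket < 50 else "link"  # 50/50 split

# The same user id always yields the same variant
print(assign_variant("user-123"))
```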


4. ANALYSIS | MEASURE SUCCESS

After the development or product team has done all their hard work, it’s time for the push to production. It’s at this point that most teams stop and move on to the next thing, with the only measure of success being that the code has made it into a production environment and is alive. But what about our hypothesis? We now need to see whether what we thought would happen actually happened, or whether we need to make further changes. Measuring the success of a change is critical to knowing what to do next, and it also provides us with more information about what went well and where we went wrong. This analysis of what happens when we put our software in front of part of, or our entire, user base not only tells us what to do in the immediate future but can also give us insight into what to focus on longer term.
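Continuing the link-versus-button example, here is a rough sketch of one way to analyse the result: a two-proportion z-test comparing the click-through rates of the two variants. The counts below are made-up illustrative numbers, not real data:

```python
import math

def z_test(clicks_a, visitors_a, clicks_b, visitors_b):
    # Compare two conversion rates with a two-proportion z-test.
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se  # |z| above ~1.96 suggests a real difference

z = z_test(clicks_a=400, visitors_a=10_000,   # the old link
           clicks_b=610, visitors_b=10_000)   # the new button
print(f"z = {z:.2f}")
```

Whatever statistical tooling a team actually uses, the point is the same: the hypothesis from the design phase tells us exactly which numbers to look at once the change is live.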


5. CONCLUSION | ITERATE

By this stage we have a lot of information, and because of our initial observation, hypothesis, experimentation and analysis, the conclusion about what to do next should be relatively clear. Do we need to make any modifications, or are further experiments required? Were we successful enough to leave this area of focus for now and move on to another one? Like any good life cycle, once we have reached our conclusion it’s time to start on the next iteration, armed with more information and insights than we had before.

By treating software development in a more scientific way, I find it easier to understand both what needs to be done and why.

The scientific method in software development – Stephanie Wilson


Creating a Team Experience

When helping to create and deliver software, the user experience (UX) is at the core of my daily activities. It guides my questioning and decision making and helps to identify the value a product will bring when it’s being used.

I recently had the opportunity to speak at UXNZ 2016 (http://www.uxnewzealand.com/) on the topic of applying the things I’ve learnt in UX to teams. I believe that the people behind the software, the creators, designers and analysts, are more interesting and more complex than what they produce.

“Great teams that are respectful to each other and work in a safe and supportive environment are more capable of building great things.”

Personas

Like most things, my story starts with the people. In UX, we use Personas to help create empathy and connect with the people who will be using our product. There is a trap we can fall into when we start to classify people, and that is the stereotype. In our workplaces, most of us have a title, something that helps others identify our role or what we are responsible for. When a group of people share the same title, it can be easy to assume that all of those people are alike. We assume that people with the same title must all be the same because we can make sense of that. In short, it’s easier to think this way.

When we do this, comments like ‘Developers don’t like change’, ‘Testers like breaking things’ or ‘Product Owners only care about deadlines’ start to be heard. This reinforces stereotypes, and we lose the identity, skill and unique experience that the individuals in our teams bring. I don’t work with anyone called ‘Developer’ or ‘Tester’; there are people with names who do those types of activities, and they should be recognised as the individuals they are. I like to think that addressing people by their titles is a way of distancing ourselves from unkind remarks, because we’re not frustrated with the person, we’re frustrated with the role that person plays. Unfortunately, in reality, the person being talked about can feel attacked, boxed in and restricted by their title or role. We give personas a name because it helps us to connect.

Prototypes

The creation of prototypes enables us to gather feedback without the need to be production ready. Having something we can interact with makes it easier to identify any usability gaps and helps develop a clearer understanding of the vision for the product.

Words only ever give us part of the picture, and they can conjure up different images for different people based on their experience or knowledge of a particular area. If I ask you to think of a vegetable, because that’s a broad category it’s likely that we are thinking of different vegetables. While your mental prototype might produce a carrot, mine could produce a cucumber because that was something I encountered earlier in the day. We are both thinking of a vegetable and are both technically correct, but we have still failed to reach a shared understanding of the same vegetable.

By being explicit in our communication, using pictures as well as clarifying words, a shared understanding is easier to reach.

When we are implicit, gaps and ambiguity can creep in. We tend to jump to our own conclusions based on the biases and beliefs we hold. If you’re unsure about the shared understanding of a word or idea, don’t be shy to ask ‘What does that mean to you?’.

Measuring Success

In more recent times, improvement through iteration has become critical to a product’s success. Many companies have the ability to release code to production ‘when ready’, and users can have immediate access to those updates. When we first start to measure success, there are a lot of unknowns and sometimes a few nasty surprises when we find out we’re not doing quite as well as we thought. The natural reaction to something we don’t want to see is to make it go away. I call this the ‘box factor’. When opening a present on Christmas morning, we are usually either pleased because the gift in the box meets our expectations or disappointed because it doesn’t. One of these gifts will be used, while the other will probably be left in its closed box to gather dust.

When measuring success, it’s important to be transparent and accepting of both favourable and unfavourable possibilities.

At all times we expect the outcome to indicate either success or that there is still more work to be done, that improvements are required. This approach means there are no nasty surprises hiding in the box, because we have been transparent about the possible outcomes. Reducing the impact of an unfavourable outcome’s surprise allows us to respond from a more measured, less reactive place.

Empathy

Underpinning everything we do should be empathy, an understanding for our fellow humans. Empathy is the ability to connect with another person by understanding their point of view and recognising their emotional responses. During my time in product development, I’ve picked up on three commonly heard anti-empathic statements:

“At least they can…”

“There’s a workaround for…”

“Don’t worry, it’s just an edge case”

Hearing any of these in a review or product meeting immediately raises red flags for me and makes me look again at the solution in front of us. It’s a great indicator that there is still more work to be done. These statements show that we have failed to understand our customer and their goals, and that throughout development the team has failed to communicate or understand a shared vision. Gaps in the product give users the space to become frustrated or to hit an unhandled error. We have failed to map out their entire journey, leaving metaphorical rivers with no bridges to cross them.

By finding new ways to connect with the people around us in our workplaces, we can help create environments that foster innovation and products people love to build and use.

The Evolution of UAT

User Acceptance Testing. It’s a phase of testing that’s bothered me for a while. Not because I think it’s a waste of time, in fact, quite the opposite. It’s bothered me because I think it hasn’t been given the care, attention and expertise it deserves.

The intent of UAT has always been a noble one. We wanted our users to accept our product and accept what we had built. The way we went about proving that acceptance, though, was deeply flawed. The process looked a little like this:

Step 1: Find some users or, if users aren’t available, you can do it yourself

Step 2: Write a script for the user which outlines exactly how they should test the system step by step in predefined workflows

Step 3: Get the user to complete each step putting a ‘Pass’ mark next to each one

Step 4: Write any questions or confusion from the users down as ‘training issues’ (If you’re testing it yourself, you probably won’t have any questions because you’ve already tested it in the system and integration test phases and you know it works. Besides, you’re not the one who has to use it on a daily basis anyway)

The problem with this approach is that it seeks to validate the product even if it invalidates the experience of the person using it. At this point it’s not really about your user accepting your product, it’s about your product functionally working when put in front of an end user.

How do we solve this problem?

Firstly, we need to understand our users: their needs, their activities and the context in which they perform those activities. By doing this we change the meaning of UAT into a list of things our users will accept, things that will delight our customers once we’re done. Once the feature goes live, User Confirmation Testing can commence. Confirmation testing allows us to measure the degree to which we were successful against our initial assumptions, hypotheses and insights. Both qualitative measures, such as customer feedback, and quantitative measures, through the use of data analytics, can help us to identify what our users have accepted and what they have rejected. This gives us the opportunity to research, build, monitor, iterate and improve.
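As a small sketch of the quantitative side of confirmation testing, the snippet below tallies hypothetical post-release events to see how many users who tried a new feature actually completed it. The event names and the log format are assumptions for illustration, not any particular analytics tool:

```python
from collections import Counter

# Hypothetical post-release event log for a new export feature
events = [
    {"user": "u1", "action": "opened_new_export"},
    {"user": "u2", "action": "opened_new_export"},
    {"user": "u2", "action": "completed_new_export"},
    {"user": "u3", "action": "opened_new_export"},
    {"user": "u3", "action": "abandoned_new_export"},
]

# Count each kind of event to see what users accepted or rejected
counts = Counter(e["action"] for e in events)
opened = counts["opened_new_export"]
completed = counts["completed_new_export"]
print(f"{completed}/{opened} users who tried the new export finished it")
```

Numbers like these, paired with the qualitative feedback behind them, show where users have confirmed our assumptions and where they have rejected them.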

UAT done in accordance with this new meaning is hard. It involves a lot of effort, time, understanding and empathy. Not all of the “defects” we find during the first UAT phase will be ironed out, so confirmation testing shows us where we’ve been successful and where we need to iterate and improve. This pushes testing to the start of the development life cycle and helps to prevent usability flaws in our finished product. Perhaps most importantly, it provides the opportunity to test before the code is written, as the code is being written and after the code has been deployed, adding value at each of these stages.

At a time when people have so much choice and greater expectations that software should be engaging and intuitive, understanding who we are building software for can help us to meet those expectations. It is no longer good enough to settle for “workarounds” or “at least they can still <insert anything unrelated to the thing they can’t, but really want to do>” attitudes towards software development. Focusing on what our customers value and delivering the things that will really make a difference gives us a head start on any competitors in our market.

Every time a new idea for a feature comes into a team we can all strive to understand who we are building it for and what problems this change will solve. If we can’t answer these two questions, it’s time to put the deerstalker on and seek out the answers!