It’s a fairly common misconception in Agile product development that the faster you can get features out of the door, the better the team must be performing. This is particularly true for teams who are just beginning to transition to Agile.
According to the 12 Principles of Agile, working software is the primary measure of progress - so why would we slow down development work to do anything other than ship features? Well, like any activity, software can be created to a high standard, or corners can be cut - and the more corners cut up front, the harder it is to go back and fix mistakes.
It sounds obvious when you spell it out, and yet it's all too easy for teams to allow speed to become the primary measure of success. Creating a product with a focus only on speed and no investment in quality is highly risky: it’s much better to invest time upfront to ensure the product is built on robust foundations than to risk creating buggy software - albeit quickly. Speed without quality is a false economy in the long term, and it could end up costing an unknown amount of effort to fix prior to release.
That’s why, at TAB, we qualify the primary Agile principle not just as working software, but high quality working software.
Instilling quality throughout the team
As part of every project, our testing team undertakes exploratory testing to look for issues in the code - but it is a mistake, and unfortunately not an uncommon one in the industry, to regard quality as solely the responsibility of the testers.
In fact, the best way to create high quality software is by instilling a sense of ownership and pride in high quality work across the whole team, and right from the start.
In this post, I want to take a closer look at one way we do that at TAB: using continuous integration within the development team to ensure our code is robust from the outset.
Ensuring the development team take responsibility for product quality is not only a key way to create robust and reliable software - it also makes life easier for both devs and testers.
CI (continuous integration) is an Agile software development technique, introduced by Extreme Programming (XP), that facilitates this and avoids ‘integration hell’ - by which I mean a state where all developers merge their code just before a release and cross their fingers that it will all work together smoothly. In the long run, CI lowers the risk of creating buggy code that leads to frustrating, time-consuming bug hunts.
Integration is incredibly complex - so if we take the big bang approach, and try to bring everything together at once, it becomes incredibly risky. Issues that do surface are far more difficult to track down and fix once all the coding has been ‘done’. So, how do we mitigate this risk?
With CI, developers integrate their code in small, well-tested chunks rather than all at once. This means that we are better able to track and minimise the number of bugs passed down the chain. In turn, we can achieve far greater speed and efficiency in the long run - not to mention a more robust and reliable product.
CI in practice
Say we are building a web app that lets users purchase shoes on the go. We have two front-end web developers on the project.
To ensure that everyone knows what they are doing for the next two weeks, the front-end devs and the rest of the team review the user stories in sprint planning and decide what to take into the sprint.
In this case, the Product Owner has deemed the highest-priority story to be: “As a user, I want to filter shoes by colour.”
At the beginning of the sprint, one dev starts writing code on her local environment, implementing automated tests known as unit tests. These tests check small chunks of the code in their simplest form and alert her to anything that isn’t working as expected.
When she has completed the code and everything seems to be working as expected, she will ask her fellow front-end dev to review her work - checking the code for any errors that might break the app. When she gets the thumbs up from her peer, she will commit the code to the main branch.
Generally, the team will have several environments set up for their code to progress through automatically before it is pushed live. This setup means that even if changes to the data are made in one environment, they will not affect the rest of the team. For instance, if a tester wants to remove all red shoes from the data to ensure none appear, they can do this without affecting the data in the other environments.
If the code passes the tests set up in an environment, it will be deployed to the next environment up. If automated, this significantly speeds up deployment times. However, if the code does not pass the tests in an environment, the development team will see which tests are failing and fix their code.
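As an illustration, this kind of gated progression is typically described in a CI configuration file. The sketch below uses GitHub Actions syntax; the job names, branch and deploy step are hypothetical placeholders, not a real pipeline:

```yaml
# Hypothetical pipeline sketch: every push to main runs the test suite,
# and only a green build is deployed to the next environment up.
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
  deploy-staging:
    needs: test   # runs only if the test job passed
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to staging"   # placeholder for a real deploy step
```

The key point is the `needs: test` line: deployment to the next environment simply cannot happen until the tests pass, so failing code is caught at the earliest possible stage.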
This simple method means that quality is instilled at each level of the development process so that by the time your product is in your users’ hands, you can be sure that it is robust and reliable. Furthermore, by investing time upfront, you are reducing the likelihood of introducing bugs later in the process - so there is far less risk incurred by the team as you approach your release date.
Spend the time to eliminate risk
Working software is the primary measure of progress. However, don't fall into the trap of believing speed is the decisive factor. While spending time mitigating risk upfront may feel counterintuitive and slow you down initially, it will ultimately increase your speed and efficiency in the long run and enable you to ensure that what you deliver is high quality from the outset - up to release and beyond.
To receive the next part of this post straight to your inbox, sign up for our fortnightly newsletter. Part two will focus on how we use workshops and team building exercises to enhance communication - ultimately helping to prioritise quality as a team-wide responsibility.