Visualise and collectively discover test scenarios in 1 hour: a practical guide
Twenty years ago, when I started my software career as a tester, Agile hadn’t taken off in a big way and waterfall was the de facto delivery model. Identifying, documenting and executing test cases was primarily the responsibility of the tester at the end of the delivery funnel. The developers contributed by reviewing hundreds of pages of test case documentation. Test case identification, documentation and execution belonged to the ‘test team’, which was considered a separate discipline and was, most of the time, a separate team.
This way of working had its disadvantages:
- There was little collaboration and co-ownership of software quality.
- The tester mostly worked in a silo.
- Less collaboration also meant less alignment on the expected outcomes, which resulted in bug wars.
- Test case identification and documentation took far too long and wasn’t efficient.
Now, as I work in Agile teams, things have changed drastically. Agile doesn’t advocate huge amounts of documentation and encourages collaboration and co-ownership. But depending on where an organisation is in its agile journey, I still see some patterns that I want to point out:
- QA is still responsible for test scenario identification, documentation, coding and execution, and there is a lack of team ownership.
- POs/PMs and specific business users within the organisation have valuable knowledge about the product or the service, but that knowledge is not leveraged.
My organisation has been ahead of the agile curve and my team was quite mature as well. But we did have some gaps, and as a way to improve the quality of our deliverables and reduce incoming defects, I introduced an experiment to:
- Increase efficiency: the target was to identify the majority of test scenarios, then categorise and prioritise them, within ~1–2 hours.
- Encourage team ownership of quality: involve the whole team to improve ownership of, and accountability for, quality.
- Involve PMs/POs in the process: invite contributions from the PMs/POs and improve their awareness of the high-level design and test plan.
So how did I go about this exercise?
- I made the process visual and collaborative. I preferred whiteboarding rather than Word, Excel or PowerPoint.
- Attendance was mandatory for the whole agile team, and the session could also include the architect, PM/PO, etc.
- I organised a 1–2 hour workshop for this. That time box worked well for us because we slice our work into 3–4 week chunks, usually delivered over two 2-week iterations. The right duration will depend on a few factors: how big the slice of work is, whether you are running the exercise for a small slice or a bigger initiative, the complexity of the implementation, and how used the team has become to this model.
- I set a goal for the workshop. If, shortly into the session, we realised we weren’t progressing fast enough, we would reset the target to cover what we needed for the immediate 1–2 weeks or the areas of highest priority, and organise another session afterwards to conclude the rest.
- It helps to make the session more effective and efficient if, as a facilitator, you can articulate the goal and scope of the workshop prior to the session.
- I organised enough sticky notes and markers; this is a frugal exercise.
One ground rule: this is not a software design review session, so we consciously avoided discussions about the design unless they became a necessary and evident outcome of the session.
Step 1: 5–10 mins
We drew a basic end-to-end block diagram of the implementation. It didn’t have to be a perfect diagram, just something we could draw in 5 minutes and which the team felt was enough to depict an honest snapshot of our design.
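To give a sense of the expected level of detail, a hypothetical sketch (the components below are invented for illustration and are not our actual system) could be as simple as:

```
[Web UI] -> [API gateway] -> [Orders service] -> [CRM]
                                   |
                                   v
                              [Database] -> [Reporting dashboard]
```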
What we gained:
- My team consisted of developers with specialised skills (CRM/Fullstack).
- On some occasions the CRM devs weren’t aware of the full-stack implementation and vice versa.
- Drawing such a simple conceptual block diagram collectively gave everyone in the team, including me as the Delivery Lead, another opportunity to align their understanding of the whole design.
- Though PMs/POs may not always need to know in-depth implementation details, these sessions gave them the opportunity to be aware of the high-level design and the test strategy, and to contribute.
- The PMs/POs work with the team to create the experience they are envisioning. You will be surprised how often they can contribute to the test scenarios through their curiosity, their expertise in the end-to-end product vision, and their expectations of what the product should deliver.
Step 2: 10 mins
- Each team member then wrote test scenarios on sticky notes and stuck them at the relevant points in the block diagram.
What we gained:
- We had about 80–90% of the test scenarios within 20 mins.
- Everyone in the team had the responsibility to contribute to the test strategy and the discovery of test scenarios. This was a crucial step towards collective ownership of, and accountability for, quality.
- The PMs/POs/architects got the opportunity to be aware of the test strategy and to contribute.
Step 3: 10–15 mins
- The team then got together to discuss the test scenarios on the stickies, ask questions, and provide clarifications and suggestions.
- After the discussion, if the team felt the test scenarios were inadequate, we would have another go at Step 2 to identify the missing scenarios.
- My observation has been that the most beneficial part of this exercise is the ensuing discussion among the team members. There were disagreements, alignments, the joy of discovering tricky scenarios and a sense of confidence in what we were delivering.
- The team didn’t target any specific categories of test scenarios. We covered everything that applied to us: functional, NFRs, integration and so on.
- One interesting outcome of these sessions was that, as we discussed the test scenarios, we uncovered points of failure. This encouraged discussions about the design but, more importantly, about the alerts, logs, Service Level Objectives and monitoring on our dashboards.
Step 4: 10–15 mins
- Based on the scenarios discovered, we grouped the test scenarios into similar categories and created a matrix.
At this stage the facilitator (in this example it was me; in most other cases it was our QA) does need to make sure the session is moving towards its intended outcomes.
On the matrix:
- X axis: the test scenarios grouped under categories such as error handling, data-related, performance, functionality, etc.
- Y axis: the status of the test cases, i.e. those we had already figured out and implemented (mostly through unit tests), those already identified and planned, and those newly discovered in the session.
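A purely illustrative example of such a matrix (the categories and scenarios below are invented, not our actual board) might look like this:

| | Error handling | Data related | Performance | Functionality |
| --- | --- | --- | --- | --- |
| Already implemented (unit tests) | invalid input rejected | mandatory fields validated | | happy-path order creation |
| Identified and planned | retry on CRM timeout | duplicate record handling | | discount edge cases |
| Newly discovered | partial failure mid-sync | stale cache after update | load spike at month end | concurrent edits |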
Step 5: 15 mins
- Now that we had identified new test cases, it was time to prioritise them.
Step 6: 10–20 mins
- The team discussed the test strategy and the test cases, and how we could embed them into our unit tests, CI/CD pipeline, etc.
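As a rough sketch of what that embedding can look like (the module, function names and scenario below are hypothetical, not our actual code), a newly discovered error-handling sticky could become a tagged unit test that the pipeline runs on every build:

```python
# Hypothetical example: the sticky "CRM sync times out mid-transfer"
# turned into a tagged unit test that the CI/CD pipeline runs on every build.
import pytest


# --- minimal stand-in for the production code under test (illustrative only) ---
class CrmTimeout(Exception):
    pass


def sync_order_to_crm(order_id, crm_client, retries=3):
    """Push an order to the CRM; flag it for manual review if it keeps failing."""
    for _ in range(retries):
        try:
            crm_client.upsert_order(order_id)
            return "SYNCED"
        except CrmTimeout:
            continue
    return "NEEDS_MANUAL_REVIEW"


# --- the test derived from the workshop sticky ---
class AlwaysTimingOutCrm:
    def upsert_order(self, order_id):
        raise CrmTimeout(f"CRM did not respond for {order_id}")


@pytest.mark.error_handling  # category taken from the workshop matrix
def test_exhausted_retries_flag_order_for_manual_review():
    result = sync_order_to_crm("ORD-123", AlwaysTimingOutCrm())
    # The outcome agreed in the workshop: the order must not be lost silently.
    assert result == "NEEDS_MANUAL_REVIEW"
```

The pipeline can then select tests by matrix category, for example `pytest -m error_handling` (with the marker registered in pytest.ini), so every column of the matrix is exercised on each build.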
Step 7:
Once back at our desks:
- Most of these stickies would translate into acceptance criteria for our user stories.
- Some test cases, primarily performance-related ones and some with dependencies on other connected systems, would be translated into separate user stories.
This ritual became one of our fun and effective ways of collaborating and working towards collective ownership.
Here is a summary of the benefits we gained from these practices:
- Once we started doing this exercise, the team knew the drill and we could figure out 80–90% of the test cases, then categorise and prioritise them, within 1–2 hours.
- This was a collective team effort, so there was team ownership and accountability.
- The devs became familiar with the test cases early on, and we could implement these scenarios as part of our unit tests and our CI/CD pipeline.
- Stakeholders like PMs/POs got the opportunity to participate in the exercise, listen to the discussions and be aware of the testing the product would go through. They had the opportunity to be curious and to contribute as well.
For me, the greatest benefits were increased collaboration, team ownership and time savings as we delivered high-quality products for our users.