Teaching Exploratory Testing with Code
Copied mainly from a message on Slack. Now has some links.
So, over on the Exploratory Test Academy Slack channel, Maaret Pyhäjärvi wrote
I hear @James Lyndsay has been teaching exploring with automation, and was curious on how you ended up framing the session. I am really curious on the experiences of actively bringing code into space of exploratory testing, since so many people are framing it as learning on UI level and the jump to thinking in terms of what automation enables may be a significant one.
and I answered (something like):
How do I teach exploration with code?
It's good to set the scene. As part of that interaction, I set out my relevant opinions, which help to frame the exercises. They are:
- exploration without tools is weak and slow
- tools give explorers powers of bulk data generation and of data analysis
- rather than automating a fixed user journey, exploration relies on less-automatable work such as test design, broad-bandwidth observation, and judgement
I work with a couple of exercises to explore those ideas.
- A puzzle – recently, raster reveal – to explore the behaviours of a software system. I use this to give participants a recent experience of having a revelation: a novel hypothesis based on aggregating information, rather than confirming pre-existing expectations.
- A tool which does simple conversion; number in, sentence out, along the lines of 4 -> “4m is 400 cm”. I use this to give participants an interactive experience of different ways into an artefact (behaviour, code, config data, tests, fixes, release notes). I also provide a bulk input facility – indeed, three: one which takes anything, one with pre-generated data, and one with a data-generation tool. I ask participants to design their exploratory tests to run many tests at the same time, to judge the results in aggregate and spot unusual oddnesses, then to dig in to surprises. Used this way, their key automation is in data generation, with human judgement to ‘eyeball’ any oddnesses in the output.
- I consciously don’t set up an exercise to have an automatable UI, and I don’t ask participants to automate sequences of actions early in a workshop; I do that later, automating sequences of similar actions. I don’t think I’ve ever had an exercise in which participants are asked to automate login / data entry / check entered data – yet this is what I see many tests doing at clients. Perhaps I need to change.
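The bulk-input idea above can be sketched in a few lines of Python. This is a hypothetical stand-in, not the workshop artefact: `convert` imitates a number-to-sentence tool (the real tool’s behaviour, and its bugs, will differ), and grouping by output length is just one example of an aggregate view that lets oddnesses stand out.

```python
import random

def convert(n):
    # Stand-in for the system under test: a number-to-sentence
    # converter along the lines of 4 -> "4m is 400cm".
    return f"{n}m is {n * 100}cm"

# Bulk data generation: many inputs in one go, mixing a random
# sweep with a few edge-ish values, rather than one hand-picked case.
inputs = [random.randint(-1000, 1000) for _ in range(50)]
inputs += [0, -1, 10**9, 0.5]

# Run everything, keeping input/output pairs for eyeballing.
results = [(i, convert(i)) for i in inputs]

# An aggregate view: group by output length, so a single oddly
# long or short sentence becomes visible at a glance.
by_length = {}
for i, out in results:
    by_length.setdefault(len(out), []).append((i, out))

for length in sorted(by_length):
    print(length, by_length[length][:3])
```

The point is not the checking – there is none – but that the tool does the generation and collation, and the explorer supplies the judgement about what looks surprising.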
As we play, we'll highlight experiences of the group either in this exercise or in their work. I try to find moments to touch on:
- the difference between long-lived automation expected to verify value over some fraction of the life of a product, and short-lived automation to reveal surprises
- how existing automation can be re-purposed for exploration (i.e. take fixed examples and parameterise, switch bulk tests to approval tests, measure and aggregate/take trends of performance)
- supporting judgement for automation – either in building judgement and using tools to get to a point where one can use it, or taking test approaches built around automatable judgement (fuzz tests, approval tests)
- allowing themselves to ‘cheat’
- what ‘completeness’ might look like – and how one’s sense of ‘complete’ changes one’s aim and approaches
- building one’s sense of product risk, and how that might inform one’s testing and choice of automation
- automation within exploration as enabling experimentation
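As a concrete illustration of re-purposing existing automation (taking fixed examples and parameterising them), here is a minimal sketch. The names are hypothetical: a single fixed check becomes a parameterised sweep that verifies a property of the output rather than one memorised string – the same move that, taken further, leads towards approval and fuzz testing.

```python
def metres_to_sentence(n):
    # Hypothetical stand-in for the code under test.
    return f"{n}m is {n * 100}cm"

# A typical long-lived check: one fixed example, one expected value.
def test_fixed_example():
    assert metres_to_sentence(4) == "4m is 400cm"

# Re-purposed for exploration: the same check, parameterised to
# sweep many inputs and verify a relationship (the two numbers
# differ by a factor of 100) instead of a single literal string.
def test_parameterised_property():
    for n in range(-100, 101):
        sentence = metres_to_sentence(n)
        metres, _, cm_part = sentence.partition("m is ")
        assert float(cm_part.rstrip("cm")) == float(metres) * 100

test_fixed_example()
test_parameterised_property()
```

The fixed version verifies known value; the parameterised version is cheap, short-lived automation whose job is to surface surprises across a range of inputs.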
Does that sound fun? Does it sound useful?
If so, then know that this is the kind of thing I teach. It has been part of my teaching since my [[Diagnosis Workshop]], has existed in something like this form since [[Bulk Testing and Visualisation]], and has been turning up regularly in [[Insights into Exploratory Testing]] and some other workshops since.
I ran these exercises in late March 2022 for infi.nl (see Lee and Veerle's video for a review) and for the ET Slack channel.