Making a Record of your Exploratory Testing

Jan 30, 2024 (Feb 1, 2024)

A mashup of sources, for whittling later. Here's what I mean about the whittling. I'm aware that LambdaTest featured this post, and I'll sort it out shortly.

Most of the time, exploratory testing records are kept privately, by the tester, for the tester. Plenty of testers rely on memory alone.

Testers working in teams might use those notes to illustrate what they did and what they found, or to help them share how they worked, or to help work out why they worked as they did.

Organisations might want those notes to be kept around for audit, or as some unspecified artefact for a fuzzy use some time in a possible future. If you're in an organisation that wants to keep notes like this, look into the purposes, benefits and costs – you may find that some needs are illusory, or un-owned. If you find that one group wants every artefact kept, but there's no way to search and no budget to store, then there's some organisational cognitive dissonance going on; you can choose to find out what un-corporate arse is being covered by this nebulous need, and choose to go give it a kick.

I keep records to help my mind to remain available in the present, and to support other minds later.

Purposes: in Three Rs

  • Remember – Mnemonic help (me)
  • Review – Sharing, improvement (me, my team)
  • Return – Historical analysis, long-term project memory (whoever comes along)

What to record

The following is lifted from my note What to Record.

If you make a plan, write it down. If you're just tootling along all planless, you need a strategy, an approach. A sticky note will do. There are no excuses - accept no substitutes.
You'll want to remember the actions you take, the data you use, your expectations, your observations - including the time. Don't necessarily limit yourself to exactly what you're testing - you're working in some kind of context. You'll get better at this over time; there's an instinct that comes with practice that lets you separate the wheat from the chaff. There's always going to be a bit of chaff.
Keep track of things that repeat. Even if nothing happens. Dullness is a virtue in most working systems. And without track of dullness, how will you notice . . .
Surprises. Is that a goat among the sheep? If you didn't expect it, it's worth writing down. If someone else wouldn't expect it, it's a bug. Perhaps you've seen an exploitation. Have you a hypothesis? Are you making a model? And when you've supported your hypothesis, found a potential bug, had a surprise, or the dullness is just too much to bear, you need to . . . .
Make a Decision - many people get so used to testing by instinct, or by the book, that they don't notice they're making decisions. Worse, they've no idea what the decisions might have been. Scripted testing can be decisionless, but decisions are key to exploration. When you decide to take a different approach, to try different data, or just to consciously do exactly the same thing again, but watching more closely this time, you're taking a decision. Make a quick note.

For me, recording Decisions is the key to remembering all the other stuff, and here are three flipdowns for other stuff...

Identifiers:

  • who
  • when
  • what

Qualities:

  • risk
  • estimated time
  • dependencies

As you go:

  • actions
  • events
  • data
  • expectations
  • bugs
  • plans
  • interruptions
  • actual time taken
  • time wanted
  • problems
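The as-you-go items above amount to a timestamped log. Here's a minimal sketch in Python of what capturing them might look like; the class and field names are my own invention for illustration, not a tool this post describes:

```python
from datetime import datetime


class SessionNotes:
    """A tiny timestamped note-taker for one exploratory session.

    Illustrative only -- the structure (charter plus a list of
    (timestamp, kind, text) entries) is an assumption, not a standard.
    """

    def __init__(self, charter):
        self.charter = charter
        self.entries = []  # list of (timestamp, kind, text)

    def note(self, text, kind="action"):
        """Record an action, event, expectation, bug, decision, etc."""
        self.entries.append((datetime.now(), kind, text))

    def decisions(self):
        """Decisions are the entries most worth reviewing afterwards."""
        return [text for _, kind, text in self.entries if kind == "decision"]


notes = SessionNotes("Explore login throttling")
notes.note("Submitted 5 bad passwords in a row")
notes.note("Expected lockout after 3; none seen", kind="bug")
notes.note("Switch to watching the audit log instead", kind="decision")
```

The point of separating `kind` is the one made above: decisions are what let you reconstruct the rest later.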
Notation
- Item
* A more important item - sometimes used for 'return to this'
! One you'll want to remember at the end of the test.
!! is typically a bug, sometimes qualified with a spare ? or ¿.
[ An aside - a thought or observation that needs to go down, but that isn't in the flow]
¿ Something I'm not sure of - may need more tests
? A question for someone, or something
Plenty of arrows and circles - not forgetting diagrams, underlining, tables etc.
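To show how the marks combine, here's a made-up fragment of session notes in this notation (the content is invented for illustration):

```
- 10:02 open settings page, default profile
- change date format to ISO, save
* saved format not shown in preview - return to this
!! preview shows US dates after save ¿
[ save button misaligned on narrow windows ]
? who owns the preview component
```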

Use of Records

Some audiences – including future you – will want to use your exploratory notes. They might want to see what's been done already, to have evidence of a problem, to look for the absence of evidence of a problem, to see what else might be done, to look for new leads, to learn about how you worked.

Searchability is important for this – and so electronic forms may be more useful; typed notes, keylogging and files, video transcripts.

Some test teams standardise on one note-taking approach; a massive mindmap, a shared OneNote, rich-text on a session-based Jira ticket, a set of sequential blank books. Some don't standardise on notes taken in-the-moment. Others put all long-term information in the issue tracker.

What do you, your teams, and your institution do?


On the Ministry of Testing Club's discussion board, I wrote:

In my experience, people / committees making processes which require screenshots / videos sometimes imagine this need. If you have the chance to look at the decision to include the need, you might dig into what the recordings might be used for, who might be using them, and whether the expected benefit is worth the practical cost.
In one medical device org, Audit (when asked) were clear that they only required such records when seeking to know that a known problem had been fixed, and the fix checked. The process owners wanted most checks like that to be automated – and with that clarity, the long-term need to make and store detailed + searchable notes basically went away.
The testers, however, used more detailed (and more temporary) records to illustrate what they'd found, when sharing within the team. The benefit they saw was that spreading that out shared skills, and brought greater expertise to bear on that path through the system. I imagine that it also made the team more resilient to departures. Looking at the rest of this thread, it's worth noting that their target didn't have much of a screen-based UI, and that their exploration was typically around changing setup and simulated environmental inputs, and measuring outcomes and some internals.
In a regulator, I saw testing notes (made in Word / Notepad / markdown / knotted string) attached to whatever represented the act of doing work. The org used Jira and ADO and wiki and OneNote and auditable doc storage – and I saw notes kept in all those places (relying on fragile links). Each approach suited (and was made by) its small, typically isolated group of testers. When (rarely) people outside these teams asked for older or more-detailed records, those outsiders wanted, in effect, magic recall. From an organisational point of view, those notes were unfindable, unsearchable and unknown.
If / when I teach this stuff, I ask people to think of purpose by framing for their audience (us / people who know us / people who don’t know us) and timescales (right now / at a foreseeable juncture / later than we imagine). And, in terms of what to record, there’s the last few paragraphs of What to Record. Which were written in a fever dream half a life ago, and so demonstrate that one’s notes, written for you for right now, may still be useful to someone you’ve not met, who lives in some unimaginable future.
