Day 7 (BBC 23Q2)

A 1-hour workshop on exploring, and on how we explore software systems

Open Miro frame. Return to central page

Application Workshop part 1

We'll start with a poll on Zoom. You'll describe a system that you test, we'll look at the results together. This poll will give me necessary context, and should also help us to see what the group is working on.

I hope that you see your systems as things to be explored.

You may be planning specific explorations, after (or during) these sessions. I hope that you'll share your plans with the group, take the opportunity to review your budget and purpose beforehand, and use the group to share your exploration when it's done.

Session-Based Testing

We looked at Session-Based Testing in earlier sessions. We'll revisit it today...

Charters and timeboxes reflect the purpose of an exploratory session.
See: Working with Exploration II – Work.
Judgement guides Exploratory Testing.
See: Exploration and Testing.

Exercise: Your Charters

10 minutes, in groups or individually, then a 10-minute debrief

Using your own system as a subject, and with a specific duration/budget in mind, write at least four charters. You could take the following as guides if you need structure.

  • explore in a new way
  • investigate a known bug
  • search for surprises
  • work through a list

We'll talk about those charters

If I were writing charters for 10-minute explorations of the timer, I might write:

  • New way: does running several timers in different tabs do anything unexpected?
  • New way: does changing the system clock have any effect? (don't bother – too destructive)
  • Known bug: measure how long the timer runs for several settings: 60s, 300s, 30s, 1s
  • Surprises: watch someone else use the timer for 5 mins – watch for confusion / misuse / resolution, capture their comments.
  • List: (automate this – see the sketch after this list) find a list of invalid times – do they all come up invalid? Generate valid times from minutes to days – do they all come up valid? Can I input some of the invalid times that are displayed under buggy circumstances?
  • run the js through eslint or similar – what's odd? Any = for ==? Anything in code which is deprecated?
  • can this be embedded in different pages? Does the source work bare?
  • what does ChatGPT make of it?
  • scan the js for 'this' errors
  • how does it look on mobile?
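
The "list" charter above is a natural candidate for automation. A minimal sketch in Python – timer_accepts(), the input formats and the sample values are all assumptions, to be replaced with whatever drives your own timer:

    # A sketch of the "work through a list" charter – partly automated.
    # timer_accepts() is a hypothetical hook: wire it to however you
    # drive your own timer (browser automation, an API call, ...).

    def timer_accepts(value: str) -> bool:
        """Hypothetical driver: True if the timer accepts `value`."""
        raise NotImplementedError("connect this to the system under test")

    def valid_times():
        """Generate plausible valid durations, from minutes up to days."""
        for minutes in (1, 5, 60, 90):
            yield f"{minutes}m"
        for hours in (1, 12, 24, 48):
            yield f"{hours}h"

    INVALID_TIMES = ["", "-5m", "1.5.5h", "tenminutes", "25:61"]

    def run_list_charter() -> None:
        for value in valid_times():
            if not timer_accepts(value):
                print(f"surprise: valid input rejected: {value!r}")
        for value in INVALID_TIMES:
            if timer_accepts(value):
                print(f"surprise: invalid input accepted: {value!r}")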

Sessions and Charters

[Image: many circles overlapping an open-ended box]

A session is a unit of time. It is typically 10-120 minutes long; long enough to be useful, short enough to be done in one piece. A session is done once.

A charter is a unit of work. It has a purpose. A charter may be done over and over again, by different people. When planned, it's often given a duration – the duration indicates how much time the team is prepared to spend, not how long it should take.

The team may plan to run the same charter several times during a testing period, with different people, or as the target changes.

Anyone can add a new charter – and new charters are often added. When a tester needs to continue an investigation, but wants to respect the priorities decided earlier, they will add a charter to the big pile of charters, then add that charter somewhere in the list of prioritised charters – which may bump another charter out.
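
If it helps to picture that, here's a toy model of the backlog in Python – the names and the limit are invented for illustration:

    # Every suggested charter stays in the pile; only a bounded list is
    # prioritised. Inserting a new charter can bump the last one back
    # to the (unprioritised) pile.
    BACKLOG_LIMIT = 10
    pile: list[str] = []         # every charter ever suggested
    prioritised: list[str] = []  # the ones the team intends to run next

    def add_charter(charter: str, position: int | None = None) -> None:
        pile.append(charter)
        if position is not None:
            prioritised.insert(position, charter)
            while len(prioritised) > BACKLOG_LIMIT:
                bumped = prioritised.pop()  # still in the pile, no longer planned
                print(f"bumped: {bumped}")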

In Explore It, Elisabeth Hendrickson suggests a simple three-part template:

  • Target: Where are you exploring? It could be a feature, a requirement, or a module.
  • Resources: What resources will you bring with you? Resources can be anything: a tool, a data set, a technique, a configuration, or perhaps an interdependent feature.
  • Information: What kind of information are you hoping to find? Are you characterizing the security, performance, reliability, capability, usability, or some other aspect of the system? Are you looking for consistency of design or violations of a standard?
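
If you keep charters somewhere structured, the template maps directly onto a small record. A sketch – the field names follow the template; the example charter is invented:

    from dataclasses import dataclass

    @dataclass
    class Charter:
        """One exploratory charter, following the three-part template."""
        target: str       # where you are exploring
        resources: str    # what you bring: tools, data, techniques, configurations
        information: str  # what you hope to learn

    # An invented charter for the timer example above:
    charter = Charter(
        target="the timer's input field",
        resources="a list of valid and invalid durations",
        information="which inputs are rejected, and whether rejection is consistent",
    )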

I've found it helpful to consider a charter with a starting point, a way to generate or iterate, and a limit (which may work with the generator), and to explicitly set out my framework of judgement.

Examples:

  • Starting with a basic flow that adds a new [entity], explore and notate several alternative paths, including different means of entry, stop-and-resume, undo, backtracking, authorisation/re-authorisation. Extract the records for each entry made during testing, compare them with each other and for sanity. Capture your judgement of what might be an exhaustive set, and what might be adequate.
  • List all entities which listen for user input. Starting with a clean system, use a tool to fuzz those entities for [10 minutes]. Watch the fuzzer, looking for and noting UI surprises. Examine the underlying logs and storage for records which are in some way odd – out of order, invalid format, unexpected contents. (A harness sketch follows these examples.)
  • Explore resets and interruptions while changing from one active scenario to another – does anything hang over from one scenario after the reset or switch?
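
The fuzzing charter above lends itself to a small timeboxed harness. A sketch, assuming hypothetical send_to_input() and check_logs_for_oddities() hooks that you'd wire to your own UI driver and log store:

    import random
    import string
    import time

    def send_to_input(name: str, payload: str) -> None:
        """Hypothetical hook: deliver `payload` to the input called `name`."""
        raise NotImplementedError

    def check_logs_for_oddities() -> list[str]:
        """Hypothetical hook: return records that look odd – out of order,
        invalid format, unexpected contents."""
        raise NotImplementedError

    def random_payload(max_len: int = 40) -> str:
        """Mix printable ASCII with a few awkward characters."""
        chars = string.printable + "Åé…🙂"
        return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

    def fuzz_inputs(input_names: list[str], budget_seconds: int = 600) -> None:
        """Fuzz the listed inputs until the timebox runs out."""
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:
            name = random.choice(input_names)
            payload = random_payload()
            send_to_input(name, payload)
            for oddity in check_logs_for_oddities():
                print(f"{name!r} <- {payload!r}: {oddity}")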

Examples:

  • run [these items of] code through a [static analysis tool], looking for regex with [this known issue]
  • look at the last three items in the failed transactions pile, and uncover why they failed
  • take 5 minutes to scrub through an hour of screen recording – what sticks out?

Think about the verbs you're using. For observe kinds of words, you may need to be able to change the circumstances, or might sample live while noting what the circumstances are. If you're measuring, you may be able to take advantage of existing automation, and should be able to compare several measurements to look for trends and patterns. If you're investigating, you may have a problem in mind rather than a novel surprise – you may want to set up some sort of observer to tell you when the problem occurs, or look through logs and databases to see if it already has.
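
For the investigating case, an observer can be as small as a log tail with a pattern. A sketch in Python – the log path and the problem signature are invented; point them at your own system:

    import re
    import time
    from pathlib import Path

    LOG = Path("/var/log/myapp/app.log")                     # hypothetical location
    PROBLEM = re.compile(r"duplicate transaction id (\w+)")  # hypothetical signature

    def watch(poll_seconds: float = 1.0) -> None:
        """Tail the log and shout when the known problem appears."""
        with LOG.open() as f:
            f.seek(0, 2)  # start at the end: only new lines matter
            while True:
                line = f.readline()
                if not line:
                    time.sleep(poll_seconds)
                    continue
                match = PROBLEM.search(line)
                if match:
                    print(f"problem seen: transaction {match.group(1)}: {line.strip()}")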

‼️
If you're typically checking, you might be working through your expectations rather than the system.

Charter Starters

Use these to give shape to early exploratory test efforts.

These are unlikely to be useful charters on their own – but they may help to provoke ideas, clarify priority, and broaden or refine context.

  • Note behaviours, events and dependencies from switch-on to fully-available
  • Map possible actions from whatever reasonably-steady state the system stabilises at after switch-on. Are you mapping user actions, actions of the system, or actions of a supporting system?
  • How many different ways can the system, or a component within that system, stop working (i.e. move from a steady, sustainable state to one unresponsive to all but switch-on)? Try each when the system is working hard - use logs and other tools to observe behaviours.
  • Pre-design a complex real-world scenario, then try to get through it. Keep track of the lines of enquiry / blocked routes / potential problems, and chase them down.
  • What data items can be added (i.e. consumable data)? Which details of those items are mandatory, and which are optional? Can any be changed afterwards? Is it possible to delete the item? Does adding an item allow other actions? Does adding an item allow different items to be added? What relationships can be set up between different items, and what exist by default? Can items be linked to others of the same type? Can items be linked to others of different types? Are relationships one-to-one, many-to-one, one-to-many, many-to-many? What restrictions and constraints can be found?
  • Try none-, one-, two-, many- with a given entity relationship
  • Explore existing histories of existing data entities (that keep historical information). Look for bad/dirty data, ways that history could be distorted, and the different ways that history can be used (basic retrieval against time, summary, time-slice, lifecycle).
  • Identify data which is changed automatically, or actions which change based on a change in time, and explore activity around those changes.
  • Respond to error X by pressing ahead with action.
  • Identify potential nouns and verbs - i.e. what actions can you take, and what can you act upon? Are there other entities that can take action? What would their nouns and verbs be? Are there tests here? Are there tools to allow them?
  • Identify scope and some answers to the following: In what ways can input or stimulus be introduced to the system under test? What can be input at each of those points? What inputs are accepted or rejected? Can the conditions of acceptance or rejection change? Are some points of inputs unavailable because they're closed? Are some points of input unavailable because the test team cannot reach them? Which points of input are open to the user, and which to non-users? Are some users restricted in their access?
  • Identify scope and some answers to the following: In what ways can the system produce output or stimulate another system? What kinds of information is made available, and to what sort of audience? Is an output a push, a pull, a dialogue? Can points of output also accept input?
  • Explore configuration, or administration interfaces. Identify environmental and configuration data, and potential error/rejection conditions.
  • Consider multiple-use scenarios. With two or more simultaneous users, try to identify potential multiple-use problems. Which of these can be triggered with existing kit? If triggered, what problems could be seen with existing kit, and what might need extra kit? Try to trigger the problems that are reachable with existing kit.
  • Explore contents of help text (and similar) to identify unexpected functionality
  • Assess for usability from various points of view - expert, novice, low-tech kit, high-tech kit, various special needs
  • Take activity outside normal use to observe potential for unexpected failure; fast/slow action, repetition, fullness, emptiness, corruption/abuse, unexpected/inadequate environment.
  • Identify ways in which user-configurable technology could affect communication with the user. Identify ways in which user-configurable technology could affect communication with the system.
  • Pass code, data or output through an automated process to gain a new perspective (e.g. HTML through a validation tool, a website through a link mapper, strip text from files, dump data to Excel, job control language through a search tool) – see the sketch after this list
  • Go through Edgren’s “Little Black Book on Test Design”, Whittaker's "How to Break..." series, Bach's heuristics, Hendrickson's cheat sheet, Beizer-Vinter's bug taxonomy, your old bug reports, novel tests from past projects to trigger new ideas.
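
As one concrete shape for the automated-process starter above, here's a sketch that strips every link out of a saved page using only the Python standard library – the filename is a placeholder:

    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collect every href from a page – a quick 'new perspective'."""
        def __init__(self):
            super().__init__()
            self.links: list[str] = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    with open("page.html") as f:   # placeholder: any HTML saved from the system
        collector = LinkCollector()
        collector.feed(f.read())

    for link in sorted(set(collector.links)):
        print(link)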

Writing charters takes practice. A single charter often gives a scope, a purpose and a method (though you may see limits, goals, pathologies and design outlines). You could approach it by considering...

  • Existing bugs – diagnosis
  • Known attacks / suspected exploits / typical pathologies
  • Demonstrations – just off the edges
  • Questions from training and user assessment

A collection of charters works together, but should always be regarded as incomplete. You're investing resources as you work on whatever you choose to be best, not trying to complete a necessary set. A collection for a given purpose (to guide testing for the next week, say) is selected for that purpose and is designed to be incomplete.
