Exploratory Testing – Hands off the UI


I'll update this when things change

We'll play in Miro (as below)

We'll probably use the TestLab:Facility in Gather.town

We'll try to run on Weds 30 March at 18:00 CEST / 17:00 London time – though check here and on Slack for updates.

Warm up

What do you want to get from today's session?

If you want to get more specific: what do you want to change about how you test?

We'll use this Miro board for several exercises, and as slides. Here's the direct link, so you can open it in a separate window or in the app.


  • Exercise in realisation and evidence
  • Basics of exploration in testing – revealing risk vs verifying value
  • Exercise to find surprising behaviours / qualities
  • Using automation in exploration

... then questions, if you have them.


Raster Reveal Exercise (~3×1 min, with debriefs)

1-minute exploration, short debrief, repeat

Explore the image: move your pointer over the parts you want to see more of.

Then we'll talk about it. Some questions:

  • When did you know what you were looking at?
  • How did you know?
  • Did you think it was one thing before revealing another?
  • How could you describe your actions? Would your actions transfer to another person? Another picture?
  • What role do your knowledge, skills and judgement play in exploring this?


You may want to go to the converter's own page to use it more directly.

Priming Exercise (2 mins + debrief)

Check out the story, the examples, the tests, the release notes.

Try some values out by hand. Take a look at the basic / slider / numeric inputs – all these are browser-based, but can give you a swifter means of interaction.

Debrief question: what have you found out about the underlying system?

Judgement Exercise – in groups (5 mins + debrief)

Use the pre-filled list to see how several values are converted. Feel free to change those values.

Debrief questions:

  • Do you see any surprises – reasons to change the model of the underlying system which you built in the previous exercise?
  • Do you judge any of the outputs to be wrong? What are you basing that judgement on?
  • How did you change this collection of tests? Why?

Targeting Exercise – in groups (3 mins + debrief)

Imagine you've been given the task of testing the underlying system. You'll skip the browser entirely. In groups of 3-6, consider one or two of the following questions.

  • What's worth verifying?
  • What's worth exploring?
  • How might you automate the exploration part?

Debrief: put short answers on the Miro board; we'll pick up from there.
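To make the automation question concrete, here's a minimal sketch of one way exploration can be automated: run many inputs through the system and flag outputs that break a simple expectation. Everything here is an assumption for illustration – `convert` is a hypothetical stand-in (a Celsius-to-Fahrenheit converter) for whatever the real underlying system exposes, and "larger input never gives smaller output" is just one example of a checkable expectation.

```python
def convert(celsius: float) -> float:
    # Placeholder for the real system under test (hypothetical converter).
    return celsius * 9 / 5 + 32

def explore(values):
    """Run many inputs, and flag any adjacent pair that breaks a simple
    expectation: a larger input should never produce a smaller output."""
    results = [(v, convert(v)) for v in sorted(values)]
    surprises = []
    for (a, out_a), (b, out_b) in zip(results, results[1:]):
        if out_b < out_a:  # expectation violated: conversion not monotonic
            surprises.append((a, b))
    return surprises

print(explore([-40, 0, 37, 100, 1e6]))
```

The point isn't the check itself – it's that a cheap, loose expectation lets a machine sweep far more inputs than you can by hand, and hands the surprising ones back to you for human judgement.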

Exploring with Data Exercise – in groups (up to 10 mins + debrief)

Use the list box (or the data generation tool) to try out several values at once. Examine your results, and try more data to dig into what you find.

The UI will get in your way: feel free to use an external tool to generate lots of data, and to look at your results. If you want a tool to generate, work with the 'generated bulk input' option.
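If you'd rather generate data with a few lines of code than with the built-in option, a sketch like this works. The input format the tool expects is an assumption – here we just emit one value per line, mixing boundaries, negatives, large magnitudes, and awkward floats.

```python
def bulk_values():
    # Assumed categories worth probing; adjust to what you learn as you go.
    boundaries = [0, 1, -1, 0.1, -0.1]
    big = [10 ** n for n in range(1, 7)]          # 10 up to 1,000,000
    awkward = [0.1 + 0.2, 1e-12, -273.15]         # float noise, tiny, domain limit
    return boundaries + big + awkward

# One value per line, ready to paste into a list box or pipe to a tool.
print("\n".join(str(v) for v in bulk_values()))
```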

As you work, put answers to these questions on the board – we'll use them in the debrief:

  • What data did you choose to generate first?
  • What did you change in later tests?

Wrapping up

Look back over your notes. Find the "conclusions" section on the board. Use stickies to share actions and insights. We'll pick out a few to elaborate.


Questions are the best bit. I'm happy to take any question, any time – I'll defer longer answers and conversations to the end. There's a space on the board for you to put yours.

About James

James is a tester. He's been testing since 1996, and teaching exploratory testing since 2001. He's won awards for his work in the testing community, and for his writing. In his day job, he's a test strategist and mentor.

Twitter: @workroomprds

LinkedIn: James Lyndsay

Site: workroom-productions.com
