Hands-on, Tooled-up Testing for Odin

A page of materials for my workshop at OdinConf in Sept 2022.


Thank you for choosing to come to this workshop!

Here's the workshop description.

Before the workshop

Equipment

This is a hands-on workshop. To get hands-on, you'll need a laptop with a modern browser and an internet connection.

If you don't have access to a laptop, you'll be able to get some value using a tablet. You'll even be able to get some hands-on experience using a smartphone...

Working

To help this large group run smoothly, I'll ask you to consider working in small groups. The room is laid out with 5-6 tables, each seating 5-6 people.

The exercises will support you if you prefer to work alone (though you'll need to share a table).

We'll learn best from experience, and from each other, so there will be plenty of chances to share your work with the room, and to discover how others approach the problems.

We'll have a shared Miro board – I'll link to it here on the day.

Tools and technology

You do not need to read or write code, nor to install tools. However, you will need to work with test design involving thousands of tests. I'll give you tiny in-browser tools to generate data, to run tests, and to gather output.

You are welcome to use your own tools. For instance, you might use Excel to generate data, browser tools to check out decision flow in the system under test, or DataGraph to analyse its responses.

I've chosen to supply most tools, and to run them in the browser, because I want the test system to be easy to access in the workshop, and as accessible as possible to a wide variety of testers. That choice doesn't reflect a preference for any particular technology.
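
If you do bring your own tools, the loop you'd need is roughly generate, run, gather. Here's a minimal TypeScript sketch of that loop – not the workshop tools' actual code – where `convert` is a hypothetical stand-in for the system under test:

```typescript
// Generate-run-gather loop (sketch only).
// `convert` is a hypothetical stand-in for the system under test.
function convert(input: string): string {
  return String(Number(input) * 2); // placeholder behaviour
}

// Generate: a few thousand numeric inputs as strings.
const inputs: string[] = Array.from({ length: 5000 }, (_, i) => String(i - 2500));

// Run and gather: pair each input with its output.
const results = inputs.map((input) => ({ input, output: convert(input) }));

// A quick summary to scan for surprises.
console.log(results.slice(0, 5));
console.log(`ran ${results.length} tests`);
```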

System Under Test

We will focus on behaviours of the underlying code and data, and will only briefly explore presentation in the browser. The system under test is not a webpage; the browser provides a useful test environment for teaching.

Logistics

You should already know the location and timings. Here's what I believe to be true, but trust emails from the organisers more than the details here!

We'll be at Høyres kurs og konferansesenter, in room Ibsen.

We'll run from 09:00 to 16:00, with short breaks as necessary and lunch from 12:15 at Tivander, in the same building.

I'm James Lyndsay. My email is jdl@workroom-productions.com. My phone is +447904158752.


Workshop Structure

You'll work with a simple system, taking several different approaches to explore its true characteristics. You'll dig into bugs, release notes, interfaces, configuration and changes, running thousands of exploratory experiments to reveal and understand the system's behaviours.

I intend that you'll gain direct experience of test techniques suitable for exploring behaviours with bulk tests. These exercises will, I hope (and with guidance), suit new testers, experienced testers, and people who manage testing work – especially if you share skills. They should also work if you prefer to work alone.

Morning: Starting with a single simple field, we’ll design small tests, and use recognised techniques to grow them into collections. We’ll look at equivalences and boundaries in input and output, at ranges and distributions, at collections to explore validation and at ways to manage the results of bulk testing.
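
To make the morning's techniques concrete, here's a hedged sketch of growing one test into a small boundary-probing collection. The boundary at 100 and the `convert` stub are illustrative assumptions, not details of the workshop system.

```typescript
// Growing one test into a boundary-probing collection (sketch only).
// The boundary at 100 is an assumed example, not the real system's rule.
const assumedBoundary = 100;

// Classic boundary values: just below, on, and just above the boundary,
// plus one representative from each side's equivalence class.
const probes = [
  assumedBoundary - 1,
  assumedBoundary,
  assumedBoundary + 1,
  assumedBoundary - 50, // representative of the lower class
  assumedBoundary + 50, // representative of the upper class
];

// Hypothetical system under test.
const convert = (n: number): string => (n < assumedBoundary ? "low" : "high");

for (const n of probes) {
  console.log(n, "->", convert(n));
}
```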

Afternoon: Moving to more complex elements with several interrelated fields, we’ll build exploratory data with combinations and refinements, looking at ways that we can manage the generation and interpretation of our tests.
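
As a taste of the afternoon's combination work, here's a hedged sketch of building exploratory data from several interrelated fields. The field names and values are invented for illustration; they're not the workshop system's fields.

```typescript
// Building exploratory data from combinations of fields (sketch only).
// Field names and values are illustrative assumptions.
const currencies = ["NOK", "EUR", "GBP"];
const amounts = ["0", "1", "999999", "-1", "abc"];
const modes = ["standard", "express"];

// Full Cartesian product; pairwise tools would refine this to a smaller set.
const combinations = currencies.flatMap((currency) =>
  amounts.flatMap((amount) =>
    modes.map((mode) => ({ currency, amount, mode }))
  )
);

console.log(`generated ${combinations.length} combinations`);
console.log(combinations[0]);
```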

Shared Area

Miro Board

Raster Reveal Exercise

1-minute exploration, short debrief, repeat

Explore the image: move over parts where you want to see more.

Then we'll talk about it. Some questions:

  • When did you know what you were looking at?
  • How did you know?
  • Did you think it was one thing before revealing another?
  • How could you describe your actions? Would your actions transfer to another person? Another picture?
  • What role do your knowledge, skills and judgement play in exploring this?

Workshop Kickoff – What do you want?

Talk with each other. Add notes to the board. Group the notes.


Exercises – Converter

Here's Converter_v3

Priming Exercise

Put some numbers into the text box. Put in some more.

What do you know now which you didn't before? How does that help you understand what's going on? How does that understanding change your next steps?

Debrief – what did you find? what did you imagine? what might you do next?

Slider

Work with the slider to reveal more about the system under test. Do you need more than a minute?

Debrief – What's different about your interactions with the system under test, when you're using the slider?

Parametric slider

The slider lets you run lots of contiguous inputs, and see the output.

The parametric slider lets you choose where the ends are.

Choose the start and end of the slider.

Debrief: Share your slider ends (and perhaps why you chose them). Share what you learned about the system under test.
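
If you'd like to see the idea in code, here's a hedged sketch of what a parametric sweep does: step from a chosen start to a chosen end and record each output. `convert` is a hypothetical stand-in for the system under test.

```typescript
// Sweeping a chosen range, like the parametric slider (sketch only).
// `convert` is a hypothetical stand-in for the system under test.
const convert = (n: number): string => (n < 0 ? "negative" : "non-negative");

function sweep(start: number, end: number, steps: number) {
  const stepSize = (end - start) / steps;
  return Array.from({ length: steps + 1 }, (_, i) => {
    const input = start + i * stepSize;
    return { input, output: convert(input) };
  });
}

// Choose the ends, then look for where the output changes.
console.table(sweep(-10, 10, 20));
```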

Sampling

While playing with the system, arrive at comments on the following:

What samples of this system can you take?

How does «speed» of sampling change your approach?

Debrief Question: What does "complete" look like? Is it useful?
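
One way to think about «speed» of sampling is the trade-off between a coarse, fast pass and a fine, slow one. Here's a hedged sketch; the step sizes and the `convert` stub are illustrative assumptions.

```typescript
// Coarse and fine sampling of the same range (sketch only).
// `convert` is a hypothetical stand-in for the system under test.
const convert = (n: number): string => (n % 7 === 0 ? "special" : "ordinary");

const sample = (start: number, end: number, step: number): number[] => {
  const values: number[] = [];
  for (let n = start; n <= end; n += step) values.push(n);
  return values;
};

// A fast, coarse pass to find regions of interest...
const coarse = sample(0, 1000, 100).map((n) => ({ n, out: convert(n) }));
// ...then a slower, fine pass over one region that looked interesting.
const fine = sample(0, 100, 1).map((n) => ({ n, out: convert(n) }));

console.log(coarse.length, "coarse samples;", fine.length, "fine samples");
```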

Partitioning

Find groups in your test input, and output. What makes a group?

Add your groups to the board.

Debrief question: Are all samples within a group next to each other?
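
If you want to group programmatically, here's a hedged sketch that buckets inputs by the output they produce – one possible meaning of 'a group'. The `convert` stub is an illustrative assumption.

```typescript
// Grouping inputs by the output they produce (sketch only).
// `convert` is a hypothetical stand-in for the system under test.
const convert = (n: number): string =>
  n < 0 ? "error" : n < 100 ? "small" : "large";

const inputs = Array.from({ length: 201 }, (_, i) => i - 50);

// Build groups: output value -> the inputs that produced it.
const groups = new Map<string, number[]>();
for (const n of inputs) {
  const out = convert(n);
  const members = groups.get(out) ?? [];
  members.push(n);
  groups.set(out, members);
}

for (const [out, members] of groups) {
  console.log(out, ":", members.length, "inputs, e.g.", members.slice(0, 3));
}
```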

Configurations

Consider the configuration information

Debrief question: Does this change or add to your understanding of partitions? Of system behaviour?

Judgement

Use the pre-filled list to see how several values are converted. Feel free to change those values.

Debrief questions: 

  • Do you see any surprises – reasons to change the model of the underlying system which you built in the previous exercise?
  • Do you judge any of the outputs to be wrong? What are you basing that judgement on?
  • How did you change this collection of tests? Why?
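
One way to capture a pre-filled list and your judgements is as a small table-driven check. Here's a hedged sketch; the expected values and the `convert` stub are invented for illustration, and deciding what to base expectations on is part of the exercise.

```typescript
// A pre-filled list of inputs with expected conversions (sketch only).
// Expected values and `convert` are illustrative assumptions.
const convert = (input: string): string => String(Number(input) * 2);

const judgementTable = [
  { input: "1", expected: "2" },
  { input: "0", expected: "0" },
  { input: "-5", expected: "-10" },
];

for (const { input, expected } of judgementTable) {
  const actual = convert(input);
  const verdict = actual === expected ? "ok" : "SURPRISE";
  console.log(`${input} -> ${actual} (expected ${expected}): ${verdict}`);
}
```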

Test design – based on partitions

We'll build a test design together.

Exploration based on partitions

Find and explore boundaries between adjacent partitions

Look for partitions you've missed

Verify that a partition is worth treating as a partition. Verify partition behaviour...
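
One way to home in on a boundary between two adjacent partitions is a binary search over the inputs. Here's a hedged sketch; the hidden rule inside `convert` is an illustrative assumption.

```typescript
// Homing in on a boundary between adjacent partitions (sketch only).
// `convert` is a hypothetical stand-in with a hidden rule at 73.
const convert = (n: number): string => (n < 73 ? "low" : "high");

function findBoundary(low: number, high: number): number {
  // Precondition: convert(low) !== convert(high).
  while (high - low > 1) {
    const mid = Math.floor((low + high) / 2);
    if (convert(mid) === convert(low)) low = mid;
    else high = mid;
  }
  return high; // first input that behaves like the upper partition
}

console.log("boundary at", findBoundary(0, 1000)); // -> 73
```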

Debrief – what do you know, now?

Information about the system

Check out the story, the examples, the tests. 

Do use the system as you work through these artefacts.

Debrief question: what have you found out about the underlying system?

Information about change

Read the release notes.

Debrief Question 1: Does this change your understanding of the system? What now catches your interest?

Debrief Question 2: With a real system, what other artefacts might you seek out?

Targeting

  • What's worth verifying?
  • What's worth exploring?
  • How might you automate the exploration part?

Debrief: put short answers on the Miro board; we'll pick up from there.

Exploring with Data Exercise

Examine your results, and try more data to dig into what you find.

Do use an external tool to generate lots of data, and to look at your results. If you want a tool to generate data, work with the 'generated bulk input' option.

As you work, put answers to the following questions on the board.

Debrief questions:

  • what data did you choose to generate first?
  • what did you change in later tests?

Exercises – Rater

Here's Rater

Rater is more complex; you'll need time to understand the problem space, the system, and the test generation.

Kickoff rater

What areas seem interesting to explore?

What areas seem important to check?

What configuration will you need to do this work?

Our environment has some bulk generation tools – what would you build to add to these?

What would you build to look for domains / properties in output?

What surprises can you find?

We'll build exercises, and perhaps tools, as we go...

Other places to look

Analyse your output with Raw graphing tool

Check results of generated tests with Property testing

Use the Big List of Naughty Strings if you want to get to the edges of input. Better: use it (and others) as a source of data, and 'approve' those behaviours you wish to verify regularly.

GitHub – minimaxir/big-list-of-naughty-strings: a list of strings which have a high probability of causing issues when used as user-input data.
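
As a flavour of the 'approve' idea, here's a hedged sketch that runs a few awkward inputs and records the behaviours as a snapshot to diff against later. The inputs are a tiny hand-picked sample rather than the full list, and `convert` is a hypothetical stand-in.

```typescript
// Running awkward strings and recording behaviours to 'approve' (sketch only).
// `convert` is a hypothetical stand-in for the system under test.
const convert = (input: string): string => String(Number(input) * 2);

const awkwardInputs = ["", " ", "0", "-0", "NaN", "1e309", "  42  ", "4,2"];

// Record input/output pairs; diffing this against a previously approved
// snapshot is one way to verify these behaviours regularly.
const snapshot = awkwardInputs.map(
  (input) => `${JSON.stringify(input)} -> ${convert(input)}`
);
console.log(snapshot.join("\n"));
```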


Bugs in generators

You will, of course, find problems in my generators and runners. I've tried to minimise those, and I welcome bug reports in any form. You'll need to distinguish between the problems I've (unintentionally) left in the tools, and those I've (intentionally) left in the target system.
