
Exploring the BlackBox Puzzles

Exploration Feb 24, 2022 (Sep 18, 2023)

This is more polished, awaiting feedback... and follow-up articles.

My BlackBox Puzzles help you explore the ways that you build and test models.

To help people focus on the model they are building, I don't give requirements. Indeed, I explicitly ask people to work without requirements. Working without requirements can be counterintuitive, and is uncomfortable for some. Nonetheless, I think it’s a worthwhile skill to gain – and the puzzles give you a safe and responsive place to play as you learn.

You can't easily test one of these abstract puzzles – what would you judge it against? So put that task to one side for now, explore until you have a model, then test your model.

Focus Yourself

Set aside time to play, and decide on your purpose for that time. You'll need more than 5 minutes, and I wouldn't want to spend more than 30.

You'll want to guide your time with some purpose. If you don't feel comfortable with setting your purpose, here are some hints.

Exploration is Play with Purpose

Alan Richardson put this thought into my head. Read What is Exploratory Testing? in his Dear EvilTester book to get closer to the source. I expect I'm conflating intent with purpose, here.

In exploring a system, you’re looking to develop a mental model of relationships within the system. I use and teach several techniques which lead to sharable models. You’ll have your own, I expect. If not, you’ll start to develop them right now.

Here are three broad approaches which work well with the puzzles.

  • List components, seeing what can be worked with directly, what reacts, and keeping a record of what you observe. Many people, faced with a UI, list recognisable UI components. That's a fine place to start: You might choose a different interface. You’ll be making a map of input->output, looking at data transformations. Perhaps you'll see some equivalence classes in your records – sets of inputs or outputs which behave in symmetrical ways.
  • Seek to model states and events – observe behaviours, consider events that seem to have an effect, look out for collections of behaviours that seem to persist together, and what makes a change. Many people, faced with a UI, look at how the subject's UI reacts to them. That's a fine place to start: You might look for reactions that aren't in the UI, and for events that you don't personally trigger.
  • Look inside, and go digging for anything that might help you in the code. You can see structures in the code, read my UI labels and function names, and check resource use. Maybe you'll use the debugger to track what's going on in execution, or check the console and local storage, or mess with the HTML and CSS to understand the browser’s perception of what’s happening in the DOM. It's JavaScript, so it's all open – unless the puzzle talks to a server somewhere. Do you reckon the puzzle is running anything remotely?

Each italicised phrase above is something to guide your exploration. Can you think of other ways you might explore? Can you coax those techniques into a shareable model?
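To make the first of those approaches concrete, here is a minimal JavaScript sketch of recording input->output pairs and grouping them into candidate equivalence classes. The `probe` function is a made-up stand-in for whatever interaction a puzzle lets you drive – it is not code from any actual puzzle:

```javascript
// Stand-in for a puzzle under exploration -- `probe` is hypothetical,
// not any real puzzle's code.
const probe = (n) => (n % 3 === 0 ? "blink" : "steady");

// Record what you observe for each input you try.
const observations = [0, 1, 2, 3, 4, 5, 6].map((input) => ({
  input,
  output: probe(input),
}));

// Group inputs by their output: each group is a candidate equivalence class.
const classes = {};
for (const { input, output } of observations) {
  (classes[output] ??= []).push(input);
}

console.log(classes); // → { blink: [ 0, 3, 6 ], steady: [ 1, 2, 4, 5 ] }
```

The grouping is only a candidate: two inputs that share an output here may still behave differently under some interaction you haven't tried yet.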

As these are games, be aware when you're stymied by a sense of not wanting to cheat – does a particular method seem inappropriate because it reveals too much? Why is that a problem? Would it be a problem in work? Let's be clear: I invite you to rename 'cheating', and to do it anyway.

Being simple and purposeless, my puzzles don't respond well to several common alternatives that you might find in a commercial setting – subscribers get to see those (and to comment) below.

Making Notes moves you from Good to Great

Some testers trust their minds to recall all the salient parts, and to discard the irrelevant. I don't trust mine, so I keep notes. My notes – when I keep them – let me step out of, and step back into, my exploration; I often regret not keeping them when I'm bounced out of an exploration that I had carelessly slipped into. Notes help me to recall and refine more reliably, to see new perspectives, and to manage distractions more easily. If you're not keeping notes, consider how you're managing those aspects of your exploration.

Hypotheses Arise

At some point you’ll find yourself building models and hypotheses. You may not recognise them until they are well-formed – play with the Raster Reveal exercise to get a feel for their arrival. Those which turn up before you've engaged aren't to be trusted as readily as those with some evidence. Mine tend to arise unbidden from my subconscious, and they improve when I work with them. Our mechanism of discovery influences the models which are formed, and those models in turn suggest alternative approaches. Assumptions and shortcuts may help you or lead you away – you need to choose how and when to follow them.

You're making a model of a software-based system, so you’ll probably consider where you see dependencies, and whether you think that you’re seeing deterministic behaviour. You’ll wonder what history is kept, if any, how data is created / updated / deleted, and think of one-to-many relations. You’ll model what is being stored, and what is being consumed – seeing an API (if it's available) might give some clues...
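One small way to probe determinism is simply to repeat yourself. In this sketch, `probe` is again a hypothetical stand-in for whatever interaction a puzzle lets you repeat:

```javascript
// Hypothetical stand-in for a repeatable interaction with a puzzle.
const probe = (input) => input * 2 + 1;

// Drive the same input several times. If the outputs diverge, behaviour
// depends on hidden state or history, not just the input you supplied.
const runs = Array.from({ length: 5 }, () => probe(7));
const deterministic = runs.every((r) => r === runs[0]);

console.log(deterministic); // → true for this stand-in
```

A `false` here doesn't mean the system is random – it may mean you haven't yet found the state, or the history, that your input is interacting with.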

Done?

Hopefully, you feel done before the time is up (you did set a time, at the top?). If you can describe the system in a tweet, you'll probably know that you can. If you want reassurance, or congratulations, email or DM me, and I'll respond. If you want kudos, try teaching someone on your team how to solve it, in general, and see what you both learn about testing. Then go stand up in front of a group and do it again. Do let me know.

If you don't feel you've achieved much in the time you've spent, then do take a break. You may find that your massive pattern-processing kit needs a moment without being fed new stuff, so it can process what it has. Look over your notes in a day or two and see what turns up.

If you've got insights, write them down in a place where you'll see them later today, tomorrow morning, and a few more times over the next week or so. That way, they'll stick and you can say that you taught yourself by playing with a puzzle.

Testing involves judgement

These puzzles are made so that you can summarise what the system is doing in a sentence or two – for my own discipline, I've described each puzzle in a 140-character tweet. I believe that each description can allow someone else to predict the behaviour of the unique parts of each puzzle. Ask me nicely, and I'll share.

To reach your own summary, your exploration will need to move from measuring to condensing those measurements into a model, and you’ll need to build tests to verify that model. You'll spot symmetries and patterns, dependencies and hidden information. You'll verify your observations and dig into areas which seem obscure to you. In doing that, you're testing.

Those tests will verify (or refute) your model.
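As a sketch of that verify-or-refute step, in JavaScript since the puzzles themselves are JavaScript – the model and the observations below are invented for illustration, not taken from any real puzzle:

```javascript
// A hypothesised model, written down as a function (invented for
// illustration -- not any real puzzle's behaviour).
const model = (n) => n * n;

// Fresh observations, gathered after the model was formed.
const observed = [
  { input: 2, output: 4 },
  { input: 3, output: 9 },
  { input: 5, output: 25 },
];

// Each mismatch refutes the model; no mismatches means it survives, for now.
const mismatches = observed.filter((o) => model(o.input) !== o.output);
console.log(mismatches.length === 0 ? "model survives" : mismatches);
```

Note the asymmetry: a surviving model is not a proven one, but a single mismatch is enough to send you back to exploring.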

The Limits of a System

If you find yourself seeking bugs in my code, I’m delighted – please let me know what you find and I'll fix the ones I can. However, I built these puzzles so that looking for code bugs is not (necessarily) the most interesting thing to do. I want explorers and testers to move towards understanding the behaviours of a system, and away from easy bugs. There is a temptation to declare your work done when you find an easy bug. If you give in to that temptation, that's your choice – I regret leaving in easy bugs precisely because they stop people exploring.

I’ve built these tiny systems to explore: By design, you won’t be able to judge much. But I hope you’ll feel your judgement turn on and off; it will guide your exploration, and it’s helpful to recognise when it’s doing that, as you may be being guided by an assumption.

As these puzzles are built into web pages, perhaps you’ll feel that without requirements, the only thing you can test against is web standards. You’ll act towards those areas you can judge, and you'll compare your observations with your own complex internal model – does the thing resize well, does it degrade gracefully, is it open to known attacks? Do run the puzzles through code analysers or cross-device browsers to see what standards are being broken, and what I’ve failed to code for. At that point, you’re working with my artefact, as a test subject, not trying to find the patterns in the system I’ve tried to build. It’s a subtle distinction – use these machines, if you like, to discover your preference.

This post was inspired by a question from Trisha Chetani, who asked the following question:

how I would approach doing exploratory testing on puzzle 29 & puzzle 31?
Subscribers get to see ways that don't work on the puzzles...
