A Framework for Judgement in Testing

While exploring, I use this framework to guide how I judge something that has bothered me.

  • inconsistency
    • external (vs an external authority, e.g. a spec)
    • internal (vs another similar thing)
    • cultural (vs my own expectations)
  • absence: something is surprising by its absence
  • extra: something is surprising by its presence

On inconsistency...

Before I can spot bugs on behalf of an external authority, I need to work to understand it. In practice, this means that I need to know the spec, I need to understand the regulations, and I need to have worked with an end-user to develop empathy. Once I've spotted this kind of inconsistency, the judgement is easy: it's a bug.

Tools are often helpful in picking out internal inconsistencies, especially where there's lots to sift through: pixel shifts in the UI, configuration differences, data entities, log weirdnesses. The art is in knowing which is the bug – but perhaps we don't always need to make that judgement, if we've got access to someone who cares.
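A tool for this doesn't have to be elaborate. As a minimal sketch (the environment names, keys, and values here are all invented for illustration), a few lines of Python can surface configuration differences worth a second look:

```python
def config_diff(a: dict, b: dict) -> dict:
    """Return keys whose values differ, or that exist in only one config."""
    sentinel = object()  # distinguishes "missing" from "present but falsy"
    return {
        k: (a.get(k, "<missing>"), b.get(k, "<missing>"))
        for k in a.keys() | b.keys()
        if a.get(k, sentinel) != b.get(k, sentinel)
    }

# Hypothetical configs for two environments
staging = {"timeout_s": 30, "retries": 3, "debug": True}
production = {"timeout_s": 30, "retries": 5}

for key, (left, right) in sorted(config_diff(staging, production).items()):
    print(f"{key}: staging={left} production={right}")
```

The tool only reports the inconsistencies; deciding which side (if either) is the bug remains the human judgement described above.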

I find that it's natural (and therefore easy) to spot cultural inconsistencies, but hard to persuade someone else that they matter – or even myself. So while I might notice the problem, I might not be able to make a decision, let alone take action.

more to come