Working with Answers to Open Questions
When I've not got specific questions, I prefer to use exploratory requests rather than questions. I'll say “Tell me about testing” rather than ask “Do you test?”
What I get back is likely to be unstructured, full of gaps and irrelevancies, and phrased in local dialect. It’s harder to process than an answer which conforms to my structures, matches my bias, uses my terminology.
I choose to spend the time and attention processing answers to open questions because I think that such answers are likely to tell me something true, and which I don’t know.
If I want to make that value accessible, I need to make the processing easier. How do I (currently think I) do that?
I prepare for processing – I’ll generally have a couple of questions that I can start with. I’ll know some of my model in advance. I’ll try to read up on the person I’m talking to – have a look at what tasks they have on a board, what contributions they’ve made to documents and to repositories, what meetings they run or regularly attend, their recent posts in company social stuff, their LinkedIn. I’ll have thought about how I’ll process their information, so that I can be clear with them.
I take notes. I used to take notes on paper, and I prefer to, but my paper notes get lost. So I type as I listen, looking at the person I'm talking with.
I mark my notes. I can’t mark with the same ease as when I’m scribbling, but I want to be able to glance back and see what needs a further question (marked `??`), what needs a double-check (marked `!!`), and what is a challenge to my assumptions and model.
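As a sketch of how those marks pay off later – assuming plain-text notes, one item per line, with the marker anywhere on the line (my assumed layout, not necessarily the author's) – a few lines of Python can pull out everything flagged:

```python
# Scan plain-text notes for follow-up markers.
# Assumes one note per line, with ?? or !! appearing anywhere on the line.
def flagged_lines(notes: str) -> dict[str, list[str]]:
    found: dict[str, list[str]] = {"??": [], "!!": []}
    for line in notes.splitlines():
        for marker in found:
            if marker in line:
                found[marker].append(line.strip())
    return found

notes = """Deploys happen on Fridays ?? who signs off
No usability bugs logged !! check the tracker
Team uses 'negative testing' differently"""
flags = flagged_lines(notes)
```

A glance at `flags["??"]` then gives the list of open questions to take into the next conversation.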
I read my notes. After a conversation, I take time to read and process. That means that when I book a 15 minute meeting with someone, I’ll give myself another 15 minutes immediately afterwards to read over, fix any autocorrect awfulness, and summarise. At the moment, I’m separating things that are said and I want to record (`described`), things I’ve perceived (`observed`), things that seem to have evidence (`inferred`), patterns which seem to underlie the conversation (`working model`) and things I want to do (`action`).
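Those five labels lend themselves to simple tooling. A minimal sketch, assuming each summary line starts with its label followed by a colon (the exact note syntax is my assumption, not the author's):

```python
# Group processed summary lines by the five labels described above.
# Assumed line format: "label: text", e.g. "observed: team avoids the tracker".
LABELS = ("described", "observed", "inferred", "working model", "action")

def by_label(summary: str) -> dict[str, list[str]]:
    grouped: dict[str, list[str]] = {label: [] for label in LABELS}
    for line in summary.splitlines():
        for label in LABELS:
            prefix = label + ":"
            if line.lower().startswith(prefix):
                grouped[label].append(line[len(prefix):].strip())
                break
    return grouped
```

Keeping the labels in one tuple means a future sixth category is a one-line change.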
I update my model. There’s no point in asking if you’re not going to listen. If I want my models to get better over time, I have to recognise that they are not yet as good as they could be. If I believe that no one is logging usability bugs, and then I meet someone who’s logged a bunch, that belief can die. If A and B don’t mean the same thing when talking about ‘negative testing’, I need to manage that disparity myself, rather than impose my model on them.
I think about the conversation – what did I learn? What did I miss? What challenged my assumptions? What do I want to remember?
I visualise my model – it can be easier to add detail to a model expressed as a picture. Drawing several different pictures of the same model gives me different perspectives, and helps when a fundamental revision requires a new visualisation.
At some point, I’ll typically come back to aggregate – sometimes I’ll know how I’ll need to do that in advance (so I can prepare), and sometimes it’s a surprise (so I need to work with what I’ve got). If the tech I’m using allows it, I’ll tag notes so that I can easily look for conversations about particular projects or ideas. I’ll often read my summary and from there I can go back to the information in the notes.
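Where the tooling doesn't offer tags, a plain-text convention works too. A sketch of tag-based aggregation, assuming hashtag-style tags such as `#project-x` embedded in each note (my assumed convention):

```python
# Build an index from tag to notes, so conversations about a particular
# project or idea can be pulled together later.
import re
from collections import defaultdict

def notes_by_tag(notes: list[str]) -> dict[str, list[str]]:
    index: dict[str, list[str]] = defaultdict(list)
    for note in notes:
        for tag in re.findall(r"#[\w-]+", note):
            index[tag].append(note)
    return dict(index)
```

Because a note can carry several tags, the same note can surface under every project or idea it touches.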