3D dot voting
I love dot voting. Although telling people to stick dots on their favourite concepts may sound more like playschool than work, it’s a great way to reach group consensus at the end of a workshop. I use it all the time.
But sometimes dot voting just doesn’t work.
Say you’re dealing with senior business people used to making complex decisions after considering lots of data. The simplicity of dot voting feels like a fraud to them: not empirical, not grounded in their world of numerical analysis.
Two modifications can make it work. First, allow them to score ideas against multiple criteria. Second, give them immediate feedback on the scores to help inform the final dot vote. And all on a workshop timescale.
Is it possible? I think so. Welcome to dot voting in three dimensions.
Scoring is easy enough…
The fundamental principle is to score each idea against multiple criteria.
My example had seven criteria – cost, effort, legal issues, user needs, differentiation, business goals and personal liking – each of which could be scored None / Low / Medium / High.
To capture the scores, make a simple, colourful and appealing scoresheet. After each idea is discussed, hand out the sheets, give people a minute and then collect them. It sounds fussy, but it works fine.
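If you want to tally the paper sheets digitally afterwards, a rough sketch of one completed scoresheet might look like this in Python. The criteria and the None / Low / Medium / High scale come straight from the sheet; the 0–3 numeric mapping is just an assumption so the scores can be averaged later.

```python
# One completed scoresheet, assuming a 0-3 mapping for
# None/Low/Medium/High (the mapping is a choice, not part of the sheet).

CRITERIA = [
    "cost", "effort", "legal issues",       # will feed feasibility
    "user needs", "differentiation",        # will feed customer impact
    "business goals", "personal liking",    # will feed business appetite
]

SCALE = {"None": 0, "Low": 1, "Medium": 2, "High": 3}

def score_sheet(idea, person, ratings):
    """Convert one person's labelled ratings for one idea into numbers."""
    return {
        "idea": idea,
        "person": person,
        **{c: SCALE[ratings[c]] for c in CRITERIA},
    }

# Example: one participant's sheet for one idea.
print(score_sheet("Idea 1", "P01", {
    "cost": "Medium", "effort": "Low", "legal issues": "None",
    "user needs": "High", "differentiation": "Medium",
    "business goals": "High", "personal liking": "High",
}))
```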
…but you need the magic spreadsheet
12 people scoring 10 ideas against 7 criteria is 840 data points! It’s hard to visualise the data quickly enough to use in the workshop.
This is where you need the magic spreadsheet:
- the first worksheet contains a matrix for entering all the scores for each of the ideas – the goal here is super easy data entry.
- the second worksheet averages scores into three groups – feasibility (cost, effort, legal issues), customer impact (user needs, differentiation) and business appetite (business goals, personal liking).
- the final worksheet is a chart showing each idea in three dimensions – feasibility (x-axis), customer impact (y-axis) and business appetite (bubble size).
You need two people, one to facilitate and the other to enter scores in the spreadsheet as you go. Once the data’s in, adjust the axes to fit the data range and you’re left with a striking visualisation of how ideas map to criteria.
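If you’d rather script it than build the spreadsheet, a rough Python equivalent of the three worksheets (using pandas and matplotlib) might look like the sketch below. The sample scores, the shortened column names and the 0–3 scale are placeholders for illustration, not the real workbook.

```python
# A rough stand-in for the "magic spreadsheet", assuming scores are
# already numeric (None=0 ... High=3), one row per person per idea.
import pandas as pd
import matplotlib.pyplot as plt

# Worksheet 1: raw score entry (tiny made-up sample).
raw = pd.DataFrame([
    # idea,    person, cost, effort, legal, needs, diff, goals, liking
    ("Idea 1", "P01",  2,    1,      0,     3,     2,    3,     3),
    ("Idea 1", "P02",  1,    2,      1,     3,     3,    2,     3),
    ("Idea 2", "P01",  3,    3,      2,     1,     1,    1,     0),
    ("Idea 2", "P02",  2,    3,      3,     2,     1,    1,     1),
], columns=["idea", "person", "cost", "effort", "legal",
            "user_needs", "differentiation", "goals", "liking"])

# Worksheet 2: average each idea's scores into the three dimensions.
groups = {
    "feasibility":       ["cost", "effort", "legal"],
    "customer_impact":   ["user_needs", "differentiation"],
    "business_appetite": ["goals", "liking"],
}
per_idea = raw.groupby("idea").mean(numeric_only=True)
dims = pd.DataFrame({name: per_idea[cols].mean(axis=1)
                     for name, cols in groups.items()})

# Worksheet 3: bubble chart - feasibility (x), customer impact (y),
# business appetite (bubble size).
fig, ax = plt.subplots()
ax.scatter(dims["feasibility"], dims["customer_impact"],
           s=dims["business_appetite"] * 300, alpha=0.5)
for idea, row in dims.iterrows():
    ax.annotate(idea, (row["feasibility"], row["customer_impact"]))
ax.set_xlabel("Feasibility")
ax.set_ylabel("Customer impact")
ax.set_title("Ideas by feasibility, impact and appetite")
plt.show()
```

Depending on how the criteria are worded you may want to invert some scales (for example, so a high cost score counts against feasibility rather than for it); the point is simply to get the three averaged dimensions onto one chart.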
People love talking about pictures
The visualisation is the killer element. It prompts great discussion.
Are there any obvious winners? Should we focus on the ideas with the most impact, or the most feasible ones? Which ideas were written off altogether? Are there any interesting outliers? Is it what we expected to see? Where are all the ideas that are easy to do but also have a massive impact?
Only now, having (i) discussed the ideas, (ii) scored the ideas and (iii) discussed the aggregated scores, is everyone ready for…the final dot vote.
Fine, but why would I go to this much trouble?
There are many benefits to dot voting in three dimensions:
- generates confidence – it’s not scientific or objective, but it means looking at the problem from multiple angles and leads to a more rounded decision
- fosters understanding – group discussion lets people raise technical worries, so specialist concerns (e.g. IT and legal) get reflected in the group scores
- contains feedback loops – discussions influence scoring, scoring influences prioritisation – which captures some of the magic of the Delphi method
- gives the group ownership – the facilitator doesn’t take part in the voting, so the group is wholly responsible for making the decisions
- harnesses visualisation – the abstract representation of scores focuses the group on characteristics that matter most to the business
- allows segmentation – run it with different groups and compare the results to see differences between, for example, marketing and IT viewpoints
- accommodates introverts – no matter how strident the discussion, each individual scores alone (although extroverts can still influence discussions).
There are pitfalls of course. You need a balanced group so scores aren’t skewed. You need good criteria and sensible dimensions. And you might have to kill your favourite ideas when the group throws them out!
The final analysis
Does this work any better than simple dot voting? Maybe, maybe not.
In my example there was an extremely strong correlation between ‘personal liking’ and the final consensus. Perhaps that’s not surprising, considering that (a) we all know dot voting works and (b) cognitive psychology suggests that many ‘rational’ decisions are really just manifestations of our emotional responses.
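For what it’s worth, checking that kind of relationship takes one line once the results are in a table; the numbers below are placeholders rather than the workshop’s actual data.

```python
# Correlation between average 'personal liking' and dots received,
# using placeholder numbers (not the real workshop results).
import pandas as pd

results = pd.DataFrame({
    "avg_liking": [2.8, 1.2, 2.1, 0.6, 2.5],  # mean 0-3 liking score per idea
    "dot_votes":  [9,   2,   5,   1,   7],    # dots each idea received
}, index=["Idea 1", "Idea 2", "Idea 3", "Idea 4", "Idea 5"])

print(results["avg_liking"].corr(results["dot_votes"]))  # Pearson correlation
```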
But it’s a good tool for when people don’t trust a dot vote. And, if you like that kind of thing, the data visualisation reveal is pretty fun too…
Let me know what you think about this on @myddelton. Feel free to adapt my scoresheet or spreadsheet to use your preferred criteria, dimensions and labelling. These ideas owe a debt to @leisa and @petegale.