The world is becoming ever more precise – or at least our desire to measure it, quantify it, and model it with ever more precision is increasing. But it’s time to step back and ask just what these models are intended to accomplish.
Our work on ‘attention approximation’ is all about understanding where people are placing their attention – while recognising that there is a level of precision above which the model we are trying to create becomes useless, unfit for purpose.
I’m pleased that this idea of inexactness pre-dates our work, as this month’s Communications of the ACM conveys in the article ‘Inexact Design: Beyond Fault-Tolerance’, which pursues Krishna Palem’s prescription for building faster computers: “If you are willing to live with inexact answers, you can compute more efficiently.” While his work is about the efficiency of the hardware:
‘Palem and his colleagues tried out the idea, which they also call “probabilistic computing,” first in CMOS. In 2004, they showed in simulations that “probabilistically correct CMOS gates” (or switches), since dubbed PCMOS gates, could be made 300% more energy efficient by reducing the probability of accuracy 1.3%, from .988 to .975.’
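The trade-off Palem describes can be sketched (very loosely, in software rather than in CMOS) as a computation where each step is only probabilistically correct. The function and parameter names below (`probabilistic_sum`, `p_correct`) are my own illustrative inventions, not anything from the article, and the bit-flip is a crude stand-in for a faulty gate – a sketch of the idea, not an implementation of PCMOS:

```python
import random

def probabilistic_sum(values, p_correct=0.975, seed=0):
    """Sum a sequence, but each partial addition is 'correct' only with
    probability p_correct; otherwise the low-order bit of the running
    total is flipped (a crude software stand-in for a faulty gate)."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    total = 0
    for v in values:
        total += v
        if rng.random() > p_correct:
            total ^= 1  # inject a single-bit error
    return total

exact = sum(range(1000))
approx = probabilistic_sum(range(1000))
# With errors confined to the least-significant bit, the inexact
# answer stays very close to the exact one in relative terms.
rel_err = abs(approx - exact) / exact
```

The point of the toy example is only that the damage from occasional inexactness can be far smaller than the headline error probability suggests – which is exactly why trading a 1.3% drop in accuracy for a large efficiency gain can be a good bargain.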
We think there are similarities in concept, if not in domain, to our own work, and that the pursuit of ever-greater precision can have a negative knock-on effect in which unintended consequences proliferate.
It makes me wonder if we all need to take a step back and defocus to refocus.