
"I don't know"

I know that I don’t know (the Socratic paradox, though probably never actually said by Socrates)


Does not knowing make us smarter?


In The Black Swan, Nassim Taleb dedicates a chapter to the concept of Epistemocracy. This is a dreamland populated by the epistemocrat - someone who holds their own knowledge to be suspect, who reflects on and agonises over what they know and the possibility that it is not sufficiently complete.


This is not a fool.


While the fool generally has the audacity to pretend to know, the epistemocrat is a brave soul who can say “I don’t know”.



When asked to predict an event, saying "I don't know" can be true evidence of intelligence and introspection. It is the answer of someone who accepts the existence of Black Swan (ie unpredictable) events and the impossibility of accurate predictions.


There is an element of social taboo in saying “I don’t know” and admitting uncertainty or a lack of knowledge. Yet doing so makes us uniquely human.


In his book When, about the science of timing, Dan Pink discusses the claim/myth that breakfast is a crucial meal of the day and that those who skip it will have health issues (decreased metabolism, low energy etc). He presents varied research on the topic from both sides of the argument, until he comes to a leading British nutritionist who simply said “we don’t know”. For an expert and a scientist, it must take some courage to admit this. But it is often the truth. No matter how much data or evidence we have, sometimes we cannot reach a concrete conclusion; anything more is just speculation.


Thus saying “I don’t know” can actually make us look much smarter than assuming something or hoping we can bluff our way out. The same is recommended in job interviews: if you don’t know something, admit it, but show a willingness to find out.


This has interesting relevance not only in the context of us as humans, but also in that of machines and algorithms.



When AI doesn't know


After reading The Black Swan about three years ago, I related it to the development of Artificial Intelligence (specifically machine learning) systems. Prediction algorithms are trained to recognise things. Computer vision algorithms are designed to recognise objects in images, for tasks ranging from the fun to the life-critical.


In Natural Language Processing (NLP), algorithms classify and categorise text. Within the NLP domain, it is surprisingly difficult to create an algorithm that recognises a piece of text as Unknown (ie not belonging to a known category) rather than generalising it as belonging somewhere. Language is much more difficult to analyse than images: there is linguistic ambiguity, there are different forms of expression, and by default the algorithm will try to approximate a prediction (depending on a set probability threshold) to a specific category. It is designed to automate and to help.


Basically, the default is ‘something’ and not blank. In the case of image recognition (especially if we consider automated vehicles and medical sensors), the ability of AI to say “I don’t know” and hand over to the human expert could be a matter of life and death. Therefore algorithms need to be made more intelligent by ‘admitting’ that they don’t know and then reverting to the human for guidance and intervention.
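
To make this concrete, here is a minimal sketch (my illustration, not something from the post or from any particular library) of how a classifier can be given an “I don’t know” option: it only commits to a category when its top predicted probability clears a threshold, and otherwise defers to a human. The labels, threshold value and function name are assumptions for the example.

```python
import numpy as np

# Hypothetical label set for a text classifier; "I don't know" is handled
# outside the model rather than being a category of its own.
LABELS = ["invoice", "complaint", "enquiry"]
CONFIDENCE_THRESHOLD = 0.70  # assumed cut-off, tuned per application

def classify_with_abstention(probabilities: np.ndarray) -> str:
    """Return a label, or "I don't know" when the model is not confident.

    `probabilities` is the model's softmax output, aligned with LABELS.
    """
    best = int(np.argmax(probabilities))
    if probabilities[best] < CONFIDENCE_THRESHOLD:
        # Defer to a human reviewer instead of forcing a category.
        return "I don't know"
    return LABELS[best]

# A flat, uncertain distribution is deferred; a confident one is labelled.
print(classify_with_abstention(np.array([0.40, 0.35, 0.25])))  # I don't know
print(classify_with_abstention(np.array([0.90, 0.06, 0.04])))  # invoice
```

In practice the fixed threshold would be tuned (or replaced by better-calibrated uncertainty estimates), but the principle is the same: the system is allowed to abstain and hand the case back to a person.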



The Turing test


This has an intriguing connection to the famous Turing test, whereby an AI system’s ultimate success is measured by whether it can fool a human conversant into believing that it too is human. In this case, saying “I don’t know” too often (depending on the question, of course) can easily be perceived as a machine failing to understand what we are saying, when in fact it could be a very honest person!


“Answering “I don’t know” is reasonably safe for the computer, and might even make it seem more human—we would expect a child to answer “I don’t know” to some of the questions too, such as the request for the square root of two. However, if a computer gives this answer too often, or for a very simple question, then again it would reveal its identity.”



Our own reactions


I wonder: what is our internal driver when faced with a question, event or situation where we might express doubt in our knowledge?


It depends on our character, our state of mind. Which of the scenarios below do you identify with?


  • you are tired and can't be bothered to answer a child's umpteenth question, so you decide to draw the line with "I don't know", hoping for some peace.

  • you suffer from low confidence or imposter syndrome, and might use "I don't know", biting your tongue, as a defence mechanism to protect yourself from the eventuality that you might be wrong (though you often know the answer). It is easier to admit that than to suffer the 'shame' of being wrong (why we shame people who are wrong is also a worthwhile reflection).

  • you have the enviable ability to deduce an answer on the spot, by instantaneously cross-referencing all available stored knowledge, a little bit like a machine learning algorithm, and providing an approximate response. You are saved from admitting you don't know, but is an approximate response good enough?

  • you are completely comfortable to say "Hey, I don't know, but let me check".

  • you will only admit that over your dead body. You will come up with any convoluted explanation, hoping that your audience is too confused to ask more or that you might hit a nugget of gold among the endless preaching.


Next time someone tells you "I don't know", how will you react? Will you feel empathy that they are perhaps not confident enough, and show them support to express themselves? Will you help them find the answer together, because we are, after all, human?


Conversely, next time someone wraps you up in an explanation that doesn't make sense, make sure you call BS and ask them more questions!


And for a brain shake up, Ozzy also doesn't know. He must be human.


