Written with Josh Doidge.
What is confirmation bias?
Remember the last time you found yourself in the middle of a heated discussion with a friend, saying something like, “Paddington Bear is the best movie of all time!” Just when you were about to lose the argument, you managed to pull up a nondescript article from Google that vaguely supported your point. Sure, you may have silenced your friend into a confused stupor, but were you really right? In our research and studies, we have noticed that people tend to think they are right all the time. We find this a disconcerting trend, and one that deserves your attention.
Humans, by nature, have inherent biases. Heuristics are rules of thumb or mental shortcuts; biases are the systematic errors those shortcuts introduce into our decisions, our judgments, and even how we remember our past and our emotions. We use these shortcuts to simplify an infinitely complex world, and in the process we fall prey to them, because our limited cognitive resources get used up very quickly. While there is nothing illogical about using heuristics, the biases they produce can close us off to what’s really happening in the world.
One bias that is particularly important in clinical psychology is the confirmation bias. Confirmation bias is the tendency to seek out information that confirms the theories or beliefs you already hold. It is particularly insidious because, under its influence, we rarely seek information that could disprove our beliefs.
Psychology on confirmation bias
Piaget, one of the most influential developmental psychologists of all time, proposed a theory of assimilation and accommodation to explain how we represent the world. Assimilation involves incorporating new information or experiences into an existing framework of ideas. Accommodation is the opposite: the process by which old ideas or schemas are altered, or new schemas are developed, in response to new information. To function in the world, we need both assimilation and accommodation.
A person who believes they get everything right is a person who no longer engages in any serious accommodation; they seek to hear the same story over and over. We tend to rely heavily on confirmation bias while forming our narratives. Have you ever come up with a story that explains all the tragedies, real and imagined, that befall you in your life? Even when friends and loved ones challenge part of the narrative or try to poke holes in it, do you insist that it is immutable? That is confirmation bias at work, and assimilation along with it.
Confirmation bias makes it hard for people to learn new things or gain insight into themselves or others, because they seek only information that matches what they already know. They might also choose to spend time with people who share their political views or ideology, to confirm what they already believe. We see this constantly in a fractured, polarized political sphere: Left vs. Right. Confirmation bias leads us to minimize the issues and imperfections in our own theories and beliefs, and that forecloses real conversation or engagement.
Can AI help us?
Can Artificial Intelligence help us overcome confirmation bias? With advances in Artificial Intelligence, you might wager that an AI system more intelligent than humans could solve the confirmation bias problem. That is, in some hypothetical future, we might trust an unbiased agent to arbitrate our minds’ conundrums. Such an agent would need two properties: a higher form of intelligence than humans, and impartiality. But can an AI agent possibly be unbiased?

Before we can answer that question, we must briefly review how AI systems currently work. AI agents (also known as models) either learn from examples labelled by human beings, or learn structure from data on their own. The former is known as supervised learning, and the latter is called unsupervised learning. To differentiate the two, consider the task of classifying whether a tumour is benign or malignant. In a supervised learning setting, a human expert (a radiologist, in this case) labels a series of tumour images as malignant or benign, and the model learns from those labels. In an unsupervised learning setting, the agent looks at a series of unlabelled images and learns to group similar ones together, without ever being told which group is which.
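To make the distinction concrete, here is a minimal sketch in Python using scikit-learn. The features, numbers, and data are entirely our own invention for illustration, not a real diagnostic pipeline: the supervised classifier is given the expert’s labels, while the unsupervised clusterer only ever sees the raw features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Invented stand-in for tumour images: two features per case,
# e.g. a size score and a texture score. Purely illustrative.
rng = np.random.default_rng(seed=0)
benign = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
malignant = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
X = np.vstack([benign, malignant])

# Supervised: a radiologist supplies the labels (0 = benign, 1 = malignant)
# and the model learns to reproduce that expert judgment.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print("supervised:", clf.predict([[2.9, 3.1]]))  # predicts 1 (malignant)

# Unsupervised: the model sees only the features, never the labels.
# It finds two clusters, but nothing tells it which one is "malignant".
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster:", km.predict([[2.9, 3.1]]))
```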
Generally, unsupervised learning is preferred over supervised learning because it does not require a manual labelling process. Manual labelling can be very expensive, since it requires domain expertise, and it can also introduce human biases, so unsupervised learning is often preferred for tasks that are sensitive to bias. How do we know whether an AI agent is performing well on a given task? A performance metric (such as accuracy or speed) always accompanies the task, and changes in the metric serve as a feedback mechanism for the agent to improve. The metric, however, can itself become a problem. Goodhart’s law states that “when a measure becomes a target, it ceases to be a good measure.”
In our daily lives we see examples of this law: salespeople offering steep discounts just to meet their monthly targets, or students memorizing text without understanding it just to score well on a test. AI models are no exception. In supervised learning, if 99% of the tumours in the labelled dataset are benign, the model might learn to predict that all tumours are benign. The model is foolish, yet it is 99% accurate! Since we care far more about correctly identifying malignant tumours, this “all-benign” model serves no practical purpose. Similarly, in the unsupervised setting, a model might selectively pick data points that help it achieve a higher performance metric. If it is known that, in the real world, 99% of all tumours are benign, the model might discard the malignant instances in its dataset and selectively pick benign cases so as to score higher on the metric, thereby inducing a confirmation bias of its own. Even in an ideal unsupervised learning setting, the agent can demonstrate confirmation bias.
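Here is a toy sketch of that “all-benign” trap, with invented numbers matching the example above (990 benign cases, 10 malignant):

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Invented, imbalanced dataset: 990 benign (0), 10 malignant (1).
y_true = np.array([0] * 990 + [1] * 10)

# The "foolish" model from the text: call everything benign.
y_pred = np.zeros_like(y_true)

print("accuracy:", accuracy_score(y_true, y_pred))  # 0.99 - looks superb
print("recall:", recall_score(y_true, y_pred))      # 0.0 - misses every malignant case
```

Accuracy, the metric-turned-target, rewards the model for confirming the majority story; recall, which asks how many malignant cases were actually caught, tells the disconfirming truth.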
Confirmation bias is harmful for so many of us because it prevents us from updating our model of the world. A map of the world is almost never a perfectly accurate representation of the world’s terrain. A good cartographer understands that the map is not the same as the actual forests, oceans, and mountains, and seeks to improve the model by facing what they got wrong. A lazy cartographer, seeking to confirm their bias, simply finds reasons why their model is right. Don’t get us wrong! We would all love to get our models right all the time. But chances are, if you have a model that predicts something with 100% accuracy, it is not accounting for certain kinds of data or information, and it is likely not looking at information that would disconfirm it. This is why it is so important to get the model right most of the time and be okay with being wrong some of the time.
So, what can you do?
Currently, the Artificial Intelligence community addresses the bias problem by evaluating a model on several metrics at once. Yet biases can still find their way into models through various sources, such as the origin of the dataset or the choice of metrics (humans still get to decide the metrics!). At the same time, as AI models become more advanced, they are also becoming increasingly uninterpretable, and if we can’t interpret a model, we may never know whether it is truly unbiased. What we can do, however, is be aware of our own confirmation biases and learn to counter them. As the age-old aphorism goes, “know thyself!” One of the most effective problem-solving strategies, and one of the best ways to improve an argument or see the world more clearly, is to seek out information about why you might be wrong: look for evidence that disconfirms your hypothesis, not just evidence that confirms it. This simple step can be life-altering.
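In code, “evaluating on several metrics” is the machine-learning version of seeking disconfirming evidence. A hedged sketch, reusing the invented all-benign example from above: the one flattering metric confirms the model’s story, while the others expose it.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Same invented setup as before: 990 benign (0), 10 malignant (1),
# and a model that predicts benign for every case.
y_true = np.array([0] * 990 + [1] * 10)
y_pred = np.zeros_like(y_true)

# One flattering metric "confirms" the model...
print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.99
# ...while the disconfirming metrics expose it.
# zero_division=0 silences the warning when no positives are predicted.
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred, zero_division=0))     # 0.0
print("f1       :", f1_score(y_true, y_pred, zero_division=0))         # 0.0
```

Reporting all four numbers side by side forces the evaluation to include the metrics the model would “prefer” we ignore, which is exactly the habit we are recommending for people, too.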
If you’re a person willing to disconfirm your beliefs and your theories, and to accept that you are not right about everything (the wisdom traditions call this humility), your models will adapt in manageable ways, allowing you to make small but important changes that can bring about a big difference in your life!