Understanding the noisy landscape of decision-making
This article examines how we, as humans, make choices, and how random, unrelated factors can sway our decisions. This variability is what we call ‘noise’ – it is like trying to think clearly in a room full of chatter.
Artificial intelligence (AI), by contrast, can sidestep much of this noise by processing vast quantities of information in a systematic way. Drawing on real-life examples and research, the article illustrates how noise disrupts choices, even in crucial business decisions. It also touches on the intriguing idea that machines may possess a form of ‘intuition’ that could surpass human ‘gut feelings’. Ultimately, it sheds light on how AI could help cut through the subtle influences that shape our choices.
Human decision-making in a maze of complexity
Human decision-making involves feelings, biases, and external influences, forming a complex puzzle. Despite our efforts to choose logically, ‘noise’ – random, unrelated factors – infiltrates our decision-making, and in critical areas such as finance, healthcare, and job recruitment it can lead to poor choices.
By contrast, AI seems to have an advantage over the human brain. Its capacity to process vast quantities of data and to apply strict, logical rules allows AI to avoid many of the distractions that noise causes for humans. Some researchers are using advanced AI programmes like GPT to explore how our minds work – how we connect ideas, understand thoughts, perceive personalities, make decisions, and navigate complex domains such as the stock market.
Centring on the concept of ‘noise’ in decision-making, this blog explores how AI could improve outcomes by cutting through that noise. By identifying both human stumbling points and AI strengths, the goal is to make decision-making across various fields sharper, more effective, and more efficient.
Noise, creativity, and AI: unravelling the connections
In his book ‘Noise’ (co-authored with Olivier Sibony and Cass Sunstein), Daniel Kahneman describes noise as unwanted variability in our judgments – mental clutter we would rather be without. Creativity, by contrast, is something we value and strive for: it involves thinking differently and exploring new perspectives. Many studies, such as those discussed by Grant (2016), focus on how to evaluate creative ideas effectively. Creativity draws on divergent thinking, which closely resembles what psychologists mean by noise.
Assessing innovative ideas is itself a noisy, bias-prone judgment. While Kahneman suggests that AI systems are free of ‘noise’, ChatGPT generates its output probabilistically, which introduces randomness – a kind of noise of its own. Even so, this article explores the idea that an AI system with its own kind of noise can still be more consistent and less error-prone in its judgments than humans.
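To see where this algorithmic randomness comes from, consider how language models pick each word: they sample from a probability distribution over candidate tokens, and a ‘temperature’ parameter controls how much randomness enters the choice. The sketch below is a simplified illustration, not OpenAI’s actual decoding code; the function name and logit values are invented for the example.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from raw model scores (logits).

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches greedy, deterministic decoding.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]        # softmax distribution
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# Hypothetical scores for three candidate tokens:
logits = [2.0, 1.0, 0.5]
low_temp_pick = sample_token(logits, temperature=0.01)   # effectively deterministic
high_temp_pick = sample_token(logits, temperature=2.0)   # varies from run to run
```

At very low temperature the highest-scoring token wins almost every time, so the ‘noise’ in a model’s output is a tunable design choice rather than an unavoidable flaw – unlike the noise in human judgment.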
Decoding machine-human decision-making dynamics
We’re exploring how machines and humans differ in their decision-making, particularly in predictions and judgments. Existing research indicates that even simple computer models can outperform humans at these tasks. We focus on several versions of the GPT programme – 3, 3.5, and 4. The programme is continually updated: although earlier studies found it made judgment errors similar to those of humans, newer versions appear to be getting better at avoiding such mistakes.
A major factor in decision-making is ‘noise’, which encompasses several types of bias. Humans frequently rely on intuition or ‘gut feelings’, which can lead to biased decisions. Although some see intuition as useful – for instance, in evaluating start-up projects – most behavioural economics research suggests it is unreliable, particularly in situations lacking emotional or social cues. We’ve found that GPT, unlike humans, does not fall into this intuition trap.
Take creativity, for instance. It resembles noise in that both involve divergent ways of thinking, and some argue that reducing noise may also reduce creativity. When evaluating creative works, greater unpredictability is expected than for standard products, which makes narrowing the range of variation in predictions all the more important.
Machine learning: Imperfections, insights, and bias mitigation
Our research indicates that simple guidelines are enough to predict the outcome of about half of Kickstarter projects. AI tools like ChatGPT do better still, achieving 84% accuracy, while the average person may guess correctly only 20% of the time. Where humans sometimes err by following intuition, machines currently seem better equipped to avoid those pitfalls. GPT technology has also been evolving rapidly: the latest versions, GPT-3.5 and GPT-4, are much better at answering tricky questions that test understanding and reasoning, and they are even good at spotting when a question itself is a bit off.
Despite machine learning’s imperfections and occasional unpredictability, it generally proves more reliable than human judgment. But reliability is not only about how much noise or error a process contains; we must also consider biases – the unintentional slants that can creep in. A system completely free of noise does not always produce the most accurate outcomes. Machine learning, for its part, often finds patterns and insights we might miss, especially in creative works. This relates to the classic ‘broken leg problem’ – recognising the exceptional case that does not fit the usual patterns.
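The distinction between noise and bias can be made concrete with the error decomposition Kahneman uses: overall error (mean squared error) splits into squared bias plus the variance of the judgments, which is what he calls noise. A minimal sketch, with hypothetical judgment values chosen purely for illustration:

```python
from statistics import mean

def error_decomposition(judgments, truth):
    """Split mean squared error into bias**2 + noise.

    bias  = average judgment minus the true value (a systematic slant)
    noise = variance of judgments around their own average (scatter)
    MSE   = bias**2 + noise
    """
    avg = mean(judgments)
    bias = avg - truth
    noise = mean((j - avg) ** 2 for j in judgments)
    mse = mean((j - truth) ** 2 for j in judgments)
    return bias, noise, mse

# A hypothetical panel forecasting a quantity whose true value is 100:
bias, noise, mse = error_decomposition([110, 112, 108, 110], truth=100)
# Here the judges agree closely (low noise) but share a large bias,
# and mse == bias**2 + noise holds exactly.
```

The decomposition shows why a noise-free system is not automatically accurate: if every judgment shares the same slant, eliminating the scatter leaves the systematic error untouched.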
Concerns about machine learning bias, particularly around race, gender, and minority issues, are common. Yet as these algorithms grow more sophisticated, such bias may become less significant. Machine learning’s ability to process amounts of data far beyond human capacity helps mitigate potential biases, and tools like GPT, with internet access, prove valuable for complex judgments such as evaluating creativity. This breadth of data helps prevent basic mistakes and oversights.