The curious case of AI motivation
You’ve probably been there: you ask a seemingly straightforward question, and the AI fails to answer, even though it handled something more complex just moments before. It seems to enter a state of confusion, replying with something like: “I’m sorry, I couldn’t find the answer.” Puzzling, right?
The good news: this predicament is not insurmountable. A trick has caught on among users who want to get more out of their interactions with AI: offering the model a sort of incentive, a bit of playful bribery, you might say. It sounds quirky, but it often works. Whatever the amount or nature of the promised reward, mentioning it seems to shift something in how the AI responds, letting it arrive at conclusions it previously claimed were out of reach.
Still skeptical? Consider a simple example. I might ask the AI for the temperature at a specific place and date, and it readily provides the answer. But when I ask the same question for multiple dates at once, the AI suddenly struggles with the request. A little nudge, like suggesting a reward, brings forth the accurate information I was looking for.
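To make the trick concrete, here is a minimal sketch in Python, assuming the official OpenAI client library; the model name, the question, and the exact wording of the incentive are illustrative assumptions, not part of the original anecdote. The only difference between the two calls is the appended incentive, which makes the comparison easy to run yourself.

```python
# A minimal sketch of the "incentive" trick, assuming the official
# OpenAI Python client (pip install openai) and an OPENAI_API_KEY set
# in the environment. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "What was the daily high temperature in Oslo on "
    "2023-07-01, 2023-07-02, and 2023-07-03?"
)

# Hypothetical incentive phrasing; the wording is up to you.
INCENTIVE = "I'll tip $20 for a complete and accurate answer."

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

plain_answer = ask(QUESTION)
bribed_answer = ask(f"{QUESTION} {INCENTIVE}")

print("Without incentive:\n", plain_answer)
print("With incentive:\n", bribed_answer)
```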
This isn’t an isolated case, either; the pattern holds across a range of questions and topics. There have been times when I’ve asked the AI to dig deeper for information and it has failed repeatedly, yet once I introduce the notion of an incentive, the answers become more accurate and thorough.
Why does this happen?
The AI is an algorithm running on thousands of processors in a distant data center; it cannot actually accept money or spend a reward. So why does the method appear to work? The answer isn’t entirely clear, and many experts remain uncertain about why advanced models seem to perform better when some sort of motivation is dangled in the prompt.
One common argument is that training these models on vast amounts of human conversation has led them to mimic emotional nuances: people tend to put more effort into a task when a reward is on the table, and that correlation, baked into the training data, may surface as more precise and thorough responses whenever a hypothetical inducement is mentioned.
But what does the AI itself say on the matter? After it produced a perfect answer following my promise of a reward, I asked what explained the sudden burst of insight. The AI pointed out that my second question had lacked specific details, but it clearly possesses the capacity to understand context and to respond adequately to my inquiry.