The AI default and why we need to be careful

31 March 2025 by
5 minute read
AI warning – reflection of code on a person's face

The other day at a social event (an adult scavenger hunt, if you must know), I watched as people defaulted straight to AI for answers. Rather than attempting to solve puzzles on their own, some key players simply echoed 'just ask AI'. For fun, I ran a poll the next day and found that a third of the group of 22 (including myself) had personally used AI at some point during the hunt. That's huge.

Unsurprisingly, all of these AI users would be considered tech-savvy, early adopters. You know the type – the crypto enthusiasts with an army of gadgets that need charging daily. But what about everyone else?

While some people might use AI sporadically during their work or personal life, others may not use it at all. AI adoption exists on a spectrum, and where someone lands is a deeply personal choice. So, as AI embracers watch closely to see how the tech evolves, others might experiment or simply observe through gritted teeth as their everyday lives rapidly change around them. Inevitably, that’s a scary place to be.

Subtle, but rapid AI integrations

This divide becomes more worrying given how covertly AI is embedding itself into our daily lives. Just last month, Microsoft raised Microsoft 365 prices to bundle in AI tools like Copilot. Many users may never even use these tools, yet they're still footing the bill for their development. And we'll probably see more of this as tech companies look to recoup the massive investments they've ploughed into AI.

This is just one example of how AI is creeping into everyday tasks at home and work in ways we barely notice. We see AI integrations cropping up in other places too, such as the entertainment sphere. Take Spotify, for instance, where you can now use AI to generate a playlist based on your chosen prompt. While this is a great feature if you genuinely have no idea what to listen to, it does take the fun out of browsing through your favourite artists. Listening to music curated by AI isn't quite the same as exploring for yourself, is it?

The real problem with AI over-reliance

Using AI isn't inherently an issue – in fact, sometimes it's encouraged. It's time-efficient, easy to use and accessible – all positives when we're just so busy.

But take my son, for example. He's learning programming, so he sometimes turns to AI for help. Or rather, to fix his code for him. That's dangerous if you ask me, because if he doesn't build a base knowledge of coding first, how can he develop the problem-solving skills crucial for bug fixing?

So AI is potentially replacing tasks that people spend years mastering. And by relying on AI too much, we risk losing the ability to think critically and creatively – or worse, never developing these skills at all. And my son isn't the only one. Along with my scavenger-hunting friends, lots of people now habitually go straight to AI rather than putting pen to paper themselves, often without even checking AI outputs before claiming them as their own. This is becoming the digital equivalent of copying someone's homework, but with much higher stakes for our collective futures.

You’ve probably already noticed AI’s fingerprints everywhere recently – like the cringe-inducing overuse of buzzwords like elevate (a personal pet peeve) in ads or LinkedIn posts littered with far too many emojis. These telltale signs of AI overuse signal a worrying lack of human thinking.

And therein lies the issue.

Keeping the human in the loop

While AI is speedy at processing and condensing lots of data, it doesn’t necessarily understand context, intent, or nuance the same way we (humans) do. It can’t grasp the subtle tension, the unsaid hesitations, or the creative leaps that come from being close to real human experiences. And this is the part that AI will struggle to replace.

Remember, AI isn’t infallible either. Tech goes down from time to time, whether due to extreme cases like cyber attacks or just routine upgrades and releases. Right now, for example, Claude (an AI assistant like ChatGPT) is down, meaning no access whatsoever. A good reminder that if we rely too much on AI, we could be caught off guard when it’s unexpectedly unavailable. Good thing I still know how to work without AI.

So while AI is convenient and helpful, we could leave ourselves exposed to some pretty big flaws of the tech if we’re not careful. Notable limitations include biases and hallucinations (AI confidently responding in a false or misleading way). While many of us know they exist and sense-check responses, I worry that others might carelessly use AI outputs (or AI slop as it’s become known) without question.

That's why human oversight matters so much – not only for day-to-day AI use, but especially for insight. Just because we can churn through mounds of data quickly doesn't mean that's all we should do. We should also be flexing our insight muscles – challenging data, questioning the findings and generally working to understand people in the most natural, intuitive ways we can.

Finding the balance for AI in research

In reality, while using AI to win a scavenger hunt is completely harmless, it represents a bigger trend. If people start to choose AI over thinking for themselves, we might lose the thing that makes us most irreplaceable – our humanness and our ability to understand human behaviour best. At its core, insight is about understanding people's needs, asking the right questions, and interpreting the nuances that AI simply can't grasp. So why on earth would we give away the best part of our job?

This matters to us at Whycatcher. We've built AI features into our platform that support, rather than sidestep, human oversight: real-time probing (keeping behaviours natural) and analysis (making large volumes of media content more accessible for clients). Our aim isn't to replace human interpretation, but to empower users to make informed choices rather than force them into using AI blindly.

And there might be times you don’t want to use AI in your research – for example, with particularly sensitive business data (even though Whycatcher is completely private and secure). We’ll always respect your view on how to use it, and can turn it off entirely if you’d rather. But if you do want to use AI, we’ll guide you with tips and advice so that you can use it with confidence and get deep, trustworthy insights. We know how to turn AI to your advantage, so you get the most out of it.

Whatever you decide, we’re here for you. Get in touch to find out more about Whycatcher and our AI tools – we’d love to hear from you.

Written by

Maria is our Digital Lead.

She is curiously passionate about human behaviour, tech and data. So Whycatcher is truly home for her.

Maria can solve a Rubik's Cube in 1 minute 12 seconds and has 63 houseplants (so probably needs to get out more).