AI has a ‘That’s a Feature, not a Product’ Problem

2 min read

May 22, 2024

I remember the first time I got access to GPT. After a decade-long habit of setting alarms on Siri, Google Assistant, and Alexa, my first interactions with GPT were elementary. Realizing my folly instantly, I paused and recalibrated my approach, steadily increasing the complexity of my asks. Within a few minutes I realized this was a step change in computing. In the last two years, however, that delight has worn off. Is this because the human mind is fickle, or is something else happening here?

I use everything built with AI daily - ChatGPT, Copilot, Claude, Gemini, Perplexity, and more. And yet, despite my proficiency with these tools, I’ve failed to convince my wife and best friends to use them. My wife works at the forefront of edge computing and IoT; our best friends are a cancer specialist and a peds anesthesiologist. All three couldn’t care less, let alone pay $20/month. Their collective reasoning: “these assistants are very tedious to use.”

My wife likes more control over the tools she uses: “when I’m looking for content around gardening, I want a tool that enables serendipity. AI assistants remove the joy of discovery that Google and Reddit provide. Links let you traverse the web freely, unlike a conversational interface, which is linear, shifts the heavy lifting to you, and makes you think.”

That’s the first rule of UX: “Don’t Make Me Think.” It also happens to be the title of one of the OG design books.

Both doctors aren’t keen to upgrade either: “assistants are good at many things, just not great at any one thing.” They also refuse to use them at work: “we can’t trust the responses enough while evaluating cases, so we stick to Epocrates and Nuance.”

My team knows this about me. I believe the day these GPT-powered assistants reach the quality I expect will be the day they can recommend and manage a vacation for me. Not the kind Google demoed at I/O, where Gemini visually assembled a bunch of random content, but one that understands my tastes, knows the kind of food I like and how I prefer to travel, and leans into experiences I would enjoy over picking popular spots from TripAdvisor. Assistants today are really not personal, and that’s a huge problem.

So, to summarize the issues for consumers so far:

  1. Paradigm Inconsistency - the current paradigm, where we have to phrase every request as a conversational question, doesn’t render the best experience. 

  2. Cognitive Load - the current metaphor puts the burden on the user to find the optimal path from prompt to outcome. 

  3. Use Case - the assistants are too general and not truly good at any single compelling thing that can reasonably convince people to pull out their wallets. 

  4. Zero Trust - these assistants don’t inspire trust in critical situations and leave too much room for interpretation. 

  5. Missing Personalization - none of these assistants feels personal to any individual; they simply regurgitate patterns instead of applying contextual learning. 


Maybe I’m extrapolating from a sample size of three, so let’s compare these sentiments with some public data. ChatGPT is currently stagnating according to several reports of declining web traffic: in January 2024 it saw an 11% decline, and the iPhone app has plateaued at around 414 million users. These numbers are third-party estimates, and only OpenAI has a clear picture, but Sam Altman hasn’t been shy about the state of GPT today: “GPT-4 kinda sucks. At best it’s sort of like a good brainstorming partner.” So the question becomes: as a consumer, is $20/month a worthy investment? For most people, the answer is a quick no. On the flip side, each new model is reportedly almost 90% cheaper to build than the last. OpenAI has already made GPT-3.5 free to use without even requiring an account, and this trend might continue to GPT-4 once the next version (not likely to be called 5) comes out. That leaves little to no reason for most consumers to upgrade to the premium models. The bottom line: there are no compelling use cases (yet) for consumers that warrant an upgrade. 

So, what do we do?

We need to get out of the current fixed mindset of slapping AI onto existing products or creating general assistants. Instead, we should focus on building entirely new digital experiences and dedicated agents that can be orchestrated into contextual flows. 

The most fundamental thing to understand is that model innovation has been the focus across the landscape. We have yet to see solution innovation, primarily because the model out of the box is so incredibly capable today. The people who understand that distinction and can leap beyond the model to deliver a truly unique consumer solution will break through.

At the advent of the iPhone, Apple led with the “there’s an app for that” philosophy. We need a new philosophy for this new paradigm. Maybe defining it will lead us to the compelling product experiences that convince consumers to carve out budget for a new product. Until then, AI is simply a great addition to the roster of products we already use rather than the entirely new thing yet to be invented.