
The interface is the product - AI revolution is a UI/UX revolution

Jef Raskin, the designer who originated the Macintosh project, wrote in The Humane Interface that “as far as the customer is concerned, the interface is the product.” He wrote that in 2001. It was true then. It is more consequential now than at any point since.

The difference is the direction of the problem. In 2001, Raskin’s concern was that bad interfaces obscured good software. The functionality existed; users just could not reach it. Today, the problem has inverted. The functionality exists at a scale no human can fully comprehend, and the interface does not just obscure it. The interface determines what users believe is possible at all. When users cannot see the ceiling, the interface becomes the ceiling.

This is not a minor UX observation. It is the central AI product design challenge of the current era. And most teams building AI products are not treating it that way. The work that remains is making AI interfaces personal and contextual - going well beyond the chatbot as the default interaction model.

The window became a wall

For most of computing history, the interface was a window. A menu exposed a list of functions. A form collected specific inputs. A dashboard surfaced data that existed in a database. The relationship between UI and capability was essentially one of translation: here is what the system can do, expressed in terms a human can navigate.

That model held through the web era and through mobile. The screen changed shape. The interaction patterns changed. But the underlying logic stayed the same. Design meant making existing functionality discoverable and usable.

AI breaks this. A large language model does not have a discrete function list. It has a probability distribution over possible outputs, shaped by training data at a scale that no product team can fully inventory. The capability is not a list to be exposed. It is an emergent space to be explored, and the interface determines whether users explore it at all, and in which directions.

Nielsen Norman Group’s AI UX research makes this concrete: users abandon AI tools not primarily because the outputs are wrong, but because they cannot form accurate mental models of what to ask or expect. The problem is not performance. It is perception. The interface failed to communicate what the system could do before the user gave up and left.

This is a different class of design problem than anything the industry has faced before. The interface is no longer a window into functionality. It is the boundary of perceived capability. What users believe the system can do, when, and how reliably is now a product of design, not of the underlying model.

The text box moment

When ChatGPT launched in late 2022, it shipped with a single text input field and a blinking cursor. No categories. No example prompts initially. No structured form to fill out. Just: say something.

Prior AI interfaces, even technically capable ones, had layered on structured inputs, dropdowns, workflow builders. They were trying to help users by constraining the interaction. The constraint became a ceiling. Users who could not fit their need into the provided structure concluded the system could not help them.

The empty text box removed that ceiling from sight. It lowered activation energy below any prior AI interface. The implied message was: whatever you are thinking, try it here. That message was the product, as much as any model output.

But this is also where the chatbot’s limitation becomes visible. The text box was a brilliant entry point. It is not a sufficient long-term interface architecture. It assumes that users always know what to ask. It offers no ambient context. It resets on every session. It treats every user, regardless of their role, their history, their current task, or the time of day, identically.

The text box democratised access to AI capability. It did not personalise it.

The chatbot is not the destination

There is a version of the current AI product landscape where everything becomes a chat interface. One box. Universal input. The promise is simplicity. The reality, at scale, is that simplicity without context is just a different kind of friction.

Consider what a doctor needs from an AI tool at 8am during a ward round versus what they need at 3pm reviewing discharge summaries. The capability set might be identical. The relevant surface of that capability - the questions worth asking, the outputs worth showing, the errors worth flagging - is entirely different. A single text box treats both moments the same.

Or consider an analyst who uses an AI tool every day. Their mental model of the system deepens over time. Their needs become more specific. The interface that was appropriate when they were learning to trust the system is not the interface that serves them when they are trying to work at speed. A static chat interface does not adapt to either dimension.

Anthropic’s own writing on Claude’s character design acknowledges that personality, tone calibration, and conversational defaults are deliberate design choices that shape user trust and return rate. These are not incidental outputs of the model. They are chosen positions, maintained intentionally, because they produce measurable differences in how users engage. That is interface design operating at the level of behaviour - not layout, not navigation, but the texture of the interaction itself.

The teams building products that last are not treating the chat interface as the final form. They are treating it as one surface among many, valuable for exploration, insufficient for expertise.

What the winning products are building instead

Three patterns are emerging in AI products that are retaining users and expanding the depth of engagement over time.

Progressive disclosure of capability. Rather than confronting users with an empty box and hoping they will discover what the system can do, well-designed AI interfaces surface relevant capability at the moment it becomes useful. Not a list of features. Not a tutorial. A contextual suggestion, timed to what the user is doing and what the system has observed about their patterns. The interface teaches the model’s capability through use, not through documentation.
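The pattern above can be sketched in a few lines. This is an illustrative assumption of how such a rule might be structured, not any product's real API: the event names, thresholds, and the `nextHint` helper are all hypothetical.

```typescript
// Illustrative sketch of progressive disclosure: surface at most one
// capability hint, timed to what the user has actually been doing.
// All action names and thresholds here are hypothetical.

type UserEvent = { action: string; count: number };

type CapabilityHint = { trigger: string; minCount: number; suggestion: string };

// Example rules: once a user repeats a manual action often enough,
// suggest the capability that automates or extends it.
const HINTS: CapabilityHint[] = [
  { trigger: "manual-summary", minCount: 3, suggestion: "Try: 'Summarise all open documents at once.'" },
  { trigger: "export-csv", minCount: 5, suggestion: "Try: 'Schedule this export to run daily.'" },
];

// Return one contextual suggestion, or none - never the full feature list.
function nextHint(events: UserEvent[]): string | null {
  for (const hint of HINTS) {
    const match = events.find((e) => e.action === hint.trigger);
    if (match && match.count >= hint.minCount) return hint.suggestion;
  }
  return null;
}
```

The design choice the sketch embodies is the one the pattern names: the system's capability is taught through use, one suggestion at a time, rather than front-loaded as documentation.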

Legible AI action. Figma CPO Yuhki Yamashita, in conversation with Lenny Rachitsky, put this clearly: the hardest problem is not what AI can do, but how to show users what it did and why. When AI takes an action - summarising a document, restructuring data, generating a recommendation - the interface needs to make that action legible. Not just showing the output, but making the reasoning visible enough that users can calibrate their trust appropriately. Opacity produces either over-trust or abandonment. Neither produces a useful working relationship.

Personalisation and contextualisation at the interface layer. This is the one most teams are underinvesting in. The AI does not need to change. The surface that presents it does. Same model, different interface, shaped around the user’s role, their current task, and what the system has learned about how they work. This is exactly the approach behind Pelaris - the same coaching intelligence, presented through a different interface shaped around each athlete’s sport, readiness, and goals.

Personal and contextual, not universal

This is the design shift that has not yet been fully named. Apps and dashboards are not going away. The idea that AI agents will simply replace structured interfaces misunderstands what those interfaces do. They do not just route inputs to outputs. They carry context. They reflect the user’s role. They surface the right subset of capability at the right moment.

What changes is not the existence of the interface. It is the degree to which the interface is static.

A traditional dashboard shows the same layout to every user with the same permissions. An AI-informed interface shows each user the data, actions, and prompts that are most relevant to them, based on their role, their current task, the time of day, what they worked on yesterday, and what the system has learned about how they make decisions. Same underlying product. Entirely different surface.
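A minimal sketch of that idea, using the doctor example from earlier. Everything here is assumed for illustration - the roles, panel names, and the `buildSurface` function are invented, and a real system would draw on far richer context than role, task, and hour.

```typescript
// Illustrative sketch: one underlying capability set, different surfaces
// per context. Roles, tasks, and panel names are hypothetical.

type Context = { role: "clinician" | "analyst"; task: string; hour: number };

// The capability set is constant; the surface is a contextual subset of it.
const ALL_PANELS = ["vitals-summary", "discharge-drafts", "trend-charts", "anomaly-flags"];

function buildSurface(ctx: Context): string[] {
  let picks: string[];
  if (ctx.role === "clinician") {
    // Same capabilities, different slice: morning rounds vs afternoon paperwork.
    picks = ctx.hour < 12 ? ["vitals-summary", "anomaly-flags"] : ["discharge-drafts"];
  } else {
    // An analyst's surface follows the task at hand.
    picks = ctx.task === "review" ? ["anomaly-flags", "trend-charts"] : ["trend-charts"];
  }
  // Whatever the context, the surface is always a subset of the same product.
  return ALL_PANELS.filter((p) => picks.includes(p));
}
```

The point of the sketch is the invariant in the last line: the product never changes, only which slice of it each user sees in each moment.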

This is not a minor iteration on existing design practice. It may be the largest shift in how humans interact with software since the graphical user interface was introduced. The GUI made computing accessible to people who could not write code. The web extended that access across distance. Mobile extended it across time and location. Each of those transitions reshaped entire industries, created new platform monopolies, and made previously dominant interface patterns obsolete.

Contextual, personalised AI interfaces represent the next transition in that sequence. The products that figure out how to present the right capability, to the right user, in the right moment, without requiring that user to know what to ask, will do to current software what the GUI did to the command line.

Where builder attention is going wrong

Most product teams building on AI today are allocating the majority of their engineering attention to two things: prompt engineering and model selection. Which model to call. How to structure the context window. What temperature settings produce acceptable outputs.

These are real problems. They are not the bottleneck.

The bottleneck is that users are interacting with interfaces that do not reflect what they know, what they are doing, or what they are capable of. The model produces a useful output. The interface fails to present it in a way the user can act on. The user disengages. The team concludes the model needs work.

The interaction design loop needs to run at the same cadence as the model evaluation loop. Treating the interface as a hypothesis about what users believe the system can do, and testing it with the same rigour applied to model outputs, is not an optional add-on to AI product development. It is the work.

This holds differently for internal tools versus consumer products, but the principle is identical. Internal tools serve users who can be trained, who have context, who will use the product under some degree of obligation. The feedback loop is faster and more legible. But if those users cannot accurately predict when the AI will help them and when it will not, they will route around it. The model quality is irrelevant if the trust calibration is broken.

Consumer products face the same problem at a higher difficulty setting, with no fallback.

The interface is the product

Raskin was right twenty-five years ago. The interface is the product. Not a layer on top of it. Not a way of presenting it. The product, as experienced.

What has changed is the nature of the challenge. Then, the goal was to make static functionality discoverable. Now, the goal is to make emergent, dynamic capability personally useful, surfacing what each user needs, in the context they are operating in, at the moment it matters.

The teams treating that as a design problem, and resourcing it accordingly, are building the next generation of products. The teams still treating it as an interface layer on top of a model are building a text box. The text box had its moment. The moment has passed.