
The Future of Interfaces

Discussion

July 1, 2025

Interactions Define Interfaces

An interface is a common boundary between two systems that enables interaction, and what fundamentally characterizes an interface is the nature of that interaction. It has two core elements: how we communicate an action to the system, and how the system responds. Over time, both have evolved significantly: on the input side, from typing to speaking to simply pointing at a system; on the output side, from printed results to digital displays, haptic feedback, and voice-based responses.


This time, the evolution of interfaces is taking a different turn. While older modes of interaction like typing and voice commands still coexist and remain relevant, we are now advancing into AR and VR territories. However, the most significant shift is being driven by our familiar companion, AI. The advent of transformers and large language models (LLMs) has unlocked entirely new possibilities, ones we’ve long imagined in science fiction.

Think about how we interact with our phones today. We open the Photos app, select an image, tap “edit,” and manually adjust it. It may seem like a simple task, but it’s not necessarily easy for everyone. Now, imagine having Iron Man’s Jarvis at your side. Instead of navigating menus, you could simply say, “Hey Jarvis, I don’t like the tone of this image, can you make it feel like a crisp November day in New York?”


Notice the difference? The interaction is no longer just between the user and the system. There’s now an added layer that interprets our commands and communicates with the system on our behalf. This layer represents the future of interfaces: an embedded, intelligent intermediary.


While interfaces have always functioned as translators between user input and machine language, what’s different now is where and how that translation happens. For the first time, it’s happening live, right in front of us, and more importantly, it’s taking place on the user’s side, not the system’s. This means the user is in control, with the interface adapting to them, rather than the other way around.
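To make that intermediary layer concrete, here is a minimal sketch in TypeScript. Every name in it (the llm client, the photoEditor API, the parameter set) is a hypothetical stand-in introduced purely for illustration, not a real library:

```typescript
// A minimal sketch of an embedded intermediary layer.
// `llm` and `photoEditor` are hypothetical stand-ins, declared but not
// implemented; a real version would wire in an actual LLM client and the
// host platform's editing API.

interface EditParams {
  temperature: number; // color temperature shift, -100 (cool) to 100 (warm)
  saturation: number;  // -100 to 100
  brightness: number;  // -100 to 100
}

declare const llm: { complete(prompt: string): Promise<string> };
declare const photoEditor: {
  applyAdjustments(imageId: string, params: EditParams): Promise<void>;
};

// The intermediary's job: translate natural language into the structured
// command the system already understands.
async function interpretRequest(userRequest: string): Promise<EditParams> {
  const prompt =
    `Translate this photo-editing request into JSON with keys temperature, ` +
    `saturation, brightness (each -100 to 100): "${userRequest}"`;
  const response = await llm.complete(prompt);
  return JSON.parse(response) as EditParams;
}

async function handleUser(userRequest: string, imageId: string): Promise<void> {
  const params = await interpretRequest(userRequest);
  // The system side never sees natural language, only structured input.
  await photoEditor.applyAdjustments(imageId, params);
}

// "Make it feel like a crisp November day in New York" might come back as
// { temperature: -40, saturation: -15, brightness: 10 }.
```

The point is the shape of the flow: natural language goes in on the user's side, and only a structured command ever reaches the system.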

There are already real-world examples of this emerging trend, though they're not widely adopted yet. Take the browser “Dia,” for instance: it allows users to control browser actions using AI, such as navigating tabs through simple prompts. Even Samsung's Bixby, often overlooked, could perform system-level tasks like adjusting settings or deleting photos through voice commands.

Will These AI Interfaces Make Us Smart or Dumb?

The answer is both. With the rise of LLMs, these embedded intelligent interfaces can understand and respond in natural language. We no longer need to learn a tool's specific commands or language: as long as the AI understands the tool and we can express ourselves naturally, the interface acts as the translator.

But how does this impact our own intelligence? In one sense, it raises the baseline: anyone can now use complex tools with advanced proficiency, making everyone appear smarter. On the other hand, it shifts the scale: if advanced capabilities become universally accessible, the distinction between a novice and an expert blurs. When everyone is “smart,” what does being smart really mean?

No matter how powerful tools make us, what truly sets us apart is our creativity, and that remains firmly in our hands. With AI-driven interfaces, the barrier of needing to master complex tools is disappearing, which is a huge breakthrough for the field of Human-Computer Interaction. After all, the goal has always been to make technology simpler and more accessible. If this technology delivers on its promise, it could make creation easier and more intuitive for anyone driven by passion and imagination.

What Shape Will Next-Gen Interfaces Take?

No shape. I came across this concept in my Conversational UX class with Prof. Shruthi Chivukula, where I learned the term “Generative Interfaces.” This doesn’t refer to generative AI, but to interfaces that create themselves at runtime, adapting to the specific needs of the moment.
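As a rough sketch of how that could work (my own illustration, with hypothetical names, not anything prescribed in the class): the designer ships rules and building blocks, and the concrete interface is composed at runtime from the current context.

```typescript
// Sketch: an interface assembled at runtime from designer-defined rules.
// All types and names here are illustrative; no real framework is implied.

type Widget =
  | { kind: "slider"; label: string; min: number; max: number }
  | { kind: "button"; label: string; action: string }
  | { kind: "voicePrompt"; hint: string };

interface Context {
  task: "photo-edit" | "browse" | "settings";
  handsFree: boolean;              // e.g. the user is driving or cooking
  expertise: "novice" | "expert";
}

// Designers define the rules; the concrete interface is composed per moment.
function generateInterface(ctx: Context): Widget[] {
  if (ctx.handsFree) {
    // No screen interaction expected: surface a voice affordance only.
    return [{ kind: "voicePrompt", hint: "Tell me what to change." }];
  }
  const widgets: Widget[] = [];
  if (ctx.task === "photo-edit") {
    widgets.push({ kind: "slider", label: "Warmth", min: -100, max: 100 });
    if (ctx.expertise === "expert") {
      // Extra control depth appears only for users who can use it.
      widgets.push({ kind: "slider", label: "Curves midpoint", min: 0, max: 255 });
    }
    widgets.push({ kind: "button", label: "Apply", action: "commit-edit" });
  }
  return widgets;
}

// The same app renders differently for a novice on a phone and an expert
// at a desk, without either layout ever being drawn in advance.
console.log(generateInterface({ task: "photo-edit", handsFree: false, expertise: "novice" }));
```

The same app could surface a voice prompt while you're cooking and a full panel of controls at your desk, with neither layout existing until the moment it's needed.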

This idea captures the essence of what future interfaces might be. Designers will still define the underlying rules, but the actual interface that appears will be shaped by context, which opens the door to infinite possibilities. In a way, AI-powered interfaces are helping us, as designers, achieve the long-standing goal of creating more inclusive technology. At the same time, they introduce an entirely new design frontier. Just like in life, when one chapter ends, another begins, and this next chapter doesn't feel far off. I'm genuinely excited about the shift that lies ahead.

“It is by logic that we prove, but by intuition that we create.” (Henri Poincaré)