Writing

Thoughts on design, AI, and the future of interfaces

Remote Work Didn't Kill Innovation. We Did.

I've worked remotely since 2020. The transition was smoother than most — half my team already worked remotely on occasion, my boss was fully remote, and we had the infrastructure in place. Over the following years, we leaned into it. We experimented with remote events, huddles, and online rituals to keep teams connected. Some of it worked, some didn't, but the net result is a connected, efficient, and thriving remote culture.

But a connected culture and an innovative culture aren't the same thing.

The familiar criticisms of remote work go beyond connection and empathy. The deeper problem is that spontaneous collaboration disappears. Everything becomes a scheduled meeting. Calendars fill up. And somewhere along the way, we lose the space for "hey, what if..." and "you know, I had an idea..." — the moments that actually catalyze innovation.

So is in-person work more innovative by nature? I don't think so.

I just came off a week-long AI innovation lab in Toronto, and I saw real collaboration and innovation first-hand. It was certainly different from our usual video calls, where people step on each other trying to speak one at a time, the odd camera stays off, and everyone multitasks themselves into oblivion. But what I took away wasn't that remote work kills innovation. It's that our expectations around remote work do.

Remote work has quietly turned everyone into subcontractors at their own company.

It's become easier to follow rules than to make them — or break them. The default mode is execution, not exploration.

I believe my team is an exception to this, and I think there are a few reasons why. First, we're designers. Innovation is baked into the role — we never left the playing field. Second, we're collaborative-software-first. We work in Figma, where every design is shared and every cursor is visible. We can see each other working and jump into conversation at any time, just like sitting next to someone. Third — and most importantly — we have space to innovate because leadership creates it. At the innovation lab, the mandate was simple: innovate. That single permission allowed everyone to drop their firefighting priorities and think as creatively and ambitiously as possible for an entire week. And magic happened.

The problem, of course, is that it was one week. Everyone is back to squashing bugs and triaging feature requests. But what was gained isn't lost. The muscle is clearly there — we all saw it. And I don't think being together in a building was the magic ingredient. It was collaboration-first structure, a mandate to create, and permission to forget the usual.

We can build just as innovative a workplace remotely as we can in person. But we have to be far more intentional about creating the space and setting the boundaries to do it.

In-person work comes with some of that implied. Remote work doesn't. That gap requires deliberate design.

If we don't create these spaces in remote tech — real, recurring, protected spaces for exploration — we'll lose our innovative edge and drift into assembly-line product development. The kind that slowly stagnates everything those early, ambitious creators built.

The creators are still here. We just need to give them permission to create again.

AI-ccessibility: The Problem WCAG Couldn't Solve

I've spent the last 7 years meticulously testing interfaces with screen readers and assistive devices, ensuring not just WCAG compliance but that real users who are blind or can't use their hands can navigate ecommerce checkout flows and other involved tasks.

Tools and standards like ARIA are incredible.

But they never quite nailed the entire experience.

The goal was always to make the experience as close to identical for sighted and blind users as possible.

Some tasks are just inherently visual.

Take choosing a seat at a venue.

Writing an appropriate description for each seat—plus nice-to-know details like "how far away is the bathroom?"—was a pipe dream for small teams with good intentions.

We did our best. We followed the standards. We tested rigorously.

But we were still asking blind users to navigate a fundamentally visual problem through workarounds.

Enter the new world order.

Large language models can understand more context and nuance than I would have believed possible 7 years ago.

I see this as the dawn of a new age in accessibility work.

Chatbots that function inside software are a boon to any user who struggles with traditional interfaces and hardware.

Voice assistants were atrocious just a few years ago—nothing but frustrating corrections and marveling at how dumb the "smart" world was.

Times are changing.

Users have already started buying things directly through AI chat interfaces, and it's only going to get better and easier.

Here's what changes:

Instead of a blind user navigating a seat map through ARIA labels, they ask:

"I need an aisle seat near the front, close to accessible bathrooms."

The AI understands the request, knows the venue layout, considers accessibility needs, and offers options with context a screen reader never could.
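The contrast can be sketched in code. Under the compliance approach, each seat gets a static hand-written label that has to anticipate every question; under the conversational approach, the user's constraints travel with the request and the answer is computed from the same data. Everything below (the `Seat` shape, the `findSeats` helper, the field names) is my own hypothetical sketch of the idea, not any real venue API.

```typescript
// Hypothetical seat data a venue system might expose.
interface Seat {
  id: string;
  row: number; // row 1 is closest to the stage
  isAisle: boolean;
  metersToAccessibleBathroom: number;
}

// The old approach: one static label per seat, written in advance,
// hoping it covers whatever a screen-reader user might want to know.
function ariaLabel(seat: Seat): string {
  return `Seat ${seat.id}, row ${seat.row}` + (seat.isAisle ? ", aisle" : "");
}

// The conversational approach: the request carries the user's actual
// constraints, and matching seats are computed from the same data.
function findSeats(
  seats: Seat[],
  needs: { aisle: boolean; maxRow: number; maxBathroomMeters: number }
): Seat[] {
  return seats
    .filter((s) => !needs.aisle || s.isAisle)
    .filter((s) => s.row <= needs.maxRow)
    .filter((s) => s.metersToAccessibleBathroom <= needs.maxBathroomMeters)
    .sort((a, b) => a.row - b.row);
}

const seats: Seat[] = [
  { id: "A1", row: 1, isAisle: true, metersToAccessibleBathroom: 40 },
  { id: "B4", row: 2, isAisle: true, metersToAccessibleBathroom: 12 },
  { id: "C7", row: 3, isAisle: false, metersToAccessibleBathroom: 10 },
];

// "I need an aisle seat near the front, close to accessible bathrooms."
const matches = findSeats(seats, { aisle: true, maxRow: 3, maxBathroomMeters: 20 });
```

In a real system the language model would translate the spoken request into that structured query; the point is that "how far is the bathroom?" becomes data to filter on, not a sentence someone had to remember to write.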

That's not a workaround. That's an actually equivalent experience.

There's reason to be careful with this behavior—AI isn't perfect, trust takes time, and we can't abandon the standards that got us here.

But I see no reason to think we won't all be doing this soon, wondering how we ever went without it.

After 7 years of trying to make visual interfaces accessible through technical compliance, I'm excited to finally, confidently provide an interface I would consider as accessible as possible. Not because it follows WCAG to the letter. Because it meets users where they are and solves their actual problems.

Come what may—I'm here for it.

The Interface Won't Die—It'll Just Stop Asking Questions

With AI coming for all our hard-won prototypical user behaviors, what's the future of interface design?

Tough question. We haven't seen AI meaningfully cross into most everyday products yet. Chatbots exist—occasionally helpful—but our main interfaces are still the same. Forms get filled, buttons get clicked, toggles get… toggled.

But once AI like Claude can reach into most products that exist today through protocols like MCP, what's a lowly interface to do? Our beloved drop-shadowed, expertly-radiused combo boxes are going to miss the sweet touch of a mouse cursor as a more powerful force rips through configuration with cat-like reflexes.

If you've spent a decade figuring these problems out—like me—hefty change feels imminent.

Users don't need form fields anymore. They can just ask, in their own words, for a task to be completed, and it gets done. So what will they need, these people I've spent my career interviewing so I could architect screens to make their lives better?

We probably don't quite know yet. Some people like talking directly to AI. Some are still chatting on a desktop in full screen. But we can assume a couple of things.

Here's my hypothesis on the future of UI in the Age of AI:

Interfaces won't disappear. They'll just stop asking questions.

Before, we had configuration screens where users type text, toggle toggles, spend time setting things up, then check their work before clicking submit or save.

My guess? We can drastically change this to be only about confirmation—more informative than interactive.

Think about it:

  • Users need to know what's happening (the state of the machine—is it executing? What task?)
  • Users need to confirm or double-check work (especially at first, when trust hasn't been earned)
  • Users need context for what just happened and what's next

Claude plays with this already—"ruminating, concocting, etc." But as AI gets trusted with more complex tasks, I think users will want something more specific. Coding agents like Claude Code show this with task lists and execution plans. Perfect.

So what does this look like?

Instead of "fill out this form," the interface becomes:

  • Here's what's happening right now
  • Here's what I just did—is this correct?
  • Here's your history of actions
  • Here's what's scheduled
  • Here's what I suggest you do next

Imagine showing up to work with everything you need to do waiting for approval or edit. You click "accept" a few times and move on.

The interface becomes a dashboard of confirmation instead of a maze of input fields.
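A confirmation-first surface could be modeled as a queue of agent actions, each carrying its state and awaiting a human decision. The shape below (`AgentAction`, the status values, the `approve` helper, the sample summaries) is a hypothetical sketch of that dashboard's data model, not any shipping API.

```typescript
type Status = "running" | "awaiting-approval" | "approved" | "scheduled";

// One entry in the morning queue: what happened, and where it stands.
interface AgentAction {
  id: number;
  summary: string; // "Here's what I just did" / "Here's what's scheduled"
  status: Status;
}

// Instead of forms to fill, a list of work awaiting review.
const queue: AgentAction[] = [
  { id: 1, summary: "Drafted reply to a support ticket", status: "awaiting-approval" },
  { id: 2, summary: "Re-running a failed CI job", status: "running" },
  { id: 3, summary: "Weekly metrics digest", status: "scheduled" },
];

// Approving is the whole interaction: one click, not one form.
// Only items actually awaiting approval can be approved.
function approve(actions: AgentAction[], id: number): AgentAction[] {
  return actions.map((a) =>
    a.id === id && a.status === "awaiting-approval" ? { ...a, status: "approved" } : a
  );
}

const reviewed = approve(queue, 1);
```

Notice that the user never supplies input here; they only resolve states. That's the inversion: the interface reports and asks for sign-off rather than interrogating.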

Designers who spent years perfecting form validation? That skill doesn't disappear—it transforms into designing clarity, trust, and verification flows.

The question isn't whether UI survives AI. It's whether we're ready to design interfaces that inform instead of interrogate.