The Renters Rights Act Isn’t Just a Compliance Problem. It’s a Systems Problem.
© 2008 - 2026 AppSphere Consultants. All Rights Reserved.
Asyym Technology Ltd Trading as AppSphere Consultants.

AI-assisted development is moving fast. Too fast in some cases. We recently inherited a telecoms onboarding workflow with no error handling and no fallback journeys. Here’s what we found.
Amar Mohammed
01 05 26
There’s a moment in almost every AI-assisted development project where everything looks promising.
The workflow runs. The logic is clean. The demo is smooth. Someone in the room says “this is incredible” and they’re not wrong.
Then real users arrive.
We picked up a telecoms client onboarding project that had been built using AI-assisted development. Fast turnaround. Visually coherent. Functionally plausible.
When we got into it properly, one thing stood out immediately.
There was no error handling. Not a single fallback.
Every API call assumed it would succeed. Every address assumed it existed in the database. Every user was assumed to complete the journey in sequence, without hesitation, without going back, without getting confused or abandoning halfway through.
The workflow had been designed for a fictional user. One who enters valid data first time, never makes a mistake and never stops.
Real users are not that user.
A postcode that returns no results. A user who closes the browser halfway through step three. A new customer who doesn’t understand what’s being asked and just stops. None of these had been accounted for. There was no alternative path. No recovery journey. No plan for what happens when the assumption breaks.
In a telecoms onboarding flow, these aren’t edge cases. They’re a significant percentage of your actual traffic.
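As a minimal sketch of what was missing (the function and field names here are illustrative, not from the client build), an address lookup can return an explicit status for every outcome instead of assuming success, so each failure routes to a defined recovery path:

```python
from typing import Callable

def find_addresses(postcode: str, fetch: Callable[[str], list]) -> dict:
    """Look up addresses for a postcode, with explicit fallback paths.

    `fetch(postcode)` returns a list of addresses and may raise on
    network failure. Every outcome maps to a named next step, so the
    journey never dead-ends.
    """
    try:
        addresses = fetch(postcode)
    except Exception:
        # The external system didn't respond: offer manual address entry
        # rather than crashing the journey.
        return {"status": "lookup_unavailable", "addresses": [], "next": "manual_entry"}

    if not addresses:
        # A valid postcode with no results is normal traffic, not an edge
        # case. Give the user a way forward.
        return {"status": "no_results", "addresses": [], "next": "manual_entry"}

    return {"status": "ok", "addresses": addresses, "next": "select_address"}
```

The point is not the specific shape of the return value. It is that "lookup failed" and "no results" exist as first-class states with somewhere for the user to go next.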
AI coding tools are genuinely impressive. They can produce functional code at a speed that wasn’t possible two years ago. That’s real and it’s not going away.
But they have a fundamental limitation that doesn’t show up in the demo.
They don’t ask what happens when things go wrong.
That question (what happens when this breaks, when the user doesn't behave as expected, when the external system doesn't respond?) is not a technical question. It's an experiential one. It comes from having watched real users interact with real systems and seen the hundred ways a journey can fall apart that nobody anticipated in the planning session.
There’s something else we see consistently in vibe coded builds. Duplication.
AI tools have a tendency to recreate features rather than extend them. So instead of one function growing over time, being built on, connected to adjacent parts of the system, made more capable, you end up with multiple versions of the same thing sitting in different parts of the codebase. Each one slightly different. None of them authoritative.
At the start this looks like efficiency. At scale it looks like technical debt.
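To make the pattern concrete (a hypothetical example, not code from the project): the alternative to two near-identical validators drifting apart in different journeys is one authoritative function that both journeys call, extended with a parameter when their needs diverge:

```python
def validate_contact(details: dict, *, require_phone: bool = False) -> list:
    """Single source of truth for contact validation.

    The signup flow calls validate_contact(details); a flow that needs a
    callback number calls validate_contact(details, require_phone=True).
    One function grows; nothing is duplicated.
    """
    errors = []
    if "@" not in details.get("email", ""):
        errors.append("invalid_email")
    if require_phone and not details.get("phone"):
        errors.append("missing_phone")
    return errors
```

When a rule changes, it changes in one place, for every journey that depends on it.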
User journeys need to be defined before a single line of code is written. Not at a high level. In detail.
What does the user do if the postcode doesn’t exist? What happens if they abandon at step four and come back two days later? What does the system do if the payment API times out? What is the recovery path for someone who gets stuck?
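Two of those questions can be answered in code quite directly. This is a hedged sketch (the names and storage are illustrative; production code would persist to a database): journey state is saved at every step so an abandoning user can resume, and a payment timeout becomes a defined state rather than an unhandled failure:

```python
import time

def save_progress(store: dict, user_id: str, step: int, data: dict) -> None:
    # Persist where the user got to. Here a dict stands in for a real
    # datastore so the sketch is self-contained.
    store[user_id] = {"step": step, "data": data, "saved_at": time.time()}

def resume_step(store: dict, user_id: str) -> int:
    # A returning user picks up where they left off, even two days later.
    # A new user starts at step 1.
    record = store.get(user_id)
    return record["step"] if record else 1

def charge(pay, order_id: str) -> str:
    # pay(order_id) returns True on success and may raise TimeoutError.
    try:
        return "paid" if pay(order_id) else "declined"
    except TimeoutError:
        # A timeout is ambiguous: the charge may have gone through.
        # Flag it for reconciliation instead of retrying blindly.
        return "pending_verification"
```

Neither function is clever. The value is that "abandoned at step four" and "payment timed out" are planned-for states, not surprises.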
These questions don’t feel exciting. They don’t make for a good demo. But they are the difference between a product that works for real users and one that works for the person who built it.
AI tools will not ask these questions unprompted. They will build what they are told to build and they will build it confidently whether or not the foundation is sound.
If you are currently building with AI-assisted development (or evaluating whether to), here is the honest advice.
The tools are capable. Use them. But bring experienced hands to define the user journeys before the build starts, review the error handling before anything goes to production, and make sure someone is asking the uncomfortable questions about what happens when things go wrong.
The cracks in a vibe coded build don’t appear on day one. They appear at scale, when real users arrive and the assumptions embedded in the code start to meet reality.
At AppSphere we’ve spent years working on the user journey and systems architecture side of this. Making sure the workflows that run businesses are built for how people actually behave, not how we wish they would.
If you’re seeing cracks in something you’ve built, or want to sense check something before it goes live, we’re happy to take a look.
This post is based on real project experience. Client details have been anonymised.