I came across a post from Zapier Blog about process improvement recently, and it made a familiar point: most broken work isn’t actually broken work; it’s a broken process behind it. Messy handoffs, unclear ownership, approvals that live in one person’s head. Good framing. But it treats process improvement before automation as something you do once, upfront, like a checklist item you can tick and move past. In enterprise Power Platform work, that assumption is where things go wrong.
What Building Automations Teaches You About How Work Actually Flows
When you sit down to build a flow, you have to make the process machine-readable. That means every branch needs a condition. Every input needs a defined type. Every approval needs an owner. Every exception needs a path.
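To make that concrete, here is a rough sketch in plain Python of what "machine-readable" forces you to pin down. This is not a Power Automate artifact; every name here (the `Step` structure, the fields, the sample rules) is invented for illustration. The point is that none of these four decisions can be left blank:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    condition: Callable[[dict], bool]  # every branch needs a condition
    owner: str                         # every approval needs an owner
    exception_path: str                # every exception needs a path

# Hypothetical rules for a purchase-request process.
steps = [
    Step("manager_approval",
         condition=lambda req: req.get("amount", 0) > 5000,
         owner="line_manager",
         exception_path="finance_review"),
    Step("compliance_check",
         condition=lambda req: req.get("vendor_is_new", False),
         owner="procurement",
         exception_path="procurement_lead"),
]

def route(request: dict) -> list:
    """Return the owners this request must pass through."""
    return [s.owner for s in steps if s.condition(request)]
```

Writing the process down in this form, even on paper, exposes every branch where the honest answer is "it depends" with no documented rule behind it.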
Most processes handed over for automation have none of that. What they have instead is a document someone wrote two years ago, a few spreadsheets nobody fully trusts, and a senior colleague who holds the real logic in their head and has been doing it long enough that they don’t notice the decisions they’re making.
The automation developer ends up being the first person to actually interrogate the process at that level of detail. Not because they went looking for it, but because the flow won’t build until the ambiguity is resolved. You cannot write a condition on a field that sometimes exists and sometimes doesn’t. You cannot route an approval to a role when the role changes depending on factors nobody documented.
This is not a Power Platform problem. It surfaces in every serious automation project I’ve heard about across different organisations. The tool just makes the gaps visible faster than any process workshop usually does.
The Specific Process Failures That Surface When You Try to Automate
There are a few categories I keep running into.
Unclear ownership. A task gets triggered, but nobody agreed who acts on it. The automation sends an email. Nobody responds. The flow sits waiting. Eventually it times out. Everyone blames the automation.
Inconsistent inputs. The data coming in doesn’t conform to any standard. Fields are free text when they should be dropdowns. Dates are formatted three different ways. Required fields are blank because the upstream system never enforced them. Your flow handles the clean case fine and breaks silently on everything else. I wrote about this kind of silent failure in the context of Copilot Studio agents failing in production, but the same thing happens in flows where bad input just passes through without raising an error until something downstream breaks.
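A minimal sketch of the fail-loud alternative, in plain Python. The field names and date formats are invented for illustration; the real list comes from looking at your actual data:

```python
from datetime import datetime

REQUIRED = ["requester", "amount", "submitted"]
# Hypothetical: the three date formats seen in the upstream data.
DATE_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%d %b %Y"]

def normalise(record: dict) -> dict:
    """Validate and normalise an incoming record, raising instead of
    letting a bad case pass through silently."""
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    for fmt in DATE_FORMATS:
        try:
            parsed = datetime.strptime(record["submitted"], fmt)
            record["submitted"] = parsed.date().isoformat()
            return record
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {record['submitted']!r}")
```

The equivalent in a flow is a validation branch at the top that terminates with a visible failure, rather than letting the clean-case path swallow everything.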
Approval logic nobody can fully articulate. You ask who approves a request above a certain threshold. You get three different answers from three different people. All of them are confident. When you automate the majority answer, you will eventually automate the wrong one for someone’s edge case, and that person will be senior enough that it becomes your problem.
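The fix is not picking the majority answer; it is getting the rule agreed in writing and encoding it in one reviewable place, so the eventual edge case is a one-line change rather than an archaeology project. A sketch with invented thresholds:

```python
# Hypothetical thresholds, highest first. The value of this table is
# that it was agreed in writing, not reconstructed from three
# confident and contradictory interviews.
APPROVAL_RULES = [
    (50_000, "finance_director"),
    (10_000, "department_head"),
    (0, "line_manager"),
]

def approver_for(amount: float) -> str:
    for threshold, role in APPROVAL_RULES:
        if amount >= threshold:
            return role
    raise ValueError(f"no rule covers amount {amount}")
```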
Exception handling that lives in tribal knowledge. The manual process survives because a human notices something feels off and picks up the phone. The automated process has no equivalent. The exception just propagates.
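There is no general fix for this, but you can at least make the instinct explicit: an anomaly check that pauses the flow and routes to a person instead of proceeding. The specific checks below are invented; the real ones come from interviewing whoever currently picks up the phone:

```python
def flags_for_human(request: dict, known_vendors: set) -> list:
    """Return reasons a person should look at this request before
    the flow proceeds, instead of letting an odd case propagate."""
    reasons = []
    typical = request.get("typical_amount", request["amount"])
    if request["amount"] > 3 * typical:
        reasons.append("amount far above requester's usual spend")
    if request.get("vendor") not in known_vendors:
        reasons.append("vendor not seen before")
    return reasons
```

An empty list means proceed; anything else routes to a named owner, which brings you back to the ownership problem above.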
Why Fixing the Process First Does Not Mean Waiting to Build
The standard advice is to fix the process before you automate it. That advice is correct and also almost never followed, because the people who own the process don’t feel urgency until they see the automation breaking. The broken automation is what creates the pressure to fix the underlying problem.
This doesn’t mean you should automate bad processes and hope for the best. It means process improvement and automation are parallel work, not sequential steps. You build, you find the gap, you surface it to the right person, you agree on a rule, you build that rule into the flow. Then you find the next gap.
The first build is often a diagnostic as much as a delivery. You are not just producing a flow. You are producing a map of where the process is genuinely undefined. That map is more useful than most process workshops, because it was produced by the requirement to actually execute the logic rather than describe it at a whiteboard.
The risk is treating that diagnostic build as the final product. It isn’t. The flow that handles the happy path and ignores edge cases is not done. It is a prototype that revealed the real work. Those edge cases are also where Power Automate throttling limits tend to surface, once real volume hits paths that were never properly stress-tested.
How to Pressure-Test Process Logic Before You Commit It to a Flow
Before building anything complex, I walk through the process as if I were writing the conditions myself, not interviewing someone about it. Specifically:
- Ask what happens when a required field is missing. If the answer is “that doesn’t happen,” it will happen.
- Ask who the fallback approver is when the primary approver is unavailable. If there isn’t one, your flow will block silently until someone notices.
- Ask what the exception path looks like and who owns it. If the answer is vague, you have found the part of the process that was always handled by instinct rather than logic.
- Take a real sample of historical cases and walk them through your intended logic manually before writing a single action. The cases that don’t fit cleanly are the ones that will break production.
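That last step, replaying history through the intended logic, can be scripted in a few lines. A minimal sketch in Python, where the rule and sample cases are invented and `logic` stands for whatever decision function you plan to encode in the flow:

```python
def replay(cases: list, logic) -> list:
    """Run the intended logic over historical cases and collect
    the ones it cannot place."""
    misfits = []
    for case in cases:
        try:
            logic(case)
        except Exception as exc:
            misfits.append((case, repr(exc)))
    return misfits

# Example: the intended rule quietly assumes 'amount' always exists.
intended = lambda case: "manager" if case["amount"] > 5000 else "auto"
history = [{"amount": 9000}, {"amount": 200}, {"requester": "j.smith"}]
problem_cases = replay(history, intended)
```

The misfits are your real requirements list; each one needs a ruling from the process owner before it becomes a branch in the flow.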
This is not a formal methodology. It is just refusing to start building until the people handing you the process have answered the questions the machine will ask anyway.
The automation doesn’t forgive ambiguity. It just executes it. And when it does, at scale, faster than any manual process ever ran, the results are hard to ignore. That’s not a bug in the automation. That’s the process finally being honest about itself.
If you are responsible for building the automation, you are often the first person in the room with both the technical access and the obligation to ask those questions. Use it. The alternative is building something that works perfectly and fixes nothing. And if the process involves deciding whether to introduce an AI layer on top, that same discipline applies — agentic workflows require a different design approach, not just dropping intelligence onto a process that was never properly defined in the first place.
Frequently Asked Questions
Why should I focus on process improvement before automation?
Automating a flawed process does not fix it; it just causes problems to occur more quickly and at greater scale. Issues like unclear ownership, inconsistent data, and undocumented decision logic only become more visible and damaging once a workflow is running automatically. Resolving these gaps before building any automation saves significant rework later.
How do I know if a process is ready to automate?
A process is ready to automate when every decision has a documented condition, every input has a defined and consistent format, and every approval has a named owner. If you cannot describe the process in those terms without relying on a single person’s institutional knowledge, it needs more work before automation begins.
Why does automation fail even when it appears to be built correctly?
Many automations fail silently because the underlying process was never fully defined. Problems like missing fields, inconsistently formatted data, or approval logic that varies depending on who you ask can all cause a flow to break or stall without an obvious error. The automation itself is often not the cause; the broken process feeding into it is.
When should I involve a developer in reviewing a business process?
Bringing a developer in early, before any build starts, is worthwhile because the act of making a process machine-readable forces a level of scrutiny that workshops and documentation reviews often miss. Developers building automation are frequently the first people to ask the precise questions that expose gaps in how a process actually works.
This post was inspired by What is process improvement? via Zapier Blog.