    Power Automate Throttling Limits Will Break Your Flow in Production

    If you have ever had a Power Automate flow run perfectly in testing and then start failing two weeks after go-live, Power Automate throttling limits are a likely culprit. Not a bug in your logic. Not a connector issue. Just the platform telling you that you asked for too much, too fast.

    This post is not about what throttling is in theory. It is about what it looks like when it hits you, and what you can actually do about it.

    What Power Automate Throttling Actually Looks Like

    Throttling in Power Automate surfaces as HTTP 429 errors. You will see them in your flow run history as failed actions, usually on connector calls. SharePoint, Dataverse, and HTTP actions are the most common places I see them show up.

    The problem is that most people do not notice at first. The flow has retry logic built in by default, so it quietly retries and sometimes succeeds. Then load increases. Retries stack up. Runs queue behind each other. Eventually things time out or fail hard, and by then you have a real incident on your hands.

    I ran into this building a document processing flow internally. Under testing with twenty files it was fine. Under real load with several hundred files triggered in a short window, the SharePoint connector started returning 429s, retries piled up, and runs that should take seconds were taking minutes or failing entirely.

    Understanding the Two Layers of Throttling

There are two distinct layers, and conflating them leads to bad fixes.

The first is platform-level throttling. Power Automate itself limits how many actions a flow can execute per minute and per day, and the limit depends on the performance profile of your licence tier. The Process licence and attended RPA plans come with higher limits than a standard per-user licence, so if you are running high-volume flows on a per-user licence, you will hit these limits faster than you expect.

    The second is connector-level throttling. This is imposed by the service on the other end, not by Power Automate. SharePoint has API call limits per user per minute. Dataverse has its own service protection limits. An external API you are calling over HTTP has whatever limits its vendor decided on. Power Automate has no control over these, and the retry behaviour it adds does not always help if you are genuinely over the limit.

    Most tutorials only mention one of these. Then your flow breaks in prod and you spend an afternoon figuring out which layer you actually hit.

    How to Handle Power Automate Throttling Limits

    There is no single fix. The right approach depends on which layer is throttling you and why.

    Slow down intentional bulk operations. If your flow is processing items in a loop, add a Delay action inside the loop. Even a one or two second delay dramatically reduces API pressure. It feels wrong to add artificial waits, but it is far better than random failures.
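The effect of a Delay action is easy to see outside Power Automate. Here is a minimal Python sketch of the same idea, where the hypothetical `process_item` stands in for a connector call: a fixed pause between iterations caps the request rate.

```python
import time

def process_all(items, process_item, delay_seconds=1.0):
    """Process items one at a time, pausing between calls.

    A fixed delay caps the request rate at roughly 1/delay_seconds
    calls per second, which keeps bulk operations under per-minute
    API limits. `process_item` stands in for a connector action.
    """
    results = []
    for i, item in enumerate(items):
        results.append(process_item(item))
        if i < len(items) - 1:  # no pointless wait after the last item
            time.sleep(delay_seconds)
    return results
```

With a one-second delay, a loop over 300 items makes at most one connector call per second instead of firing all 300 as fast as the runtime allows.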

Reduce concurrency. An Apply to Each loop runs sequentially until you enable concurrency control; once enabled, it defaults to 20 parallel iterations, up to a maximum of 50. Dropping this to 1 or 5 is often enough to stop triggering connector-level throttling. Yes, your flow will run slower. That is usually acceptable. Failed runs are not.
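If it helps to picture what the concurrency setting controls, here is a rough Python equivalent: `max_workers` plays the role of the Apply to Each degree of parallelism, and `process_item` is again a stand-in for a connector action.

```python
from concurrent.futures import ThreadPoolExecutor

def process_with_concurrency(items, process_item, max_workers=5):
    """Run process_item over items with at most max_workers in flight.

    max_workers plays the role of the Apply to Each concurrency
    setting: lower values mean fewer simultaneous connector calls
    and less chance of tripping service-side limits. Results come
    back in input order, as pool.map preserves ordering.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_item, items))
```

Dropping `max_workers` from 20 to 5 quarters the number of simultaneous requests hitting the service, at the cost of a longer total run.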

    Batch instead of looping. SharePoint and Dataverse both support batch operations. If you are creating or updating records one at a time in a loop, look at whether you can batch those calls. Fewer requests means less throttling exposure.
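The batching idea, sketched in Python under the assumption of a hypothetical `send_batch` that stands in for a batch endpoint (such as SharePoint's `$batch` or Dataverse's batch operations):

```python
def chunk(items, batch_size=100):
    """Split items into fixed-size batches (last batch may be smaller)."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def save_in_batches(records, send_batch, batch_size=100):
    """Send records in batches instead of one request per record.

    `send_batch` stands in for a batch API call. Sending 250 records
    as 3 requests instead of 250 means far less throttling exposure.
    """
    for batch in chunk(records, batch_size):
        send_batch(batch)
```

The exact batch size limit depends on the service, so check the connector or API documentation before picking a number.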

    Check your licence tier against your actual volume. This one people skip. If you are running flows that process thousands of actions per day, look at your licence entitlements honestly. The Power Automate Process licence exists for high-volume scenarios. Using a per-user licence for something that genuinely needs a process licence is not a workaround, it is a problem waiting to happen.

    Do not rely on default retry logic as a strategy. The built-in retry handles transient blips. It is not designed to absorb sustained throttling. If your flow needs retries to survive normal operating conditions, that is a signal to fix the root cause, not tune the retry settings.
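When you do need custom retry behaviour, the usual shape is exponential backoff with jitter, honouring any Retry-After the service sends. A hedged Python sketch, where `call` and the `Throttled` exception are hypothetical stand-ins for an HTTP request and a 429 response:

```python
import random
import time

class Throttled(Exception):
    """Stand-in for an HTTP 429 response from a throttled service."""
    def __init__(self, retry_after=None):
        super().__init__("HTTP 429")
        self.retry_after = retry_after  # seconds, from the Retry-After header

def call_with_backoff(call, max_attempts=4, base_delay=1.0):
    """Retry a throttled call with exponential backoff and jitter.

    Honours the service's Retry-After value when present, otherwise
    doubles the wait each attempt. This absorbs transient blips only;
    if every run needs this to succeed, fix the request rate instead.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Throttled as exc:
            if attempt == max_attempts - 1:
                raise  # sustained throttling: surface the failure
            wait = exc.retry_after or base_delay * (2 ** attempt)
            time.sleep(wait + random.uniform(0, 0.1))  # jitter avoids sync'd retries
```

Note the hard cap on attempts: unbounded retries against a genuinely over-limit workload just move the failure later and make it harder to diagnose.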

    The Monitoring Gap

    Most teams I talk to have no visibility into throttling until something breaks. Flow run history shows failures, but it does not surface throttling patterns clearly. Setting up alerts on failed runs is table stakes. What is less common is tracking run duration over time. A flow that starts taking twice as long to complete is often being quietly throttled before it starts failing outright.

    Azure Monitor and the Power Platform admin centre both give you data here. Use them before users start sending messages asking why the automation is slow.

    The Bottom Line

    Power Automate throttling limits are not a corner case. They are something you will hit if your flows handle real enterprise volume. The fix is almost never a single setting. It is a combination of slowing down bulk operations, reducing concurrency, batching where possible, and being honest about whether your licence matches your workload. If you are also thinking about how automation fits into larger orchestration patterns, agentic workflows are not just fancy automation and require a fundamentally different design approach from the start.

    Test under realistic load before go-live. Not twenty items. The actual volume you expect in week three after rollout.

    Frequently Asked Questions

    What are Power Automate throttling limits and why do they cause flows to fail?

    Power Automate throttling limits are restrictions on how many actions or API calls your flow can make within a given time window. There are two layers: platform-level limits set by Microsoft based on your licence tier, and connector-level limits imposed by external services like SharePoint or Dataverse. When these limits are exceeded, you get HTTP 429 errors that can cause flows to fail or time out under real production load.

    Why does my Power Automate flow work in testing but fail in production?

    Testing typically uses a small number of records or files, which stays well within throttling thresholds. Once real users and data volumes are involved, API call rates increase and throttling kicks in. Built-in retry logic can mask the problem initially, but as load grows the retries stack up and flows start timing out or failing outright.

    How do I fix throttling errors in a Power Automate loop?

    Adding a Delay action inside your loop is one of the most effective ways to reduce API pressure during bulk operations. Even a one to two second pause between iterations can significantly cut the rate of connector calls and prevent 429 errors from accumulating.

    How do I know if my Power Automate flow is being throttled?

    Check your flow run history for failed actions showing HTTP 429 responses, which is the standard signal that a throttling limit has been hit. You may also notice runs taking much longer than expected, since the built-in retry logic can quietly delay execution before an eventual hard failure.