Three SASE deployment mistakes we see every quarter (and how to avoid them)

Three SASE deployment mistakes that turn six-week projects into six-month ordeals. Learn how to spot and fix them before they stall your rollout.
[Image: Cross-functional team reviewing a SASE deployment checklist during a planning meeting.]

What are the most common SASE deployment mistakes?

The three most frequent SASE deployment mistakes are treating the migration as a tool swap instead of an architecture change, copying legacy VPN access rules into a ZTNA model without rethinking them, and leaving old infrastructure running “just in case” long after it should have been retired. Each one adds months to the timeline and erodes the security gains that justified the project in the first place.

After helping dozens of mid-market organisations roll out SASE, we keep seeing the same three deployment mistakes. Not technical edge cases. Organisational patterns that turn a six-week project into a six-month one. We wrote about 10 common ZTNA mistakes earlier this year. The list that follows is broader: these are the patterns that stall entire SASE programmes, not just ZTNA configurations.

Mistake 1: treating SASE as another tool on the stack

The most damaging mistake happens before the first policy is written. Teams approach SASE as a product purchase, not an architecture change. They slot it next to existing firewalls and VPNs as an additional layer, rather than the replacement for most of them.

We saw this recently with a 200-user logistics company. Their IT manager purchased a SASE licence, configured it alongside their existing firewall cluster, and started routing some traffic through the new platform. Six months later they were running both systems in parallel, paying for both, and managing policies in two consoles. The security posture had barely changed because nobody had decided which system was authoritative.

This pattern is more common than most vendors will admit. Research from Gartner suggests that fewer than one in ten organisations extract the full value from their SASE investment. The reason is almost never the technology. It is the failure to treat SASE as a consolidation project rather than an addition.

The fix starts before procurement. Define which tools SASE will replace, not supplement. Map your current stack (firewalls, VPN concentrators, web gateways, standalone SD-WAN appliances) and mark each one with a target retirement date. If you cannot name what SASE replaces, you are adding complexity rather than removing it. Our "Escaping the Frankenstack" post walks through this consolidation logic in detail.
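As a minimal sketch of that mapping (the component names and dates are hypothetical, and Python stands in for whatever inventory tool you use), the point is simply that every legacy component gets an explicit retirement date, and anything without one is flagged before you sign:

```python
from datetime import date

# Hypothetical inventory: each legacy component SASE is meant to replace,
# mapped to its target retirement date (None = no date agreed yet).
legacy_stack = {
    "perimeter firewall cluster": date(2026, 3, 1),
    "VPN concentrator": date(2025, 12, 15),
    "secure web gateway": date(2026, 1, 10),
    "standalone SD-WAN appliances": None,
}

for component, retire_on in legacy_stack.items():
    if retire_on is None:
        print(f"{component}: NO RETIREMENT DATE - resolve before signing")
    else:
        print(f"{component}: retire by {retire_on.isoformat()}")
```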

The practical tip: before you sign anything, write one sentence that completes “after SASE is fully deployed, we will no longer need ______.” If that sentence is hard to write, step back and revisit your architecture plan.

Mistake 2: copying VPN rules into ZTNA and calling it Zero Trust

The second mistake is subtler. Teams migrate their legacy firewall and VPN access rules directly into the new ZTNA model. “Finance team” gets the same broad access they had through the VPN tunnel. “Remote workers” inherits the same group policies. The old permission structure gets a new label. Nothing actually changes.

This is the “lift-and-shift” trap. It feels efficient because it avoids the hard conversations about who really needs access to what. But it defeats the entire point of Zero Trust. A compromised account in a lift-and-shift ZTNA deployment has the same blast radius as a compromised VPN account, which is to say, far too wide.

The deeper problem is usually dirty identity data. Most mid-market organisations have identity providers full of stale group memberships, inherited permissions, and generic service accounts that nobody has audited in years. Plugging that mess into a ZTNA platform just digitises the dysfunction. Industry data consistently shows that identity hygiene issues are among the top causes of access control failures, and they are entirely avoidable.

The fix has four steps.

First, sanitise your identity provider before migration. Remove stale accounts, audit group memberships, and align groups to actual business roles.

Second, map users to applications, not networks. Ask "which three applications does this role actually use daily?" instead of "which network segment do they need?"

Third, start with read-only, low-risk applications in your pilot. A wiki or timekeeping system lets your team learn the model before the stakes get high.

Fourth, enable device posture checks from day one. Skipping them during the pilot, which many teams do to reduce friction, trains users to expect access without security validation.
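As a minimal sketch of that first step, assuming your identity provider can export accounts to CSV (the file name and column names here are hypothetical), a short script can flag accounts with no recent sign-in for review:

```python
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # adjust to your audit policy

def find_stale_accounts(path: str) -> list[dict]:
    """Flag accounts whose last sign-in falls outside the stale window."""
    cutoff = datetime.now() - STALE_AFTER
    stale = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed columns: "account", "last_sign_in" (ISO 8601), "groups"
            last_sign_in = datetime.fromisoformat(row["last_sign_in"])
            if last_sign_in < cutoff:
                stale.append(row)
    return stale

if __name__ == "__main__":
    for row in find_stale_accounts("idp_export.csv"):
        print(f'{row["account"]}: last sign-in {row["last_sign_in"]}, '
              f'groups: {row["groups"]}')
```

Flagged accounts still need a human decision (disable, reassign, or keep with justification), but running a check like this before migration keeps the dysfunction out of your ZTNA policies.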

That last point matters more than it seems. Once you establish the precedent that unmanaged devices get the same access as managed ones, rolling back that expectation becomes a political fight. Do it right from the start.

For a deeper look at how identity providers integrate with Zero Trust access, our identity provider integration guide covers the specific patterns that prevent this mistake.

Mistake 3: leaving legacy infrastructure running “just in case”

The third mistake is the one that feels responsible but quietly undermines everything. Teams complete their SASE rollout. Users are connected through ZTNA. Policies are working. Then someone says: “Let’s keep the old VPN gateway running as a fallback. Just in case.”

Months later, the VPN is still active. The old firewall rules still exist. Jump servers that were supposed to be decommissioned are still reachable. Each one is an unmonitored access path that bypasses every Zero Trust control you just built.

We have seen this in nearly every engagement. The reasoning always sounds prudent. “What if the new platform has an outage?” “What if we need to roll back?” But the risk calculation is backwards. A dormant VPN concentrator with stale firmware and broad access rules is not a safety net. It is an attack surface. Cyber insurance claims data from 2025 shows that legacy VPN infrastructure is involved in the majority of ransomware entry points across the Benelux. The systems that teams keep “just in case” are the ones that get compromised.

The fix is a decommissioning schedule, written down and assigned. For every application you migrate to SASE, set a date when the old access path gets disabled. Not “eventually.” A specific calendar date, typically two to four weeks after successful migration, with a documented fallback procedure if something goes wrong during that window.

Here is what that looks like in practice:

| Phase | Action | Timeline |
| --- | --- | --- |
| Migration complete | Application verified on SASE, users confirmed | Day 0 |
| Monitoring period | Watch for access issues, track support tickets | Days 1-14 |
| Old path disabled | VPN/firewall rule removed, documented | Day 15 |
| Infrastructure retired | Hardware decommissioned, licences cancelled | Day 30 |
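To make those dates concrete, here is a minimal sketch in Python that derives each phase's calendar date from the Day 0 migration date, using the phase lengths from the table above (adjust the windows to your own schedule):

```python
from datetime import date, timedelta

def decommission_schedule(migration_complete: date) -> dict[str, date]:
    """Derive calendar dates for each phase from the Day 0 migration date."""
    return {
        "migration_complete": migration_complete,                        # Day 0
        "monitoring_ends": migration_complete + timedelta(days=14),      # Days 1-14
        "old_path_disabled": migration_complete + timedelta(days=15),    # Day 15
        "infrastructure_retired": migration_complete + timedelta(days=30),  # Day 30
    }

if __name__ == "__main__":
    for phase, when in decommission_schedule(date(2025, 3, 3)).items():
        print(f"{phase}: {when.isoformat()}")
```

The output is a set of dates you can put straight into the calendar of whoever owns the decommissioning schedule, which is the point: a date and a name, not "eventually".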

The key insight: decommissioning is not a cleanup task you do after the project. It is part of the project. Each phase of your SASE architecture rollout should include “retire the thing this replaces” as a deliverable, not an afterthought.

How to spot these patterns before they slow you down

If you can answer three questions honestly before deployment begins, you avoid roughly 80% of the delay we see in the field.

First: what exactly will we turn off after SASE is deployed? If the answer is “nothing,” you are adding a tool, not changing your architecture. Go back to the architecture plan.

Second: have we audited our identity provider in the last six months? If group memberships and permissions have not been reviewed, you will import problems directly into ZTNA. Clean identity data is the single biggest predictor of a smooth rollout.

Third: who owns the decommissioning schedule? If nobody has a name and a date next to each legacy component, those components will still be running a year from now. Name the owner. Set the dates.

The organisations that answer these three questions before they start tend to finish on time. The ones that skip them tend to call us six months later asking why everything is taking so long.

For teams evaluating vendors right now, the SASE vendor comparison framework covers what to look for beyond feature checklists, including how to assess whether a platform’s architecture actually supports the consolidation and simplicity that prevents these mistakes.

Frequently asked questions

How long should a SASE pilot phase last?

Two to four weeks for the initial pilot covering remote access for 50 to 100 users is typical for mid-market organisations. The pilot should include at least three to five applications, with device posture checks enabled from the start. Extending the pilot beyond six weeks usually means the team is avoiding a decision rather than gathering data.

Should we decommission our firewall before or after SASE rollout?

After. Run both in parallel during the migration, but with a firm decommissioning schedule. Most organisations keep perimeter firewalls for north-south traffic during the transition and retire them as SASE coverage expands to cover those functions. The danger is not running both temporarily. The danger is running both permanently.

What is the biggest time-waster in a SASE deployment?

Dirty identity data. Organisations that start migrating before cleaning up their identity provider spend weeks troubleshooting access issues that have nothing to do with the SASE platform. Stale group memberships, orphaned accounts, and vague role definitions create friction at every step. Budget at least a week for identity hygiene before the technical migration begins.

Do we need a dedicated project manager for SASE?

Not necessarily a full-time project manager, but someone needs to own the timeline, the decommissioning schedule, and the cross-team coordination. SASE touches networking, security, and identity, three domains that often sit in different teams. Without someone keeping all three aligned, decisions stall. For mid-market teams, this is often the IT manager or a senior engineer with a clear mandate.

Can we roll out SASE without disrupting daily operations?

Yes, if you take a phased approach. Start with remote access through ZTNA, then layer in web security, then add site connectivity through SD-WAN. Each phase delivers value independently while keeping existing infrastructure active as a fallback. The combination of a cloud-native architecture and a single management console makes phased rollout practical even for small teams. It is why we built Jimber to support exactly that pattern, component by component, without requiring a big-bang migration.

Ready to avoid the mistakes that slow down most SASE deployments? Book a demo and we will map a phased rollout that fits your timeline and your team.
