Your security dashboard shows 99.7% of threats blocked. Impressive number. But what about the other 0.3%? In an organisation with thousands of web requests per day, that small percentage translates to dozens of potential incidents slipping through.
Traditional Secure Web Gateway (SWG) metrics were designed for a simpler time. Traffic flowed through a central gateway, threats had recognisable signatures, and blocking known bad sites was enough. That world is gone. Attackers now use polymorphic malware, host payloads on legitimate cloud services, and spin up domains that live for hours before disappearing.
The metrics that made sense a decade ago now provide a false sense of security. This guide explains which measurements actually tell you something useful about your web security posture.
The problem with block rates
Security vendors love block rate statistics. Marketing materials promise 99.9% detection rates against known malware. Independent tests measure how many samples from a malware zoo get caught. These numbers look reassuring in a presentation.
They don’t reflect reality.
Block rates measure performance against yesterday’s threats. When a new phishing site launches, it has no reputation. When malware gets repackaged with different code, hash-based detection misses it. When attackers hide command-and-control traffic inside legitimate cloud services, domain filtering can’t help without breaking productivity.
The maths is brutal. A 99.5% detection rate sounds excellent until you calculate what the remaining 0.5% means at scale. Apply that miss rate to an organisation generating 100,000 web requests per day and up to 500 requests slip through with their risk misjudged. Every single day.
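The arithmetic above fits in a few lines. The request volume and detection rate below are the illustrative figures from this section, not measurements of any particular product:

```python
# Back-of-the-envelope residual-risk estimate for a stated detection rate.
# Illustrative figures: 100,000 requests/day screened at 99.5% accuracy.

def undetected_per_day(requests_per_day: int, detection_rate: float) -> int:
    """Requests per day that slip past a filter with the given detection rate."""
    return round(requests_per_day * (1 - detection_rate))

print(undetected_per_day(100_000, 0.995))  # 500 misjudged requests per day
```

Scaling the volume up or the detection rate down makes the residual number grow quickly, which is the point: the headline percentage hides an absolute count that matters operationally.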
This is before considering the detection gap for truly novel threats. Zero-day exploits, by definition, have no signature. Highly targeted attacks get crafted specifically to evade the filters your organisation uses. The threats that matter most are precisely the ones that block rates can’t measure.
False positives: the hidden cost nobody tracks
Chasing higher blockrates creates a second problem. When security tools become more aggressive, they start blocking legitimate traffic. A marketing team can’t access a competitor’s website. A developer gets cut off from a code repository. A sales rep loses access to a prospect’s portal.
Each false positive triggers a chain reaction. The user submits a helpdesk ticket. IT investigates. Someone adds an exception. The security team loses confidence in their rules. Users learn to request blanket whitelists.
Alert fatigue compounds the damage. When analysts see dozens of false positives daily, they start ignoring alerts. The real threat notification gets lost in the noise.
Most organisations don’t measure false positive rates systematically. They track the number of threats blocked but not the legitimate requests incorrectly flagged. This creates a blind spot. You’re optimising for a metric that looks good in reports while ignoring the one that determines whether your security actually works in practice.
Uncategorised sites: where attacks actually happen
A significant portion of new malware infections come from websites that haven’t been categorised yet. Traditional filtering forces a binary choice: block all uncategorised sites and frustrate users, or allow them and accept the risk.
Neither option works well. Blocking everything creates constant friction. Allowing everything creates vulnerability. Most organisations end up with an inconsistent policy that varies by department, location and whoever complained loudest.
This is where measuring the isolation rate becomes more meaningful than measuring the block rate. When uncategorised traffic gets rendered in an isolated container, users maintain access while the risk gets neutralised. The relevant metric shifts from “what percentage of uncategorised sites did we block” to “what percentage of uncategorised traffic runs in isolation.”
Mean time to contain: the metric that actually matters
Security operations teams track three time-based metrics. Mean Time to Detect (MTTD) measures how long before someone notices an incident. Mean Time to Contain (MTTC) measures how long to stop the bleeding. Mean Time to Recover (MTTR) measures the path back to normal operations.
In traditional architectures, these metrics are sequential. Detection must happen before containment can begin. This creates a window of exposure where attackers can move laterally, escalate privileges and exfiltrate data. Even with advanced EDR tools, the MTTC can range from minutes to hours depending on analyst availability and incident complexity.
The breakthrough with isolation-based security is structural. When web content executes in a disposable cloud container instead of on the endpoint, containment happens automatically. A user clicks a malicious link, the payload runs in the container, and the threat is contained from the first millisecond. No detection required. No analyst intervention needed.
The session ends, the container gets destroyed, and the malware disappears with it. MTTC drops to zero because containment is the default state, not a response to be triggered.
| Phase | Traditional response | Isolation-based response |
| --- | --- | --- |
| Initial access | Malicious code reaches endpoint | Code runs in remote container |
| Detection | Depends on signatures and behaviour analysis (minutes to days) | Not required for protection |
| Containment | Analyst isolates endpoint after detection (minutes to hours) | Immediate. Threat confined from start |
| Recovery | Device reimaging, backup restoration (hours to days) | Container destroyed automatically |
| Business impact | Potential data loss, lateral movement, downtime | None. Endpoint never compromised |
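The three time-based metrics can be computed directly from incident timestamps. The record layout and sample incidents below are hypothetical; adapt the field names to whatever your ticketing or SIEM system exports:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the attack started, was detected,
# was contained, and when normal operations resumed.
incidents = [
    {"start": datetime(2024, 5, 1, 9, 0),
     "detected": datetime(2024, 5, 1, 9, 45),
     "contained": datetime(2024, 5, 1, 11, 0),
     "recovered": datetime(2024, 5, 2, 9, 0)},
    {"start": datetime(2024, 6, 3, 14, 0),
     "detected": datetime(2024, 6, 3, 14, 10),
     "contained": datetime(2024, 6, 3, 14, 40),
     "recovered": datetime(2024, 6, 3, 18, 0)},
]

def mean_minutes(records, frm: str, to: str) -> float:
    """Average elapsed minutes between two phases across incidents."""
    return mean((r[to] - r[frm]).total_seconds() / 60 for r in records)

mttd = mean_minutes(incidents, "start", "detected")      # Mean Time to Detect
mttc = mean_minutes(incidents, "detected", "contained")  # Mean Time to Contain
mttr = mean_minutes(incidents, "contained", "recovered") # Mean Time to Recover
print(f"MTTD {mttd:.0f} min, MTTC {mttc:.0f} min, MTTR {mttr:.0f} min")
```

For web-borne threats handled in isolation, the "detected" and "contained" phases collapse into the moment of initial access, which is what drives the near-zero MTTC described above.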
What to measure instead
If block rates and traditional MTTC don’t tell the full story, what should you track?
Isolation coverage. What percentage of web traffic runs through isolation? This applies especially to uncategorised sites, newly registered domains and any destination where reputation data is limited. Higher isolation coverage means less reliance on detection accuracy.
User experience metrics. Security that frustrates users eventually gets bypassed. Track end-to-end latency for isolated sessions, session initialisation time, and bandwidth overhead. If users notice the security layer, adoption suffers.
False positive rate. Measure how often legitimate traffic gets incorrectly blocked. Track the volume of helpdesk tickets related to web access. Monitor how frequently exceptions need to be added. A declining false positive rate indicates improving accuracy.
Segmentation granularity. How much of your internal traffic is controlled and authenticated? Micro-segmentation limits lateral movement when something does go wrong. Measure east-west traffic visibility and the percentage of connections that pass through identity-based policy enforcement.
Operational efficiency. Count the hours spent on firewall rule management, appliance patching and incident investigation. Consolidated platforms reduce this overhead. The time recovered can go toward proactive threat hunting instead of reactive maintenance.
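Two of the metrics above, isolation coverage and false positive rate, can be derived from gateway logs. The record layout here (verdict and isolation flags) is an assumption for illustration; map the field names to whatever your gateway actually exports:

```python
# Sketch: deriving isolation coverage and false positive rate from
# web-gateway log records. Field names are hypothetical placeholders.

def isolation_coverage(records) -> float:
    """Share of all requests rendered in an isolated container."""
    return sum(1 for r in records if r["isolated"]) / len(records)

def false_positive_rate(records) -> float:
    """Share of blocked requests later confirmed to be legitimate."""
    blocked = [r for r in records if r["verdict"] == "blocked"]
    if not blocked:
        return 0.0
    return sum(1 for r in blocked if r["confirmed_legitimate"]) / len(blocked)

logs = [
    {"verdict": "allowed",  "isolated": False, "confirmed_legitimate": True},
    {"verdict": "isolated", "isolated": True,  "confirmed_legitimate": True},
    {"verdict": "blocked",  "isolated": False, "confirmed_legitimate": True},
    {"verdict": "blocked",  "isolated": False, "confirmed_legitimate": False},
]
print(f"isolation coverage: {isolation_coverage(logs):.0%}")    # 25%
print(f"false positive rate: {false_positive_rate(logs):.0%}")  # 50%
```

Trending both numbers over time is more informative than a single snapshot: coverage should rise as isolation policies expand, while the false positive rate should fall as rules are tuned.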
The financial case for different metrics
Cybersecurity gets treated as a cost centre. But measuring the right things reveals clear return on investment.
Reimaging costs. Cleaning and reinstalling an infected laptop takes IT time and costs user productivity. Industry estimates put this between €300 and €600 per incident, often more for complex infections. If browser isolation prevents endpoint infections, calculate the savings based on your historical reimaging frequency.
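The savings calculation suggested above is simple multiplication. The per-incident range comes from this section; the incident count is a hypothetical figure to be replaced with your own historical reimaging frequency:

```python
# Rough annual-savings estimate from avoided reimaging, using the
# €300-600 per-incident range cited above. The incident count of 40
# is a hypothetical placeholder, not a benchmark.

def reimaging_savings(incidents_prevented: int,
                      cost_low: float = 300.0,
                      cost_high: float = 600.0) -> tuple[float, float]:
    """Low and high annual-savings estimates from prevented infections."""
    return incidents_prevented * cost_low, incidents_prevented * cost_high

low, high = reimaging_savings(40)  # e.g. 40 avoided infections per year
print(f"Estimated annual savings: €{low:,.0f} to €{high:,.0f}")
```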
Tool consolidation. A unified platform replaces separate licences for VPN, standalone SWG, firewall appliances and point solutions. Fewer tools means lower licence costs and less management overhead.
Audit preparation. NIS2 requires documented security controls and incident response capabilities. Centralised logging and reporting from a single platform reduces the time spent assembling evidence for audits.
Practical examples
Manufacturing company with IT/OT convergence
Traditional metrics showed good blockrates for web traffic. But unmanaged industrial devices couldn’t run agents and sat on flat network segments. By measuring segmentation coverage instead, the security team identified that compromising one printer could enable lateral movement to production systems. Implementing inline isolation for agentless devices and tracking isolation rate per device category closed the gap.
Professional services firm with mobile workforce
The existing SWG showed excellent detection statistics. However, measuring false positive rate revealed that consultants working from client sites faced constant access problems. Tracking user experience metrics alongside security metrics led to a policy shift toward isolation for risky categories rather than blocking, reducing helpdesk tickets while maintaining protection.
Municipality with distributed locations
Security teams tracked blocked threats per site but had no visibility into containment speed. After a phishing incident where lateral movement took hours to stop, they prioritised MTTC as a metric. Implementing isolation-based controls brought theoretical MTTC to near zero for web-borne threats.
Security that measures what matters
The numbers on your dashboard only help if they measure the right things. Block rates tell you how well you’re fighting yesterday’s war. Isolation coverage tells you whether tomorrow’s unknown threats can reach your endpoints.
Jimber’s SASE platform combines Secure Web Gateway, Browser Isolation, ZTNA and SD-WAN in one console. Measurement happens across all components, providing visibility into the metrics that actually indicate security posture.
Book a demo to see how the platform makes real security measurable.
Frequently asked questions
Don’t higher block rates mean better security?
Not necessarily. Block rates measure performance against known threats. They don’t account for novel attacks, don’t reflect false positive impact, and can create a misleading sense of protection. Isolation rate is often more meaningful.
How does isolation affect user experience?
Modern isolation techniques like DOM mirroring deliver near-native browsing performance for most sites. Pixel streaming provides maximum security for high-risk destinations with slightly higher bandwidth use. Measuring latency and user satisfaction helps find the right balance.
What metrics should I report to leadership?
Focus on business impact. Track incidents that reached endpoints versus those contained in isolation. Report time and cost savings from reduced reimaging and tool consolidation. Show compliance readiness through centralised logging and audit evidence.
How does this relate to NIS2 compliance?
NIS2 requires proportionate security measures and incident response capabilities. Metrics like MTTC, isolation coverage and segmentation granularity demonstrate meaningful risk reduction. Centralised reporting supports audit requirements.
Can MSPs use these metrics across multiple customers?
Yes. Multi-tenant platforms enable consistent measurement across customer environments. Metrics like isolation rate and false positive rate can be benchmarked across the portfolio, identifying outliers that need attention.