
Submarine cable cuts in the Red Sea pushed latency from 50ms to 300ms+ across six ISPs. Here's how I redesigned SLA checks in FortiGate SD-WAN to keep links stable and reliable.
Normally, we keep internet latency to Google under 50ms across the SD-WAN fabric. That's the SLA target.
But when the Red Sea submarine cables went down (SMW4, IMEWE, EIG, FALCON), latency jumped past 300ms on all six ISPs at once. Links started flapping, users complained of "slow internet," and the SD-WAN was marking circuits inactive every few minutes.
This post covers:
- how to tell a regional upstream outage from a single-ISP problem,
- why 1.1.1.1 isn’t a good SLA target,
- and how multi-target SLA checks kept the links stable.

The outage was not local. All six ISPs spiked to the same ~300ms latency to Google. This was a regional upstream event, not a provider issue.
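You can verify this from the FortiGate itself: the CLI shows live probe results per SD-WAN member. On FortiOS 7.x the command tree is `diagnose sys sdwan`; on older 6.x builds it was `diagnose sys virtual-wan-link`. The health-check name below is illustrative:

```
# Live latency / jitter / packet-loss per SD-WAN member for one health check
diagnose sys sdwan health-check status Google_DNS

# One-off tests of the real path from the firewall
execute ping 8.8.8.8
execute traceroute 8.8.8.8
```

When every member reports roughly the same inflated latency, the problem is upstream of all of them.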

With latency thresholds breached, the FortiGate started failing SLA checks, causing links to flap between active/inactive. That made the problem worse.
To stabilize the fabric, I had to relax the SLA criteria: raise the thresholds so the ~300ms latency no longer tripped the checks, and rework the health checks so that latency alone couldn’t take a link out of service. This way, links stayed online, even if latency was poor.
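Here’s a rough sketch of what that looks like in FortiOS 7.x CLI. The health-check name, member IDs, interval, and timer values are my own illustrations, not the exact production config; the fail/recovery timers are an extra damping measure against flapping:

```
config system sdwan
    config health-check
        edit "Google_DNS"
            set server "8.8.8.8"
            set protocol ping
            set interval 1000            # probe every 1000 ms
            set failtime 10              # failed probes before a member is marked dead
            set recoverytime 10          # good probes before it is marked alive again
            set members 1 2 3 4 5 6      # all six ISP members (IDs illustrative)
            config sla
                edit 1
                    set link-cost-factor latency jitter packet-loss
                    set latency-threshold 400    # well above the ~300ms event
                    set jitter-threshold 60
                    set packetloss-threshold 5
                next
            end
        next
    end
end
```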
I tested Cloudflare 1.1.1.1 as an SLA target. It showed just 3ms latency, which looked perfect, but users were still complaining. Why? Because Cloudflare has local PoPs in Pakistan, so probes terminate inside the country. Google’s DNS (8.8.8.8) doesn’t have local nodes; traffic hairpins through India, the UAE, or Singapore, which reflects the real international path. That’s why Google is the better SLA target for measuring true internet performance, while Cloudflare can give a false sense that everything is fine.
So while Cloudflare showed 3ms, users still felt the slowdown. The probe hid the upstream issue, which is why Cloudflare is not a good SLA choice for monitoring real international performance.
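You can see the anycast effect directly from the firewall; the numbers will vary by region and provider, but the gap is the point:

```
# Terminates at a local Cloudflare PoP: looks healthy even mid-outage
execute ping 1.1.1.1

# No local Google node: the reply rides the international path
execute ping 8.8.8.8
execute traceroute 8.8.8.8   # shows where traffic leaves the country
```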
Instead of relying on one SLA check, I built three:
1. 8.8.4.4 with latency, jitter, and packet-loss thresholds.
2. google.com, with only the packet-loss threshold active.
3. 8.8.8.8 with latency, jitter, and packet-loss thresholds.
Each ISP member participates in all three SLA checks. This way, if one target is unreachable or skewed, others balance it out.
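Sketched in FortiOS 7.x syntax, the two remaining health checks and a service rule that consumes all three might look like this. The 8.8.8.8 check (“Google_DNS”) is the one sketched earlier; the names, rule ID, and member IDs here are illustrative, and the rule’s src/dst match criteria are omitted for brevity:

```
config system sdwan
    config health-check
        edit "Google_DNS_2"
            set server "8.8.4.4"
            set protocol ping
            set members 1 2 3 4 5 6
            config sla
                edit 1
                    set link-cost-factor latency jitter packet-loss
                    set latency-threshold 400
                    set jitter-threshold 60
                    set packetloss-threshold 5
                next
            end
        next
        edit "Google_Web"
            set server "google.com"
            set protocol ping
            set members 1 2 3 4 5 6
            config sla
                edit 1
                    set link-cost-factor packet-loss   # loss only; latency deliberately ignored
                    set packetloss-threshold 5
                next
            end
        next
    end
    config service
        edit 4
            set mode sla
            config sla
                edit "Google_DNS"
                    set id 1
                next
                edit "Google_DNS_2"
                    set id 1
                next
                edit "Google_Web"
                    set id 1
                next
            end
            set priority-members 1 2 3 4 5 6
        next
    end
end
```

Because the google.com check only fails on packet loss, a pure latency event can never knock all three SLAs out at once.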
To confirm, I ran `diagnose sys sdwan service 4 7`. The output showed all six ISPs alive and validated across the SLA probes.
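To watch how the probes behaved over time rather than at a single instant, the SLA log is handy; the health-check name and member ID here are illustrative:

```
# Recent probe history (latency/jitter/loss) for one member of one health check
diagnose sys sdwan sla-log Google_DNS 1
```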

- End-users saw “slow internet.”
- The dashboard showed 300ms to Google.
- The real issue was broken cables under the Red Sea.
With adjusted SLA policies and multi-target monitoring, the SD-WAN stayed stable and services continued running, even while the internet backbone itself was under repair.