How Cato Cloud Enables Site to Site WAN Connectivity (Not Just Point-to-Point Tunnels)
🕓 February 15, 2026

Every enterprise runs on connectivity. Whether it's a branch office syncing files with a data center, a regional office placing VoIP calls to headquarters, or a cloud-hosted application serving users across dozens of locations — all of it depends on reliable, secure, and performant site to site WAN connectivity.
For decades, organizations addressed this problem with MPLS circuits and point-to-point VPN tunnels — expensive, rigid, and operationally intensive architectures that were designed for a world where applications lived in one place and traffic was predictable. That world no longer exists.
Today, workloads span private data centers, public cloud environments, and SaaS platforms simultaneously. Users work from branches, campuses, home offices, and mobile devices. Traffic patterns are dynamic, application requirements are diverse, and the expectation of always-on connectivity is universal.
The modern answer to site to site WAN connectivity is a cloud-native, policy-driven architecture that automatically establishes secure tunnels between all sites, manages routing centrally, enforces traffic steering through granular network policy, and provides the flexibility to use multiple transport options — Cato Cloud backbone, Internet off-cloud, and MPLS — based on application requirements.
This guide explains exactly how site to site WAN connectivity works in the Cato Cloud, what transport options are available, how to steer traffic intelligently across them, and how to manage WAN connectivity policy with identity-based and application-aware controls.
Site to site WAN connectivity refers to the secure, routed network connectivity established between two or more physical or virtual locations — branch offices, data centers, cloud environments, or any combination — over a shared Wide Area Network (WAN).
Unlike remote access VPN, which connects individual users to a network, site to site WAN connectivity connects entire network environments to each other. Traffic between sites flows as if the networks are locally connected, regardless of the physical distance or the underlying transport technology carrying that traffic.
In a traditional architecture, site to site WAN connectivity required manually provisioned MPLS circuits, statically configured VPN tunnels between individual appliances, and complex routing protocols to propagate reachability information across the enterprise. Each new site meant new configuration, new circuits, new tunnel definitions, and new routing entries.
In a cloud-native WAN architecture like Cato Cloud, all of this is automated. Sites connect to the Cato Cloud through Cato Sockets, and the cloud manages routing, tunnel establishment, and reachability for all sites in a single, centrally managed routing context — from the moment a site completes basic provisioning.
After completing the basic provisioning process, sites and SDP (Software Defined Perimeter) users automatically establish secure connectivity to the Cato Cloud. From that point, all routing information for all sites and all SDP users is managed by the Cato Cloud in a shared routing context.
This shared routing context is the foundation of Cato's site to site WAN architecture. All routing information between sites and SDP users is maintained in a single routing table in the Cato Cloud, and this shared table enables layer-3 connectivity between every site and every user connected to the account — automatically, without manual tunnel configuration between individual sites.
The practical implication is significant. In a traditional architecture, connecting ten sites in a full mesh requires 45 individual VPN tunnels, each manually configured and maintained. In Cato Cloud, all ten sites connect to the cloud, and full layer-3 reachability between all of them is established automatically through the shared routing context. Adding an eleventh site requires provisioning that site — the connectivity to all existing sites follows automatically.
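The tunnel-count arithmetic behind this claim is worth making explicit: a manually configured full mesh needs n(n-1)/2 tunnels, while a cloud-hub model needs only one connection per site. A quick illustrative calculation:

```python
def full_mesh_tunnels(n: int) -> int:
    """Tunnels required for a manually configured full mesh of n sites."""
    return n * (n - 1) // 2

def cloud_connections(n: int) -> int:
    """Connections required when each site connects only to the cloud."""
    return n

for n in (10, 11, 50):
    print(f"{n:>2} sites: full mesh = {full_mesh_tunnels(n):4} tunnels, "
          f"cloud model = {cloud_connections(n):2} connections")
# 10 sites: 45 tunnels vs 10; 11 sites: 55 vs 11; 50 sites: 1225 vs 50
```

The gap grows quadratically: at 50 sites, the manual mesh is already 1,225 tunnels, which is why adding an eleventh site to a traditional mesh means touching every existing site, while in the cloud model it means provisioning exactly one.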
This model also extends naturally to SDP users — remote and mobile users who connect to the Cato Cloud through the Cato Client. Once connected, SDP users participate in the same routing context as all sites, with WAN Firewall policy controlling what they can reach, using the same identity-based and application-aware controls that govern site to site traffic.
Cato Cloud supports three distinct transport options for site to site WAN connectivity. The right transport for any given traffic flow depends on performance requirements, cost considerations, existing infrastructure, and application characteristics. All three can coexist, and the Network policy controls which traffic uses which transport.
The Cato Cloud backbone is the default WAN transport for all Socket sites. When a site connects to the Cato Cloud, all WAN traffic is routed through the Cato PoP infrastructure by default — benefiting from the global backbone's optimized routing, built-in security inspection, QoS enforcement, and performance characteristics.
This is the recommended transport for most traffic, particularly for Internet-bound traffic, cloud application access, and any flows where security inspection, QoS, and centralized policy enforcement are required. The Cato backbone provides consistent performance and visibility that commodity Internet and legacy MPLS circuits cannot match at equivalent cost.
The off-cloud transport enables direct site to site tunnel connectivity over the public Internet, bypassing the Cato Cloud backbone for specific traffic flows. This creates a Socket-to-Socket direct VPN mesh using DTLS tunnels — the same encryption technology used for Socket-to-PoP connections.
A critical feature of the off-cloud transport is automatic discovery. All Socket sites are preconfigured for off-cloud transport on precedence 1 and 2 links by default. Sockets automatically discover each other over the Internet transport and create a full mesh topology without any manual tunnel configuration. No static peer definitions, no pre-shared key exchanges, no manual IP configuration — the mesh builds itself.
The off-cloud transport is ideal for high-volume, latency-tolerant traffic between sites in the same geographic region where the Cato backbone path adds unnecessary latency or where direct Internet connectivity between sites is faster than the backhauled PoP path. A typical use case is large-scale backup traffic between branch sites and a regional data center — traffic that is high in volume, tolerant of variable latency, and does not require the full security inspection stack applied to production application traffic.
Administrators retain full control over which sites participate in the off-cloud mesh. Off-cloud transport can be disabled for specific sites or for specific links within a site, preventing those sites from joining the automatic discovery and mesh-building process. This is useful for sites with strict security requirements where all traffic must traverse the Cato security stack, or for sites where direct Internet connectivity is unavailable or unreliable.
The Alternative WAN transport — commonly used to carry existing MPLS infrastructure — provides a third path for site to site connectivity. Organizations that have invested in MPLS circuits and want to preserve that investment while migrating to a cloud-native WAN architecture can integrate their MPLS transport into the Cato Network policy alongside the Cato backbone and Internet off-cloud paths.
Like the off-cloud transport, MPLS connectivity in Cato uses automatic discovery. Each site configured for the MPLS transport automatically discovers all remote sites with a similar configuration, and point-to-point VPN tunnels are automatically established between all relevant Socket sites over the MPLS transport — again, without manual tunnel configuration between individual site pairs.
The MPLS transport is particularly well-suited for latency-sensitive, high-priority traffic that benefits from the deterministic performance characteristics of private MPLS circuits. Voice over IP (VoIP), real-time video conferencing, and latency-sensitive financial applications are typical workloads that organizations prefer to route over MPLS when available.
Connectivity between sites in the Cato Cloud is not open by default. The WAN Firewall provides the policy layer that controls which traffic is permitted or blocked between sites, users, hosts, and applications — replacing the implicit trust of traditional site to site VPN with an identity-based and application-aware access control model.
This is a significant architectural distinction from legacy WAN approaches. In a traditional MPLS or VPN-based WAN, once a tunnel is established between two sites, all traffic between those sites is typically permitted. The network topology itself implies access. In Cato Cloud, establishing site to site connectivity through the shared routing context does not imply permission. The WAN Firewall must explicitly permit traffic for communication to occur.
The WAN Firewall policy supports extremely granular rule definitions. Rules can be written to permit or block traffic based on source and destination site, user identity, user group membership, specific hosts or host groups, application identity (not just port and protocol), time of day, and risk signals. This granularity enables organizations to enforce least-privilege access principles at the WAN layer — the same principles that Zero Trust applies to application access.
A branch office can be permitted to reach only specific services in the data center rather than the entire data center network segment. A contractor's device can be permitted to communicate with specific application servers but blocked from reaching infrastructure systems. A guest network segment at a branch can be isolated from all WAN traffic entirely while still receiving Internet access through the Cato Cloud. All of this is expressed in WAN Firewall rules, centrally managed, and applied consistently across all sites.
Beyond the WAN Firewall, which controls whether traffic is permitted at all, the Network policy controls how permitted traffic is routed — specifically, which WAN transport carries which traffic flows.
The Cato Cloud backbone is the default transport for all WAN traffic. Network policy rules can override this default for specific traffic flows, directing them to the off-cloud Internet transport or the MPLS Alternative WAN transport instead.
Network rules can be scoped to any combination of sites, applications, user groups, hosts, protocols, and ports — giving network architects precise control over traffic engineering across the WAN. The rules are evaluated in order, and the first matching rule determines the transport for that traffic flow.
Two practical examples illustrate how this works in enterprise environments.
Example 1: Routing backup traffic off-cloud. A data center site performs nightly backups from branch sites using SMBv3. This backup traffic is high-volume and latency-tolerant, and routing it through the Cato backbone adds cost and consumes backbone bandwidth unnecessarily. A Network rule can be defined to steer all SMBv3 traffic between branch sites and the data center site to the off-cloud transport — routing it directly over the Internet via the automatic Socket-to-Socket DTLS mesh. Backbone bandwidth is preserved for latency-sensitive production traffic, and backup performance is unaffected.
Example 2: Routing VoIP over MPLS. The organization runs VoIP across all Socket sites and has existing MPLS circuits between major locations. VoIP traffic is highly sensitive to latency, jitter, and packet loss — characteristics that MPLS handles better than commodity Internet. A Network rule can be defined to steer all VoIP traffic between Socket sites to the Alternative WAN (MPLS) transport, ensuring voice quality is maintained using the deterministic performance of the private circuit while all other WAN traffic continues to flow through the Cato backbone.
These two rules can coexist in the same Network policy, each applying to their specific traffic types while the default Cato Cloud transport handles everything else. The result is a multi-transport WAN architecture where traffic is automatically directed to the most appropriate path based on its characteristics — without manual route configuration on individual appliances.
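Transport selection follows the same ordered, first-match evaluation, with the Cato backbone as the implicit default. The sketch below models the two examples above; the rule predicates and transport names are illustrative assumptions, not Cato configuration syntax:

```python
def select_transport(rules, site_pair, application):
    """Ordered first-match evaluation; unmatched flows use the default
    Cato Cloud backbone transport."""
    for match, transport in rules:
        if match(site_pair, application):
            return transport
    return "cato-backbone"

# Hypothetical Network rules mirroring Examples 1 and 2
network_rules = [
    # Example 1: SMBv3 backup traffic to/from the data center goes off-cloud
    (lambda pair, app: "datacenter" in pair and app == "SMBv3", "off-cloud"),
    # Example 2: VoIP between any Socket sites rides the MPLS Alt WAN
    (lambda pair, app: app == "VoIP", "alt-wan-mpls"),
]

print(select_transport(network_rules, {"branch-12", "datacenter"}, "SMBv3"))  # off-cloud
print(select_transport(network_rules, {"branch-12", "branch-7"}, "VoIP"))     # alt-wan-mpls
print(select_transport(network_rules, {"branch-12", "branch-7"}, "HTTPS"))    # cato-backbone
```

Because evaluation is ordered and falls through to the backbone, adding or removing a steering rule never strands traffic: anything not explicitly matched simply takes the default path.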
Beyond transport selection, the Network policy provides additional controls that optimize WAN traffic behavior across all transport options.
QoS Bandwidth Management. Quality of Service (QoS) rules can be applied to WAN traffic to prioritize critical applications and ensure that high-priority workloads receive the bandwidth they require even when links are congested. QoS policies in Cato are defined centrally and applied consistently across all sites — replacing the site-by-site QoS configuration required in traditional WAN architectures.
TCP Optimization. TCP optimization settings can be applied to specific WAN traffic flows to improve throughput and reduce effective latency on long-distance or high-latency WAN paths. TCP acceleration is particularly effective for bulk data transfers and application protocols that are sensitive to round-trip time, improving application performance without changes to the applications themselves.
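The reason round-trip time dominates TCP performance is the window/RTT bound: a single flow cannot exceed its window size divided by the path RTT, regardless of link capacity. A back-of-the-envelope calculation with an assumed 64 KB window shows how sharply throughput collapses on long paths:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-flow TCP throughput: window / RTT, in Mbps."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

# The same 64 KB window across representative RTTs
for rtt in (5, 50, 200):  # ms: metro, cross-country, intercontinental
    print(f"RTT {rtt:>3} ms -> {max_tcp_throughput_mbps(65536, rtt):7.2f} Mbps")
# ~104.9 Mbps at 5 ms drops to ~2.6 Mbps at 200 ms
```

This is the effect TCP acceleration targets: by terminating or tuning connections closer to the endpoints (shortening the effective RTT) or enlarging the usable window, bulk transfers over long WAN paths recover throughput without any change to the application.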
These capabilities — transport selection, QoS, and TCP optimization — are all expressed in the same Network policy framework and managed through the same centralized interface, providing a unified traffic engineering capability across the entire enterprise WAN.
Zero-configuration mesh networking. Sites connect to the Cato Cloud and automatically gain full layer-3 reachability to all other connected sites through the shared routing context. Off-cloud and MPLS meshes build themselves through automatic discovery. Adding a new site never requires configuring tunnels to existing sites.
Multi-transport flexibility. Organizations are not locked into a single WAN transport. The Cato backbone, Internet off-cloud, and MPLS can all coexist, with Network policy directing each traffic flow to the most appropriate path based on application requirements, cost considerations, and performance characteristics.
Identity-based and application-aware policy. The WAN Firewall replaces implicit trust between sites with explicit, granular policy — the same Zero Trust principles applied at the WAN layer that ZTNA applies at the application layer.
Centralized management and visibility. All sites, all transports, all policies, and all traffic metrics are managed through a single interface. There is no per-site appliance configuration, no distributed policy management, and no need to reconcile settings across multiple management consoles.
Integrated security for all WAN traffic. Traffic flowing through the Cato backbone is subject to the full Cato security stack — Threat Prevention, CASB, DLP — without requiring separate security appliances at each site. Security is delivered as a cloud service, applied consistently to all WAN traffic.
Scalability without complexity. Whether an organization has 5 sites or 500, the architecture scales without a corresponding increase in management complexity. The shared routing context, automatic discovery, and centralized policy model remain the same regardless of network size.
The era of manually provisioned MPLS circuits and point-to-point VPN tunnels as the foundation of enterprise WAN is ending — not because the technology failed, but because the enterprise environment it was designed for no longer exists.
Modern enterprise networks span dozens or hundreds of locations, multiple cloud environments, and a distributed workforce that expects the same application performance and security regardless of where they connect from. Meeting those expectations requires a WAN architecture that is automated, policy-driven, multi-transport, and centrally managed.
Cato Cloud delivers exactly that. Sites connect, routing is automatic, meshes build themselves, traffic is steered intelligently across Cato backbone, Internet off-cloud, and MPLS transports based on application requirements, and the WAN Firewall enforces identity-based and application-aware access control across every site, every user, and every flow.
For network architects evaluating enterprise WAN options, or for organizations currently managing the operational burden of traditional MPLS and multi-vendor VPN infrastructure, the path to a simpler, more secure, and more scalable WAN starts with a cloud-native architecture — and Cato Cloud's site to site connectivity model is one of the most mature implementations available today.
Site to site WAN connectivity refers to the secure, routed network connections between multiple physical or virtual locations — such as branch offices, data centers, and cloud environments — over a Wide Area Network. It enables traffic to flow between these locations as if they were locally connected, regardless of distance or transport technology.
After basic provisioning, Cato Sockets at each site automatically connect to the Cato Cloud and join a shared routing context. This shared routing table provides automatic layer-3 reachability between all connected sites and SDP users without requiring manual tunnel configuration between individual site pairs.
The off-cloud transport enables direct Socket-to-Socket VPN tunnels over the public Internet, bypassing the Cato Cloud backbone. Sockets automatically discover each other and build a full DTLS-encrypted mesh topology without manual configuration. It is ideal for high-volume, latency-tolerant traffic between nearby sites, such as backup operations.
Alternative WAN (Alt WAN) is the Cato transport option for existing MPLS circuits or other private WAN links. Sites configured for Alt WAN automatically discover each other and establish point-to-point DTLS tunnels over the MPLS transport, enabling organizations to preserve their MPLS investment while integrating it into Cato's centralized policy framework.

Surbhi Suhane is an experienced digital marketing and content specialist with deep expertise in Getting Things Done (GTD) methodology and process automation. Adept at optimizing workflows and leveraging automation tools to enhance productivity and deliver impactful results in content creation and SEO optimization.