Microsoft Teams SBC-as-a-Service: TLS, FQDN, DNS, and Auto-Failover Explained
Microsoft Teams Direct Routing requires an SBC — a Session Border Controller — sitting between your tenant and the PSTN carrier. For most teams it's the single most painful part of a Teams Phone rollout: cost, certificates, FQDNs, DNS, redundancy, and patching. SBC-as-a-Service collapses all of it into a one-line PowerShell command. Here's exactly how it works, where the gotchas hide, and how DIDHub does it.
2026-05-14 · 11 min read · Microsoft Teams
By Daria Kesselman · DIDHub editorial
What an SBC actually is
A Session Border Controller is a SIP-aware proxy that sits at the edge of two networks — your phone system on one side, an upstream carrier on the other — and translates between them. It's called a “border controller” because it lives on the border, controlling everything that crosses it: signaling (SIP), media (RTP/SRTP), authentication, encryption, codec negotiation, NAT traversal, fraud limiters, call rate caps, topology hiding, and failover.
For Microsoft Teams Direct Routing specifically, an SBC has one job: be the carrier-side gateway Teams talks to. Teams won't connect directly to a SIP trunk — it requires a certified SBC in between, speaking SIP-over-TLS on a specific port (5061), presenting a valid TLS certificate, with the right FQDN, the right cipher suite, and the right SIP signaling profile. Get any of those wrong and Teams refuses to send calls.
Historically this has been a hardware appliance (AudioCodes, Ribbon SBC, Cisco CUBE) or a software product (AnyNode, Patton, Avaya) deployed on-premises or in a VM in Azure. Both work; both are also expensive, slow to deploy, and fragile in ways that surprise teams who haven't run carrier infrastructure before.
Why Teams Direct Routing makes the SBC problem hard
The painful bit isn't the SIP signaling — it's the certificate chain and the FQDN binding. Microsoft's requirements for an SBC are tight; miss any one of them and Teams rejects your trunk:
- Public DNS FQDN. The SBC must be reachable at a publicly-resolvable fully-qualified domain name, e.g. sbc.acme.com. Teams looks up this FQDN and connects directly — no NAT hole-punching, no STUN, no signaling proxy from Microsoft's side. The hostname is what Teams trusts.
- Valid public TLS certificate. The certificate must be issued by a Microsoft-trusted public CA (DigiCert, GlobalSign, Sectigo, GoDaddy; Let's Encrypt is also accepted as of 2024). It must cover the exact FQDN above — sbc.acme.com — either via SAN or as the primary CN. Self-signed certificates are rejected.
- Tenant domain ownership. The FQDN's parent domain (in this case acme.com) must already be verified in your Microsoft 365 tenant via the standard MX/TXT verification process. If you haven't added acme.com to your tenant, Microsoft won't accept sbc.acme.com as a Direct Routing gateway.
- SIP profile. Teams uses a specific SIP signaling profile (RFC 3261-compliant but with Microsoft-specific extensions for media bypass, Local Media Optimization, codec preference). The SBC has to speak this dialect or calls fail at the SIP INVITE.
- SRTP for media. Microsoft Teams requires media encryption over SRTP. The SBC negotiates SRTP keys via SDP and decrypts/re-encrypts at the boundary.
- Certificate auto-renewal. Certificates expire. Most public CAs issue 90-day or 1-year certs. If your cert lapses, every Teams call breaks at exactly midnight on expiry day. The renewal process has to be automated — manual renewal is a known cause of 2am outages.
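The FQDN-coverage rule is where wildcard certificates trip people up. Here's a minimal sketch of the name-matching logic a TLS client applies to SAN entries — a simplified illustration, not Microsoft's actual validation code:

```python
# Toy version of TLS hostname matching against a certificate's SAN list.
# A wildcard entry covers exactly one DNS label: *.acme.com matches
# sbc.acme.com but not a.b.acme.com, and not acme.com itself.

def san_matches(fqdn, san_entries):
    """Return True if any SAN entry covers the FQDN."""
    fqdn = fqdn.lower().rstrip(".")
    for entry in san_entries:
        entry = entry.lower().rstrip(".")
        if entry == fqdn:
            return True
        if entry.startswith("*."):
            suffix = entry[2:]
            labels = fqdn.split(".")
            # Wildcard replaces only the leftmost label.
            if len(labels) >= 2 and ".".join(labels[1:]) == suffix:
                return True
    return False

print(san_matches("sbc.acme.com", ["*.acme.com"]))       # True
print(san_matches("a.b.acme.com", ["*.acme.com"]))       # False
print(san_matches("sbc.acme.com", ["sbc.example.net"]))  # False
```

This is why a cert for acme.com alone won't do: the SAN set must cover the exact gateway FQDN.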
For a company doing this in-house, this means: buying an SBC (10K-50K USD for hardware, 3-15K/yr for software), provisioning compute (on-prem or in Azure), buying or generating a TLS certificate, setting up auto-renewal, configuring the SIP signaling profile to match Microsoft's expectations, registering the FQDN in your tenant, opening firewall ports, and staying on top of patches as Microsoft revs the spec. Roughly 4-12 weeks for a first-time team to get to a working production call.
SBC-as-a-Service: what changes
SBC-as-a-Service is a managed offering: the carrier (in this case DIDHub) operates the SBC fleet, provisions per-tenant certificates, manages the FQDN/DNS, handles SIP-profile compliance with Microsoft's spec, and exposes a single PowerShell command that wires Teams to it.
The customer never sees the SBC. They don't buy hardware. They don't install software. They don't generate certificates. They don't open firewall ports. They don't patch anything. The cost moves from a capex spike (10K-50K up front) to a flat per-month fee bundled with the DID rental. And the time-to-first-call drops from weeks to about 15 minutes.
The hard part isn't the SIP routing — carrier-grade Kamailio/OpenSIPS plus a Microsoft-compatible signaling profile is well-understood infrastructure. The hard part is the certificate and FQDN story: doing per-tenant TLS cleanly, at scale, with auto-renewal, without forcing each customer to give up control of their own domain. That's what we want to unpack.
The two FQDN options — and why both exist
Every Teams Direct Routing tenant needs a unique FQDN for its SBC connection. DIDHub offers two ways to handle this. Both end at the same SIP infrastructure; they differ in who owns the hostname and where the cert lives.
Option A: DIDHub-hosted FQDN — company.teams.didhub.io
The simplest path. You pick a subdomain — say acme.teams.didhub.io — and DIDHub provisions everything: DNS record, TLS certificate (Let's Encrypt or DigiCert depending on plan), routing rules. From your side, you run one PowerShell command:
```powershell
New-CsOnlinePSTNGateway -Fqdn "acme.teams.didhub.io" `
  -SipSignalingPort 5061 `
  -MaxConcurrentSessions 1000 `
  -Enabled $true
```
That's it. Teams now routes outbound PSTN calls through DIDHub's SBC fleet via TLS to acme.teams.didhub.io:5061. Inbound calls from your DIDs ring Teams users. Certificate renewal is fully automatic; you never touch it.
The catch: the FQDN is on DIDHub's domain, not yours. From a Microsoft Teams admin's point of view this is transparent — users don't see the FQDN. But some IT teams prefer their own domain for audit, brand consistency, or because their security baseline requires it.
Option B: Bring Your Own Domain — sbc.acme.com
You pick a subdomain on your own domain — sbc.acme.com — and let DIDHub manage the underlying resolution. Two sub-patterns here, depending on how much DNS control you want to delegate:
B1. CNAME delegation (recommended). You add a single CNAME record in your DNS: sbc.acme.com → acme.teams.didhub.io. DIDHub handles certificate issuance for sbc.acme.com via an automated ACME flow that uses DNS-01 validation against the CNAME chain. Renewal is fully automatic; you never touch the cert or your DNS again after the initial CNAME.
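In zone-file terms, the B1 record looks like this (TTL illustrative):

```
; In acme.com's zone — the one record the customer adds
sbc.acme.com.    3600  IN  CNAME  acme.teams.didhub.io.
```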
B2. NS delegation (large enterprise). You delegate sbc.acme.com as a subdomain entirely — via NS records pointing at DIDHub's authoritative nameservers. DIDHub then publishes A/AAAA records for the SBC IPs directly, with health-check-based failover. This is the model for customers running multi-region active-active failover where DNS-level steering matters more than the slight added setup.
In both BYO patterns, the certificate is issued for sbc.acme.com (your domain, your FQDN) with DIDHub managing the issuance lifecycle. From Microsoft's side it's indistinguishable from an SBC you operate yourself — the FQDN is on your domain, the cert is valid for your domain, the tenant verifies your domain. The operational burden is just gone.
Setup PowerShell looks identical to Option A — just the FQDN value changes:
```powershell
New-CsOnlinePSTNGateway -Fqdn "sbc.acme.com" `
  -SipSignalingPort 5061 `
  -MaxConcurrentSessions 1000 `
  -Enabled $true
```
How DIDHub solved the certificate problem
The hard engineering question behind SBC-as-a-Service is this: how do you issue, renew, and rotate TLS certificates for hundreds or thousands of customer-owned FQDNs — without giving DIDHub control of customer domains, without customers having to upload private keys, and without hand-rolling cert lifecycle for each customer?
The answer is the ACME protocol (Automatic Certificate Management Environment, RFC 8555) running DNS-01 challenges, plus a delegated-validation pattern via CNAME. The flow:
- Customer adds one CNAME in their DNS: sbc.acme.com → acme.teams.didhub.io. Once.
- DIDHub's certificate orchestrator wakes up and asks Let's Encrypt (or DigiCert, depending on the customer tier) for a certificate covering sbc.acme.com.
- The CA issues a DNS-01 challenge: “prove you control _acme-challenge.sbc.acme.com.”
- The validation lookup for the challenge name follows CNAMEs into DIDHub's zone — and DIDHub controls the target. DIDHub publishes the TXT record on its own domain, the CA validates, the certificate issues. The customer's domain is never touched after the initial CNAME.
- The cert is installed on the SBC fleet, served via SNI for sbc.acme.com, and the orchestrator schedules the next renewal 60 days out (well before the 90-day Let's Encrypt expiry).
- Renewal happens silently every ~60 days. The customer never sees it.
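DNS-01 validation (RFC 8555) follows CNAMEs when resolving the challenge name, which is what makes delegation work. A toy resolver over mock zone data shows the idea — the record layout here is an assumption for illustration, not DIDHub's actual zone:

```python
# Toy delegated DNS-01 validation: the CA resolves the challenge name,
# following CNAMEs, and checks the TXT value it finds. Zone data,
# names, and the token are all illustrative.

ZONES = {
    # Customer zone: the challenge name is aliased into the provider's zone.
    "_acme-challenge.sbc.acme.com": ("CNAME", "_acme-challenge.acme.teams.didhub.io"),
    # Provider zone: the provider publishes the challenge token here.
    "_acme-challenge.acme.teams.didhub.io": ("TXT", "tok-3f9a"),
}

def resolve_txt(name, max_hops=8):
    """Follow CNAMEs from `name` until a TXT record (or nothing) is found."""
    for _ in range(max_hops):  # hop limit guards against CNAME loops
        rtype, value = ZONES.get(name, (None, None))
        if rtype == "TXT":
            return value
        if rtype != "CNAME":
            return None
        name = value
    return None

# The CA compares the resolved TXT with the token it handed the client.
print(resolve_txt("_acme-challenge.sbc.acme.com"))  # tok-3f9a
```

The customer's zone never holds the token itself — only the alias — which is why issuance and renewal can repeat forever without touching customer DNS.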
This pattern is widely used by CDNs (Cloudflare, Fastly), ACME-native TLS terminators (Caddy with on-demand TLS, for example), and a handful of carrier SBC-as-a-Service offerings. The novelty isn't the technique; it's applying it cleanly to Microsoft Teams Direct Routing, which historically expected an enterprise-controlled cert lifecycle and didn't play well with automated 90-day issuance.
Adding the FQDN to Microsoft — the actual steps
Regardless of which FQDN option you pick (DIDHub-hosted or BYO), the Microsoft side is the same five steps, each a line or two of PowerShell. The whole sequence takes about 10 minutes once the prerequisites are in place.
Prerequisite: the FQDN's parent domain must already be verified in your tenant. If your FQDN is sbc.acme.com, then acme.com must be a verified domain in Microsoft 365 (Admin center → Settings → Domains). This is a one-time per-domain step. For DIDHub-hosted FQDNs (acme.teams.didhub.io), DIDHub manages the parent-domain verification once for all customers — you don't need to verify anything.
```powershell
# Step 1 — Connect to the Teams PowerShell module
Connect-MicrosoftTeams

# Step 2 — Register the SBC as a PSTN gateway
New-CsOnlinePSTNGateway -Fqdn "sbc.acme.com" `
  -SipSignalingPort 5061 `
  -MaxConcurrentSessions 1000 `
  -Enabled $true

# Step 3 — Create a PSTN usage record (a "namespace" for routing)
Set-CsOnlinePstnUsage -Identity Global -Usage @{Add="DIDHub-Global"}

# Step 4 — Create a voice route that uses the gateway
New-CsOnlineVoiceRoute -Identity "DIDHub-AllCalls" `
  -NumberPattern ".*" `
  -OnlinePstnGatewayList "sbc.acme.com" `
  -OnlinePstnUsages "DIDHub-Global"

# Step 5 — Create and assign a voice routing policy to users
New-CsOnlineVoiceRoutingPolicy "DIDHub-Users" -OnlinePstnUsages "DIDHub-Global"
Grant-CsOnlineVoiceRoutingPolicy -Identity "alice@acme.com" -PolicyName "DIDHub-Users"
```
That's the full Teams side. Step 5 grants the policy to one user; in production you'd assign it tenant-wide with Grant-CsOnlineVoiceRoutingPolicy -Global, or per group. The DID assignment (which phone number rings whom) is a separate step via Set-CsPhoneNumberAssignment.
Within ~60 seconds of step 2, Teams attempts a TLS handshake to sbc.acme.com:5061. If the cert validates and the SBC responds with a healthy OPTIONS reply, Microsoft marks the gateway as up and you can place a test call.
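Microsoft doesn't publish the exact probe thresholds behind that up/down decision, but the behavior reduces to a small state machine: a gateway goes down after consecutive missed OPTIONS probes and comes back up after consecutive healthy replies. A toy tracker with assumed threshold values:

```python
# Toy OPTIONS-probe health tracking. A gateway is marked down after N
# consecutive missed probes and up again after M healthy replies.
# The thresholds are illustrative, not Microsoft's actual values.

class GatewayHealth:
    def __init__(self, down_after=3, up_after=2):
        self.down_after = down_after
        self.up_after = up_after
        self.misses = 0
        self.hits = 0
        self.up = False

    def record_probe(self, got_200_ok):
        """Feed one OPTIONS probe result; return the current up/down state."""
        if got_200_ok:
            self.hits += 1
            self.misses = 0
            if self.hits >= self.up_after:
                self.up = True
        else:
            self.misses += 1
            self.hits = 0
            if self.misses >= self.down_after:
                self.up = False
        return self.up

gw = GatewayHealth()
for ok in (True, True, False, False, False):
    state = gw.record_probe(ok)
print(state)  # False — three consecutive missed probes marked it down
```

The same logic, run from DIDHub's side against its own SBC nodes, drives the DNS-layer failover described below.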
The redundancy story — multi-PoP with auto-failover
A single SBC is a single point of failure. Microsoft's spec allows multiple PSTN gateways per tenant, and Direct Routing handles failover natively — if the primary gateway stops responding to OPTIONS pings, Teams routes to the next gateway in the policy's gateway list within 30-60 seconds.
DIDHub's SBC fleet runs as active-active across 8 regional PoPs (US East, US West, EU West, MENA, India, APAC, ANZ, LATAM) plus 200+ Cloudflare anycast edge locations for SIP-over-WebSocket where applicable. Every customer FQDN resolves to a regionally-steered cluster, with health-check-based failover at three layers:
- DNS layer. The A records behind every FQDN have a 60-second TTL. Health checks against each SBC node in the cluster run every 10 seconds; failed nodes are pulled from the record set within 30 seconds. For BYO domains using NS delegation (Option B2), this happens directly on customer-domain DNS. For DIDHub-hosted FQDNs and CNAME-delegated BYO, the failover happens on *.teams.didhub.io and inherits through the CNAME.
- Anycast routing layer. Each SBC node sits behind a Cloudflare-fronted anycast IP. If an entire PoP goes offline, the anycast routing automatically steers new connections to the next-closest healthy PoP within seconds, before the DNS TTL even matters.
- Microsoft Teams gateway-list layer. For customers who want belt-and-suspenders, DIDHub provisions two FQDNs (e.g. sbc-primary.acme.com and sbc-failover.acme.com) backed by independent SBC clusters in different regions. Both go into the Teams gateway list. If the primary cluster fails so completely that DNS and anycast can't save it, Teams fails over to the secondary within its OPTIONS probe interval.
In practice the DNS + anycast layers handle every real-world outage we've seen — PoP outages, BGP route flaps, certificate-renewal-window issues, software upgrades. The gateway-list layer is the “hairy enterprise customer wants two FQDNs” lever, available on request.
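The gateway-list layer itself reduces to ordered selection over healthy gateways — Teams tries the list in policy order and routes to the first one that's answering. A toy sketch, reusing the example FQDNs from above:

```python
# Toy gateway-list failover: try gateways in policy order and route to
# the first one currently marked healthy. Names are illustrative.

def pick_gateway(gateway_list, health):
    """Return the first healthy FQDN from the ordered gateway list."""
    for fqdn in gateway_list:
        if health.get(fqdn, False):
            return fqdn
    return None  # no healthy gateway: calls fail until one recovers

gateways = ["sbc-primary.acme.com", "sbc-failover.acme.com"]
health = {"sbc-primary.acme.com": False, "sbc-failover.acme.com": True}
print(pick_gateway(gateways, health))  # sbc-failover.acme.com
```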
DNS — the part everyone forgets
The DNS story is what makes SBC-as-a-Service actually work in production. Three things are happening behind the scenes for every customer FQDN:
- Authoritative DNS. Either DIDHub runs the authoritative DNS for the FQDN (DIDHub-hosted or NS-delegated BYO) or DIDHub's record is the CNAME target (CNAME-delegated BYO). Either way DIDHub controls the actual A/AAAA records that Microsoft Teams resolves to.
- Health-check-driven record updates. When an SBC node fails its OPTIONS health check, DIDHub's control plane rewrites the A record set (typically within 30 seconds) to drop that node. Teams' next DNS resolution picks up the change inside its 60-second TTL.
- Regional steering. The same FQDN resolves to different IPs based on the resolver's geographic origin — Teams clients in EU see EU SBC IPs, US clients see US SBC IPs. This keeps SIP signaling latency under 30ms for the regional leg and avoids cross-continent backhaul on every call setup.
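Mechanically, the second and third points amount to recomputing the answer set per query: healthy nodes in the resolver's region, falling back cross-region if none are left. A toy model (IPs from documentation ranges, regions and health states illustrative):

```python
# Toy health-driven, region-steered DNS answers: the A-record set an
# FQDN resolves to is the healthy nodes of the resolver's region,
# falling back to any healthy node elsewhere.

NODES = {
    "198.51.100.10": ("us", True),
    "198.51.100.11": ("us", False),  # failed its OPTIONS health check
    "203.0.113.20":  ("eu", True),
}

def answer(region):
    """A-record set for a resolver in `region`."""
    local = [ip for ip, (r, ok) in NODES.items() if ok and r == region]
    if local:
        return sorted(local)
    # No healthy local node: steer to any healthy node, cross-region.
    return sorted(ip for ip, (r, ok) in NODES.items() if ok)

print(answer("us"))  # ['198.51.100.10'] — the failed node is already pulled
print(answer("eu"))  # ['203.0.113.20']
```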
The customer doesn't configure any of this. They add one CNAME (BYO) or zero records (DIDHub-hosted) and the rest is invisible.
When this matters — and when it doesn't
If you're a 5-user company that just needs cheap PSTN on Teams, the SBC story doesn't matter to you — Microsoft Calling Plans (or any Direct Routing offering) gets you there. SBC-as-a-Service starts to matter at the point you have one or more of these constraints:
- You want global PSTN coverage Microsoft Calling Plans don't offer. Calling Plans cover a fraction of countries, with per-user-per-country bundling that gets expensive fast. Direct Routing + a wholesale carrier like DIDHub covers 130+ countries at carrier wholesale.
- You don't want to operate SBC infrastructure. SBC ops requires SIP expertise, TLS expertise, and 24/7 ownership. Most IT teams correctly conclude this isn't their core competency.
- You want predictable cost. Per-DID monthly + per-minute is much easier to forecast than per-seat Calling Plans, especially when seat count fluctuates.
- You want to keep your number when you change Teams setups. BYOC numbers stay yours; Microsoft Calling Plans numbers are tied to the per-user license.
- You need to integrate with AI voice agents or non-Teams platforms. SBC-as-a-Service on a wholesale carrier means the same DID can simultaneously serve Teams users and an AI voice agent on a different SIP trunk. Calling Plans can't do this.
Trying it
DIDHub's Teams SBC-as-a-Service is included with any DID rental at no extra per-month fee for the SBC itself — you pay only for the DIDs and per-minute usage. To test:
- Sign up at didhub.io/signup.
- Pick one or more DIDs from the 130+ supported countries.
- In the dashboard, request a Teams SBC FQDN. You'll get either a DIDHub-hosted one immediately (e.g. yourcompany.teams.didhub.io) or a setup checklist for a BYO domain.
- Run the 5 PowerShell commands above. ~10 minutes.
- Place a test call from a Teams desktop client — it should connect within 1-2 seconds.
See the full Teams Direct Routing setup guide for the BYO-domain version, the cert-renewal lifecycle docs, and the multi-PoP failover architecture. For larger deployments (500+ Teams users, multi-region failover, dedicated SBC clusters) email sales@didhub.io — the same product underneath but with a few enterprise-only knobs (private SBC pool, custom cert authorities, per-region routing policies).