Email throttling has two distinct meanings that are frequently conflated, and confusing them leads to poor decisions. The first is reactive throttling — the deferred or rejected delivery that ISPs impose when they decide your sending rate, reputation, or volume pattern is unacceptable. The second is proactive throttling — the intentional, sender-controlled pacing that prevents the first kind from happening. This guide covers both: the mechanics of ISP-imposed throttling, the specific scheduling and pacing strategies that prevent it, and the per-ISP rate considerations that determine how fast you can actually push volume without triggering deferrals.

  • 10: concurrent connections Gmail recommends per sending IP
  • 5: concurrent connections Microsoft recommends per IP
  • 421: temporary deferral — ISP rate-limiting in action
  • 72h: max 421 retry window before becoming a 5xx bounce

Optimal Send Time Distribution — Inbox Opens by Hour (UTC, Gmail sample, n=2.4M)

  06:00  8%   07:00 14%   08:00 22%   09:00 31%   10:00 28%   11:00 24%
  12:00 19%   13:00 17%   14:00 21%   15:00 18%   16:00 13%   17:00  9%

How ISP Throttling Works

When a receiving mail server decides it's accepting mail from your IP or domain too fast, or too much, it doesn't reject your messages with a hard 5xx error. It returns a 421 temporary failure — the SMTP equivalent of "try again later." Your MTA must retry these messages after an interval, generating queue depth and delivery delay without any permanent message loss.
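The defer-and-retry loop can be sketched as follows. The 15-minute base interval, doubling backoff, and 8-hour cap are illustrative MTA defaults, not values any ISP mandates; the function names are hypothetical:

```python
def is_transient(code: int) -> bool:
    """4xx replies (e.g. 421) are temporary failures the MTA must retry;
    5xx replies are permanent rejections that bounce immediately."""
    return 400 <= code < 500

def retry_schedule(base_minutes: int = 15, max_window_hours: int = 72):
    """Yield retry delays in minutes, doubling from the base interval and
    stopping once the cumulative wait would exceed the retry window
    (after which the deferred message bounces as a permanent failure)."""
    elapsed, delay = 0, base_minutes
    while elapsed + delay <= max_window_hours * 60:
        yield delay
        elapsed += delay
        delay = min(delay * 2, 8 * 60)  # cap individual intervals at 8 hours

schedule = list(retry_schedule())  # 15, 30, 60, 120, ... within the 72h window
```

Real MTAs implement this internally (Postfix via `minimal_backoff_time`/`maximal_backoff_time`); the sketch just shows why a 421 costs delivery delay and queue depth rather than message loss.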

The specific 421 error messages from major ISPs are informative about the cause:

  • Gmail 421 4.7.0 "Our system has detected an unusual rate of unsolicited mail originating from your IP address": Gmail's reputation-based throttle. Not a rate limit — a reputation signal. The word "unsolicited" tells you Gmail's filter believes your recipients didn't want this mail. Reducing sending rate doesn't fix this; improving list quality and engagement does.
  • Gmail 421 4.3.0 "Try again later" / "Temporary system problem": Gmail's infrastructure-level throttle — genuinely temporary, not reputation-related. Retry after standard backoff.
  • Microsoft 421 4.7.650 "The mail server sending this message is not compliant with Microsoft's bulk email policies": Microsoft's reputation and authentication throttle. This precedes hard rejection if not addressed.
  • Yahoo 421 4.7.0 "[TS01] Messages from your IP address have been temporarily blocked due to user complaints": Yahoo's FBL/complaint-rate throttle. TS01 specifically means spam complaint rate is too high. The fix is list quality, not rate limiting.
  • Yahoo 421 4.7.0 "[TS03] All connections to this server from your IP are being blocked": Yahoo's connection-rate throttle. Reducing concurrent connections to Yahoo MXs resolves this.

The critical distinction: reputation throttling vs rate throttling

Rate throttling (too many connections, too many messages per connection) is solved by slowing down. Reputation throttling (Gmail's "unsolicited" message, Yahoo's TS01) is solved by improving list quality and engagement; slowing down doesn't fix it, it only buys time while the reputation problem continues. Misdiagnosing reputation throttling as rate throttling and responding only with rate reduction is a common operational mistake.
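A deferral handler can apply this distinction automatically. The sketch below keys off the ISP reply strings listed earlier; the pattern lists and function name are illustrative, not a complete taxonomy:

```python
import re

# Patterns drawn from the ISP deferral messages above (illustrative, not exhaustive)
REPUTATION_PATTERNS = [r"unusual rate of unsolicited mail", r"4\.7\.650", r"TS01"]
RATE_PATTERNS = [r"TS03", r"4\.3\.0"]

def classify_deferral(reply_text: str) -> str:
    """Return 'reputation' (fix list quality and engagement),
    'rate' (reduce connections/throughput), or 'unknown'."""
    if any(re.search(p, reply_text) for p in REPUTATION_PATTERNS):
        return "reputation"
    if any(re.search(p, reply_text) for p in RATE_PATTERNS):
        return "rate"
    return "unknown"
```

Routing "reputation" results to the deliverability team and "rate" results to automatic backoff keeps the two failure modes from being treated identically.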

Per-ISP Rate Guidelines for High-Volume Sending

Different ISPs have different tolerance for inbound mail rates. These are operational guidelines based on current (2025–2026) observed behaviour — not official published limits, as ISPs don't publish them:

  • Gmail (gmail.com): 5–10 concurrent connections per IP, 100–200 messages per connection. No rate delay needed while reputation is High; back off when 421s appear. Key signal: domain reputation in Postmaster Tools.
  • Microsoft (outlook.com, hotmail.com, live.com, msn.com): 3–5 concurrent per IP, 50–100 messages per connection, 0.2–2s between connections from the same IP. Key signal: SNDS complaint rate and trap hits.
  • Yahoo / AOL (yahoo.com, aol.com): 5–8 concurrent per IP, 50–100 messages per connection. Delay depends on the TS code: reduce connections on TS03, check the FBL on TS01. Key signal: complaint rate, strictly enforced below 0.1%.
  • Apple (icloud.com, me.com): 5–10 concurrent per IP, 50–100 messages per connection, around 0.2s delay; Apple throttles less aggressively. Key signal: engagement-based filtering, with no public postmaster tools.
  • Comcast (comcast.net): 3–5 concurrent per IP, 25–50 messages per connection, 0.5–1s between connections; conservative. Key signal: IP reputation.
  • Orange / Wanadoo: 3 concurrent per IP, 25–50 messages per connection, 1s delay recommended; strict anti-spam filtering means slower warm-up.
  • Default (all other domains): 10–20 concurrent per IP, 100–200 messages per connection, no delay unless throttle signals appear.

These limits apply per sending IP. If you're running multiple IPs, each IP maintains its own connection count against each destination MX. A PowerMTA or KumoMTA configuration uses domain blocks (per-ISP policy sections) to enforce these limits automatically.
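The per-IP, per-destination bookkeeping can be sketched with a semaphore per (source IP, destination domain) pair. The POLICY limits mirror the guideline list above and are assumptions, not ISP-published values; the class name is illustrative:

```python
import threading
from contextlib import contextmanager

# Illustrative per-destination connection caps (see guideline list above)
POLICY = {"gmail.com": 10, "outlook.com": 5, "yahoo.com": 8}
DEFAULT_LIMIT = 20

class ConnectionLimiter:
    """Caps concurrent SMTP connections per (source IP, destination domain)."""

    def __init__(self):
        self._sems: dict[tuple[str, str], threading.Semaphore] = {}
        self._lock = threading.Lock()

    @contextmanager
    def connection(self, source_ip: str, dest_domain: str):
        key = (source_ip, dest_domain)
        with self._lock:  # create the per-pair semaphore on first use
            if key not in self._sems:
                limit = POLICY.get(dest_domain, DEFAULT_LIMIT)
                self._sems[key] = threading.Semaphore(limit)
        sem = self._sems[key]
        sem.acquire()
        try:
            yield  # open SMTP connection, deliver a batch, close
        finally:
            sem.release()
```

Each sending IP gets its own counters, matching how MTAs track connection budgets per IP against each destination MX.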

Campaign Scheduling Strategies

Time-window spreading

Sending a 500,000-recipient campaign in a single two-hour burst looks to ISPs like a spike, not a normal sending pattern. The same volume distributed over an 8-hour sending window looks like steady, consistent mail flow. ISPs prefer the second pattern.

For campaigns over 100,000 recipients, consider these distribution approaches:

  • Time-of-day distribution: Spread the campaign over 8–12 hours, aligning with the recipient's business hours. A campaign sent to recipients across US time zones should start at 8am Eastern and continue through 3pm Pacific.
  • Multi-day distribution: For very large campaigns (1M+), split across 2–3 consecutive days. This allows ISPs to observe consistent volume patterns rather than a one-time spike.
  • Recipient-timezone targeting: If your platform supports per-timezone send time optimisation, enable it. Receiving a marketing email at 3am local time is a flag for automated sending and generates lower engagement — which feeds back into reputation.
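As a sketch, evenly slicing a campaign across a sending window looks like this; the function name and hourly batch granularity are illustrative choices:

```python
from datetime import datetime, timedelta

def spread_schedule(total: int, start: datetime, hours: int):
    """Split a campaign into hourly batches across a sending window.
    The remainder lands in the earliest batches so every recipient
    is scheduled exactly once."""
    base, extra = divmod(total, hours)
    return [(start + timedelta(hours=h), base + (1 if h < extra else 0))
            for h in range(hours)]

# 500,000 recipients over an 8-hour window: 62,500 per hour
batches = spread_schedule(500_000, datetime(2026, 3, 2, 13, 0), 8)
```

The same helper covers multi-day distribution by passing a longer window and filtering out off-hours slots.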

Volume ramping for new campaigns or returning senders

If you haven't sent to your full list in more than 90 days, or if you're introducing a new sending domain or IP, ramp volume before sending the full list:

Day       % of list   Audience selection
Day 1     10%         Best-engaged segment (clicked in last 30 days)
Day 3     25%         60-day engagers added to Day 1 audience
Day 7     50%         90-day engagers
Day 14    75%         6-month engagers
Day 21    100%        Full list

This schedule isn't just about rate — it's about sequencing your most engaged recipients first. Their positive signals (opens, clicks) arrive at ISPs before the volume from less-engaged recipients, establishing a positive reputation context that makes ISPs more receptive to the later volume.
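The ramp reduces to a small lookup. The percentages come from the schedule above; the function name is illustrative:

```python
# (day, fraction of list) pairs from the ramp schedule above
RAMP = [(1, 0.10), (3, 0.25), (7, 0.50), (14, 0.75), (21, 1.00)]

def ramp_volumes(list_size: int) -> dict:
    """Audience size to send on each ramp day. Each day's audience is
    cumulative: it includes the earlier, better-engaged tiers."""
    return {day: int(list_size * fraction) for day, fraction in RAMP}
```

For a 400,000-address list this yields 40,000 sends on day 1 rising to the full list on day 21, with the engagement-tier selection handled by your segmentation layer.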

MTA-Level Throttle Configuration

For operators running their own MTA, rate limiting is configured at the MTA level rather than the application level. This provides more granular control and ensures limits are enforced regardless of how the application behaves.

Postfix per-domain throttling

Postfix implements per-destination rate control through transport maps and destination concurrency parameters. The most practical approach uses named transports for different ISPs:

# /etc/postfix/main.cf — add per-ISP throttle parameters
# Gmail transport: 10 concurrent, no rate delay
gmail_destination_concurrency_limit = 10
gmail_destination_rate_delay = 0

# Microsoft transport: 5 concurrent, 1s delay
outlook_destination_concurrency_limit = 5
outlook_destination_rate_delay = 1s

# Conservative transport for problematic ISPs
conservative_destination_concurrency_limit = 2
conservative_destination_rate_delay = 3s

# Activate transport maps
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport — domain to transport mapping
gmail.com          gmail:
googlemail.com     gmail:
outlook.com        outlook:
hotmail.com        outlook:
live.com           outlook:
msn.com            outlook:
comcast.net        conservative:

# /etc/postfix/master.cf — add transport entries
gmail      unix   -       -       n       -       10  smtp
  -o smtp_connect_timeout=5
  -o smtp_helo_timeout=5

outlook    unix   -       -       n       -       5   smtp
  -o smtp_connect_timeout=10
  -o smtp_destination_rate_delay=1s

conservative unix  -       -       n       -       2   smtp
  -o smtp_connect_timeout=10
  -o smtp_destination_rate_delay=3s

After editing, rebuild the transport map and reload Postfix:

sudo postmap /etc/postfix/transport
sudo postfix reload

PowerMTA per-ISP throttle configuration

PowerMTA's domain blocks provide the same per-ISP rate control with more options:

# /etc/pmta/config — per-ISP domain blocks
# A pattern list matches deferral text and puts the queue into backoff mode
<smtp-pattern-list isp-backoff>
  reply /421.*4\.7\.0/ mode=backoff
  reply /421.*4\.7\.650/ mode=backoff
  reply /TS0[13]/ mode=backoff
</smtp-pattern-list>

<domain gmail.com>
  max-smtp-out 10
  max-msg-per-connection 100
  smtp-pattern-list isp-backoff
  backoff-retry-after 15m
</domain>

<domain outlook.com hotmail.com live.com msn.com>
  max-smtp-out 5
  max-msg-per-connection 50
  smtp-pattern-list isp-backoff
  backoff-retry-after 30m
</domain>

<domain yahoo.com ymail.com aol.com>
  max-smtp-out 8
  max-msg-per-connection 50
  # TS01: complaint rate — reduce sending and check the FBL
  # TS03: connection rate — lowering max-smtp-out actually resolves it
  smtp-pattern-list isp-backoff
  backoff-retry-after 60m
</domain>

Monitoring Throttle Signals

Throttle events should be tracked and trended over time. A gradual increase in 421 responses from Gmail, even if each individual event is minor, indicates reputation erosion that will lead to more severe problems if not addressed.

Key metrics to monitor:

  • 421 response rate by ISP: What percentage of delivery attempts to Gmail, Microsoft, Yahoo result in a 421 deferral? A healthy sending programme has under 1% 421 rate at major ISPs. Above 5% indicates problems.
  • Deferral-to-delivery time: How long do messages take to deliver after the first 421? Messages that take more than 4 hours to deliver after deferral indicate significant ISP reluctance and queue build-up.
  • Queue depth trends: Rising queue depth at specific ISPs (increasing backlog of deferred messages) indicates accelerating throttling, not steady-state.
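Computing the per-ISP 421 rate from delivery-attempt records is straightforward; the (isp, code) record shape here is an assumption about your log pipeline:

```python
from collections import defaultdict

def deferral_rates(attempts):
    """attempts: iterable of (isp, smtp_code) delivery-attempt records.
    Returns the per-ISP 421 rate, suitable for trending and alerting
    (e.g. investigate above 1%, escalate above 5%)."""
    totals = defaultdict(int)
    deferred = defaultdict(int)
    for isp, code in attempts:
        totals[isp] += 1
        if code == 421:
            deferred[isp] += 1
    return {isp: deferred[isp] / totals[isp] for isp in totals}
```

Trending this value daily per ISP surfaces the gradual reputation erosion described above long before hard bounces start.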

For PowerMTA operators: the web interface shows per-destination delivery rates and queue depths in real time. The accounting CSV provides historical data for trend analysis. For Postfix operators: mailq | grep -c '^[0-9A-F]' gives total queue depth; log analysis with pflogsumm provides per-domain delivery statistics.