A mid-sized German e-commerce retailer with 3.8 million opted-in subscribers had been operating on a shared sending infrastructure through a US-based ESP for four years. Monthly sending volume averaged 12 million messages, split between promotional campaigns (70%) and transactional order/shipping notifications (30%).
By early 2024, the company was experiencing significant deliverability deterioration. Gmail inbox placement had dropped from a baseline of ~87% to 41% over eight months. German ISPs — GMX, Web.de, T-Online — were deferring 18–23% of messages. Monthly ESP costs had risen to €14,200 with no resolution path offered.
Presenting Problems
- Gmail inbox placement at 41% — confirmed via GlockApps seed-list testing
- Google Postmaster Tools showing domain reputation at LOW, spam rate at 0.31%
- Shared IP pool contamination: unknown co-tenants generating complaint volume the company could not control
- Transactional and promotional traffic mixed on the same IP pool, causing order confirmation delays
- No per-ISP throttle visibility or control through the shared ESP interface
- DMARC alignment failures on ~12% of messages due to ESP subdomain sending
Our initial assessment involved a full header analysis of 200 sample messages, a review of the sending and accounting logs provided by the client's dev team, and a Google Postmaster Tools data export. The root causes fell into four categories:
- Complaint rate: 0.31% Gmail spam rate across the shared pool — driven primarily by re-engagement campaigns sent to 2+ year inactive segments
- IP reputation: Two IPs in the shared pool had Spamhaus SBL listings from other tenants
- DMARC misalignment: ESP was sending via a subdomain (mail.esp-provider.com) causing From: domain misalignment
- No traffic separation: A 5-minute flash sale campaign was disrupting transactional delivery for 40–90 minutes after send
[Chart: ISP deferral rate before migration]
The architecture required isolating three distinct traffic categories onto separate IP pools with independent reputation profiles, while maintaining a unified sending identity from the client's own domain.
Pool architecture: three isolated pools (transactional, campaign, and warming), detailed in the IP Pool Architecture table below.
Warming schedule: 6-week structured ramp beginning with the transactional pool (lowest reputation risk) and highest-engagement promotional segments. Gmail warming started at 800 messages/day, doubling weekly as Postmaster Tools domain reputation moved from MEDIUM to HIGH.
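The ramp described above is simple to express in code. The sketch below computes the weekly Gmail daily-send caps from the figures in the text (800 messages/day, doubling weekly over six weeks); the function name and structure are illustrative, not part of the actual tooling used.

```python
# Sketch of the Gmail warming ramp described above: start at 800
# messages/day and double each week over the 6-week ramp.

def gmail_warming_schedule(start_per_day=800, weeks=6):
    """Return the daily Gmail send cap for each warming week."""
    return [start_per_day * 2 ** week for week in range(weeks)]

caps = gmail_warming_schedule()
for week, cap in enumerate(caps, start=1):
    print(f"week {week}: {cap} msgs/day")
```

By week six the cap reaches 25,600/day; in practice each doubling was gated on Postmaster Tools reputation holding at MEDIUM or better, not on the calendar alone.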
[Chart: Gmail domain reputation score during warming]
[Chart: inbox placement at Gmail]
[Chart: ISP deferral rate after migration]
> "The difference between the shared ESP and the new infrastructure was immediate in the Postmaster Tools data. Gmail domain reputation moved from LOW to MEDIUM in the first two weeks of warming, before we'd even reached 10% of our normal volume. That told us the contamination was coming entirely from the shared pool, not from our own sending behavior."
[Results panel: Gmail inbox placement up from 41%; monthly infrastructure cost down from €14,200 to €5,400; deferral rates reduced across all ISPs; spam rate down from 0.31%]
Technical Assessment: Infrastructure Layers Examined
The infrastructure assessment for this engagement covered four layers: authentication configuration (SPF, DKIM, DMARC alignment), IP reputation status (Postmaster Tools, SNDS, blacklist check), PowerMTA configuration review (domain blocks, throttle settings, bounce handling), and operational practices (list hygiene frequency, bounce processing latency, FBL enrollment and processing status).
Authentication issues were the highest-priority finding. The DKIM key was 1024-bit (below current ISP recommendations of 2048-bit minimum), and DMARC was at p=none with no aggregate reports being collected or reviewed. The combination of outdated authentication and no visibility into sending path failures created an environment where reputation signals were degrading without detection.
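The undersized-key finding can be detected mechanically. One rough approach, sketched below, is to base64-decode the `p=` tag of the DKIM TXT record and estimate the RSA key size from the length of the DER blob (a 1024-bit SubjectPublicKeyInfo is roughly 162 bytes, a 2048-bit one roughly 294 bytes). The parsing, the threshold, and the synthetic record are simplifications for illustration, not the tooling used in the engagement.

```python
import base64

# Heuristic for spotting undersized DKIM keys: decode the p= tag of a
# DKIM TXT record and estimate the RSA key size from the decoded
# length. ~162 bytes suggests 1024-bit, ~294 bytes suggests 2048-bit,
# so a 200-byte threshold separates the two common cases.

def dkim_key_bits_estimate(txt_record: str) -> int:
    tags = dict(
        t.strip().split("=", 1)
        for t in txt_record.split(";")
        if "=" in t
    )
    der = base64.b64decode(tags["p"])
    return 1024 if len(der) < 200 else 2048

# Synthetic record with a dummy 162-byte payload (a real key is valid
# DER; only the length matters for this heuristic).
dummy_1024 = base64.b64encode(b"\x00" * 162).decode()
record = f"v=DKIM1; k=rsa; p={dummy_1024}"
print(dkim_key_bits_estimate(record))  # flags the key as 1024-bit
```

A check like this, run against every selector in use, turns "our DKIM keys are probably fine" into a yes/no answer per domain.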
Infrastructure Rebuild: Configuration Decisions
IP Pool Architecture
The IP pool was rebuilt with traffic type separation as the primary design principle. Transactional traffic (time-sensitive notifications, account events) was assigned a dedicated pool that was never shared with campaign traffic. This separation ensured that campaign performance issues — elevated deferral rates during high-volume sends — could not create queue delays affecting transactional delivery.
| Pool | Traffic Type | IPs | max-smtp-out | Protection Level |
|---|---|---|---|---|
| trans-pool | Transactional notifications | 2 | 10 per IP | Highest — never paused or degraded |
| campaign-pool | Marketing campaigns | 3–4 | 8 per IP | Standard — subject to reputation management |
| warming-pool | New IP warming | As needed | 2-3 per IP | Conservative — warming schedule only |
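In PowerMTA terms, the separation above maps onto virtual MTAs grouped into pools. The fragment below is a minimal sketch of that mapping; the IP addresses, hostnames, and per-IP count are placeholders (only the pool name comes from the table above).

```
# Illustrative sketch -- IPs and hostnames are placeholders.
<virtual-mta vmta-trans-1>
    smtp-source-host 192.0.2.10 mta1.example.com
</virtual-mta>
<virtual-mta vmta-trans-2>
    smtp-source-host 192.0.2.11 mta2.example.com
</virtual-mta>

<virtual-mta-pool trans-pool>
    virtual-mta vmta-trans-1
    virtual-mta vmta-trans-2
</virtual-mta-pool>
```

Injection-side routing then selects `trans-pool` for transactional mail and the campaign pool for everything else, so a queued flash-sale send can never sit in front of an order confirmation.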
PowerMTA Domain Block Configuration
ISP-specific domain blocks were configured for each major destination: Gmail (max-smtp-out: 8, retry-after: 15m), Outlook (max-smtp-out: 5, retry-after: 20m), Yahoo (max-smtp-out: 6, retry-after: 15m), and ISP-specific configurations for European providers including GMX, Web.de, T-Online, and OVH. Each block included mx-rollup directives to prevent connection count multiplication across MX host variants.
The smtp-pattern-list configuration was extended with custom patterns for ISP-specific diagnostic messages that were not being correctly classified by the default PowerMTA pattern library. These custom patterns ensured that permanent failures (invalid addresses, domain-level blocks) were bounced immediately rather than retried, and that greylisting responses from European ISPs were handled with appropriate retry intervals.
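A sketch of how these two pieces fit together is below. The Gmail values come from the text; the web.de values, the pattern-list name, and the reply pattern are illustrative, and the exact pattern-list mode keywords vary by PowerMTA version, so treat this as a shape rather than a drop-in config.

```
# Illustrative sketch -- Gmail values from the text; the web.de
# values, list name, and pattern are placeholders.
<domain gmail.com>
    max-smtp-out 8
    retry-after 15m
</domain>

<smtp-pattern-list euro-isp-patterns>
    # back off on greylisting-style deferrals instead of retrying hard
    reply /greylisted, try again later/ mode=backoff
</smtp-pattern-list>

<domain web.de>
    max-smtp-out 4
    retry-after 20m
    smtp-pattern-list euro-isp-patterns
</domain>
```

The point of the custom list is classification: a 4xx that is really a greylist should back off and retry, while a 5xx that is really a dead address should bounce once and feed the suppression system.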
Authentication Upgrade
DKIM keys were rotated to 2048-bit RSA on all sending domains. The rotation followed a zero-downtime procedure: publish the new public key under a new selector, wait 48 hours for DNS propagation, update the PowerMTA signing configuration, verify that the new selector appears in Authentication-Results headers, then retire the old selector after 7 days. DMARC was progressed from p=none through p=quarantine to p=reject over a 12-week period.
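On the DNS side, the rotation and the DMARC progression correspond to record changes along these lines. The selector names, domain, and report mailbox are placeholders, and `<base64-2048-bit-key>` stands in for the real key material.

```
; Illustrative zone entries -- selectors, domain, and rua mailbox are
; placeholders.

; Step 1: publish the new 2048-bit key under a new selector,
; alongside the old selector, which keeps signing until retirement.
s2._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=<base64-2048-bit-key>"

; DMARC progression over the 12-week period (one record live at a time):
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```

The `rua` mailbox matters as much as the policy: the aggregate reports it collects are what confirm every legitimate sending path is aligned before each tightening step.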
Results After 90 Days
[Results panel: seed-test improvement; all major ISPs; Gmail; all domains]
Operational Monitoring: What Changed Permanently
The infrastructure changes produced immediate delivery improvement, but the operational changes — the monitoring discipline and response protocols — are what sustain that improvement over time. Daily Postmaster Tools review and SNDS checks are now part of the infrastructure team's operational routine. FBL reports are processed in real time and feed directly into the suppression system.
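The FBL-to-suppression path can be sketched in a few lines. The code below assumes the machine-readable `message/feedback-report` part of an ARF report (RFC 5965) has already been extracted from the MIME message; the parsing, the in-memory suppression set, and the sample report are simplifications for illustration.

```python
# Simplified sketch of FBL-to-suppression processing. Real reports
# need full MIME parsing to isolate the message/feedback-report part.

suppression_list = set()

def process_feedback_report(report_text: str) -> None:
    """Parse one feedback-report part and suppress the complainer."""
    fields = {}
    for line in report_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    rcpt = fields.get("original-rcpt-to")
    if fields.get("feedback-type") == "abuse" and rcpt:
        suppression_list.add(rcpt.lower())

sample = """Feedback-Type: abuse
User-Agent: SomeFBL/1.0
Version: 1
Original-Rcpt-To: complainer@example.net
"""
process_feedback_report(sample)
print(suppression_list)
```

"Real time" here means the suppression write happens in the report-processing path itself, not in a nightly batch, so a complainer is excluded from the very next send.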
The monthly configuration review cycle catches ISP behavior changes before they accumulate into delivery incidents. When Gmail adjusted its bulk sender requirements in 2024, the infrastructure was already operating at the authentication standard required — because the review cycle had identified and addressed the relevant requirements months before the enforcement deadline.
The technical changes in this engagement were straightforward. The more significant work was establishing the monitoring discipline that prevents the gradual drift that caused the original problems — an infrastructure that meets today's ISP requirements but has no ongoing review process will fall behind those requirements within 12-18 months.
— Cloud Server for Email Infrastructure Team
Running on a shared ESP with deteriorating deliverability?
We conduct infrastructure assessments that identify whether the problem is your sending behavior or your infrastructure — before you commit to a migration.

