A Warsaw-based e-commerce group operating 6 distinct retail brands undertook a complete infrastructure rebuild following an acquisition that brought all brands under a single operational team. Each brand had previously operated independently with its own ESP account. The consolidation required migrating all 6 brands to shared dedicated infrastructure while maintaining completely isolated reputation profiles — a failure in one brand's reputation must not affect any other.
Total production volume was 28 million messages per month across all 6 brands. The infrastructure required 16 new sending IPs, all starting with zero history. European ISP coverage was critical: GMX/Web.de (Germany), T-Online (Germany), Orange (France), and OVH (France) collectively accounted for 38% of the recipient base.
Warming Complexity Factors
- 16 new IPs with zero reputation history — all requiring simultaneous but independent warming schedules
- 6 brand domains requiring separate DKIM signing keys and DMARC policies
- European ISPs with different warming tolerance than Gmail/Yahoo — more conservative connection limits required
- Peak seasonal volume (Black Friday, Christmas) occurring during Week 7 of the warming schedule
- No ability to delay sending — brands could not pause commercial campaigns for the warming period
- Multiple brand teams submitting campaigns independently — coordination required across 6 campaign managers
The IP assignment was designed to group IPs by ISP destination rather than by brand. This allowed IP reputation to be built with specific ISPs rather than across all ISPs simultaneously, which produces stronger per-ISP reputation signals in a shorter timeframe.
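Per-ISP pool assignment of this kind is typically made at injection time, by tagging each message with a pool based on the recipient domain. A minimal sketch (pool names and the domain list are illustrative assumptions, not the engagement's actual configuration):

```python
# Map recipient domains to per-ISP warming pools.
# Pool names and the domain list are illustrative assumptions.
POOL_BY_DOMAIN = {
    "gmail.com": "gmail-warm-pool",
    "googlemail.com": "gmail-warm-pool",
    "outlook.com": "microsoft-warm-pool",
    "hotmail.com": "microsoft-warm-pool",
    "gmx.de": "eu-warm-pool",
    "web.de": "eu-warm-pool",
    "t-online.de": "eu-warm-pool",
    "orange.fr": "eu-warm-pool",
}

def pool_for(recipient: str, default: str = "general-warm-pool") -> str:
    """Pick the warming pool for a recipient; unknown domains fall back
    to a general pool."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    return POOL_BY_DOMAIN.get(domain, default)
```

At injection, the returned pool name would drive VirtualMTA selection (in PowerMTA, typically via the x-virtual-mta message header).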
Weekly Send Volume Per Pool During Warming (thousands of messages)
European ISPs — particularly the German providers GMX, Web.de, and T-Online — behave differently during warming than Google and Microsoft. They respond more conservatively to volume increases and hold deferral rates higher for longer during the early warming period, but their reputation assessment stabilizes more predictably once the warming threshold is passed.
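Because deferral behavior differs per ISP, the daily ramp decision has to be made independently per pool. A hedged sketch of that decision rule (growth rate, slowdown factor, and deferral threshold are illustrative assumptions):

```python
def next_day_volume(current: int, deferral_rate: float,
                    growth: float = 1.25, slowdown: float = 0.5,
                    threshold: float = 0.08) -> int:
    """Per-pool ramp decision: back off when deferrals exceed the
    threshold, otherwise continue the warming ramp. All parameters
    here are illustrative, not the engagement's actual values."""
    if deferral_rate > threshold:
        return int(current * slowdown)
    return int(current * growth)
```

Applied pool by pool, a deferral spike at one ISP slows only that pool's ramp while the others continue on schedule.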
Deferral Rate by ISP During Warming Phase (Week 4)
Week 7 of the warming schedule coincided with Black Friday — a 340% volume spike requirement. Rather than attempting to send full Black Friday volume through the partially warmed IPs, we implemented a hybrid approach: Black Friday campaigns were sent through a combination of the warming infrastructure (up to warming-appropriate limits) and a temporary cloud relay for overflow volume. This maintained warming progress without sending above the reputation-safe threshold.
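The hybrid split reduces to a simple allocation: fill each warming pool up to its reputation-safe daily cap, and route the remainder to the overflow relay. The caps and volumes below are illustrative, not the engagement's figures.

```python
def split_send(total: int, warming_caps: dict[str, int]) -> tuple[dict[str, int], int]:
    """Allocate messages to warming pools up to each pool's daily cap;
    anything beyond the combined cap is overflow for the cloud relay."""
    allocation: dict[str, int] = {}
    remaining = total
    for pool, cap in warming_caps.items():
        take = min(cap, remaining)
        allocation[pool] = take
        remaining -= take
    return allocation, remaining

# Illustrative caps: 260k/day fits under warming limits, the rest overflows
caps = {"gmail-pool": 120_000, "microsoft-pool": 80_000, "eu-pool": 60_000}
alloc, overflow = split_send(500_000, caps)  # overflow of 240_000 goes to the relay
```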
Warming 16 IPs across 6 domains simultaneously, over a Black Friday period, with zero blacklist events — that's not luck, it's architecture. The per-ISP pool structure meant that a deferral spike at T-Online in Week 3 didn't affect our Gmail warming at all. We could slow down one pool without touching the others.
Technical Assessment: Infrastructure Layers Examined
The infrastructure assessment for this engagement covered four layers: authentication configuration (SPF, DKIM, DMARC alignment), IP reputation status (Postmaster Tools, SNDS, blacklist checks), PowerMTA configuration review (domain blocks, throttle settings, bounce handling), and operational practices (list hygiene frequency, bounce processing latency, FBL enrollment and processing status).
Authentication issues were the highest-priority finding. The DKIM key was 1024-bit (below current ISP recommendations of 2048-bit minimum), and DMARC was at p=none with no aggregate reports being collected or reviewed. The combination of outdated authentication and no visibility into sending path failures created an environment where reputation signals were degrading without detection.
Infrastructure Rebuild: Configuration Decisions
IP Pool Architecture
The IP pool was rebuilt with traffic type separation as the primary design principle. Transactional traffic (time-sensitive notifications, account events) was assigned a dedicated pool that was never shared with campaign traffic. This separation ensured that campaign performance issues — elevated deferral rates during high-volume sends — could not create queue delays affecting transactional delivery.
| Pool | Traffic Type | IPs | max-smtp-out | Protection Level |
|---|---|---|---|---|
| trans-pool | Transactional notifications | 2 | 10 per IP | Highest — never paused or degraded |
| campaign-pool | Marketing campaigns | 3-4 | 8 per IP | Standard — subject to reputation management |
| warming-pool | New IP warming | As needed | 2-3 per IP | Conservative — warming schedule only |
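The pool layout above maps to PowerMTA configuration roughly as follows. This is a hedged sketch: virtual-MTA names, IPs, and hostnames are illustrative, and directives should be checked against the PowerMTA version in use.

```
# Illustrative sketch, not the engagement's actual configuration.
<virtual-mta trans-1>
    smtp-source-host 192.0.2.10 mta1.trans.example.com
</virtual-mta>

<virtual-mta-pool trans-pool>
    virtual-mta trans-1
    virtual-mta trans-2    # defined like trans-1, on its own IP
</virtual-mta-pool>

<virtual-mta-pool campaign-pool>
    virtual-mta camp-1
    virtual-mta camp-2
    virtual-mta camp-3
</virtual-mta-pool>
```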
PowerMTA Domain Block Configuration
ISP-specific domain blocks were configured for each major destination: Gmail (max-smtp-out: 8, retry-after: 15m), Outlook (max-smtp-out: 5, retry-after: 20m), Yahoo (max-smtp-out: 6, retry-after: 15m), and ISP-specific configurations for European providers including GMX, Web.de, T-Online, and OVH. Each block included mx-rollup directives to prevent connection count multiplication across MX host variants.
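In PowerMTA syntax, the domain-block values quoted above look roughly like this (a sketch; queue-to is shown as one common rollup mechanism and should be verified against the deployed version's documentation):

```
<domain gmail.com>
    max-smtp-out 8
    retry-after 15m
</domain>

# Roll variant domains into the primary queue so connection
# limits are not multiplied across MX host variants.
<domain googlemail.com>
    queue-to gmail.com
</domain>

<domain outlook.com>
    max-smtp-out 5
    retry-after 20m
</domain>
```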
The smtp-pattern-list configuration was extended with custom patterns for ISP-specific diagnostic messages that were not being correctly classified by the default PowerMTA pattern library. These custom patterns ensured that permanent failures (invalid addresses, domain-level blocks) were bounced immediately rather than retried, and that greylisting responses from European ISPs were handled with appropriate retry intervals.
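A sketch of what such custom patterns look like (the regexes, modes, and pattern-list name are illustrative; exact pattern syntax should be confirmed against the PowerMTA documentation):

```
<smtp-pattern-list eu-isp-patterns>
    # Greylisting from some European ISPs: retry rather than bounce
    reply /451 .*greylist/ mode=retry
    # Domain-level block: fail immediately instead of retrying
    reply /550 .*blocked/ mode=fail
</smtp-pattern-list>

<domain t-online.de>
    smtp-pattern-list eu-isp-patterns
</domain>
```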
Authentication Upgrade
DKIM keys were rotated to 2048-bit RSA on all sending domains. The rotation followed a standard zero-downtime procedure: publish the new public key under a new selector, wait 48 hours for DNS propagation, update the PowerMTA signing configuration, verify the new selector appears in Authentication-Results headers, then retire the old selector after 7 days. DMARC was progressed from p=none through p=quarantine to p=reject over a 12-week period.
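In DNS terms, the selector rotation and DMARC progression look roughly like this (domain, selector name, and the truncated key value are illustrative):

```
; New 2048-bit DKIM key published under a new selector,
; alongside the old selector, before PowerMTA starts signing with it
s2-2048._domainkey.brand.example.  IN TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

; DMARC policy progression over the 12-week period
; week 0:   "v=DMARC1; p=none; rua=mailto:dmarc-reports@brand.example"
; interim:  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@brand.example"
; final:
_dmarc.brand.example.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@brand.example"
```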
Results After 90 Days
Operational Monitoring: What Changed Permanently
The infrastructure changes produced immediate delivery improvement, but the operational changes — the monitoring discipline and response protocols — are what sustain that improvement over time. Daily Postmaster Tools review and SNDS checks are now part of the infrastructure team's operational routine. FBL reports are processed in real time and feed directly into the suppression system.
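The FBL-to-suppression path enforces a simple invariant: any address that generates a complaint becomes unsendable before the next campaign. An in-memory sketch (a production system would persist this in the sending platform's suppression store):

```python
class SuppressionList:
    """Minimal sketch of an FBL-fed suppression set."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def record_complaint(self, address: str) -> None:
        """Called as each FBL (ARF) complaint report is processed."""
        self._blocked.add(address.lower())

    def is_sendable(self, address: str) -> bool:
        """Checked before every campaign send."""
        return address.lower() not in self._blocked
```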
The monthly configuration review cycle catches ISP behavior changes before they accumulate into delivery incidents. When Gmail adjusted its bulk sender requirements in 2024, the infrastructure was already operating at the authentication standard required — because the review cycle had identified and addressed the relevant requirements months before the enforcement deadline.
The technical changes in this engagement were straightforward. The more significant work was establishing the monitoring discipline that prevents the gradual drift that caused the original problems — an infrastructure that meets today's ISP requirements but has no ongoing review process will fall behind those requirements within 12-18 months.
— Cloud Server for Email Infrastructure Team

Planning a new infrastructure deployment or domain migration?
Warming at scale requires both technical configuration and operational discipline across the entire ramp period. We design and manage warming programs from single-IP setups to 20+ IP multi-domain deployments.

