A Madrid-based e-commerce group running three brands had added a B2B prospecting function to their marketing department. The business development team began sending cold outreach sequences — 15,000 to 20,000 messages per week — through the same PowerMTA environment and IP pools as the existing customer marketing campaigns serving 650,000 opt-in subscribers.
Within six weeks, Google Postmaster Tools showed the primary sending domain dropping from HIGH to MEDIUM reputation. Eight weeks later: LOW. Campaign inbox placement fell from 87% to 31% at Gmail. Complaint rate exceeded 0.4% — well above the 0.1% threshold at which Gmail begins applying negative reputation signals.
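As a rough illustration of the threshold logic, a complaint-rate monitor comparing FBL counts against Gmail's published levels might look like the sketch below. The 0.1% and 0.3% figures are Gmail's documented "keep below" and "never exceed" levels; the function names and sample counts are ours, not part of the engagement.

```python
# Sketch: classify a sending domain's Gmail complaint rate against the
# thresholds Gmail publishes in its sender guidelines. Counts would come
# from FBL processing and delivery logs; these are illustrative.

GMAIL_WARN = 0.001  # 0.1% - level at which negative reputation signals begin
GMAIL_CRIT = 0.003  # 0.3% - level Gmail documents as "never exceed"

def complaint_rate(complaints: int, delivered: int) -> float:
    """Complaints as a fraction of delivered messages."""
    return complaints / delivered if delivered else 0.0

def classify(rate: float) -> str:
    """Map a complaint rate to a monitoring status."""
    if rate >= GMAIL_CRIT:
        return "critical"
    if rate >= GMAIL_WARN:
        return "warning"
    return "ok"
```

At the 0.4% rate described above, such a monitor would have been reporting "critical" well before the Postmaster Tools reputation downgrade became visible.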
Root cause: Cold email to non-opted-in recipients inherently generates higher complaint rates. Recipients who receive unsolicited B2B prospecting email and mark it as spam — even if the message is legally compliant — contribute spam signals that affect the sending IP's and domain's reputation. When those IPs and domains are shared with legitimate opt-in marketing, the reputation damage applies to both traffic streams.
Solution: Cold outreach was migrated to a completely separate infrastructure: new domain (not the brand domain), new IP addresses, separate PowerMTA instance. The brand domains and their associated IP pools were reserved exclusively for opt-in marketing and transactional email.
Warm marketing reputation recovered to HIGH on Postmaster Tools at week 10, with Gmail inbox placement reaching 91% at week 12. The cold outreach infrastructure on separate IPs reached a stable 42% inbox rate — consistent with industry benchmarks for cold B2B email to non-opted-in recipients.
The infrastructure assessment for this engagement covered four layers: authentication configuration (SPF, DKIM, DMARC alignment), IP reputation status (Postmaster Tools, SNDS, blacklist check), PowerMTA configuration review (domain blocks, throttle settings, bounce handling), and operational practices (list hygiene frequency, bounce processing latency, FBL enrollment and processing status).
Authentication issues were the highest-priority finding. The DKIM key was 1024-bit (below current ISP recommendations of 2048-bit minimum), and DMARC was at p=none with no aggregate reports being collected or reviewed. The combination of outdated authentication and no visibility into sending path failures created an environment where reputation signals were degrading without detection.
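The staged DMARC rollout that remediated this follows the standard progression. As DNS TXT records — domain and report mailbox below are placeholders, not the client's values — the stages look roughly like:

```
; Stage 1 - monitor only; start collecting aggregate (rua) reports.
_dmarc.brand.example.  IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@brand.example"

; Stage 2 - quarantine failures, ramped with pct once reports look clean.
_dmarc.brand.example.  IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc@brand.example"

; Stage 3 - full enforcement.
_dmarc.brand.example.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@brand.example"
```

The `rua` tag is the critical missing piece at p=none: without it, the policy collects no data and provides none of the visibility described above.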
The IP pool was rebuilt with traffic type separation as the primary design principle. Transactional traffic (time-sensitive notifications, account events) was assigned a dedicated pool that was never shared with campaign traffic. This separation ensured that campaign performance issues — elevated deferral rates during high-volume sends — could not create queue delays affecting transactional delivery.
| Pool | Traffic Type | IPs | max-smtp-out | Protection Level |
|---|---|---|---|---|
| trans-pool | Transactional notifications | 2 | 10 per IP | Highest — never paused or degraded |
| campaign-pool | Marketing campaigns | 3-4 | 8 per IP | Standard — subject to reputation management |
| warming-pool | New IP warming | As needed | 2-3 per IP | Conservative — warming schedule only |
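The pool layout in the table can be sketched in PowerMTA configuration. The IPs and hostnames below are placeholders for illustration, not values from the engagement:

```
# Illustrative only - source IPs and HELO hostnames are placeholders.
<virtual-mta trans-1>
    smtp-source-host 192.0.2.10 mta1.mail.example.com
</virtual-mta>

<virtual-mta trans-2>
    smtp-source-host 192.0.2.11 mta2.mail.example.com
</virtual-mta>

# Transactional pool: dedicated IPs, never shared with campaign traffic.
<virtual-mta-pool trans-pool>
    virtual-mta trans-1
    virtual-mta trans-2
</virtual-mta-pool>
```

Because each pool references only its own virtual MTAs, a deferral backlog in campaign-pool queues cannot spill over into the transactional queues.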
ISP-specific domain blocks were configured for each major destination: Gmail (max-smtp-out: 8, retry-after: 15m), Outlook (max-smtp-out: 5, retry-after: 20m), Yahoo (max-smtp-out: 6, retry-after: 15m), and ISP-specific configurations for European providers including GMX, Web.de, T-Online, and OVH. Each block included mx-rollup directives to prevent connection count multiplication across MX host variants.
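In PowerMTA syntax, the per-ISP blocks described above look roughly like the following. The throttle values are copied from the text; the `queue-to` consolidation is a sketch of the rollup idea and should be checked against the PowerMTA documentation for your version:

```
<domain gmail.com>
    max-smtp-out 8
    retry-after 15m
</domain>

<domain outlook.com>
    max-smtp-out 5
    retry-after 20m
</domain>

<domain yahoo.com>
    max-smtp-out 6
    retry-after 15m
</domain>

# Roll domain variants into one queue so connection limits apply
# in aggregate rather than per recipient domain.
<domain googlemail.com>
    queue-to gmail.com
</domain>
```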
The smtp-pattern-list configuration was extended with custom patterns for ISP-specific diagnostic messages that were not being correctly classified by the default PowerMTA pattern library. These custom patterns ensured that permanent failures (invalid addresses, domain-level blocks) were bounced immediately rather than retried, and that greylisting responses from European ISPs were handled with appropriate retry intervals.
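A pattern-list sketch in PowerMTA syntax is shown below. The regexes and modes are illustrative assumptions, not the patterns actually deployed — real entries must be derived from the ISP's actual SMTP reply text and validated against the pattern-list syntax in the PowerMTA documentation:

```
<smtp-pattern-list eu-isp-replies>
    # Permanent failures: bounce immediately instead of retrying.
    reply /(?i)user (unknown|does not exist)/ mode=fail
    # Greylisting responses: retry after the configured interval.
    reply /(?i)greylist/ mode=retry
</smtp-pattern-list>

<domain t-online.de>
    smtp-pattern-list eu-isp-replies
    retry-after 30m
</domain>
```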
DKIM keys were rotated to 2048-bit RSA on all sending domains. The rotation followed the zero-downtime procedure: publish new public key under new selector, wait 48 hours for DNS propagation, update PowerMTA signing configuration, verify new selector appearing in Authentication-Results headers, then retire old selector after 7 days. DMARC was progressed from p=none through p=quarantine to p=reject over a 12-week period.
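The DNS side of the selector rotation can be sketched as a small helper that builds the TXT record for the new selector. The selector name, domain, and key material below are placeholders, not the engagement's actual values:

```python
# Sketch: build the DNS TXT record published in step 1 of the
# zero-downtime DKIM rotation (new key under a new selector, old
# selector untouched until retirement).

def dkim_txt_record(selector: str, domain: str, pubkey_b64: str) -> tuple[str, str]:
    """Return (record_name, record_value) for a DKIM selector TXT record."""
    name = f"{selector}._domainkey.{domain}"
    value = f"v=DKIM1; k=rsa; p={pubkey_b64}"
    return name, value

# Placeholder selector and truncated key material for illustration.
name, value = dkim_txt_record("s2025a", "brand.example", "MIIBIjANBg")
```

Once this record has propagated and PowerMTA signs with the new selector, verification is a matter of checking that `Authentication-Results` headers at major mailbox providers report `dkim=pass` with the new selector before the old one is retired.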
Results after 90 days: The infrastructure changes produced immediate delivery improvement, but the operational changes — the monitoring discipline and response protocols — are what sustain that improvement over time. Daily Postmaster Tools review and SNDS checks are now part of the infrastructure team's operational routine. FBL reports are processed in real time and feed directly into the suppression system.
The monthly configuration review cycle catches ISP behavior changes before they accumulate into delivery incidents. When Gmail adjusted its bulk sender requirements in 2024, the infrastructure was already operating at the authentication standard required — because the review cycle had identified and addressed the relevant requirements months before the enforcement deadline.
The technical changes in this engagement were straightforward. The more significant work was establishing the monitoring discipline that prevents the gradual drift that caused the original problems — an infrastructure that meets today's ISP requirements but has no ongoing review process will fall behind those requirements within 12-18 months.
— Cloud Server for Email Infrastructure Team

The infrastructure improvements achieved in this engagement represent a point-in-time improvement, not a permanent outcome. Email deliverability is an ongoing operational discipline — ISP filtering systems evolve, list composition changes with growth, and the configuration settings that are optimal today may need adjustment in six months. The monitoring and review processes established during this engagement are what sustain the improved performance over time.
Key ongoing practices established: daily Postmaster Tools and SNDS review integrated into the operations team's monitoring dashboard, real-time FBL complaint processing feeding directly into the suppression system, quarterly DKIM key rotation cadence, and monthly ISP-specific configuration review against current best practices. These practices take less time than a single delivery incident response — and they prevent the incidents.
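The FBL-to-suppression flow can be sketched as below. Field names follow the ARF format (RFC 5965); note that many FBL providers redact `Original-Rcpt-To`, in which case the recipient is recovered from a custom header or encoded Message-ID instead — the parsing here is a minimal illustration, not a production FBL pipeline:

```python
# Sketch: parse the machine-readable field block of an ARF (RFC 5965)
# feedback report and add abuse complainants to a suppression set.

def parse_arf_fields(report_body: str) -> dict[str, str]:
    """Parse 'Field-Name: value' lines into a lowercase-keyed dict."""
    fields = {}
    for line in report_body.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            fields[key.strip().lower()] = val.strip()
    return fields

suppression: set[str] = set()

def process_report(report_body: str) -> None:
    """Suppress the recipient if the report is an abuse complaint."""
    fields = parse_arf_fields(report_body)
    rcpt = fields.get("original-rcpt-to")
    if rcpt and fields.get("feedback-type") == "abuse":
        suppression.add(rcpt.lower())
```

Processing these reports in real time, rather than in a daily batch, is what keeps repeat sends to complainants — and the reputation damage they cause — from accumulating between cycles.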
One of the less-visible benefits of well-managed dedicated infrastructure is that it compounds over time. ISP reputation systems give weight to consistent historical behavior — a sender with 18 months of clean sending history recovers from a single incident faster than a sender with inconsistent history. The reputation capital built over time becomes a form of infrastructure resilience that is not visible in day-to-day metrics but matters significantly during incidents.
The infrastructure patterns in this case study recur across different sender types and volumes. A technical assessment identifies which of them apply to your environment and what the remediation sequence looks like for your specific configuration.
Contact the technical team to discuss your specific situation. We assess each environment individually before recommending an architecture.