Canada · Multi-Brand Retail · Case Study

Canadian Multi-Brand Retailer: Unified Email Infrastructure Across 5 Consumer Brands

Q1 2025 · Cloud Server for Email Infrastructure
  • 5 brands consolidated
  • 12M monthly messages sent
  • 89% average Gmail inbox rate
  • 100% brand reputation isolation

Five brands, five ESPs, five sets of deliverability problems

A Toronto-based retail group operating five consumer brands across clothing, homewares, sporting goods, and electronics had inherited a different email marketing platform for each brand through a series of acquisitions over four years: Mailchimp for Brand A, Klaviyo for Brand B, Brevo for Brand C, ActiveCampaign for Brand D, and a legacy in-house SMTP relay for Brand E.

Consolidated monthly send volume was approximately 12 million messages. The CEO wanted a single platform. The operations team wanted lower costs. The marketing team wanted better deliverability. The conflicting requirements of "single platform" (which suggested moving all brands to one commercial ESP) and "reputation isolation" (which requires that brands not share IPs) were difficult to reconcile commercially.

Single cluster, per-brand IP isolation

The solution was a dedicated PowerMTA cluster with per-brand IP pool allocation. All five brands operated on the same physical infrastructure — managed by the same team, with unified monitoring and reporting — but with completely isolated sending IPs, DKIM keys, and bounce processing per brand.

# Per-brand isolation — complete reputation separation

# Brand A (Clothing) — IPs .10-.12, domain: brandalerts.ca
virtual-mta-pool brand-a { virtual-mta ba-1 ba-2 ba-3 }

# Brand B (Homewares) — IPs .20-.22, domain: brandbnews.ca
virtual-mta-pool brand-b { virtual-mta bb-1 bb-2 bb-3 }

# Brand C (Sporting) — IPs .30-.31, domain: brandcupdates.ca
virtual-mta-pool brand-c { virtual-mta bc-1 bc-2 }

# Brand D (Electronics) — IPs .40-.42, domain: branddalerts.ca
virtual-mta-pool brand-d { virtual-mta bd-1 bd-2 bd-3 }

# Brand E (Legacy) — IPs .50-.51, domain: brandeleads.ca
virtual-mta-pool brand-e { virtual-mta be-1 be-2 }

# Result: reputation event on Brand C cannot affect Brand A

Gmail Inbox Rate Before and After

[Chart: per-brand Gmail inbox rate before and after, consolidated infrastructure vs individual ESPs]

Unified operations, isolated brand reputation

The consolidation reduced email infrastructure cost by 48% compared to maintaining five separate commercial platforms. Brand C — which had suffered from particularly poor deliverability on a shared Brevo pool — recovered from 61% to 85% Gmail inbox rate within 10 weeks on dedicated IPs. Brand E's legacy SMTP relay, which had never been properly authenticated, was replaced with authenticated dedicated infrastructure.

The per-brand isolation delivered the specific outcome the marketing team required: when Brand D ran an aggressive re-engagement campaign that temporarily elevated complaint rates, no reputation impact was observed on any other brand's sending environment.

Multi-Brand Architecture Principle

Consolidation onto a single commercial ESP would have created exactly the cross-brand contamination risk that the marketing teams feared. Dedicated infrastructure with per-brand IP allocation provides the operational simplicity of a unified environment while preserving the reputation isolation that multi-brand operations require.

Technical Assessment: Infrastructure Layers Examined

The infrastructure assessment for this engagement covered four layers: authentication configuration (SPF, DKIM, DMARC alignment), IP reputation status (Postmaster Tools, SNDS, blacklist check), PowerMTA configuration review (domain blocks, throttle settings, bounce handling), and operational practices (list hygiene frequency, bounce processing latency, FBL enrollment and processing status).

Authentication issues were the highest-priority finding. The existing DKIM keys were 1024-bit (below the current ISP recommendation of a 2048-bit minimum), and DMARC was at p=none with no aggregate reports being collected or reviewed. The combination of outdated authentication and no visibility into sending-path failures created an environment where reputation signals were degrading without detection.
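For illustration, the pre-remediation records looked roughly like the following; the selector and key value shown are placeholders, not the client's actual DNS entries:

; DKIM: single long-lived selector publishing a 1024-bit key (placeholder values)
s1._domainkey.brandalerts.ca.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AQAB"

; DMARC: monitoring-only policy with no rua= tag, so no aggregate reports were collected
_dmarc.brandalerts.ca.  IN TXT  "v=DMARC1; p=none"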

Infrastructure Rebuild: Configuration Decisions

IP Pool Architecture

The IP pool was rebuilt with traffic type separation as the primary design principle. Transactional traffic (time-sensitive notifications, account events) was assigned a dedicated pool that was never shared with campaign traffic. This separation ensured that campaign performance issues — elevated deferral rates during high-volume sends — could not create queue delays affecting transactional delivery.

Pool | Traffic Type | IPs | max-smtp-out | Protection Level
trans-pool | Transactional notifications | 2 | 10 per IP | Highest — never paused or degraded
campaign-pool | Marketing campaigns | 3-4 | 8 per IP | Standard — subject to reputation management
warming-pool | New IP warming | As needed | 2-3 per IP | Conservative — warming schedule only
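In configuration terms, the pools above map to definitions along the following lines, written in the same simplified notation as the brand pools earlier in this case study (the virtual-mta names are illustrative):

# Traffic-type separation: transactional never shares a pool with campaign traffic
virtual-mta-pool trans-pool    { virtual-mta trans-1 trans-2 }        # 2 IPs, max-smtp-out 10 per IP
virtual-mta-pool campaign-pool { virtual-mta camp-1 camp-2 camp-3 }   # 3-4 IPs, max-smtp-out 8 per IP
virtual-mta-pool warming-pool  { virtual-mta warm-1 }                 # added as needed, 2-3 connections per IP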

PowerMTA Domain Block Configuration

ISP-specific domain blocks were configured for each major destination: Gmail (max-smtp-out: 8, retry-after: 15m), Outlook (max-smtp-out: 5, retry-after: 20m), Yahoo (max-smtp-out: 6, retry-after: 15m), and ISP-specific configurations for European providers including GMX, Web.de, T-Online, and OVH. Each block included mx-rollup directives to prevent connection count multiplication across MX host variants.
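Expressed in the same shorthand, the per-ISP blocks carry the connection and retry values quoted above (the production configuration contains additional directives; this is a sketch of the throttle settings only):

# Per-ISP throttle blocks (shorthand; values as described above)
domain gmail.com   { max-smtp-out 8  retry-after 15m }
domain outlook.com { max-smtp-out 5  retry-after 20m }
domain yahoo.com   { max-smtp-out 6  retry-after 15m }
# GMX, Web.de, T-Online, and OVH get their own, more conservative blocks;
# mx-rollup directives keep shared MX hosts from multiplying the connection count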

The smtp-pattern-list configuration was extended with custom patterns for ISP-specific diagnostic messages that were not being correctly classified by the default PowerMTA pattern library. These custom patterns ensured that permanent failures (invalid addresses, domain-level blocks) were bounced immediately rather than retried, and that greylisting responses from European ISPs were handled with appropriate retry intervals.
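The additions followed the shape sketched below; the reply texts and actions are illustrative examples of the classification logic, not the exact production patterns:

# Custom reply classification (illustrative patterns, simplified notation)
smtp-pattern-list custom-replies {
    # Permanent failure wording from a regional ISP: bounce immediately, never retry
    reply /user unknown in virtual mailbox table/ bounce
    # Greylisting response from a European ISP: retry on a longer interval
    reply /greylisted, please try again later/ retry
}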

Authentication Upgrade

DKIM keys were rotated to 2048-bit RSA on all sending domains. The rotation followed a zero-downtime procedure: publish the new public key under a new selector, wait 48 hours for DNS propagation, update the PowerMTA signing configuration, verify that the new selector appears in Authentication-Results headers, then retire the old selector after 7 days. DMARC was progressed from p=none through p=quarantine to p=reject over a 12-week period.
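On the DNS side, the rotation and the DMARC progression looked roughly like this (the selector name and report mailbox are illustrative):

; New 2048-bit key published under a new selector, alongside the old selector
s2025._domainkey.brandalerts.ca.  IN TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq...AQAB"

; DMARC progression over the 12-week period, with aggregate reports collected throughout
_dmarc.brandalerts.ca.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@brandalerts.ca"
; after alignment verified across all sending paths:
_dmarc.brandalerts.ca.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@brandalerts.ca"
; final state:
_dmarc.brandalerts.ca.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@brandalerts.ca"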

Metric | Scope | Before | After
Gmail Inbox Placement | Seed test | 62% | 93%
Deferral Rate | All major ISPs | 14% | 2.8%
Hard Bounce Rate | Gmail | 3.2% | 0.7%
DMARC Alignment | All domains | 88% | 99.6%

Operational Monitoring: What Changed Permanently

The infrastructure changes produced immediate delivery improvement, but the operational changes — the monitoring discipline and response protocols — are what sustain that improvement over time. Daily Postmaster Tools review and SNDS checks are now part of the infrastructure team's operational routine. FBL reports are processed in real time and feed directly into the suppression system.

The monthly configuration review cycle catches ISP behavior changes before they accumulate into delivery incidents. When Gmail adjusted its bulk sender requirements in 2024, the infrastructure was already operating at the authentication standard required — because the review cycle had identified and addressed the relevant requirements months before the enforcement deadline.

The technical changes in this engagement were straightforward. The more significant work was establishing the monitoring discipline that prevents the gradual drift that caused the original problems — an infrastructure that meets today's ISP requirements but has no ongoing review process will fall behind those requirements within 12-18 months.

— Cloud Server for Email Infrastructure Team

Long-Term Infrastructure Management and Lessons

The infrastructure improvements achieved in this engagement represent a point-in-time improvement, not a permanent outcome. Email deliverability is an ongoing operational discipline — ISP filtering systems evolve, list composition changes with growth, and the configuration settings that are optimal today may need adjustment in six months. The monitoring and review processes established during this engagement are what sustain the improved performance over time.

Key ongoing practices established: daily Postmaster Tools and SNDS review integrated into the operations team's monitoring dashboard, real-time FBL complaint processing feeding directly into the suppression system, quarterly DKIM key rotation cadence, and monthly ISP-specific configuration review against current best practices. These practices take less time than a single delivery incident response — and they prevent the incidents.

The Compounding Effect of Clean Infrastructure

One of the less-visible benefits of well-managed dedicated infrastructure is that it compounds over time. ISP reputation systems give weight to consistent historical behavior — a sender with 18 months of clean sending history recovers from a single incident faster than a sender with inconsistent history. The reputation capital built over time becomes a form of infrastructure resilience that is not visible in day-to-day metrics but matters significantly during incidents.

Transferable Principles From This Engagement

  • Traffic type isolation (transactional vs marketing vs cold) should be implemented before volume grows to the point where reputation events in one stream affect others — not after
  • Authentication upgrades (DKIM key rotation, DMARC enforcement progression) have near-zero operational risk when sequenced correctly — but generate significant risk when rushed
  • Bounce processing latency is the most-overlooked list hygiene factor — every hour of delay between a hard bounce and suppression is another potential send to an invalid or trap address
  • ISP-specific throttle configuration must be calibrated to your current reputation tier, not to a target tier — over-ambitious settings at low reputation delay recovery rather than accelerating it

Similar challenges in your infrastructure?

The infrastructure patterns in this case study recur across different sender types and volumes. A technical assessment identifies which apply to your environment and what the remediation sequence looks like for your specific configuration.

Contact the technical team to discuss your specific situation. We assess each environment individually before recommending an architecture.