TECHNICAL REFERENCE

MailWizz FAQ — Technical Configuration & Production Operations

Technical questions about MailWizz architecture, delivery server integration, bounce management, tracking configuration, and scaling for high-volume email deployments. Written for engineers and system administrators operating production MailWizz environments.

ARCHITECTURE & INTEGRATION WITH SMTP INFRASTRUCTURE

Technical questions about MailWizz's internal architecture, how it connects to SMTP relay layers, and configuration patterns for high-volume production deployments.

MailWizz connects to external SMTP servers through its Delivery Server configuration (under Backend → Servers → Delivery Servers). Each Delivery Server record defines the SMTP host, port, authentication method (PLAIN, LOGIN, or CRAM-MD5), username, password, and encryption type (None, TLS, or SSL). For PowerMTA integration specifically, the PowerMTA server must have an SMTP listener configured to accept authenticated connections from the MailWizz application server's IP address. The listener in PowerMTA's config.conf specifies the binding address and port, and the smtp-auth configuration within the listener block defines which authentication methods are accepted. MailWizz sends to PowerMTA via this authenticated listener, and PowerMTA handles the outbound delivery from the appropriate virtual-MTA pool. Multiple Delivery Servers in MailWizz can point to the same PowerMTA instance on different ports — enabling traffic type separation by having transactional campaigns use a Delivery Server on port 587 while bulk campaigns use a Delivery Server on port 2525, with PowerMTA routing each port to a different virtual-mta-pool.
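The listener and pool wiring described above can be sketched as a config.conf excerpt. This is a hypothetical fragment, not a drop-in config: IPs, ports, user names, and pool names are placeholders, and it separates traffic by authenticated SMTP user via a named source — the per-port separation described in the text follows the same pattern with a second listener and user. Verify directive names against your PowerMTA version's documentation.

```
# Hypothetical PowerMTA config.conf excerpt (placeholders throughout)
smtp-listener 10.0.0.5:587
smtp-listener 10.0.0.5:2525

<smtp-user mailwizz-trans>
    password CHANGE_ME
    source {mailwizz-trans}
</smtp-user>

<source {mailwizz-trans}>
    always-allow-relaying yes          # accept relay from this authenticated user
    default-virtual-mta transactional-pool
</source>

<virtual-mta-pool transactional-pool>
    virtual-mta vmta-trans-1
</virtual-mta-pool>
```

A matching second smtp-user/source pair pointing at a bulk pool completes the traffic-type split; MailWizz then gets one Delivery Server record per credential.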
MailWizz's database performance at high volume is primarily affected by three categories of configuration: connection pooling, query caching, and table optimization (note that MySQL 8.0 removed the query cache entirely; on 8.0+ the InnoDB buffer pool is the relevant caching layer). MailWizz does not implement persistent database connection pooling natively — each request cycle opens and closes connections. Under high queue processing load, max_connections in MySQL/MariaDB should be set to at least 200–300 for MailWizz deployments processing millions of messages. The innodb_buffer_pool_size setting is the most impactful single MySQL parameter — it should be set to 70–80% of available RAM on a dedicated database server, or 40–50% on a combined application/database server. InnoDB is the required storage engine for MailWizz tables: queue processing depends on its row-level locking to let concurrent workers update subscriber and delivery records without serializing on table locks. Tables that grow large under high volume — particularly mw_campaign_delivery_log and mw_email_blacklist — require periodic archiving or partitioning to maintain query performance. The mw_email_blacklist table in particular should have a composite index on (email, status) to support the lookup queries performed for each recipient during queue processing.
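The composite blacklist index can be created along these lines. This is a hypothetical statement assuming the default mw_ table prefix and an email column stored as VARCHAR under utf8mb4 — check the actual column definitions in your installation before applying:

```sql
-- Hypothetical DDL; assumes default MailWizz table prefix mw_.
-- The 191-character prefix keeps the utf8mb4 key within InnoDB's
-- legacy 767-byte index prefix limit.
ALTER TABLE mw_email_blacklist
  ADD INDEX idx_blacklist_email_status (email(191), status);
```

Run it during a low-traffic window; online index creation still adds I/O load on a table of millions of rows.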
MailWizz's queue processing operates through a combination of the campaign queue daemon and cron-based workers. The send-campaigns cron job (php -q /path/to/apps/console/console.php send-campaigns) dispatches campaign sends, and the queue workers process the actual message delivery. Queue worker concurrency is controlled by the campaigns.send.parallel application parameter in MailWizz's configuration (Backend → Settings → Campaigns), which determines how many campaigns can be processed in parallel. The campaigns.send.subscribers_chunk_size parameter controls how many subscriber records are loaded into memory per processing batch — increasing this value improves throughput but increases RAM consumption. For high-volume deployments, running MailWizz queue workers under a dedicated process manager (supervisord) rather than relying solely on cron provides more consistent throughput and allows worker restarts without cron interval dependency. The MailWizz campaign daemon (apps/console/console.php campaigns-queue) processes the internal queue and is where the throughput bottleneck typically sits — dedicating sufficient PHP workers and keeping MySQL query latency below 10 ms for blacklist lookups is the principal optimization target.
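For the cron-driven setup, the two console commands named above translate into crontab entries along these lines. The install path is a placeholder, and your MailWizz version may ship additional console commands — mirror the crontab shown in your installation's setup screen rather than treating this as complete:

```
# Hypothetical crontab excerpt; /path/to/ is a placeholder
* * * * * php -q /path/to/apps/console/console.php send-campaigns  >/dev/null 2>&1
* * * * * php -q /path/to/apps/console/console.php campaigns-queue >/dev/null 2>&1
```

Under supervisord, the campaigns-queue entry is dropped from cron and managed as a long-running program instead.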
MailWizz's Delivery Server rotation is configured at the campaign level and through Delivery Server group assignments. In the campaign settings, the Delivery Servers field allows selection of specific servers or groups for that campaign. When multiple servers are selected, MailWizz rotates through them based on the rotation algorithm — either round-robin or by delivery server weight. Delivery server weights are set in the Delivery Server record and allow traffic distribution to be proportional (e.g., 60% through one server, 40% through another). For IP pool separation by campaign type, the recommended MailWizz configuration is: (1) create Delivery Server Groups corresponding to traffic types (e.g., Bulk Marketing Pool, Transactional Pool), (2) assign to each group the Delivery Servers that point at the corresponding PowerMTA SMTP listeners, (3) configure campaign templates to default to the appropriate group. This configuration means that IP pool selection is controlled at the MailWizz campaign configuration level rather than requiring separate application instances or routing rules in the relay layer.
DELIVERABILITY CONFIGURATION & BOUNCE MANAGEMENT

Questions about configuring MailWizz's tracking, bounce processing, suppression lists, and authentication settings to maintain high deliverability in production environments.

MailWizz's bounce processing is configured through a dedicated Bounce Server record (Backend → Servers → Bounce Servers) that specifies a mailbox to which PowerMTA (or another upstream MTA) delivers bounce messages. MailWizz connects to this mailbox via IMAP or POP3 and processes received bounce messages against its bounce pattern database. The bounce server is associated with a Delivery Server record — when PowerMTA generates a bounce for a message delivered through a specific MailWizz Delivery Server, the bounce is returned to the message's envelope sender (MAIL FROM), which MailWizz sets to an address that resolves to the configured bounce mailbox. Hard bounce classification triggers immediate subscriber blacklisting — the email address is added to the global suppression list and will not be included in future campaigns. Soft bounce handling is configurable: after a configurable number of soft bounces within a time window (Backend → Settings → Campaigns → Soft Bounce Count to Unsubscribe), the address is also blacklisted. The bounce pattern configuration file (apps/common/data/email-bounce-patterns.php) can be extended with custom patterns for ISP-specific bounce messages not covered by the default set — essential for accurate classification of European ISP bounce formats that differ from North American conventions.
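Extending the bounce pattern file might look like the sketch below. This is loudly hypothetical: the shipped file's exact array shape varies by MailWizz version, so mirror the structure actually present in apps/common/data/email-bounce-patterns.php rather than this layout; the pattern strings are illustrative examples, not a vetted set.

```php
<?php
// Hypothetical sketch — assumes a simple regex-list-per-category layout.
// Copy the real structure from your installed email-bounce-patterns.php.
return [
    'hard' => [
        'unknown or illegal alias',        // example ISP-specific hard bounce text
        'benutzer existiert nicht',        // German "user does not exist"
    ],
    'soft' => [
        'mailbox temporarily unavailable', // example transient failure text
    ],
];
```

Keep custom patterns as narrow as possible; an over-broad hard-bounce regex silently blacklists deliverable addresses.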
Gmail and Yahoo's bulk sender requirements mandate that messages include a List-Unsubscribe header with both a mailto: unsubscribe address and an HTTPS one-click unsubscribe URL (RFC 8058 compliant), and that the List-Unsubscribe-Post header is present indicating support for POST-based one-click unsubscription. MailWizz adds List-Unsubscribe headers automatically for campaigns — the system generates a unique unsubscribe URL per subscriber and includes it in the List-Unsubscribe header. However, compliance with the one-click requirement (List-Unsubscribe-Post: List-Unsubscribe=One-Click) requires MailWizz version 2.1.x or later, which added RFC 8058 support. To verify compliance, inspect the raw headers of a sent message: both List-Unsubscribe: <https://...>, <mailto:...> and List-Unsubscribe-Post: List-Unsubscribe=One-Click must be present. If only the mailto: form is present, or if List-Unsubscribe-Post is absent, the installation requires updating or manual header injection via a custom plugin or pre-send hook. The one-click endpoint must respond to HTTP POST requests with a 200 status code — MailWizz's unsubscribe handler supports this natively in compliant versions.
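The header inspection described above can be scripted. The sketch below writes a sample raw message to message.eml (a placeholder path — in practice, export a real sent message's source to that file instead) and checks for all three required elements:

```shell
# Sketch: verify RFC 8058 one-click headers on a raw message.
eml="message.eml"

# Sample headers for demonstration only; replace with a real exported message.
cat > "$eml" <<'EOF'
List-Unsubscribe: <https://lists.example.com/unsub/abc123>, <mailto:unsub@example.com>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
EOF

# All three greps must match for Gmail/Yahoo bulk-sender compliance.
if grep -qi '^List-Unsubscribe:.*<https://' "$eml" \
   && grep -qi '^List-Unsubscribe:.*<mailto:' "$eml" \
   && grep -qi '^List-Unsubscribe-Post:[[:space:]]*List-Unsubscribe=One-Click' "$eml"; then
  echo "compliant"
else
  echo "non-compliant"
fi
```

Running it against the sample prints "compliant"; against a message missing List-Unsubscribe-Post it prints "non-compliant", indicating the installation needs the version update or header injection described above.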
MailWizz's global email blacklist (mw_email_blacklist table) is checked for every recipient during queue processing before each message is dispatched. At scale — blacklists of several million entries are common in long-running deployments — this check becomes a significant source of database query latency if the table is not properly indexed. The lookup index should cover the email column; if the column is a VARCHAR longer than 191 characters under utf8mb4, use a prefix index or index a hash of the address, since 191 four-byte characters is the most that fits within InnoDB's legacy 767-byte index prefix limit. At lists exceeding five million entries, query latency for the blacklist check can exceed five milliseconds per lookup — which, multiplied across millions of recipients per campaign, becomes the primary throughput bottleneck in queue processing. Two mitigation approaches: (1) periodic archival of blacklist entries older than 24 months that have not been re-added (these rarely affect active campaigns but continue consuming index space), and (2) for very high volume environments, implementing a Redis-based blacklist cache layer in front of MySQL using a MailWizz extension, which reduces the per-lookup latency from 3–8ms to under 1ms for cached entries.
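The archival approach might look like the following. This is a hypothetical sketch: it assumes a date_added column and the default mw_ prefix — confirm the real column names in your schema first, and wrap the pair in a transaction or run it row-range by row-range so INSERT and DELETE stay consistent:

```sql
-- Hypothetical archival pass for entries older than 24 months.
CREATE TABLE IF NOT EXISTS mw_email_blacklist_archive LIKE mw_email_blacklist;

INSERT INTO mw_email_blacklist_archive
SELECT * FROM mw_email_blacklist
WHERE date_added < NOW() - INTERVAL 24 MONTH;

DELETE FROM mw_email_blacklist
WHERE date_added < NOW() - INTERVAL 24 MONTH;
```

In production, run the DELETE in bounded batches (e.g. with LIMIT in a loop) to avoid long-held row locks that stall the per-recipient blacklist checks.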
MailWizz's tracking links (click tracking and open tracking pixel) use a tracking domain that is configured in Backend → Settings → Common → Tracking Domain. The tracking domain should be a subdomain or a separate domain entirely from the primary sending domain — not the root domain. The reason is reputational: tracking domains appear in message headers and in redirected URLs. ISPs evaluate these domains as part of content filtering, and a tracking domain that accumulates spam complaints or appears on URL blacklists (SURBL, URIBL) will cause content filtering events independent of the sending IP or domain reputation. A dedicated tracking subdomain (track.yoursendingdomain.com) that is not used for any other purpose provides a clean reputation baseline. SSL/TLS is required for the tracking domain — Gmail in particular shows security warnings for HTTP-only tracking URLs in some contexts, which affects click rates and triggers additional content scrutiny. The tracking subdomain must have a valid TLS certificate, a working HTTPS server (Apache or Nginx), and the MailWizz tracking application deployed to the document root of that domain.
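An Nginx server block for the tracking subdomain might look like this sketch. Domain, certificate paths, document root, and the PHP-FPM socket path are all placeholders for your environment:

```nginx
# Hypothetical Nginx vhost for a dedicated tracking subdomain (placeholders throughout)
server {
    listen 443 ssl;
    server_name track.yoursendingdomain.com;

    ssl_certificate     /etc/letsencrypt/live/track.yoursendingdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/track.yoursendingdomain.com/privkey.pem;

    root /var/www/mailwizz;
    index index.php;

    # Route tracking URLs through the MailWizz front controller
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Pair it with a port-80 server block that issues a 301 redirect to HTTPS so legacy HTTP tracking links still resolve.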
SCALING, MULTI-SERVER DEPLOYMENTS & ADVANCED OPERATIONS

Technical questions about scaling MailWizz beyond single-server deployments, managing multi-instance configurations, and operational procedures for production environments.

A multi-server MailWizz deployment separates the application into three layers: the web frontend (handling the admin and subscriber-facing application), the queue processing workers (running the send-campaigns daemon and cron tasks), and the database (MySQL/MariaDB). The shared filesystem requirement is the primary complexity — MailWizz's apps/common/runtime/ and apps/frontend/runtime/ directories must be accessible from both the web frontend servers and the queue worker servers simultaneously. This is addressed via NFS mount or a shared storage service. The MailWizz configuration (apps/common/config/main.php) must be identical across all application servers, with all servers pointing to the same database host. Queue workers should run only on designated worker servers — not on web frontend servers — to prevent queue processing from consuming resources needed for web response. The queue worker servers do not need to run a web server; they require only PHP-CLI, the MailWizz codebase, and access to the shared filesystem and database. Supervisord on each worker server manages the queue daemon processes, configured to restart on failure and maintain the desired number of concurrent worker processes.
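On each worker server, the supervisord-managed daemon described above could be declared as follows. Program name, install path, user, and process count are placeholders; tune numprocs to match your campaigns.send.parallel setting and available CPU:

```ini
; Hypothetical supervisord program for the MailWizz campaign daemon
[program:mailwizz-campaigns-queue]
command=php -q /var/www/mailwizz/apps/console/console.php campaigns-queue
numprocs=4
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
user=www-data
stopwaitsecs=30
```

numprocs spawns that many identical worker processes; process_name must include %(process_num) whenever numprocs exceeds 1, or supervisord refuses the config.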
The queries that most commonly cause performance degradation in high-volume MailWizz deployments are: (1) the subscriber blacklist check per recipient during queue processing, (2) the campaign subscriber list queries that fetch batches of recipients for dispatch, (3) the campaign delivery log insert queries that record sent message metadata, and (4) the open and click tracking insert queries from the tracking application. MySQL's slow query log (slow_query_log = ON, long_query_time = 0.1) configured to capture queries exceeding 100ms provides the initial visibility. In production, the most common early warning signs of database performance degradation affecting queue throughput are: queue processing throughput dropping below expected rates without a corresponding change in campaign volume (indicating database latency increase), PHP queue workers showing high wait times in SHOW PROCESSLIST output (indicating query queue buildup), and mw_campaign_delivery_log insert latency increasing (this table grows continuously and requires periodic archiving or partitioning — unarchived logs exceeding 50–100 million rows commonly cause this symptom). The InnoDB row lock wait timeout can be reached by concurrent queue workers competing for the same subscriber records — SHOW ENGINE INNODB STATUS reveals deadlock patterns that indicate this condition.
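The slow-query-log settings named above, expressed as a my.cnf fragment (log file path is a placeholder):

```ini
# my.cnf excerpt: capture queries exceeding 100ms
[mysqld]
slow_query_log      = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 0.1
# Optionally also log queries not using indexes — noisy, but useful during tuning:
# log_queries_not_using_indexes = ON
```

Analyze the resulting log with mysqldumpslow or pt-query-digest to rank the four query classes above by aggregate time rather than individual duration.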
IP rotation can be implemented at two layers in a MailWizz + PowerMTA stack, and the correct choice depends on the isolation objective. MailWizz-level rotation — using multiple Delivery Server records pointing to different PowerMTA SMTP listeners, each listener bound to a specific IP pool — provides campaign-level isolation. Campaign A routes through Delivery Server 1 (bound to IP pool A), Campaign B through Delivery Server 2 (bound to IP pool B). This approach is appropriate when different campaigns have meaningfully different audience quality or complaint rate profiles and should not share reputation. PowerMTA-level rotation — round-robin across virtual-MTA addresses within a single pool, or weighted distribution across pools — operates below the campaign visibility level and provides IP-level distribution without campaign-level isolation. It is appropriate for distributing load across multiple IPs within a single reputation tier. The reputation isolation question determines which layer to use: if the objective is preventing one campaign's reputation events from affecting another campaign, MailWizz-level Delivery Server assignment provides the control boundary. If the objective is distributing volume across multiple IPs within a consistent reputation tier, PowerMTA-level virtual-MTA pool rotation is the appropriate mechanism.
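The PowerMTA-level pool described above might be declared as follows. This is a hypothetical excerpt — IPs, hostnames, and pool names are placeholders, and directive details should be checked against your PowerMTA version:

```
# Hypothetical virtual-MTA pool for intra-tier IP distribution (placeholders)
<virtual-mta vmta-bulk-1>
    smtp-source-host 192.0.2.10 mail1.example.com
</virtual-mta>

<virtual-mta vmta-bulk-2>
    smtp-source-host 192.0.2.11 mail2.example.com
</virtual-mta>

<virtual-mta-pool bulk-pool>
    virtual-mta vmta-bulk-1
    virtual-mta vmta-bulk-2
</virtual-mta-pool>
```

Each smtp-source-host pairs a sending IP with the hostname used in its HELO/EHLO and reverse DNS; the pool distributes outbound connections across its members within the single reputation tier.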
PHP configuration for high-volume MailWizz queue processing requires attention to memory limits, execution time, and socket handling. Key parameters in php.ini for queue workers: memory_limit = 512M (queue workers that process large subscriber batches will exceed the default 128M limit — 256M minimum, 512M recommended for large deployments), max_execution_time = 0 (queue daemon processes run continuously — execution time limits must be disabled for CLI workers), pcre.recursion_limit = 100000 (MailWizz uses complex regex for email validation — default recursion limits can cause silent validation failures on some address formats). Required PHP extensions that are commonly absent in minimal PHP installations: php-imap (required for bounce server IMAP processing — the most commonly missing extension in fresh installations), php-curl (required for API integrations and tracking domain verification), php-gd (required for the template editor image handling), php-mbstring (required for multi-byte character handling in subscriber names and content), and php-redis (optional but strongly recommended for caching on high-volume deployments — MailWizz's Redis cache integration reduces database load significantly when configured). The php-imap extension absence is the most common cause of bounce server connection failures in MailWizz deployments.
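The php.ini parameters listed above, as a fragment for the CLI configuration used by queue workers (apply it to the CLI php.ini or a conf.d drop-in, not the FPM pool config):

```ini
; php.ini (CLI) excerpt for MailWizz queue workers
memory_limit         = 512M     ; large subscriber batches exceed the 128M default
max_execution_time   = 0        ; daemon processes must run without a time limit
pcre.recursion_limit = 100000   ; headroom for MailWizz's email-validation regexes
```

Confirm the extension list with `php -m` on the worker host; imap, curl, gd, and mbstring must all appear in its output.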

Running MailWizz in Production?

We design and operate managed infrastructure environments using PowerMTA and MailWizz for high-volume senders. Configuration, reputation management, and operational oversight — fully managed.