June 2025 · POWERMTA TECHNICAL REFERENCE

PowerMTA Throughput Bottleneck Diagnosis — Disk, Network, Thread, and ISP Limits


PowerMTA throughput bottlenecks manifest as a delivery rate that cannot increase despite adding more IPs or connections. The bottleneck may be in storage (spool disk I/O), OS resources (file descriptors, network ports), PowerMTA process threads, or ISP-imposed limits on the destination side. Identifying which layer is the constraint before making configuration changes is the key to effective tuning — blind changes rarely solve the correct problem.

Section 1

Measuring Current Throughput

# Real-time throughput
pmta show status
# Output: Delivering: X msgs/sec | Queue depth: Y | Active connections: Z

# Historical throughput from the accounting log (hourly buckets)
awk -F, 'NR>1 && $1=="d" {hour=substr($2,1,13); count[hour]++} END {
    for(h in count) print h, count[h]
}' /var/log/pmta/accounting.csv | sort | tail -24
# Shows delivered messages per hour for last 24 hours

# HTTP API polling for monitoring dashboards
curl -s http://localhost:8080/status.json
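Before trusting the hourly bucket awk on production data, it can be sanity-checked against a synthetic log. A minimal sketch; the two-column layout below is hypothetical, since real accounting files carry many more columns and their order depends on the configured record fields:

```shell
# Synthetic two-column accounting log (hypothetical layout; only $1 = record
# type and $2 = timestamp matter to the hourly awk)
cat > /tmp/acct-sample.csv << 'EOF'
type,timeLogged
d,2025-06-01 10:00:01
d,2025-06-01 10:15:42
d,2025-06-01 11:03:07
b,2025-06-01 11:10:00
EOF

# Same command as above: bounces (type b) are excluded, delivered (d)
# records are bucketed by hour
awk -F, 'NR>1 && $1=="d" {hour=substr($2,1,13); count[hour]++} END {
    for(h in count) print h, count[h]
}' /tmp/acct-sample.csv | sort
# 2025-06-01 10 2
# 2025-06-01 11 1
```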
Section 2

Bottleneck Types and Diagnosis

Bottleneck: Spool disk I/O
  Symptom: high iowait; throughput plateau
  Diagnosis: iostat -x 5 shows the device at 100% utilization
  Resolution: move the spool to NVMe SSD

Bottleneck: File descriptor limit
  Symptom: "Too many open files" in the log
  Diagnosis: cat /proc/$(pgrep pmta)/limits
  Resolution: set LimitNOFILE=65535 in the systemd unit

Bottleneck: Port exhaustion
  Symptom: many CLOSE_WAIT/TIME_WAIT sockets
  Diagnosis: ss -tan state close-wait | wc -l
  Resolution: expand net.ipv4.ip_local_port_range

Bottleneck: Thread contention
  Symptom: one CPU core at 100%; throughput plateau
  Diagnosis: top (press 1) shows a single core maxed
  Resolution: increase queue-processing-threads

Bottleneck: ISP connection limits
  Symptom: all connections in use but low delivery rate
  Diagnosis: pmta show vmta per-IP statistics
  Resolution: add more warmed IPs to the pool
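The file-descriptor row above can be turned into a quick headroom check. A sketch using /proc only; it runs against the current shell's own PID so it works anywhere, so substitute $(pgrep pmta) on a live server:

```shell
# Open-FD count vs. the process's soft limit, read from /proc (Linux only).
# pid=$$ keeps the sketch self-contained; use pid=$(pgrep pmta) in production.
pid=$$
open=$(ls /proc/$pid/fd | wc -l)
limit=$(awk '/^Max open files/ {print $4}' /proc/$pid/limits)
awk -v o="$open" -v l="$limit" 'BEGIN {
    printf "open=%d soft-limit=%d\n", o, l
    status = (o > 0.8 * l) ? "WARN: near FD limit" : "OK"
    print status
}'
```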
Section 3

Spool Disk Performance

The spool directory sustains intensive random read/write I/O at high volume. HDD storage becomes a bottleneck above roughly 100,000 messages per hour per server. An NVMe SSD spool allows 5-10x higher throughput from the same PowerMTA configuration with no software changes.

# Benchmark current spool disk
fio --name=spool-test --filename=/var/spool/pmta/fio-test \
    --size=1G --rw=randrw --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based
# Target: >10,000 IOPS for medium volume; >50,000 IOPS for high volume
# HDD typically provides 100-200 IOPS; NVMe SSD provides on the order of
# 100,000-500,000+ IOPS depending on the device and workload mix

# Move spool to SSD:
# 1. systemctl stop pmta
# 2. mkfs.xfs /dev/nvme0n1 && mount /dev/nvme0n1 /mnt/ssd
# 3. mv /var/spool/pmta/* /mnt/ssd/
# 4. Update /etc/pmta/config: spool-dir /mnt/ssd
# 5. systemctl start pmta
Section 4

Network Port Exhaustion Tuning

# Expand the outbound port range (default 32768-60999 = 28,232 ports).
# At high volume this exhausts quickly: each concurrent outbound SMTP
# connection consumes one ephemeral port.
# Note: sysctl.conf does not support inline comments after values.
cat >> /etc/sysctl.conf << EOF
# 10240-65535 = 55,296 available ports
net.ipv4.ip_local_port_range = 10240 65535
# Release closed connections' ports faster
net.ipv4.tcp_fin_timeout = 10
# Reuse TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 65535
EOF
sysctl -p

# Verify port availability (ss -s reports "orphaned" and "timewait" counts)
ss -s | grep -iE "orphan|timewait"
# TIME-WAIT count should stay low (<5,000) after tuning
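After applying the sysctl changes, ephemeral-port headroom can be estimated directly from /proc and ss. A Linux-only sketch:

```shell
# Count established TCP sockets against the configured local port range
read lo hi < /proc/sys/net/ipv4/ip_local_port_range
total=$((hi - lo + 1))
inuse=$(ss -tan state established 2>/dev/null | tail -n +2 | wc -l)
echo "established=$inuse of $total ephemeral ports"
```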
Section 5

Queue Processing Thread Configuration

# In /etc/pmta/config — increase for multi-core servers
queue-processing-threads  8    # Match to CPU core count
smtp-service-threads      32   # Should be >= total max connections
num-hosts                 512  # Maximum concurrent destination hosts

pmta reload

# Monitor thread impact:
pmta show status
# If throughput increases after change: threads were the bottleneck
# If no change: bottleneck is elsewhere (disk, network, or ISP limits)
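To tell whether a thread change actually moved the needle, the delivery rate can be sampled from the accounting log around the reload. A sketch; it assumes delivered records begin with "d," at the start of each line:

```shell
# Count new delivered records appended to the log over a window,
# print the resulting msgs/sec
count_window() {
    log=$1; secs=$2
    before=$(grep -c '^d,' "$log" 2>/dev/null); before=${before:-0}
    sleep "$secs"
    after=$(grep -c '^d,' "$log" 2>/dev/null); after=${after:-0}
    echo $(( (after - before) / secs ))
}
# usage: count_window /var/log/pmta/accounting.csv 60
```

Run it once before `pmta reload` and once after; if the rate does not change, the bottleneck is elsewhere.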
Section 6

ISP-Side Throughput Limits

ISP limits determine maximum effective throughput regardless of your infrastructure quality. Gmail, Outlook, and Yahoo all enforce per-IP and per-connection limits. The only way to increase throughput beyond single-IP limits is to add more warmed IPs to your sending pool. Each additional warmed IP adds proportional throughput capacity.

# Throughput calculation:
# Gmail: 8 connections × 150 msgs/session × 10 IPs = 12,000 msgs/cycle
# At 1 cycle per minute: 720,000 msgs/hr to Gmail with 10 warmed IPs
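The same arithmetic, parameterized so other ISP limits can be plugged in. The 8-connection and 150-message figures are this document's working example, not published Gmail limits:

```shell
awk 'BEGIN {
    conns = 8            # concurrent connections per IP
    per_session = 150    # messages per SMTP session
    ips = 10             # warmed IPs in the pool
    cycles = 60          # connection cycles per hour (1 per minute)
    per_cycle = conns * per_session * ips
    printf "per cycle: %d  per hour: %d\n", per_cycle, per_cycle * cycles
}'
# per cycle: 12000  per hour: 720000
```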

# Check per-IP throughput distribution
# ($15 assumes dlvSourceIp is the 15th accounting column; verify against
# your accounting file header, since the recorded fields are configurable)
awk -F, 'NR>1 && $1=="d" {count[$15]++} END {
    for(ip in count) print ip, count[ip]
}' /var/log/pmta/accounting.csv | sort -k2 -rn
Section 7

Bottleneck Layer Identification

Identify the active bottleneck before changing configuration:

(1) Spool disk at 100% utilization during high throughput: storage bottleneck.
(2) One CPU core at 100% while others are idle: thread configuration bottleneck.
(3) Many CLOSE_WAIT or TIME_WAIT sockets: network port bottleneck.
(4) All ISP connections at max but low delivery rate: ISP-side limit.

Each requires a different fix; applying the wrong fix wastes time and may worsen the original bottleneck.
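Checks (1) through (3) can be approximated in one pass over /proc. A Linux-only sketch; the numbers are coarse (iowait is averaged since boot), so interpret them alongside iostat and top:

```shell
# Since-boot iowait share from /proc/stat (field 6 = iowait jiffies)
iowait=$(awk '/^cpu / {printf "%.1f", 100*$6/($2+$3+$4+$5+$6+$7+$8)}' /proc/stat)
# Current TIME_WAIT socket count from /proc/net/tcp (state hex 06)
tw=$(awk 'NR>1 && $4=="06" {n++} END {print n+0}' /proc/net/tcp)
echo "iowait since boot: ${iowait}%  TIME_WAIT sockets: ${tw}"
```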

Accounting Log Analysis for This Configuration

Monitor this configuration area through the PowerMTA accounting log's dsnDiag field. Filter accounting records for the specific ISP domains affected by this configuration and group dsnDiag responses by their first 60 characters to identify the dominant error patterns. A deferral rate above 5% at any single ISP warrants investigation; above 15%, reduce volume immediately and review the configuration.
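The dsnDiag grouping described above can be sketched as a single awk pass. The record type "t" for transient failures and the column position $12 for dsnDiag are assumptions; verify both against your accounting file header, since the recorded fields are configurable:

```shell
# Group deferral diagnostics by their first 60 characters to surface the
# dominant error texts (assumes type "t" = transient, $12 = dsnDiag)
awk -F, 'NR>1 && $1=="t" {key=substr($12,1,60); n[key]++} END {
    for(k in n) print n[k], k
}' /var/log/pmta/accounting.csv | sort -rn | head -10
```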

The dlvSourceIp field in the accounting log enables per-IP analysis within this configuration context. Comparing per-IP deferral rates identifies whether a configuration issue affects all IPs in a pool uniformly (configuration problem) or just specific IPs (reputation or IP-specific problem). This distinction determines the correct remediation path.
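The per-IP comparison can be sketched the same way. Delivered type "d", deferred type "t", and dlvSourceIp at column $15 are assumptions to verify against your accounting layout:

```shell
# Per-IP deferral rate: uniform rates across the pool suggest a
# configuration problem, outliers suggest an IP-specific reputation problem
awk -F, 'NR>1 {
    if ($1 == "d") dlv[$15]++
    else if ($1 == "t") dfr[$15]++
} END {
    for (ip in dlv) {
        total = dlv[ip] + dfr[ip]
        printf "%s %.1f%% deferred (%d/%d)\n", ip, 100*dfr[ip]/total, dfr[ip], total
    }
}' /var/log/pmta/accounting.csv | sort -k2 -rn
```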

Calibrating to Your Current Environment

The parameter values documented in this reference are appropriate for established, warmed IPs with HIGH reputation at the target ISP. New or warming IPs, and IPs with MEDIUM or LOW reputation, require more conservative values. Move up incrementally as reputation signals confirm the infrastructure can sustain additional throughput. Review ISP-specific configuration monthly — Postmaster Tools reputation tier changes and SNDS status changes are the primary triggers.

Operating PowerMTA at production volume?

We manage PowerMTA environments for high-volume senders — configuration, IP warming schedule, daily reputation monitoring, and operational response. Fully managed. No self-service.
