Zammad Performance Tuning Guide
Practical, Measurable Approach (Single‑VM Focus)
Hello everyone,
This guide summarizes our field‑tested approach to improving Zammad performance. It is not a universal recipe. The objective is simple: measure first, identify the real bottleneck, and apply targeted changes, one at a time. Following it turned our sluggish, patience‑testing Zammad instance into a snappy, responsive one.
1. Start with measurement
Before tuning, understand where time is spent:
- PostgreSQL (query time and frequency)
- Zammad (production.log request timings)
- Elasticsearch (cluster health and GC behavior)
- Redis (latency and blocked clients)
- Nginx (upstream errors)
- Host metrics (CPU, RAM, disk I/O)
If the system is mostly idle but the UI feels slow, the root cause is usually request fan‑out or inefficient request patterns—not lack of hardware.
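As a concrete first measurement, request timings can be pulled straight out of Zammad's Rails log. The snippet below is a minimal sketch: it parses Rails‑style `Completed ... in Xms` lines (shown here as an embedded sample; in practice, point it at your `production.log`, typically under `/var/log/zammad/`) and prints the average request time.

```shell
#!/usr/bin/env sh
# Embedded sample of Rails-style completion lines; replace with
#   grep 'Completed' /var/log/zammad/production.log
# on a real system (path may differ per install).
sample='Completed 200 OK in 120ms (Views: 40.1ms | ActiveRecord: 55.2ms)
Completed 200 OK in 380ms (Views: 90.0ms | ActiveRecord: 210.4ms)
Completed 500 Internal Server Error in 40ms (ActiveRecord: 12.1ms)'

# Extract the total duration of each request and average it.
avg_ms=$(printf '%s\n' "$sample" \
  | grep -o 'in [0-9]*ms' \
  | awk '{ gsub(/ms/, "", $2); sum += $2; n++ } END { printf "%d", sum / n }')
echo "average request time: ${avg_ms}ms"
```

The same pipeline, grouped per endpoint, quickly shows whether a few slow requests or many fast ones dominate.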
2. Single‑VM sizing (understanding resource contention)
All‑in‑one deployments are common, but every component shares the same resources:
- Elasticsearch: ~4–6 GB heap on a 16 GB node
- PostgreSQL: ~3–4 GB effective working set
- Zammad workers: primarily CPU consumers as they scale
- Redis / Memcached: typically hundreds of MB
- OS page cache: must remain available for performance stability
CPU is shared across web workers, background jobs, PostgreSQL, and Elasticsearch. Increasing one layer directly impacts the others.
3. PostgreSQL (requirements and baseline tuning)
Reference:
- https://docs.zammad.org/en/latest/appendix/performance-tuning.html
- https://docs.zammad.org/en/latest/install/requirements.html
Ensure connection capacity is correctly sized:
rake zammad:db:max_connections
Recommended starting ranges:
- shared_buffers: 20–25% of RAM
- effective_cache_size: 50–75% of RAM
- work_mem: 8–32 MB
- maintenance_work_mem: 256 MB–1 GB
- checkpoint_timeout: 10–15 min
- max_wal_size: 2–8 GB
- checkpoint_completion_target: ~0.9
- random_page_cost: 1.1–1.5 (SSD)
Example baseline:
shared_buffers = 4GB
effective_cache_size = 16GB
maintenance_work_mem = 256MB
checkpoint_timeout = 15min
checkpoint_completion_target = 0.9
max_wal_size = 4GB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 20MB
track_io_timing = on
Use pg_stat_statements to identify high‑frequency queries and real hotspots.
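A minimal query sketch for this, assuming the `pg_stat_statements` extension is enabled and PostgreSQL 13+ column names (`total_exec_time`/`mean_exec_time`; older versions use `total_time`/`mean_time`):

```sql
-- Enable once per database: CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 10 statements by total execution time.
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;
```

Sorting by `calls` instead of `total_exec_time` is equally useful: a cheap query executed thousands of times per minute is often the real hotspot.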
4. UI fan‑out (a frequent root cause)
A single UI action can trigger dozens or hundreds of queries (tickets, articles, permissions, tags).
Focus on:
- queries per request
- repeated endpoints
- polling frequency
This pattern often explains why the database appears fast while the UI feels slow.
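One low‑tech way to quantify fan‑out is to count repeated endpoints in the access log. A sketch, using an embedded sample in place of the real Nginx log (e.g. `/var/log/nginx/access.log`):

```shell
#!/usr/bin/env sh
# Embedded sample of request lines (method, path, protocol); on a real
# system, extract the request field from the Nginx access log instead.
log='GET /api/v1/tickets HTTP/1.1
GET /api/v1/tickets HTTP/1.1
GET /api/v1/users/me HTTP/1.1
GET /api/v1/tickets HTTP/1.1'

# Rank endpoints by request count to spot fan-out and aggressive polling.
top=$(printf '%s\n' "$log" | awk '{ print $2 }' | sort | uniq -c | sort -rn | head -1)
echo "$top"
```

If one endpoint dominates the ranking far beyond what users could plausibly click, you are looking at polling or fan‑out, not load.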
5. Nginx (reduce overhead, cache safely)
Baseline configuration:
upstream zammad_backend {
server 127.0.0.1:3000;
keepalive 32;
}
server {
listen 443 ssl http2;
gzip on;
gzip_types text/plain text/css application/json application/javascript;
location /assets/ {
expires 7d;
add_header Cache-Control "public, immutable";
access_log off;
}
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_read_timeout 300;
proxy_pass http://zammad_backend;
}
}
Key considerations:
- Keepalive + HTTP/1.1 improves connection reuse
- Gzip reduces payload size and improves perceived performance
- Cache /assets/ conservatively (7–30 days) for fingerprinted files
- Avoid caching dynamic application responses at the proxy layer
6. Memcached (official, optional)
Reference:
- https://docs.zammad.org/en/latest/install/docker-compose.html#memcached
- https://docs.zammad.org/en/latest/appendix/performance-tuning.html
Memcached can reduce repeated computation in read‑heavy or multi‑node environments.
Installation:
apt install memcached
systemctl enable --now memcached
Basic setup (~256–512 MB):
zammad config:set MEMCACHE_SERVERS=127.0.0.1:11211
Its effectiveness depends on the cache hit rate; in highly dynamic, write‑heavy workloads it adds little.
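To judge whether Memcached is actually earning its keep, compute the hit rate from its `stats` output (`get_hits` and `get_misses` are standard Memcached stat names). A self‑contained sketch with sample values; on a real host, feed it `echo stats | nc 127.0.0.1 11211`:

```shell
#!/usr/bin/env sh
# Embedded sample of two lines from memcached's `stats` output.
stats='STAT get_hits 9000
STAT get_misses 1000'

hits=$(printf '%s\n' "$stats" | awk '$2 == "get_hits"   { print $3 }')
misses=$(printf '%s\n' "$stats" | awk '$2 == "get_misses" { print $3 }')

# Hit rate as a whole percentage.
ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.0f", 100 * h / (h + m) }')
echo "cache hit rate: ${ratio}%"
```

A hit rate well below ~80% suggests the cache is churning and Memcached is not helping this workload.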
7. Elasticsearch (memory and CPU balance)
Check cluster status:
curl localhost:9200/_cluster/health?pretty
Guidelines:
- Set fixed heap (-Xms = -Xmx)
- Use ~4–6 GB heap on a 16 GB VM
- Avoid starving PostgreSQL or the OS
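A minimal heap pin, assuming a package install of Elasticsearch 7+ where drop‑in files under `jvm.options.d` are read (adjust the path and value to your version and node size):

```
# /etc/elasticsearch/jvm.options.d/heap.options
# Fixed heap: -Xms must equal -Xmx to avoid costly heap resizing.
-Xms4g
-Xmx4g
```

Restart Elasticsearch after changing heap settings and re‑check cluster health before touching anything else.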
Monitor:
- GC pauses
- heap usage above ~75–80%
- indexing backlog
Elasticsearch competes heavily for both CPU and RAM. Oversizing it can degrade overall system performance.
8. Redis (basic health check)
redis-cli info
Ensure:
- no blocked clients
- no connection errors
- no memory pressure
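These three checks can be scripted against `redis-cli info` output. A sketch using an embedded sample (the field names are standard Redis INFO keys; real `redis-cli` output uses CRLF line endings, so pipe it through `tr -d '\r'` first):

```shell
#!/usr/bin/env sh
# Embedded sample of Redis INFO lines; on a real host use:
#   redis-cli info | tr -d '\r'
info='blocked_clients:0
rejected_connections:0
used_memory_human:180.50M'

blocked=$(printf '%s\n' "$info" | awk -F: '$1 == "blocked_clients"       { print $2 }')
rejected=$(printf '%s\n' "$info" | awk -F: '$1 == "rejected_connections" { print $2 }')
echo "blocked=${blocked} rejected=${rejected}"
```

Any non‑zero `blocked_clients` or `rejected_connections` value is worth investigating before tuning elsewhere.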
9. Zammad performance parameters
Tune these together, not independently:
- ZAMMAD_WEB_CONCURRENCY
- ZAMMAD_PROCESS_DELAYED_JOBS_WORKERS
- ZAMMAD_PROCESS_DELAYED_JOBS_WORKER_THREADS
- ZAMMAD_PROCESS_SESSION_JOBS_WORKERS
Guidelines:
- increase only if CPU headroom exists
- ensure PostgreSQL connections are sufficient
- adjust incrementally and measure impact
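As a purely hypothetical starting point for an 8‑core single VM (not an official recommendation; every web worker and job thread also consumes PostgreSQL connections, so re‑run the connection check after changing these):

```
# Set each value via `zammad config:set KEY=value`, then restart Zammad.
ZAMMAD_WEB_CONCURRENCY=4
ZAMMAD_PROCESS_DELAYED_JOBS_WORKERS=2
ZAMMAD_PROCESS_DELAYED_JOBS_WORKER_THREADS=4
ZAMMAD_PROCESS_SESSION_JOBS_WORKERS=1
```

Raise one value at a time, watch CPU and database connection usage, and roll back anything that does not produce a measurable improvement.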
10. Recommended tuning order
- Measure
- Validate PostgreSQL connections
- Identify high‑impact queries
- Analyze request fan‑out
- Tune PostgreSQL
- Size Elasticsearch appropriately
- Validate Redis and Memcached
- Adjust Zammad workers carefully
- Optimize Nginx
- Re‑measure
Always change one variable at a time.
11. What to avoid
- Blindly adding CPU or RAM
- Copying configurations without context
- Optimizing a single layer in isolation
- Introducing unsupported components (e.g., PgBouncer) before validating the baseline
12. Closing
Zammad performance is inherently multi‑layered:
- many small operations accumulate into latency
- resource contention is common in single‑VM setups
- the largest gains come from identifying the true bottleneck
Measure first. Then optimize the layer that is actually doing the work.
Happy to compare approaches or discuss specific cases.