Add another Zammad web server for load balancing


  • Used Zammad version: 3.5
  • Used Zammad installation source: package
  • Operating system: CentOS 7
  • Browser + version: Firefox

Expected behavior:

  • For high availability I want to set up a second Zammad web server and perform load balancing.

Actual behavior:

  • I have a growing number of customers (more than 500).
  • I have 2 VMs: one for the database and the other for the web server.
  • I want to add another web server for load balancing.

Is that doable?

Load balancing is not supported by Zammad.
It’s not a stateless app, which brings issues you would have to overcome.

Failover would be a more feasible solution.

No matter which flavor you choose, both are out of community scope and technically very, very complicated. I wouldn’t suggest doing it as of now.

It would be nice if Zammad supported a scale-out setup. I’m not sure how it works with the package installs, but in the Docker installation the Zammad application is split into several services. What you could do, for example, is run each service (or a subset of them) on a separate instance: the rails server on VM1, the nginx server on VM2, the scheduler on VM3 and the websocket service on VM4. This way you alleviate the load on each server and may be able to handle more requests. You could also increase the memory allotted to memcached, so you have more space available for caching.
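As a rough illustration of that split, here is a Swarm-style Compose sketch. The service and image names are modeled on the community zammad-docker-compose repository and the node hostnames (vm1–vm4) are placeholders, so treat this purely as an unsupported example, not a documented setup:

```yaml
# Illustrative only — service/image names follow the community
# zammad-docker-compose repo; vm1..vm4 are placeholder hostnames.
version: "3.8"
services:
  zammad-railsserver:
    image: zammad/zammad-docker-compose:latest
    command: ["zammad-railsserver"]
    deploy:
      placement:
        constraints: ["node.hostname == vm1"]
  zammad-nginx:
    image: zammad/zammad-docker-compose:latest
    command: ["zammad-nginx"]
    deploy:
      placement:
        constraints: ["node.hostname == vm2"]
  zammad-scheduler:
    image: zammad/zammad-docker-compose:latest
    command: ["zammad-scheduler"]
    deploy:
      placement:
        constraints: ["node.hostname == vm3"]
  zammad-websocket:
    image: zammad/zammad-docker-compose:latest
    command: ["zammad-websocket"]
    deploy:
      placement:
        constraints: ["node.hostname == vm4"]
  zammad-memcached:
    image: memcached:alpine
    command: ["memcached", "-m", "512"]  # give the cache more memory
```

You would still need shared storage and a shared database reachable from all nodes for this to work at all.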

Hi @nassimos

I have not tried this approach with Zammad yet, but it should be fairly easy to implement, as I have done it with other web applications.

You will need a GlusterFS volume mounted on both VMs, plus keepalived and Pacemaker to handle the HA capabilities such as the floating IP you would use to access the load-balanced Zammad instance.
Then you need to properly configure your nginx servers on both VMs to listen to the virtual IP.
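As a sketch of the keepalived part: the interface name, router ID, password and VIP below are placeholders, not values from this thread, and the second VM would run the mirror-image config as BACKUP with a lower priority.

```
# /etc/keepalived/keepalived.conf — illustrative only.
vrrp_instance ZAMMAD_VIP {
    state MASTER            # BACKUP on the second VM
    interface eth0
    virtual_router_id 51
    priority 100            # e.g. 90 on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.0.2.10/24       # the floating IP clients connect to
    }
}
```

Note that nginx on the node that does not currently hold the VIP can only `listen 192.0.2.10:443;` if `net.ipv4.ip_nonlocal_bind` is enabled; the simpler alternative is to listen on a wildcard address.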

In theory, it should work… unless @MrGeneration or anyone else on the Zammad Core Team thinks otherwise.
But as you have been told, this has nothing to do with Zammad itself but with Systems Administration.

I would love to hear about your progress.


This is indeed more of a Systems Administration question. I’m running Zammad with GlusterFS as the storage backend, but I suppose any shared storage would do (e.g. NFS).
If you opted for Docker Swarm or Kubernetes, you would get the keepalived/Pacemaker and VIP functionality out of the box.
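For the shared-storage part, a mount sketch could look like this; the hostnames and volume name are placeholders, and it assumes Zammad is configured to keep attachments on the filesystem (by default under /opt/zammad/storage on package installs):

```
# /etc/fstab (fragment) — illustrative only; gluster1/gluster2 and
# zammad-vol are placeholder names.
gluster1:/zammad-vol  /opt/zammad/storage  glusterfs  defaults,_netdev  0 0
```

Both web VMs would mount the same volume so attachments written by one node are visible to the other.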
From a Systems Administration point of view it would be good to know which parts are needed to keep a correct state; it seems (though I could be wrong) that session data is stored in the database. So which state is kept in the application itself? You could, for example, configure a reverse proxy (HAProxy, Traefik, Envoy, etc.) with sticky sessions, so clients are always routed to the same server, e.g. based on a cookie (this has downsides for failover). I’m not sure whether that would be enough to serve each client a correct state.
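A minimal HAProxy sketch of that cookie-based stickiness, with placeholder addresses and certificate path — whether this is sufficient for Zammad’s state handling is exactly the open question above:

```
# /etc/haproxy/haproxy.cfg (fragment) — illustrative only.
frontend zammad_front
    bind 192.0.2.10:443 ssl crt /etc/haproxy/zammad.pem
    default_backend zammad_web

backend zammad_web
    balance roundrobin
    cookie SRVID insert indirect nocache   # pin each client to one server
    server web1 10.0.0.11:8080 check cookie web1
    server web2 10.0.0.12:8080 check cookie web2
```

The `check` keyword gives you basic health checking, so on failover clients lose their stickiness but are at least re-routed to a live node.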

Sorry, but I can’t and won’t provide further input on possible scaling of this kind of setup.
This is high-end consulting territory and not trivial.

If I give tips and notes here and somebody gets it wrong, I don’t want to be the one who’s blamed.

I already said that I advise against that plan because Zammad is not stateless.
I think it will be much worse in a Docker context.

I agree it shouldn’t be answered here, but it would be nice if it were in the documentation somewhere, or maybe a blog post or wiki. Just put a big disclaimer on it that it’s not officially supported. Maybe the community will be willing to support it in the end. For me it doesn’t really matter; I have an install with 4 agents and about 15 messages a day. It’s pure curiosity.

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.