I’ve been using Zammad for 3 years now. In that time we have grown from 3 to 5 agents, and there are about 7,000 tickets in the system.
I started using Zammad on an all-in-one system with 16GB of RAM. This offered good performance for the first 2 years. Last year, however, things started to degrade.
Elasticsearch started eating up memory heavily. The system uses about 6GB of RAM with everything except Elasticsearch running, but as soon as I turn on Elasticsearch, the 16GB of RAM is exhausted and the swap partition (4GB in my case) fills up as well. Naturally, this slows down the whole system.
For other reasons as well, I decided to move Zammad to a new server with a lot more power and RAM. Everything works well and fast. On this new server, I’m running the current version of Zammad (package install) on Ubuntu 22, and files are stored on the filesystem. No performance tuning has been applied. For Elasticsearch, in accordance with the documentation, the following two configuration entries were made:
http.max_content_length: 400mb
indices.query.bool.max_clause_count: 2000
Unchecked, Elasticsearch takes up a whopping 33GB of RAM on this new system. The CPU load of the whole system is very low, though.
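For context, part of that 33GB may be off-heap (the OS page cache holding Lucene files) rather than JVM heap. One way to see the split is the `_cat/nodes` API; the `localhost:9200` address assumes a default single-node package install with security disabled:

```shell
# Show JVM heap usage vs. overall memory per node.
# heap.current / heap.max : JVM heap in use / configured heap ceiling
# ram.current / ram.max   : OS memory in use / total on the host
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.current,heap.max,ram.current,ram.max,ram.percent'
```

If `heap.current` is small while the process RSS is large, most of the memory is cache the kernel can reclaim, which is much less alarming than a 33GB heap.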
I understand that I can instruct Elasticsearch to limit its memory usage by setting Xms and Xmx. Luckily, after the move I’m in the situation that I don’t have to: there’s plenty of RAM available.
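In case it helps anyone reading along: with the DEB/RPM packages, the heap is best pinned by dropping a file into `jvm.options.d` rather than editing `jvm.options` directly (the filename `heap.options` and the 4GB value here are just examples):

```shell
# Pin the Elasticsearch JVM heap to 4 GB. Xms and Xmx should match
# so the heap is allocated once at startup and never resized.
cat <<'EOF' | sudo tee /etc/elasticsearch/jvm.options.d/heap.options
-Xms4g
-Xmx4g
EOF
sudo systemctl restart elasticsearch
```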
Still, I wonder whether this is the kind of memory usage one would expect to see. After all, according to the hardware requirements in the documentation, 8GB of RAM should be sufficient for just 5 agents, Zammad and Elasticsearch included.
And say I wanted to reduce Elasticsearch’s memory consumption: what values for Xms and Xmx would still be safe without noticeably losing performance? If it is even possible to give such an estimate.
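Not Zammad-specific, but the general sizing guidance from the Elasticsearch docs is: give the heap at most half of physical RAM (Lucene wants the other half as page cache) and stay below roughly 32GB so the JVM keeps using compressed object pointers. A minimal sketch of that rule of thumb (the function name is mine):

```python
def suggested_heap_gb(total_ram_gb: int) -> int:
    """Rule-of-thumb Elasticsearch heap size in GB.

    At most half of physical RAM goes to the JVM heap (the rest is
    left to the OS page cache, which Lucene depends on), capped at
    31 GB to stay safely under the compressed-oops threshold.
    """
    return min(total_ram_gb // 2, 31)

print(suggested_heap_gb(16))  # 8  -> half of a 16 GB box
print(suggested_heap_gb(64))  # 31 -> capped, even though half would be 32
```

By that rule, the old 16GB box would have wanted `-Xms8g -Xmx8g` at most; whether a smaller heap still performs well depends mainly on index size, which at ~7,000 tickets should be modest.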