Performance of 3.4 for team of 10

  • Used Zammad version: 3.4
  • Used Zammad installation source: source
  • Operating system: Ubuntu 18.04 server
  • Browser + version: all clients (Mac/Windows) with Chrome

Expected behavior:

  • Would like the Zammad system to be more usable. We upped the specs on a dedicated VM on Digital Ocean to 6 CPUs and 16 GB RAM; it didn’t help much. Wondering if we have some kind of misconfiguration.

Actual behavior:

  • Screens showing Overviews, tickets, etc. are all very slow to respond/refresh

Steps to reproduce the behavior:

And for the 10 people, there are 85 sessions in the Sessions list. Is that normal? Do we need to have a method to remove those? Some go back 30 days. Or is that a non-issue?

Btw, our School tech support team is new to Zammad, but we like it so far.

When we run the Health Check, we see this:

{"healthy":false,"message":"Failed to run scheduled job 'Generate Session data'. Cause: Failed to run after 10 tries #<Errno::EMFILE: Too many open files @ dir_initialize - /opt/zammad/tmp/websocket_production/>","issues":["Failed to run scheduled job 'Generate Session data'. Cause: Failed to run after 10 tries #<Errno::EMFILE: Too many open files @ dir_initialize - /opt/zammad/tmp/websocket_production/>"],"actions":["restart_failed_jobs"],"token":"xxxxxxxxxxxx"}

Actually, we may have resolved the issue. The Health Monitor is a nice tool.
We had some IMAP accounts that were recently recreated and were not allowing access. Resolved that. Then we still had the sessions open-files error. Restarted.
Health monitor clean… performance back to normal so far.

You seem to have a variety of issues.
First of all: ensure that your PostgreSQL server (if being used) allows more connections than the default of 100. Have a look at its configuration file; you should find max_connections there. Raise it to at least 500, and don’t forget to restart the service.
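A minimal sketch of that change, assuming the default PostgreSQL 10 configuration path on an Ubuntu 18.04 package install (the path is an assumption; locate yours with `psql -c 'SHOW config_file'`):

```conf
# /etc/postgresql/10/main/postgresql.conf  (assumed path, Ubuntu 18.04 / PostgreSQL 10)
# Default is 100; 500 is the minimum suggested above.
max_connections = 500
```

Afterwards restart the service, e.g. `sudo systemctl restart postgresql`.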

Also, “too many open files” is an indicator that your system limits the number of currently opened files. This can have various reasons, but it is more of an OS issue than a Zammad issue. I have never seen this on a system with a single Zammad installation. If you run several services besides Zammad on the same machine, Zammad might just push you over this limit.

Spoiler alert: this issue will reappear within 72 hours (my guess). Rebooting does not solve it; raise the limits on your OS if needed. Otherwise the OS may stop allowing processes to run as expected and degrade service quality greatly.
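A quick way to inspect the current limit, plus one common place to raise it persistently (the user name and nofile values below are illustrative assumptions, not Zammad recommendations):

```shell
# Show the soft limit on open file descriptors for the current shell/user
ulimit -n

# To raise it persistently for the zammad user, one common approach is an
# entry in /etc/security/limits.conf (values are examples only):
#   zammad  soft  nofile  65535
#   zammad  hard  nofile  65535
# Services started by systemd instead honor LimitNOFILE= in their unit file.
```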

Further information on possible tweaking can be found here:

Personally, I think you want to tune your web concurrency. If you also need more schedulers for just 10 agents, you probably have tons of overviews, which might not be what you really want. (also have a look at
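As a sketch: on a package-based install, Zammad’s concurrency settings can be adjusted with `zammad config:set`; on a source install like yours, you would export the same variable in the environment of the Zammad services instead. The value below is an example, not a recommendation:

```shell
# Package install: raise web concurrency (example value; tune for your host)
zammad config:set WEB_CONCURRENCY=4

# Apply the change
systemctl restart zammad
```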

Thank you for your reply. The performance has been greatly improved. Actually, we’re probably going to move the db to another server for a little extra performance and to connect to Redash for reporting. Unless someone else has created some nice “power reports”, we want some more granular reporting.

We’ll be sure to document our steps and share for others.

I personally always like to suggest staying away from the database itself if possible.
If you only read from it, it’s not too bad, but writing to it is the worst idea you can have (not saying you will, don’t worry).

We had good experiences with Grafana in connection with Elasticsearch.
Technically, all that information is available in Elasticsearch as well; it’s probably a bit more performant than your database, which at worst could have your Zammad performance suffer when you create a report against it.
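As an illustrative sketch of querying Elasticsearch instead of the SQL database: the host and the index name `zammad_production_ticket` are assumptions here (list your actual indices with `curl 'http://localhost:9200/_cat/indices?v'`). Counting tickets created in the last 30 days could look like:

```shell
# Hypothetical query against Zammad's Elasticsearch data; adjust the host
# and index name to your setup before running.
curl -s 'http://localhost:9200/zammad_production_ticket/_count' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"range":{"created_at":{"gte":"now-30d/d"}}}}'
```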

Anyway, you’ll find a good solution, I’m sure. :raised_hands:

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.