Important: If you are a Zammad Support or hosted customer and experience a technical issue, please contact support@zammad.com, stating your Zammad hostname or company contract.
Used Zammad version: 2.6
Used Zammad installation source: source
Operating system: openSUSE 42.3
Browser + version: Firefox 62.0.3, Chrome 69.0
Elasticsearch version: 5.6.12
Our Zammad system still performs poorly. Sometimes the web server returns 500 errors when you open a ticket or reload the page. After approx. 5-10 minutes everything calms down again. The problem usually occurs when 2-5 agents work on the system at the same time.
What does this process do? According to the log, the job BackgroundJobSearchIndex runs every second.
The search also barely returns results: e.g. if I search for a ticket number (#2018100904461), nothing comes back and the result field remains empty.
What hardware specifications does your Zammad system have?
Sounds like your Zammad is not able to keep up with its indexing jobs. I bet you have a ton of delayed jobs in your instance.
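If you want to check that, you can count the queued delayed jobs from the command line. This is a sketch for a source installation; the `/opt/zammad` path and `RAILS_ENV` are assumptions, adjust them to your setup:

```shell
# Count queued background jobs (path and environment are assumptions
# for a typical source install; adjust to your setup).
cd /opt/zammad
RAILS_ENV=production bundle exec rails r "p Delayed::Job.count"
```

A count in the thousands that keeps growing suggests the background workers cannot keep up.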
Elasticsearch is reachable. We have recently been getting 500 error messages in the browser saying the default gateway is not reachable. Unfortunately the log files are too large (190-250 MB) to upload.
We also had some performance issues in the past, so we tried to increase WEB_CONCURRENCY as suggested in some other performance-related threads. However, this led to many internal 500 errors on our system.
So we set WEB_CONCURRENCY back to 1, which removed all the 500 errors we were getting. A WEB_CONCURRENCY value higher than 1 might therefore be the reason for the errors you see.
The actual performance fix for us was to move attachment storage from database to filesystem.
You can do that under Admin → Settings → System → Storage, then execute the command shown there in the rails console.
After changing only this setting, our database search times dropped from ~10 s to under 1 s.
Maybe this helps you as well, if you have not done that already.
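For reference, the migration the storage page refers to looks roughly like this as a one-liner. This is a sketch for a source install; the path and environment are assumptions:

```shell
# After switching storage to "Filesystem" in Admin -> Settings ->
# System -> Storage, move existing attachments out of the database.
cd /opt/zammad
RAILS_ENV=production bundle exec rails r "Store::File.move('DB', 'File')"
```

Depending on the amount of attachments, this can take a while, so run it outside busy hours if you can.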
There could be a few different things going on with the 500 errors after increasing WEB_CONCURRENCY.
One candidate is the PostgreSQL max_connections limit. Check your PostgreSQL logs for errors about no spare connections being available.
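A quick way to compare the configured limit with the current usage (assuming local peer access as the postgres user):

```shell
# Configured connection limit:
sudo -u postgres psql -c "SHOW max_connections;"
# Connections currently open:
sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;"
```

Each Puma worker added via WEB_CONCURRENCY opens its own pool of database connections, so a higher value exhausts max_connections faster.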
Also turn on the slow query log and see if any queries take more than 2 seconds; you could also be hitting deadlocks or other timeouts.
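Enabling the slow query log is a one-line change in postgresql.conf (the 2-second threshold matches the suggestion above); reload PostgreSQL afterwards:

```ini
# postgresql.conf - log every statement running longer than 2 s
log_min_duration_statement = 2000   # value is in milliseconds
```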
Please check if your websocket server is running and check the config for the nginx host.
Zammad uses an AJAX fallback if no websocket server can be found, which is much slower.
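For comparison, the websocket location block in the nginx vhost shipped with Zammad looks roughly like this (port 6042 is the default and may differ on your host):

```nginx
location /ws {
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "Upgrade";
  proxy_set_header CLIENT_IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass http://localhost:6042;
}
```

If this block is missing or points at the wrong port, the browser silently falls back to the slower AJAX long polling.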
Yesterday we installed the 2.7 update. Since then, no more mails are retrieved and the activity stream is not updated. Yesterday I noticed that port 6042 was listening again, but today, after the update, the port is unavailable again even though the websocket service is running:
```text
zammad-websocket.service - LSB: websocket component of zammad
   Loaded: loaded (/etc/init.d/zammad-websocket; bad; vendor preset: disabled)
   Active: active (exited) since Thu 2018-10-25 16:44:20 CEST; 18h ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0 (limit: 512)

Oct 25 16:44:20 zammad systemd[1]: Starting LSB: websocket component of zammad…
Oct 25 16:44:20 zammad systemd[1]: Started LSB: websocket component of zammad.
```
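To check whether the websocket server is actually listening, something like this should help (port 6042 is the default; the log path is an assumption for a source install and may differ on your system):

```shell
# Is anything listening on the websocket port?
ss -tlnp | grep 6042
# If not, restart the service and inspect its log
# (log path may differ on your installation):
systemctl restart zammad-websocket
tail -n 50 /opt/zammad/log/websocket-server.log
```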