Important: If you are a Zammad Support or hosted customer and experience a technical issue, please contact support@zammad.com, referencing your Zammad hostname or your company contract.
Used Zammad version: 2.6.x
Used Zammad installation source: (source, package, …) deb
I’ve seen customer systems with 6 cores and 16 GB RAM doing just fine with 30-50 users. At that scale you should stick to SSDs, or even better NVMe, as spinning hard drives will be too slow and drag your system down.
The scheduler has been optimized, but it may be that 40 agents still need too much CPU time for Zammad to keep up (the sticking points here are background jobs and CTI).
I’ve got a two-year-old second-hand server with 32 Intel® Xeon® E5-2667 v2 @ 3.30 GHz cores and 128 GB of memory (it was only $2k AUD), and it doesn’t break a sweat with 10 concurrent users. It could easily run on 2 cores. Memory is never wasted; I’m not sure how little memory I could get away with, but I’ve configured Postgres and Elasticsearch to make use of as much as possible.
In my limited experience, clock speed matters at the moment because the job queue runner is a single thread. If you have a choice between cores and clock speed, go for fewer cores and a higher clock speed. Things like LDAP integration will add hundreds of jobs to the queue every hour, and some jobs take many seconds to process. The more agents logged in, the more jobs get added to the queue as they work. If the job queue gets backed up, their interface doesn’t block and they can still work, but there might be a delay before they are notified of a new ticket.
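If you want to see whether the queue is backing up, Zammad’s background jobs live in the delayed_jobs table, so something like the following sketch shows the current backlog (this assumes the `zammad run` wrapper that ships with the .deb packages; the exact invocation may differ on your install):

```shell
# Count queued background jobs (run as the zammad user on a package install).
zammad run rails r "puts Delayed::Job.count"

# Jobs that have failed at least once have a non-empty last_error:
zammad run rails r "puts Delayed::Job.where.not(last_error: nil).count"
```

If the first number stays high or keeps growing while agents work, the single-threaded runner is falling behind.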
The more tickets you have, the more memory you can give to Elasticsearch and Postgres; the recommendation for Elasticsearch is not to go above ~30 GB of heap for a single node. I’ve got 20k tickets and Elasticsearch’s store size is 400 MB, so I’ve got a bit of headroom there.
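For reference, the Elasticsearch heap is set in jvm.options; a sketch for a Debian-style package install (the 4 GB figure is only an example, not a recommendation from this thread), keeping min and max equal as the Elasticsearch docs advise:

```
# /etc/elasticsearch/jvm.options
# Keep -Xms and -Xmx identical; stay well below ~30 GB so compressed
# object pointers remain enabled.
-Xms4g
-Xmx4g
```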
Postgres needs a bit of tuning; there will be a lot of updating of the session table. With 10 users I see an occasional backlog of threads waiting to update it, sometimes 500 ms. My Postgres config is here: [SOLVED] High number of delayed_jobs
OK, thanks a lot. I’ll pass your information on to our systems team.
We don’t use AD, but we have 4 mailboxes which receive about 50 to 60 mails per hour. That, in combination with a slow login, costs us the most time at the moment.
By the way, the deadlock_timeout setting I originally set didn’t fix anything, and at 15 seconds it means that if there’s a deadlock, two or more threads will have to wait 15 seconds for it to time out. The default of 1 second is fine, but if you’re seeing queries take longer than 1 second you could bump it up to just above the max query time.
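In postgresql.conf that setting looks like this (the 1s shown here is simply the Postgres default):

```
# postgresql.conf
# How long to wait on a lock before checking whether it's a deadlock.
# Default is 1s; only raise it above your slowest normal query time.
deadlock_timeout = 1s
```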
For SSDs the general recommendation is to set elevator=noop. If you google “elevator noop” you should get hits on how to do it for your distro. I edit /etc/default/grub with
GRUB_CMDLINE_LINUX="elevator=noop"
and run update-grub
and I have a shell script with:

```shell
#!/bin/bash
# Switch the CPU frequency governor on all cores.
if [ "$1" != "performance" ] && [ "$1" != "ondemand" ]; then
    echo "usage: $0 <performance|ondemand>"
    exit 1
fi

for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -f "$CPUFREQ" ] || continue
    echo -n "$1" > "$CPUFREQ"
done
```
at 8am I set it to performance and at 6pm I set it to ondemand
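That schedule can be automated with cron; a sketch, assuming the governor script is saved as /usr/local/bin/governor.sh (the path, filename, and weekday-only restriction are my assumptions, not from the post above):

```
# root's crontab: performance during office hours, ondemand after
0 8  * * 1-5  /usr/local/bin/governor.sh performance
0 18 * * 1-5  /usr/local/bin/governor.sh ondemand
```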
We’ve run into another problem:
After changing to better hardware, puma spikes to 100% CPU every 5 minutes. With 24 cores this seems extremely high. Unfortunately, this also means that while puma is busy, no interaction in the web interface is possible, and users are partly kicked out (error 405 or 500). This is of course fatal for the rollout of a new system.
Something isn’t right, either a config issue or a bug.
puma is at 100%, right? That’s 1 core.
I can’t suggest anything without seeing the running processes and the logs (there are many different logs to look at: postgres, elasticsearch, zammad, syslog, probably more), and sometimes turning the Zammad debug log on is required.
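On a Debian package install the usual places to look are something like the following (these paths are the package defaults and may differ on your system, e.g. the Elasticsearch log file is named after the cluster):

```shell
tail -f /var/log/zammad/production.log               # Zammad application log
tail -f /var/log/elasticsearch/elasticsearch.log     # named after the cluster
tail -f /var/log/postgresql/postgresql-*-main.log    # Debian/Ubuntu layout
tail -f /var/log/syslog
```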
Nope, puma is consuming close to 100% across all cores.
I’ll see if I can give you all the needed information. Which do you need specifically? postgres, elasticsearch, zammad, syslog?
It sounds like you need to turn Zammad debugging on. I don’t really know how, but there are examples in this forum from the Zammad core developers. You put a Ruby file in /opt/zammad and run it.
I’m not sure, but I don’t think it’s just for LDAP. The top block looks like a generic way to turn debugging on; the last line kicks off an LDAP job, so you don’t need the last line.
Sorry, I don’t know. Maybe it doesn’t turn debugging on for already running processes, but only for the process started by the bottom line (the LDAP line which I said wasn’t needed)?
No doubt thor will help you soon.