Hardware Recommendation for 30 - 40 users

Infos:

Important:
If you are a Zammad Support or hosted customer and experience a technical issue, please contact support@zammad.com, referencing your Zammad hostname or company contract.

  • Used Zammad version: 2.6.x
  • Used Zammad installation source: deb (package)
  • Operating system: Ubuntu 16.04

Your information in this thread (Memory usage in Zammad) and the hardware requirements documented at https://docs.zammad.org/en/latest/prerequisites-hardware.html differ.

We need to host Zammad for currently 30-40 concurrent users. Do you have a recommendation for that?

I’ve seen customer systems with 6 cores and 16 GB RAM doing just fine with 30-50 users. At that scale you should stick to SSDs, or even better NVMe, as spinning hard drives will be too slow and drag your system down.

The scheduler has been optimized, but it may be that 40 agents still need too much CPU time for Zammad to keep up (the sticky points here are background jobs and CTI).

I’ve got a two-year-old second-hand server with 32 Intel® Xeon® E5-2667 v2 @ 3.30 GHz cores and 128 GB of memory (it was only $2k AUD), and it doesn’t break a sweat with 10 concurrent users; it could easily run on 2 cores. Memory is never wasted. I’m not sure how little memory I could get away with, but I’ve configured Postgres and Elasticsearch to make use of as much as possible.

In my limited experience I’ve found that clock speed matters at the moment because the job queue runner is a single thread. If you have a choice between cores and clock speed, go for fewer cores and a higher clock speed. Things like LDAP integration will add hundreds of jobs to the queue every hour, and some jobs take many seconds to process. The more agents are logged in, the more jobs get added to the queue as they work. If the job queue gets backed up, their interface doesn’t block (they can still work), but there might be a delay before they are notified of a new ticket.
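
If you want to see whether that job queue is backed up, a quick look at the delayed_jobs table works; this is a minimal sketch assuming a package install where the zammad wrapper is available (on a source install you would run rails runner from /opt/zammad instead):

  # count pending background jobs
  zammad run rails r 'p Delayed::Job.count'
  # age of the oldest waiting job gives a feel for how far behind the queue is
  zammad run rails r 'p Delayed::Job.order(:created_at).first&.created_at'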

The more tickets you have, the more memory you can give to Elasticsearch and Postgres; the recommendation for Elasticsearch is not to go above 30 GB for a single node. I’ve got 20k tickets and Elasticsearch’s store size is 400 MB, so I’ve got a bit of headroom there.
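
To check your own store size, the cat indices API reports it per index; this assumes Elasticsearch is listening on localhost:9200:

  curl -s 'localhost:9200/_cat/indices?v&h=index,docs.count,store.size'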

Postgres needs a bit of tuning; there will be a lot of updating of the session table. With 10 users I see an occasional backlog of threads waiting to update it, sometimes 500 ms. My Postgres config is here: [SOLVED] High number of delayed_jobs
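
The linked thread has the actual config; as a rough illustration of the kind of memory settings usually involved (the values below are placeholders, size them to your own RAM and workload), they can be applied with ALTER SYSTEM:

  sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '4GB';"
  sudo -u postgres psql -c "ALTER SYSTEM SET effective_cache_size = '12GB';"
  sudo -u postgres psql -c "ALTER SYSTEM SET work_mem = '32MB';"
  # shared_buffers only takes effect after a restart
  sudo systemctl restart postgresql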


OK, thanks a lot. I’ll pass your information on to our system administrator.

We don’t use AD, but we have 4 mailboxes which receive about 50 to 60 mails per hour. That, in combination with a slow login, costs us the most time at the moment.

By the way, the deadlock_timeout setting I originally set didn’t fix anything, and at 15 seconds it means that if there’s a deadlock, two or more threads will have to wait 15 seconds for it to time out. The default of 1 second is fine, but if you’re seeing queries take longer than 1 second you could bump it up to just above the maximum query time.
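
For reference, checking and adjusting it looks roughly like this; just a sketch, and the 1 second default is usually the right choice:

  # show the current value (default is 1s)
  sudo -u postgres psql -c "SHOW deadlock_timeout;"
  # only raise it if legitimate queries regularly run longer than the default
  sudo -u postgres psql -c "ALTER SYSTEM SET deadlock_timeout = '2s';"
  sudo -u postgres psql -c "SELECT pg_reload_conf();"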

OK, thank you. We’ll give that a try.

And I thought of two other things.

For SSDs the general recommendation is to set elevator=noop. If you google “elevator noop” you should get hits for your distro on how to do it. I edit /etc/default/grub with

GRUB_CMDLINE_LINUX="elevator=noop"

and run update-grub
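
After a reboot you can confirm which I/O scheduler is active per block device; the one in brackets is in use (sda is just an example device name):

  cat /sys/block/sda/queue/scheduler
  # expected output looks something like: [noop] deadline cfq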

and I have a shell script with

#!/bin/bash
# switch the CPU frequency scaling governor on all cores

if [ "$1" != "performance" ] && [ "$1" != "ondemand" ]; then
        echo "usage: $0 <performance|ondemand>"
        exit 1
fi

for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -f "$CPUFREQ" ] || continue
        echo -n "$1" > "$CPUFREQ"
done

At 8am I set it to performance and at 6pm I set it to ondemand.
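
Automating that with two root cron entries could look like this; /usr/local/bin/set-governor.sh is just an example path for the script above:

  # m h  dom mon dow  command
  0 8  * * *  /usr/local/bin/set-governor.sh performance
  0 18 * * *  /usr/local/bin/set-governor.sh ondemand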

Thanks a lot. It worked. We also gave 8 GB to Elasticsearch.
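
For reference, a fixed Elasticsearch heap like that is usually set via jvm.options with matching min and max values; the path below assumes a Debian/Ubuntu package install:

  # set min and max heap to 8g, then restart the service
  sudo sed -i 's/^-Xms.*/-Xms8g/; s/^-Xmx.*/-Xmx8g/' /etc/elasticsearch/jvm.options
  sudo systemctl restart elasticsearch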


We ran into another problem:
After changing to better hardware, puma spikes to 100% CPU every 5 minutes. With 24 cores this seems extremely high. Unfortunately this also means that, as long as puma is busy, no interaction in the web interface is possible and users are partly kicked out (error 405 or 500). This is of course fatal for the introduction of a new system.

Can anyone help us?

100% is 1 CPU fully utilised; 1000% is 10 CPUs fully utilised. What do the logs say?

Sorry for the late reply. The logs are looking good. No error messages besides the already known one (ERROR -- : Unable to get asset for 'customer': #<NameError: uninitialized constant Customer>). But what I see is that the CPU consumption gets higher with each concurrent user, and when we reach 6 concurrent users we are at 100% consumption.

Something isn’t right; it’s either a config issue or a bug.
puma is at 100%, right? That’s 1 core.
I can’t suggest anything without seeing the running processes and logs (there are many different logs to look at: Postgres, Elasticsearch, Zammad, syslog, probably more), and sometimes turning the Zammad debug log on is required.
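
A starting point for gathering that could look like the following; the log paths assume a package install on Ubuntu and may differ on your system (the Zammad log may also live under /opt/zammad/log/):

  # what is actually eating CPU
  ps aux --sort=-%cpu | head -n 15
  # recent Zammad application log
  tail -n 200 /var/log/zammad/production.log
  # PostgreSQL and Elasticsearch logs
  tail -n 100 /var/log/postgresql/postgresql-9.5-main.log
  journalctl -u elasticsearch --since "1 hour ago"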

Nope, puma has a consumption of near 100% of all cores.
I’ll see if I can give you all the needed information. What do you need specifically? Postgres, Elasticsearch, Zammad, syslog?

It sounds like you need to turn Zammad debugging on. I don’t really know how, but there are examples in this forum from Zammad core developers. You put some Ruby file in /opt/zammad and run it.

Try this one

@thorsteneckel The corresponding file seems to be only for LDAP things. How should I rewrite it to cover our situation?

I’m not sure, but I don’t think it’s just for LDAP. The top block looks like a generic way to turn debugging on; the last line kicks off an LDAP job, and you don’t need the last line.

OK, I’ll give it a try.
Edit: OK, it didn’t work: no output and no debug_issue.log file.

Sorry, I don’t know. Maybe it doesn’t turn debugging on for already running processes, but just for the process started by the bottom line (the LDAP line which I said wasn’t needed)?
No doubt Thorsten will help you soon.

Thorsten is on holiday for 2 weeks; I cannot assist with this issue at the moment, sorry about that.

I found another way to turn debugging on.
Edit /opt/zammad/config/environments/production.rb

Look for

  config.log_level = :info

Change it to

  config.log_level = :debug

Then restart Zammad.
There’s probably a better way, but that way works.
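
On a package install, restarting to apply the change is typically just the following (source installations restart their services differently); remember to set the level back to :info afterwards, since debug logging is very verbose:

  sudo systemctl restart zammad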