Queueing strategy for delayed jobs

Hi folks!

I wonder whether Zammad uses some queueing strategy for delayed jobs.

  • Used Zammad version: 3.3.0
  • Used Zammad installation source: zammad-docker-compose
  • Operating system: docker host: ubuntu
  • Browser + version: –

Expected behavior:

  • after creating an article of type ‘email’, an email is sent with minimal delay.
  • a delayed job to send mail has higher priority than a job to index objects

Actual behavior:

  • if the queue of delayed jobs contains many (300k) indexing jobs, then mails are not sent directly after creating articles of type email

Steps to reproduce the behavior:

  • docker-compose stop elasticsearch
  • do many changes on data (e.g. modify user names)
  • docker-compose start elasticsearch
  • create an article of type email

What I did…

I found this topic: After upgrade No Mails and Delayed::Backend::ActiveRecord::Job id: 66271



there were 300k jobs

so I ran:


Maybe I could destroy only the indexing jobs… :thinking:
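Selectively removing only the indexing jobs should be possible in principle, because delayed_job stores each job’s payload as a YAML handler string that names the job class. A hedged sketch, simulated on plain strings so it runs standalone (the handler format and the console one-liner in the comment are my assumptions, not verified against a real Zammad database):

```ruby
# Simulated handler strings, shaped like the YAML payloads delayed_job
# stores in the `handler` column (assumed format).
handlers = [
  "--- !ruby/object:SearchIndexJob\n",
  "--- !ruby/object:TicketArticleCommunicateEmailJob\n",
  "--- !ruby/object:SearchIndexJob\n",
]

# Keep everything that is NOT an indexing job.
remaining = handlers.reject { |h| h.include?("SearchIndexJob") }
puts remaining.size  # 1

# In a Rails console, the equivalent would be roughly (untested!):
#   Delayed::Job.where("handler LIKE ?", "%SearchIndexJob%").delete_all
```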

Then I searched for Delayed::Job and found this: https://github.com/collectiveidea/delayed_job .
Is this the same Delayed::Job?
It seems to have features like :priority and named queues.

Is something like this used by Zammad?

I took a glance at the code in the master branch.
I am a complete noob in Rails and Ruby, so maybe I’m using the wrong vocabulary…
To me it seems that TicketArticleCommunicateEmailJob and SearchIndexJob are both derived from ApplicationJob, and that ApplicationJob includes HasQueuingPriority, which sets queue_with_priority to 200 by default and to 300 for low_priority. SearchIndexJob uses low_priority.
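The setup described above can be sketched in plain Ruby. The class, module, and constant names are taken from the post, and the values 200/300 are what the poster read in the source; this is a standalone simulation, not Zammad’s actual implementation:

```ruby
# Hedged sketch of the described priority setup (names/values per the post).
module HasQueuingPriority
  DEFAULT_PRIORITY = 200  # queue_with_priority default, per the post
  LOW_PRIORITY     = 300  # low_priority, per the post
end

class ApplicationJob
  def self.priority
    HasQueuingPriority::DEFAULT_PRIORITY
  end
end

class TicketArticleCommunicateEmailJob < ApplicationJob; end

class SearchIndexJob < ApplicationJob
  def self.priority
    HasQueuingPriority::LOW_PRIORITY
  end
end

# delayed_job rule: lower number runs first, so mail beats indexing.
queue = [SearchIndexJob, TicketArticleCommunicateEmailJob]
puts queue.min_by(&:priority)  # TicketArticleCommunicateEmailJob
```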

The docs of Delayed::Job state:

:priority (number): lower numbers run first

By default all jobs are scheduled with priority = 0, which is top priority. You can change this by setting Delayed::Worker.default_priority to something else. Lower numbers have higher priority.

So it seems that the answer to my question is: “Yes, in Zammad, mailing for articles has higher priority than indexing.” But then I wonder why the mails were not sent.

Maybe I’m on the wrong track and there’s another explanation. Maybe someone has an idea?

If you have 300k delayed jobs in the queue, you have much bigger issues than email communication.
You should find out where this error comes from; Zammad should log the reason either to STDOUT or to log/production.log.

To me it seems like your host is not able to keep up.
So ensure your system satisfies our hardware requirements.

While we do use delayed jobs for both search indexing and mail processing, up to Zammad 3.3 there’s no specific prioritization. Also, please note that you should never, ever run Delayed::Job.destroy_all on a production system!

This also removes the tasks that send out mails, and that’s where the real issue begins.
While fiddling with priorities might “work” visually, it will not fix your real issue.

300k background jobs is WAY too high.

Yes, I promise! :wink:
The system is in test phase so far.

you have much bigger issues

Sure, you are completely right :slight_smile:

I think my bigger issue was that the Elasticsearch container wasn’t running during a mass data update. So after I started the Elasticsearch container, Zammad did its best to work off the 300k jobs.

And I think, if we waited long enough, it would finish. But in the meantime my practically bigger problem was that communication with customers was not possible (at least it would have been, in production mode).

And to be clear: Zammad is doing well here, the initial issue was my fault.

up to Zammad 3.3 there’s no specific prioritization

Thanks for helping!

Thanks for clarification :slight_smile:

I perfectly understand that situation during working hours for mails which need to leave like… NOW. :smiley:

If you’re okay with “waiting”, Zammad will work off those jobs and send out the affected mails later on. They won’t be lost, if that’s good enough for you.

Technically you can force those jobs to run manually via the console. However, you risk Zammad sending those mails out twice, because the scheduler might be working on the same background job at the moment you process it manually.

Not sure if this is really something you might want.
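For completeness, the manual route mentioned above would look roughly like this in a Rails console. `Delayed::Worker#work_off` is standard delayed_job API, but whether running it alongside Zammad’s own scheduler is safe is exactly the double-processing risk described, so treat this as a sketch, not a recommendation:

```ruby
# Rails console sketch (do NOT run blindly on production):
# work_off(n) processes up to n due jobs in the current process and
# returns a pair of counts: [successes, failures]. The scheduler may
# lock onto the same job concurrently, hence the double-send risk.
successes, failures = Delayed::Worker.new.work_off(100)
puts "#{successes} succeeded, #{failures} failed"
```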

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.