Found numerous Zammad issues

Infos:

  • Used Zammad version: 3.1
  • Used Zammad installation source: docker
  • Operating system: CentOS

We are running Zammad 3.1.0-32 in Docker for now and have some issues.

I should mention that we upgraded from 2.8 to 3.1 and have experienced a multitude of issues. At the moment we have a lot of workarounds running, such as `rake jobs:work` to get delayed tasks processed, and would appreciate some assistance.

At the moment the most important issue is Error 1: emails are not properly processed into Zammad unless a script reprocesses them occasionally. It would be good to at least have some insight into this:

Error 1) In the Zammad scheduler process we see the following errors.

They are the same as in this topic: E-Mails cant be processed - RecordNotFound

Here are our logs:

I, [2020-01-08T03:01:03.284929 #1-69999712716540] INFO -- : Enqueued SearchIndexJob (Job ID: 4374880d-0336-4654-be3f-8e123f533bc9) to DelayedJob(default) with arguments: "User", 6

"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/3a96e14ea07cd5d705927167467be986.eml, please create an issue at https://github.com/zammad/zammad/issues"

"ERROR: #<ActiveRecord::RecordNotFound: Couldn't find Ticket without an ID>"

E, [2020-01-08T03:01:03.592264 #1-69999712716540] ERROR -- : ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/3a96e14ea07cd5d705927167467be986.eml, please create an issue at https://github.com/zammad/zammad/issues

E, [2020-01-08T03:01:03.592300 #1-69999712716540] ERROR -- : Couldn't find Ticket without an ID (ActiveRecord::RecordNotFound)

And yes, after that I could run `rails r 'Channel::EmailParser.process_unprocessable_mails'`, which processed them all without any errors, but we don't want to constantly run this from a cron job to process emails. (Another one of our workarounds at the moment.)
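For reference, that cron workaround looks roughly like this (a sketch only; the container name and the 15-minute interval are illustrative, not our exact values):

```
# crontab fragment (sketch; container name and interval are illustrative)
*/15 * * * * docker exec zammad-scheduler rails r 'Channel::EmailParser.process_unprocessable_mails'
```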

**Error 2)** Also in the scheduler we see a problem that appears from time to time.

ActiveRecord::ConnectionTimeoutError - could not obtain a database connection within 5 seconds. The max pool size is currently 50; consider increasing it

We then raised the timeout from 5 seconds to 10 seconds. That seems to have helped a little: before the change we had that error every 15 minutes; now it appears about once every 2 hours.

Error 3) Another issue is that `Delayed::Job.count` keeps growing into the hundreds and thousands (it may reach 10k jobs), and the only solution for me is to run `jobs:work` inside the container. It then starts processing all jobs and keeps up with them nicely, but again, this is another workaround:

gosu 1000:1000 rake jobs:work

My guess is that this is caused by the scheduler errors mentioned above.

But that process also raises errors from time to time:

/usr/local/bundle/gems/rake-12.3.2/exe/rake:27:in `'
Tasks: TOP => jobs:work
(See full trace by running task with --trace)
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
rake aborted!
ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly
This probably means the server terminated abnormally

before or while processing the request.
: UPDATE "delayed_jobs" SET "locked_by" = NULL, "locked_at" = NULL WHERE "delayed_jobs"."locked_by" = $1

/usr/local/bundle/gems/rake-12.3.2/exe/rake:27:in `'

Caused by:

PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

/usr/local/bundle/gems/rake-12.3.2/exe/rake:27:in `'

Caused by:

PG::Error: invalid encoding name: utf8

/usr/local/bundle/gems/rake-12.3.2/exe/rake:27:in `'

Caused by:

ActiveRecord::StatementInvalid: PG::ConnectionBad: PQconsumeInput() server closed the connection unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

: UPDATE "delayed_jobs" SET locked_at = '2020-01-08 04:11:14.386955', locked_by = 'host:e67b914f058d pid:1' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2020-01-08 04:11:14.386472' AND (locked_at IS NULL OR locked_at < '2020-01-08 00:11:14.386498') OR locked_by = 'host:e67b914f058d pid:1') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *

**Error 4)** The same as described here:

Our logs:

>E, [2020-01-07T03:46:44.883545 #1-47276871602100] ERROR -- : Unable to get asset for 'customer': #<NameError: uninitialized constant Customer>

We don't know what to do with that.

Error 5) I have noticed this error as well:

We mostly saw this at each start of the service after a reboot:

E, [2020-01-07T03:45:41.312836 #1-47276855767320] ERROR -- : undefined method `first' for nil:NilClass (NoMethodError)

P.S. Some more long logs:
I, [2020-01-08T05:11:52.538255 #1-69999712716540]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2020-01-08T05:11:52.544544 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/quotes-zammad port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:11:52.988588 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:11:52.988681 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:11:52.999612 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/kkts_it port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:11:53.234404 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:11:53.234518 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:11:53.249797 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/hpdemo port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:11:53.475366 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:11:53.475418 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:11:53.485808 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/issues port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:11:53.713328 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:11:53.713392 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:11:53.724839 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/bookings@kkts.com.au port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:11:54.565158 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:11:54.565232 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:24.581077 #1-69999712716540]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2020-01-08T05:12:24.583496 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/quotes-zammad port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:24.808119 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:24.808221 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:24.819328 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/kkts_it port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:25.043624 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:25.043662 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:25.052124 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/hpdemo port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:12:25.506691 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:25.506758 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:25.516692 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/issues port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:25.749541 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:25.749608 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:25.759934 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/bookings@kkts.com.au port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:12:26.230881 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:26.230921 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:56.243370 #1-69999712716540]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2020-01-08T05:12:56.245518 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/quotes-zammad port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:56.595543 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:56.595613 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:56.605897 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/kkts_it port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:56.831371 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:56.831428 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:56.840636 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/hpdemo port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:12:57.053070 #1-69999712716540]  INFO -- :  - message 1/1
I, [2020-01-08T05:12:57.535248 #1-69999712716540]  INFO -- : Process email with msgid '<DF4PR8401MB103512605DC02CAB030DF5ADA93E0@DF4PR8401MB1035.NAMPRD84.PROD.OUTLOOK.COM>'
I, [2020-01-08T05:12:57.571767 #1-69999712716540]  INFO -- :  process filter Bookings ...
I, [2020-01-08T05:12:57.572310 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, "Kotitsas, Louie" <elias.kotitsas@hp.com>, "McCutcheon, Cindy (Inside Sales Lead)" <cindy.mccutcheon@hp.com>' on bookings@kkts.com.au, but should
I, [2020-01-08T05:12:57.572370 #1-69999712716540]  INFO -- :  process filter Quotes ...
I, [2020-01-08T05:12:57.572613 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, "Kotitsas, Louie" <elias.kotitsas@hp.com>, "McCutcheon, Cindy (Inside Sales Lead)" <cindy.mccutcheon@hp.com>' on quotes@kkts.com.au, but should
I, [2020-01-08T05:12:57.572669 #1-69999712716540]  INFO -- :  process filter hpdemo ...
I, [2020-01-08T05:12:57.574026 #1-69999712716540]  INFO -- :   perform 'x-zammad-ticket-group_id' = '{"value"=>"8"}'
I, [2020-01-08T05:12:57.574081 #1-69999712716540]  INFO -- :  process filter issues ...
I, [2020-01-08T05:12:57.574331 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, "Kotitsas, Louie" <elias.kotitsas@hp.com>, "McCutcheon, Cindy (Inside Sales Lead)" <cindy.mccutcheon@hp.com>' on issues@kkts.com.au, but should
I, [2020-01-08T05:12:57.574388 #1-69999712716540]  INFO -- :  process filter kkts_it ...
I, [2020-01-08T05:12:57.574605 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, "Kotitsas, Louie" <elias.kotitsas@hp.com>, "McCutcheon, Cindy (Inside Sales Lead)" <cindy.mccutcheon@hp.com>' on kkts_it@mail.kkts.com.au, but should
I, [2020-01-08T05:12:57.617277 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 6eac7259-9d59-4042-969e-ac7443c41d13) to DelayedJob(default) with arguments: "Ticket", 93278
I, [2020-01-08T05:12:57.622080 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 27b1708e-ff8b-4ca5-ad44-e0e5e9f81eea) to DelayedJob(default) with arguments: "User", 6401
I, [2020-01-08T05:12:57.633720 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 05adf9b8-612e-4de4-8ca6-e118f38b2233) to DelayedJob(default) with arguments: "StatsStore", 20253
I, [2020-01-08T05:12:57.636500 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-article-preferences found. Assign preferences={"send-auto-response"=>true, "is-auto-response"=>false}
I, [2020-01-08T05:12:57.769243 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 66d6bada-3cf3-403c-bcf9-585063880ac6) to DelayedJob(default) with arguments: "Ticket", 93278
I, [2020-01-08T05:12:57.774018 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: ee367a49-dd79-40f2-9962-e22ed32d0dca) to DelayedJob(default) with arguments: "User", 6401
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/c0f1c6bde272ae8f26ccc05628d3ab81.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<ActiveRecord::RecordNotFound: Couldn't find Ticket::Article without an ID>"
E, [2020-01-08T05:12:58.078986 #1-69999712716540] ERROR -- : ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/c0f1c6bde272ae8f26ccc05628d3ab81.eml, please create an issue at https://github.com/zammad/zammad/issues
E, [2020-01-08T05:12:58.079049 #1-69999712716540] ERROR -- : Couldn't find Ticket::Article without an ID (ActiveRecord::RecordNotFound)
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/finder_methods.rb:433:in `find_with_ids'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/finder_methods.rb:69:in `find'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/querying.rb:5:in `find'
/usr/local/bundle/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/core.rb:24:in `find'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/persistence.rb:609:in `block in reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/scoping/default.rb:34:in `block in unscoped'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation.rb:281:in `scoping'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/scoping/default.rb:34:in `unscoped'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/persistence.rb:609:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/attribute_methods/dirty.rb:31:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/associations.rb:255:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/autosave_association.rb:231:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/aggregations.rb:15:in `reload'
/opt/zammad/app/models/channel/email_parser.rb:303:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/usr/local/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:114:in `process'
/opt/zammad/app/models/channel/driver/imap.rb:254:in `block in fetch'
/opt/zammad/app/models/channel/driver/imap.rb:211:in `each'
/opt/zammad/app/models/channel/driver/imap.rb:211:in `fetch'
/opt/zammad/app/models/channel.rb:56:in `fetch'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/delegation.rb:71:in `each'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/delegation.rb:71:in `each'
/opt/zammad/app/models/channel.rb:30:in `fetch'
(eval):1:in `_start_job'
/opt/zammad/app/models/scheduler.rb:286:in `eval'
/opt/zammad/app/models/scheduler.rb:286:in `_start_job'
/opt/zammad/app/models/scheduler.rb:239:in `block (2 levels) in start_job'
/opt/zammad/app/models/scheduler.rb:237:in `loop'
/opt/zammad/app/models/scheduler.rb:237:in `block in start_job'
/usr/local/bundle/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
I, [2020-01-08T05:12:58.124981 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:58.135220 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/issues port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:12:58.363360 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:58.363482 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:12:58.373119 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/bookings@kkts.com.au port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:12:59.609926 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:12:59.609996 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:13:29.626284 #1-69999712716540]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2020-01-08T05:13:29.628559 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/quotes-zammad port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:13:30.094857 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:13:30.094912 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:13:30.106024 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/kkts_it port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:13:30.338665 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:13:30.338721 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:13:30.348578 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/hpdemo port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:13:30.565098 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:13:30.565164 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:13:30.575706 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/issues port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:13:30.801823 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:13:30.801877 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:13:30.811443 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/bookings@kkts.com.au port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:13:31.339060 #1-69999712716540]  INFO -- :  - message 1/1
I, [2020-01-08T05:13:31.500708 #1-69999712716540]  INFO -- : Process email with msgid '<1578460503185909564@mail.kkts.com.au>'
I, [2020-01-08T05:13:31.522683 #1-69999712716540]  INFO -- :  process filter Bookings ...
I, [2020-01-08T05:13:31.524619 #1-69999712716540]  INFO -- :   perform 'x-zammad-ticket-group_id' = '{"value"=>"2"}'
I, [2020-01-08T05:13:31.524712 #1-69999712716540]  INFO -- :  process filter Quotes ...
I, [2020-01-08T05:13:31.525191 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'bookings@kkts.com.au' on quotes@kkts.com.au, but should
I, [2020-01-08T05:13:31.525244 #1-69999712716540]  INFO -- :  process filter hpdemo ...
I, [2020-01-08T05:13:31.525529 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'bookings@kkts.com.au' on hpdemo@kkts.com.au, but should
I, [2020-01-08T05:13:31.525571 #1-69999712716540]  INFO -- :  process filter issues ...
I, [2020-01-08T05:13:31.525889 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'bookings@kkts.com.au' on issues@kkts.com.au, but should
I, [2020-01-08T05:13:31.525931 #1-69999712716540]  INFO -- :  process filter kkts_it ...
I, [2020-01-08T05:13:31.526222 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'bookings@kkts.com.au' on kkts_it@mail.kkts.com.au, but should
I, [2020-01-08T05:13:31.528778 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-ticket-group_id found. Assign group_id=2
I, [2020-01-08T05:13:31.528933 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-ticket-customer_id found. Assign customer_id=6
I, [2020-01-08T05:13:31.556124 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 12283317-d65c-4bc8-91ad-bcbb020b48ff) to DelayedJob(default) with arguments: "Ticket", 93350
I, [2020-01-08T05:13:31.562970 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 190b1d41-bd1b-452c-b4d9-a76a7a7b6010) to DelayedJob(default) with arguments: "User", 6
I, [2020-01-08T05:13:31.570163 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-article-sender found Agent
I, [2020-01-08T05:13:31.570261 #1-69999712716540]  INFO -- : set_attributes_by_x_headers found Ticket::Article::Sender lookup for 'Agent'
I, [2020-01-08T05:13:31.570771 #1-69999712716540]  INFO -- : set_attributes_by_x_headers assign Ticket::Article sender_id=1
I, [2020-01-08T05:13:31.571016 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-article-preferences found. Assign preferences={"send-auto-response"=>true, "is-auto-response"=>false}
I, [2020-01-08T05:13:31.655410 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 33d0e896-921c-4661-bea2-d8d3a39ebbd9) to DelayedJob(default) with arguments: "Ticket", 93350
I, [2020-01-08T05:13:31.662495 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 96f0bafc-9024-4095-af03-cb05e638fe44) to DelayedJob(default) with arguments: "User", 6
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/d0a022be16d59a0e864fe16e1a2086c7.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<ActiveRecord::RecordNotFound: Couldn't find Ticket without an ID>"
E, [2020-01-08T05:13:31.936427 #1-69999712716540] ERROR -- : ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/d0a022be16d59a0e864fe16e1a2086c7.eml, please create an issue at https://github.com/zammad/zammad/issues
E, [2020-01-08T05:13:31.936502 #1-69999712716540] ERROR -- : Couldn't find Ticket without an ID (ActiveRecord::RecordNotFound)
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/finder_methods.rb:433:in `find_with_ids'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/finder_methods.rb:69:in `find'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/querying.rb:5:in `find'
/usr/local/bundle/gems/composite_primary_keys-11.2.0/lib/composite_primary_keys/core.rb:24:in `find'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/persistence.rb:609:in `block in reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/scoping/default.rb:34:in `block in unscoped'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation.rb:281:in `scoping'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/scoping/default.rb:34:in `unscoped'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/persistence.rb:609:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/attribute_methods/dirty.rb:31:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/associations.rb:255:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/autosave_association.rb:231:in `reload'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/aggregations.rb:15:in `reload'
/opt/zammad/app/models/channel/email_parser.rb:302:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:115:in `block in process'
/usr/local/lib/ruby/2.5.0/timeout.rb:93:in `block in timeout'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `block in catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:33:in `catch'
/usr/local/lib/ruby/2.5.0/timeout.rb:108:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:114:in `process'
/opt/zammad/app/models/channel/driver/imap.rb:254:in `block in fetch'
/opt/zammad/app/models/channel/driver/imap.rb:211:in `each'
/opt/zammad/app/models/channel/driver/imap.rb:211:in `fetch'
/opt/zammad/app/models/channel.rb:56:in `fetch'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/delegation.rb:71:in `each'
/usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/relation/delegation.rb:71:in `each'
/opt/zammad/app/models/channel.rb:30:in `fetch'
(eval):1:in `_start_job'
/opt/zammad/app/models/scheduler.rb:286:in `eval'
/opt/zammad/app/models/scheduler.rb:286:in `_start_job'
/opt/zammad/app/models/scheduler.rb:239:in `block (2 levels) in start_job'
/opt/zammad/app/models/scheduler.rb:237:in `loop'
/opt/zammad/app/models/scheduler.rb:237:in `block in start_job'
/usr/local/bundle/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
I, [2020-01-08T05:13:31.980599 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:14:01.995587 #1-69999712716540]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2020-01-08T05:14:01.999311 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/quotes-zammad port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:14:02.379880 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:14:02.379943 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:14:02.393698 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/kkts_it port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=false)
I, [2020-01-08T05:14:03.464821 #1-69999712716540]  INFO -- :  - no message
I, [2020-01-08T05:14:03.464885 #1-69999712716540]  INFO -- : done
I, [2020-01-08T05:14:03.475380 #1-69999712716540]  INFO -- : fetching imap (mail.kkts.com.au/hpdemo port=993,ssl=true,starttls=false,folder=INBOX,keep_on_server=true)
I, [2020-01-08T05:14:04.769123 #1-69999712716540]  INFO -- :  - message 1/1
I, [2020-01-08T05:14:05.035208 #1-69999712716540]  INFO -- : Process email with msgid '<CS1PR8401MB072832607946D5F9B2DC45B4E53E0@CS1PR8401MB0728.NAMPRD84.PROD.OUTLOOK.COM>'
I, [2020-01-08T05:14:05.079159 #1-69999712716540]  INFO -- :  process filter Bookings ...
I, [2020-01-08T05:14:05.080549 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, Otis Chen <otis.chen@jbhifi.com.au>' on bookings@kkts.com.au, but should
I, [2020-01-08T05:14:05.080606 #1-69999712716540]  INFO -- :  process filter Quotes ...
I, [2020-01-08T05:14:05.080817 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, Otis Chen <otis.chen@jbhifi.com.au>' on quotes@kkts.com.au, but should
I, [2020-01-08T05:14:05.080860 #1-69999712716540]  INFO -- :  process filter hpdemo ...
I, [2020-01-08T05:14:05.082852 #1-69999712716540]  INFO -- :   perform 'x-zammad-ticket-group_id' = '{"value"=>"8"}'
I, [2020-01-08T05:14:05.082910 #1-69999712716540]  INFO -- :  process filter issues ...
I, [2020-01-08T05:14:05.083147 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, Otis Chen <otis.chen@jbhifi.com.au>' on issues@kkts.com.au, but should
I, [2020-01-08T05:14:05.083198 #1-69999712716540]  INFO -- :  process filter kkts_it ...
I, [2020-01-08T05:14:05.083429 #1-69999712716540]  INFO -- :   not matching x-any-recipient:'AU HP DEMO <hpdemo@kkts.com.au>, Otis Chen <otis.chen@jbhifi.com.au>' on kkts_it@mail.kkts.com.au, but should
I, [2020-01-08T05:14:05.162831 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: b8109a6e-88e1-488f-aa0e-86ea1a60c1f2) to DelayedJob(default) with arguments: "Ticket", 86736
I, [2020-01-08T05:14:05.168433 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 9d5087de-6932-49a9-82a8-986e7f1c1efa) to DelayedJob(default) with arguments: "User", 7091
I, [2020-01-08T05:14:05.182639 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: bd8f6960-9f87-4356-ad29-6c3a70cd0f83) to DelayedJob(default) with arguments: "StatsStore", 20254
I, [2020-01-08T05:14:05.186573 #1-69999712716540]  INFO -- : set_attributes_by_x_headers header x-zammad-article-preferences found. Assign preferences={"send-auto-response"=>true, "is-auto-response"=>false}
I, [2020-01-08T05:14:05.278402 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: 7034f9f4-9311-440f-9f4a-8132aacee30e) to DelayedJob(default) with arguments: "Ticket", 86736
I, [2020-01-08T05:14:05.283828 #1-69999712716540]  INFO -- : Enqueued SearchIndexJob (Job ID: a2dd80af-0abf-4da9-a4ed-d5984c92e820) to DelayedJob(default) with arguments: "User", 7091
E, [2020-01-08T05:14:05.472638 #1-69999712716540] ERROR -- : missing session file for '47213409905700', remove session.
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/4e87687c36449778629abebc67501b78.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<ActiveRecord::RecordNotFound: Couldn't find Ticket::Article without an ID>"
E, [2020-01-08T05:14:05.582677 #1-69999712716540] ERROR -- : ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/4e87687c36449778629abebc67501b78.eml, please create an issue at https://github.com/zammad/zammad/issues

Please note that Zammad 3.1 is no longer up to date; you might want to try out Zammad 3.2 (read this first: Zammad 3.2 is available).

Also, most of your issues sound like performance issues.
Please provide the following information:

* Number of CPU cores available to Zammad
* Used CPU (e.g. Intel Xeon xyz)
* Available memory for Zammad
* Number of Tickets (rails r 'p Ticket.count')
* Number of Users (rails r 'p User.count')
* Number of Agents (concurrent)
* Number of overviews (rails r 'p Overview.count')

The above covers at least issues 1 and 3.


Error 3

By default PostgreSQL allows a maximum of 100 connections. This is usually fine; your environment, however, seems to carry quite a workload, so you may want to increase the number to 250. That should be fine.
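In postgresql.conf that is a one-line change (config sketch; 250 is simply the value suggested above):

```
# postgresql.conf
max_connections = 250    # default is 100
```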

I cannot reproduce issue 4; I haven't seen that in a while.

Issue 5 can be ignored and is just a warning for signature recognition. It doesn’t break anything. Promise!


Another interesting question for your mail issue: is it always the same type of mail that fails, and what is inside it? Some specific mails can cause issues. But since you can import them later, it's most likely just a performance issue.

Our updates:

* Increased memory for each container from 5 GB to 15 GB
* Overview.count: 43
* User.count: 8197
* Ticket.count: 83266
* Number of Agents (total): 30
* Number of Agents (concurrent): ~15-20

We are thinking about upgrading to 3.2, but I'm afraid of hitting new issues and being unable to roll back to 3.1 because of the new DB migrations in 3.2.

About PG connections:
We are using the zammad-docker-compose:zammad-postgresql-3.1.0-32 image, where the configuration sets max_connections = 300.
Looking into the DB during a working day:

select * from
(select count(*) used from pg_stat_activity) q1,
(select setting::int res_for_super from pg_settings where name=$$superuser_reserved_connections$$) q2,
(select setting::int max_conn from pg_settings where name=$$max_connections$$) q3;
 used | res_for_super | max_conn
------+---------------+----------
  174 |             3 |      300
(1 row)

I never see more than 200 connections in use, so I guess the PostgreSQL configuration is OK.

About ActiveRecord::ConnectionTimeoutError

We still get that error, and after it occurs Zammad stops processing Delayed::Job entries normally, so I have to start `rake jobs:work` to get them processed.

E, [2020-01-09T02:23:42.696584 #1-47016038127400] ERROR -- : 2020-01-09T02:23:42+0000: [Worker(host:84aecf3b20e8 pid:1)] Job SearchIndexJob [60bfcab3-12d3-4561-aa60-1742f4ef3368] from DelayedJob(default) with arguments: ["Ticket", 93248] (id=1562539) (queue=default) FAILED (0 prior attempts) with ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 6.840 seconds); all pooled connections were in use
E, [2020-01-09T02:23:42.696733 #1-47016038127400] ERROR -- : 2020-01-09T02:23:42+0000: [Worker(host:84aecf3b20e8 pid:1)] Job SearchIndexJob [60bfcab3-12d3-4561-aa60-1742f4ef3368] from DelayedJob(default) with arguments: ["Ticket", 93248] (id=1562539) (queue=default) FAILED permanently because of 1 consecutive failures
I, [2020-01-09T02:23:43.681592 #1-70186872064300]  INFO -- : execute Import::OTRS.diff_worker (try_count 0)...
E, [2020-01-09T02:23:47.111488 #1-70186290246240] ERROR -- : thread_client 46949339110080 exited with error #<ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 8.287 seconds); all pooled connections were in use>
E, [2020-01-09T02:23:47.111668 #1-70186290246240] ERROR -- : /usr/local/bundle/gems/activerecord-5.2.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:201:in `block in wait_poll'

And my question is about the bundle exec command that runs the scheduler script. Does it use the same config file as railsserver?
Because in the log above we see pool within 5.000 seconds (waited 8.287 seconds), but we have a 10-second timeout in our database.yml file.
My guess is that the scheduler application uses the default pool settings (pool=5, timeout=5000), which might not suit our environment for some reason.
Is there a way to check this for the scheduler application? I know how to check it for railsserver (rails r 'p ActiveRecord::Base.connection_config' inside the railsserver container), but I can’t figure out how to check it for the bundle exec process.
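As far as I can tell (an assumption worth confirming), rails r is just short for rails runner, and the scheduler started via bundle exec ruby script/scheduler.rb boots the same Rails environment from /opt/zammad, so it reads the same config/database.yml. You should be able to verify it the same way from inside the scheduler container (the container name below is an example; adjust it to your compose project):

```shell
# print the connection settings exactly as the scheduler's environment resolves them
docker exec -it zammad-scheduler \
  bundle exec rails runner 'p ActiveRecord::Base.connection_config'
```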

root@9e15d658a00c:/opt/zammad# cat config/database.yml
production:
  adapter: postgresql
  database: zammad_production
  pool: 50
  timeout: 10000
  encoding: utf8
  username: postgres
  password:
  host: zammad-postgresql
  port: 5432

All Zammad processes use the database.yml configuration. So the pool limit for each process is 50.

43 overviews sounds like quite a lot to me.
Do you really need them all?

Especially up to Zammad 3.1, we process all overviews per user. That means that if your users see 40 overviews each and you have 20 concurrent agents online, Zammad needs to render 800 overviews in total on each run. This eats a lot of CPU time and would explain your delayed-job issue.

Zammad 3.2 came with optimizations on memory usage, delayed jobs and overview handling.
You might still have performance issues, but it should get better.

Also, what will really help (even without updating) is to reduce your overviews where possible.

Ensure that you don’t have more than 2100 elements in your overviews (you can’t see more anyway).
For example, it makes no sense to display all closed tickets from the entire history.

Maybe that helps.

1) Pool timeout

If so, why do I see an error message with a 5-second timeout when database.yml has 10 seconds?

root@9e15d658a00c:/opt/zammad# cat config/database.yml
production:
  adapter: postgresql
  database: zammad_production
  pool: 50
  timeout: 10000

->

E, [2020-01-09T02:23:47.111488 #1-70186290246240] ERROR -- : thread_client 46949339110080 exited with error #<ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 8.287 seconds); all pooled connections were in use>

?
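A possible answer (my assumption, please verify against your Rails version): ActiveRecord’s pool wait limit is configured by the `checkout_timeout` key (in seconds, default 5), while the `timeout` key is not read by the PostgreSQL adapter at all, which would explain why the error always reports 5.000 seconds regardless of `timeout: 10000`. A minimal sketch of how the defaults would apply to the database.yml above (`effective_pool_config` is a hypothetical helper, not Zammad code):

```ruby
require "yaml"

# Hypothetical helper: compute the pool settings a process would actually use,
# falling back to ActiveRecord's defaults when a key is absent.
def effective_pool_config(yaml_text, env = "production")
  cfg = YAML.safe_load(yaml_text).fetch(env, {})
  {
    pool:             cfg.fetch("pool", 5),              # connections per process
    checkout_timeout: cfg.fetch("checkout_timeout", 5),  # seconds to wait for a free connection
  }
end

yml = <<~YML
  production:
    adapter: postgresql
    pool: 50
    timeout: 10000
YML

# "timeout" is ignored here, so the checkout wait stays at the 5 s default
p effective_pool_config(yml)  # => {:pool=>50, :checkout_timeout=>5}
```

If that is the cause, setting `checkout_timeout: 10` in database.yml (and restarting the containers) should make the error report 10 seconds instead.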

2) Overviews count

Thanks for the advice. We reduced the overviews; we’ll see how the load looks on Monday.

3) Search Index rebuild

Yesterday we launched an index rebuild with rake searchindex:rebuild and got this error after 2-3 hours of rebuilding:


/opt/zammad/app/models/concerns/has_search_index_backend.rb:145:in `rescue in block in search_index_reload'
/opt/zammad/app/models/concerns/has_search_index_backend.rb:139:in `block in search_index_reload'
/opt/zammad/app/models/concerns/has_search_index_backend.rb:134:in `each'
/opt/zammad/app/models/concerns/has_search_index_backend.rb:134:in `search_index_reload'
/opt/zammad/lib/tasks/search_index_es.rake:143:in `block (3 levels) in <main>'
/opt/zammad/lib/tasks/search_index_es.rake:139:in `each'
/opt/zammad/lib/tasks/search_index_es.rake:139:in `block (2 levels) in <main>'
/opt/zammad/lib/tasks/search_index_es.rake:153:in `block (2 levels) in <main>'
/usr/local/bundle/gems/rake-12.3.2/exe/rake:27:in `<top (required)>'

Caused by:
Unable to process POST request to elasticsearch URL 'http://zammad-elasticsearch:9200/zammad_production/Ticket/40846?pipeline=zammad813516161384'. Check the response and payload for detailed information:

Response:
#<UserAgent::Result:0x000055c63c055e08 @success=false, @body=nil, @data=nil, @code=0, @content_type=nil, @error="#<SocketError: Failed to open TCP connection to zammad-elasticsearch:9200 (getaddrinfo: Temporary failure in name resolution)>">

But I am quite sure that we don’t have any name-resolution issues. To confirm that, I ran two tests. The first:
while true; do echo $(date) - $(getent ahosts zammad-elasticsearch) >> 1.log; sleep 1; done;
and the second:
while true; do nc zammad-elasticsearch 9200 -zv >> 1.log 2>&1; sleep 1; done;
Both ran for more than 12 hours without a single failure.
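One caveat with those two tests (an assumption on my part): `getent` and `nc` go through the libc resolver of whatever container they run in, while the failing lookup came from Ruby’s socket stack inside the Zammad container, which can behave differently under Docker’s embedded DNS. A small script to poll resolution the way Ruby itself does (run it inside the zammad container and substitute the `zammad-elasticsearch` host):

```ruby
require "socket"

# Resolve a host/port pair through Ruby's own resolver path,
# i.e. the same getaddrinfo call the HTTP client would make.
def resolvable?(host, port)
  Addrinfo.getaddrinfo(host, port, nil, :STREAM)
  true
rescue SocketError
  false
end

# inside the container you would check resolvable?("zammad-elasticsearch", 9200)
p resolvable?("localhost", 9200)
```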

4) asset for ‘customer’ error

Also, problem #4 from my initial message happens quite often, roughly 5-10 times a minute. I’m talking about:

>E, [2020-01-07T03:46:44.883545 #1-47276871602100] ERROR -- : Unable to get asset for 'customer': #<NameError: uninitialized constant Customer>

and related issue topic ERROR -- : Unable to get asset for 'customer': #<NameError: uninitialized constant Customer>

How can I trace the problem? My assumption is that we somehow have a broken client or user record in the DB, maybe with an undefined mandatory field or something similar.

5) additional worker
And we still have to launch the additional jobs:work task to get jobs done, but we already had this issue on 2.8, so it is not related to the upgrade process.

I’m back with still the same error. :hot_face:
@MrGeneration
We are now using the latest zammad-docker-compose build, 3.3.0-23. We also reduced the number of overviews, which helped somewhat, but we still have the ActiveRecord::ConnectionTimeoutError issue.

Latest logs:

Logs
E, [2020-04-20T13:56:33.152864 #1-69903011597040] ERROR -- : thread_client 910178439 exited with error #<ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.000 seconds); all pooled connections were in use>
E, [2020-04-20T13:56:33.153524 #1-69903011597040] ERROR -- : /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:201:in `block in wait_poll'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `loop'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `wait_poll'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:151:in `internal_poll'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:277:in `internal_poll'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:145:in `block in poll'
  /usr/local/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:155:in `synchronize'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:145:in `poll'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:818:in `acquire_connection'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:538:in `checkout'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:382:in `connection'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:1033:in `retrieve_connection'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_handling.rb:118:in `retrieve_connection'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/connection_handling.rb:90:in `connection'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/relation/calculations.rb:197:in `block in pluck'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/relation.rb:584:in `skip_query_cache_if_necessary'
  /usr/local/bundle/gems/activerecord-5.2.4.2/lib/active_record/relation/calculations.rb:197:in `pluck'
  /opt/zammad/app/models/ticket/overviews.rb:25:in `all'
  /opt/zammad/app/models/ticket/overviews.rb:96:in `index'
  /opt/zammad/lib/sessions/backend/ticket_overview_list.rb:47:in `load'
  /opt/zammad/lib/sessions/backend/ticket_overview_list.rb:109:in `push'
  /opt/zammad/lib/sessions/client.rb:67:in `each'
  /opt/zammad/lib/sessions/client.rb:67:in `block in fetch'
  /opt/zammad/lib/sessions/client.rb:23:in `loop'
  /opt/zammad/lib/sessions/client.rb:23:in `fetch'
  /opt/zammad/lib/sessions/client.rb:7:in `initialize'
  /opt/zammad/lib/sessions.rb:721:in `new'
  /opt/zammad/lib/sessions.rb:721:in `thread_client'
  /opt/zammad/lib/sessions.rb:676:in `block (3 levels) in jobs'
  /usr/local/bundle/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
[... the same backtrace repeated by seven more thread_client workers ...]
E, [2020-04-20T13:56:34.756273 #1-69902709495320] ERROR -- : Can't reconnect to database #<ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.000 seconds); all pooled connections were in use>
bundler: failed to load command: script/scheduler.rb (script/scheduler.rb)
I, [2020-04-20T13:56:35.923247 #1-47246098277820]  INFO -- : Scheduler stopped.

The timeout in database.yml is set to 10000, and PostgreSQL’s max_connections was raised from 300 to 500. Nothing changed: as you can see, the output still shows ~5-second timeouts, with no regard for the database.yml parameter.

The scheduler does not run for more than 4-5 minutes before failing with that error.
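For context: the 5-second wait in the error message matches ActiveRecord’s `checkout_timeout` default (5 seconds), and as far as I can tell the `timeout:` key in database.yml is the SQLite busy timeout, which the postgresql adapter ignores; that would explain why raising it to 10000 changed nothing. A minimal stdlib-only sketch of the pool’s wait-then-raise behaviour (illustrative, not Zammad code):

```ruby
class TinyPool
  class TimeoutError < StandardError; end

  def initialize(size:, checkout_timeout: 5.0)
    @connections = Array.new(size) { Object.new } # stand-ins for real DB connections
    @timeout = checkout_timeout
    @mutex = Mutex.new
    @cond = ConditionVariable.new
  end

  # Wait for a free connection; raise once checkout_timeout elapses,
  # loosely mimicking ActiveRecord's ConnectionPool#wait_poll.
  def checkout
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + @timeout
    @mutex.synchronize do
      loop do
        return @connections.pop unless @connections.empty?

        remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
        if remaining <= 0
          raise TimeoutError,
                "could not obtain a connection from the pool within #{@timeout} seconds"
        end
        @cond.wait(@mutex, remaining)
      end
    end
  end

  def checkin(conn)
    @mutex.synchronize do
      @connections.push(conn)
      @cond.signal # wake one waiter, if any
    end
  end
end

pool = TinyPool.new(size: 1, checkout_timeout: 0.5)
pool.checkout   # first checkout succeeds
begin
  pool.checkout # pool exhausted: raises after ~0.5 s
rescue TinyPool::TimeoutError => e
  puts e.message
end
```

If database.yml were actually being read, `checkout_timeout: 10` under `production:` should be the knob that changes this wait, not `timeout:`.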

@MrGeneration
I’ve done some more investigation and found, again, that somehow the scheduler does not pick up the connection parameters from the database.yml file. I don’t know how to check this from the scheduler’s runtime, but from the DB side it does look like the scheduler fails as soon as it hits 50 connections.

Selecting only the scheduler’s connections:

zammad_production=# select count(*)
from pg_stat_activity
where datname = 'zammad_production' and application_name = 'script/scheduler.rb';
 count
-------
    50
(1 row)

Also, PostgreSQL’s setting:

zammad_production=# SHOW max_connections;
 max_connections
-----------------
 500
(1 row)

database.yml:

root@50fdfc6d8ad9:/opt/zammad# cat config/database.yml
production:
  adapter: postgresql
  database: zammad_production
  pool: 50
  timeout: 10000
  encoding: utf8
  username: zammad
  password: zammad
  host: zammad-postgresql
  port: 5432
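One way to narrow down whether the scheduler sees this file at all is to compare the configured value with the runtime value. The helper below only parses the YAML (a self-contained sketch); the runtime check in the comment assumes a working `rails r` inside the container:

```ruby
require "yaml"

# Parse a database.yml and return the configured pool size for one environment.
# The effective runtime value could instead be checked from inside the container
# with (hypothetically): rails r 'puts ActiveRecord::Base.connection_pool.size'
def configured_pool(yaml_text, env: "production")
  YAML.safe_load(yaml_text).dig(env, "pool")
end

sample = <<~YAML
  production:
    adapter: postgresql
    pool: 50
    timeout: 10000
YAML

puts configured_pool(sample) # → 50
```

If the parsed value and the runtime value disagree, the process is loading a different file (or the file is being overwritten, as discussed further down).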

Do you have any ideas?

Sorry, but I can’t help further.
Except for the enormous timeout value (which I have no idea why you’d want), you’ve done exactly what helps for source code and package installations. If you restarted PostgreSQL after setting a higher max_connections (or started it with that value to begin with), it should be totally fine.

It’s also correct that the scheduler uses up to 50 connections, based on the pool limit (a per-process setting). Maybe your installation isn’t able to free 50 (!) connections at the same time and would, e.g., need 52 connections at that moment.

I’m quite curious why this would happen, but it’s out of scope. I’m sorry.

Hi @NickMos! I recently encountered the same problem and basically tried the same actions to solve it - no luck so far. Did you find a solution for it in the meantime?

thx, David

Hi, @davidspiola
I can’t give you good news. All we did was update to the latest version at the time and rebuild the ES index. That is all, no special actions. As we orchestrate our Docker containers, the scheduler was restarted each time it failed with the error, but after a month or so I noticed it had stopped failing, for no apparent reason. Now I’m afraid to apply any new updates because it has been working fine for a couple of months.
I have a guess that maybe we had some old data (tickets from v1.9) that caused an issue for the scheduler, but I can’t prove that, and it doesn’t explain the max_connections issue at all.

Thx for your reply @NickMos. I guess I’m one step closer. For the record: I’m using the latest stable Zammad docker-compose setup in a Docker Swarm (basically the same thing). All containers share the same mounted /opt/zammad directory.

It seems to work fine if I raise the default pool size from 50 to 300 and just restart the scheduler container. But after a restart of the complete stack, the value in /opt/zammad/config/database.yml is 50 again. I investigated further and found the database initialization process in https://github.com/zammad/zammad-docker-compose/blob/master/containers/zammad/docker-entrypoint.sh::53. It uses a template located at /opt/zammad/contrib/packager.io/database.yml.pkgr. I modified the (mounted) template to my needs, but after a complete restart of the stack the value in the template is 50 again. Somewhere, database.yml.pkgr is overwritten on each restart, but I have no clue why or where. A simple solution, which I don’t like, would be to add the pool size as an env variable, modify docker-entrypoint.sh accordingly, and build my own container image.

What are your thoughts on this? Any ideas for a better solution? Maybe @MrGeneration has an opinion too :slight_smile:. I have to admit that I’m not a Rails pro :wink:

Best,
David

@davidspiola
I ran the ‘zammad-init’ command (the zammad-init service from docker-compose.yml) only once, for the initial setup, and all my other experiments were done without zammad-init, so I didn’t have an issue with overwritten config files, etc.
And yes, you need to rebuild the image if you want a modified template file inside it.
For example, you can remove the zammad-init service from docker-compose.yml for your tests.
Also, I don’t like the zammad-init function in the entrypoint because of the chown -R command (LOC::106) - we have so many files that the chown takes practically forever…

I really don’t see a reason why you would want to bump up the maximum allowed database connections. It’s set to 50 by default per process.
By default PostgreSQL only allows 100 concurrent connections which, if you have a busy instance, is easily reached.

This answer on Stack Overflow could point you in the right direction: https://stackoverflow.com/a/47253068

Our docker-compose container is a standard postgresql-container as far as I can tell.

TL;DR: Raising the connection limit on the database config of Zammad will greatly worsen your problem.

Hi @MrGeneration, thx for your reply! I updated the Postgres and pool settings because my scheduler container restarted every 3 minutes with the error message “pool size limit reached”. I agree that might not be the best solution, but it’s the only one I have right now. If I understand correctly, you are saying that one process is using more than 50 DB connections simultaneously, correct? Is there a way to find out more details about the running processes?

If, as by default, your database.yml says pool: 50, Zammad is allowed to use up to 50 connections per process. These processes usually are:

  • puma
  • scheduler
  • websocket

And probably something I currently can’t think of.
With the defaults you can thus easily hit 150 concurrent connections to your database on this setup.
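As a back-of-the-envelope check along those lines (the three-process list and PostgreSQL’s default of 3 reserved superuser slots are assumptions):

```ruby
# Minimum max_connections needed so every Rails process can fill its pool,
# plus PostgreSQL's reserved superuser slots (default: 3, an assumption
# based on superuser_reserved_connections).
def required_max_connections(pool:, processes:, reserved: 3)
  pool * processes + reserved
end

# Three Zammad processes (puma, scheduler, websocket) with the default pool of 50:
puts required_max_connections(pool: 50, processes: 3) # → 153
```

By this estimate the PostgreSQL default of 100 is already too low for the default pool of 50, while 500 leaves plenty of headroom.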

We do so for speed reasons.
This is why I’m sure the error comes from the database server, not from Zammad or the scheduler itself. Maybe the exact error message would enlighten us further in this case.

Apart from that, I’m sorry, I’m not too deep into Docker, so I can’t answer your other question.

Alright! I did some further testing, and I get the feeling I can narrow it down:

  • For now, I reduced the pool size to 100. I’ll reduce it further if I don’t get a “pool limit reached” exception
  • My Postgres max_connections setting is 1000 and I don’t have any DB resource limitations
  • I recreated the elastic search index
  • I checked the delayed job queue = 0
  • I only have the default overviews
  • I removed all sessions in the database

So far, so good. No errors in the logs except one in the railsserver container -> Invalid client_id in receive loop! (Exceptions::UnprocessableEntity), plus some strange behavior.

This error pattern pops up constantly in the browser console and in the log:
200 => POST https://domain/api/v1/message_send
422 => POST https://domain/api/v1/message_receive

I also noticed that ticket update calls are very slow. It takes about 30 seconds to get a response from:
https://domain/api/v1/tickets/14665?all=true

IMO the login is slow as well, but search queries, opening tickets, and refreshing the UI operate normally. Sounds like pure write performance, right?

Are there any other reasons besides the database for this behavior, and do you know why I get Exceptions::UnprocessableEntity all the time?

THX for your help

That is a sign that your websocket is not working. If that happens for all clients, you’ll want to fix it.
The Ajax fallback (the source of the two POSTs above) creates much more load on the system, so you want to get that fixed.

HTTP 422 can have various causes. My guess, in combination with your information about the removed sessions, is that there’s a client that didn’t notice and still tries to act authenticated and send data.

But I can’t tell from here; it’s a shot in the dark.
After fixing the websockets, performance should improve. You should also see quite some load on the Zammad processes (especially rails/puma).
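A rough way to test whether a reverse proxy actually forwards the WebSocket upgrade is to send the handshake by hand and look for `101 Switching Protocols`. A stdlib-only sketch; the port and `/ws` path are assumptions for a typical Zammad setup, and a TLS-terminated proxy would need an `OpenSSL::SSL::SSLSocket` instead of the plain TCP socket:

```ruby
require "socket"

# True if the first response line is the WebSocket handshake success status.
def websocket_upgrade_accepted?(status_line)
  status_line.to_s.match?(%r{\AHTTP/1\.1 101\b})
end

# Send a minimal WebSocket upgrade request and report whether it was accepted.
# Defaults are assumptions: 6042 is Zammad's default websocket port; behind a
# reverse proxy you would typically target the HTTP(S) port with path "/ws".
def check_websocket(host, port: 6042, path: "/ws")
  TCPSocket.open(host, port) do |sock|
    sock.write(
      "GET #{path} HTTP/1.1\r\n" \
      "Host: #{host}\r\n" \
      "Upgrade: websocket\r\n" \
      "Connection: Upgrade\r\n" \
      "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n" \
      "Sec-WebSocket-Version: 13\r\n" \
      "\r\n"
    )
    websocket_upgrade_accepted?(sock.gets)
  end
end
```

Anything other than a 101 (e.g. a 200 or a proxy error page) means clients will fall back to the Ajax long polling described above.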

Hi @MrGeneration, properly setting up the WebSocket behind my reverse proxy fixed all symptoms! Zammad runs without any modifications now. Thx for your support!