Error when creating SLA or Ticket that belongs to an SLA, after update to 5.0.2

  • Used Zammad version: 5.0.2
  • Used Zammad installation type: package
  • Operating system: Debian 10
  • Browser + version: Chrome 95

Expected behavior:

  • Moving tickets to groups that have an SLA, or creating a new SLA, works without errors

Actual behavior:

  • Creating a new SLA, or moving tickets to groups that have an SLA, fails with the following error:

undefined local variable or method `response_time' for #&lt;Sla:0x00007f3fa01e2120&gt; Did you mean? respond_to?

Steps to reproduce the behavior:

  • Create a new, simple SLA with the standard settings (state = open, response time = xx:xx, calendar = default calendar)

I've taken a look at the production log. The parameters sent by the "create SLA" function are as follows:

Parameters: {"name"=>"Test-SLA", "first_response_time"=>"720", "response_time"=>"", "update_time"=>"720", "solution_time"=>"720", "condition"=>{"ticket.state_id"=>{"operator"=>"is", "value"=>"2"}}, "calendar_id"=>"1", "id"=>"c-7"}

In the database, there is no column "response_time" in the table "slas"; I think that is where the error comes from.
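For anyone who wants to verify this on their own installation, the columns of the "slas" table can be listed directly in the database. A minimal sketch, assuming the default PostgreSQL backend (column names may differ between Zammad versions):

```sql
-- List the columns of the "slas" table; after a successful migration,
-- a "response_time" column should appear here.
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'slas'
ORDER BY ordinal_position;
```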

Upgrade from 5.0.2-1636732214.a9b57e57.buster to 5.0.2-1637576537.10be8dab.buster fixes the issue.

Your first update installation didn't complete successfully for whatever reason; in particular, it broke at the database migration step.

This can have several causes and is impossible to pin down at this point.
Never run unattended Zammad updates; if you watch the update, you'll (usually) notice these kinds of issues a lot faster.

The second upgrade was simply able to run the database migration again, which is why it "magically" fixed your issue.
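When an upgrade leaves the schema behind like this, the pending migrations can usually be run by hand. A hedged sketch for a package-based installation (the `zammad run` wrapper is the documented entry point for package installs; take a backup or snapshot first, and note that source installs use different paths):

```shell
# Run any pending database migrations by hand (package install, run as root).
zammad run rake db:migrate

# Afterwards, check the migration status to confirm nothing is still pending.
zammad run rake db:migrate:status
```

`db:migrate:status` is the standard Rails task for listing applied vs. pending migrations, so it is a quick way to confirm the schema is complete before testing SLAs again.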

Thanks for your answer. I have done the same update on another machine (not sure if it runs Debian Buster or Debian Stretch) with the same result. On that machine I manually started the database migration after the update, and the issue was fixed. Maybe there is an error in the update process. (I think both machines were on a version below 5.x before the update.)

I know that's not helping, but just as a small piece of feedback:
we bulk-upgraded over 600 Zammad instances without hitting the issue you mentioned.

Failing database migrations during an upgrade can have various causes and, if you're unlucky, may originate somewhere other than Zammad itself. Possibly both of your hosts shared the same conditions that made the issue arise.

Maybe I installed it the same way, I don't remember. I thought I saw several threads here describing a similar problem that was fixed by running db:migrate manually. Anyway, my future upgrade process will always be: take a VM snapshot, do the upgrade, and run db:migrate to catch migration failures.

Thank you for your feedback.


This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.