Important: If you are a Zammad Support or hosted customer and experience a technical issue, please contact support@zammad.com, referencing your Zammad hostname or company contract.
Used Zammad version: 2.5
Used Zammad installation source: (source, package, …) Package
Operating system: CentOS 7
Browser + version: Firefox 61.0.1
Expected behavior:
I have to import ~25,000 tickets from our old ticket system into Zammad using the REST API. Now I've run into an issue with production.log filling up my disk space, because every attachment is saved as a base64 string in the log. Is there a possibility to prevent this? In general I don't see much use for the attachment data in the log, but I may be overlooking some use cases…
I’ve tried to set the log level to :warn or :error using the Rails.logger.level command to temporarily get rid of the INFO entries, but it does not seem to have any effect on the running instance.
Sure, I could add more space to the log volume, but IMHO that seems like an unnecessary waste of disk space.
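For what it’s worth, the persistent way to change the Rails log level is in the environment config rather than at runtime — a sketch, assuming a standard Rails layout (a restart is still required; Zammad also runs several server processes, which may be why a runtime change in one console appeared to have no effect — that last part is an assumption):

```ruby
# config/environments/production.rb
# Standard Rails setting: suppresses INFO request logging, keeps
# warnings and errors. Takes effect only after restarting the instance.
config.log_level = :warn
```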
Actual behavior:
Disk space on the /var/log volume is being depleted.
Steps to reproduce the behavior:
Add some GBs of attachments and have a look at the size of production.log.
In general this is wanted behavior, to allow reasoning about what happened. However, as a workaround you can add the parameter that pollutes your log to the list of filtered parameters in config/initializers/filter_parameter_logging.rb. Please remember to restart your Zammad instance afterwards. You can remove the entry again once the instance is productive.
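As a rough sketch of such an entry (the parameter names here are placeholders — use whichever key actually shows up in your log):

```ruby
# config/initializers/filter_parameter_logging.rb
# Any request parameter whose key matches an entry in this list is
# written to production.log as "[FILTERED]" instead of its real value.
Rails.application.config.filter_parameters += [:password, :data]
```

Filtering a broad key like :data masks every parameter with that name, so prefer the narrowest key that still catches the offending payload.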
Thanks for the blazing fast reply!
This seems to be a great option.
Unfortunately I wasn’t able to figure out the correct syntax to filter out the attachment data string; could you please give me some concrete advice?
Furthermore, it seems that only attachments uploaded via the API are treated like this (data in the logs); the ones uploaded through the browser do not appear as base64 strings… Is this still the expected behavior?
Thanks to @MrGeneration for leading me here. Could you provide an example log entry (with the ~5 lines before it included) so I can see where we need to change things? Please remember to anonymize things.
I think the following lines should be the relevant ones.
I, [2018-08-06T17:08:24.387668 #65361] INFO -- : Processing by TagsController#add as application/json
I, [2018-08-06T17:08:24.387750 #65361] INFO -- : Parameters: {"object"=>"Ticket", "o_id"=>"64590", "item"=>"Generell"}
I, [2018-08-06T17:08:24.424380 #65361] INFO -- : Completed 201 Created in 37ms (Views: 0.2ms | ActiveRecord: 11.8ms)
I, [2018-08-06T17:08:24.529699 #65361] INFO -- : Started PUT "/api/v1/tickets/64590" for 127.0.0.1 at 2018-08-06 17:08:24 +0200
I, [2018-08-06T17:08:24.533296 #65361] INFO -- : Processing by TicketsController#update as application/json
I, [2018-08-06T17:08:24.533382 #65361] INFO -- : Parameters: {"id"=>"64590", "article"=>{"subject"=>"Beilagen", "body"=>"Importierte Beilagen", "content_type"=>"text/html", "type"=>"note", "internal"=>"false", "attachments"=>[{"filename"=>"hello.txt", "data"=>"aGVsbG8=", "mime-type"=>"text/plain"}]}}
I, [2018-08-06T17:08:24.605550 #65361] INFO -- : Completed 200 OK in 72ms (Views: 0.5ms | ActiveRecord: 18.0ms)
This was a very tiny text document for testing purposes. In production we also have attachments with a size of several hundred MB, and these “data”=>“xxx” strings get huge.
I’ve just tried to add article.attachments.data to the filter and it works like a charm.
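For anyone landing here later, a toy illustration (plain Ruby, not the actual Rails implementation) of what a fully qualified filter such as article.attachments.data does to the logged parameter hash — only the nested data value is masked, while sibling keys stay visible:

```ruby
# Toy re-implementation of fully qualified parameter filtering,
# mimicking what Rails does before writing parameters to the log.
def filter_params(params, filters, path = [])
  params.each_with_object({}) do |(key, value), out|
    current = path + [key.to_s]
    out[key] =
      if filters.include?(current.join("."))
        "[FILTERED]"  # value suppressed, as in production.log
      elsif value.is_a?(Hash)
        filter_params(value, filters, current)
      elsif value.is_a?(Array)
        value.map { |v| v.is_a?(Hash) ? filter_params(v, filters, current) : v }
      else
        value
      end
  end
end

# The parameter hash from the log excerpt above, abbreviated.
params = {
  "id" => "64590",
  "article" => {
    "subject" => "Beilagen",
    "attachments" => [
      { "filename" => "hello.txt", "data" => "aGVsbG8=", "mime-type" => "text/plain" }
    ]
  }
}

filtered = filter_params(params, ["article.attachments.data"])
# filtered["article"]["attachments"].first["data"] is now "[FILTERED]",
# while "filename" and "mime-type" remain readable.
```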
Thank you very much for this fast workaround; this will help me keep my logs as clean as possible during the import!