OTRS Importer fails with InternalServerError 500

Infos:

  • Used Zammad version: 2.8.0
  • Used Zammad installation source (source, package, …): package (RPM)
  • Operating system: openSUSE 42.3
  • Browser + version: Chrome 71

Expected behavior:

  • The import runs through all tickets in the old OTRS 3.3.7 system (roughly 100k tickets)

Actual behavior:

  • The import fails after ~1,500 tickets with an error (the console exits)

Until then the log looks fine to me; see the example below:

thread#1: Ticket 98651, Article 465361 - Starting import for fingerprint f347f314d9a38ad7c56831ad7dca41829229ae82ca3b84ed8542b1a6b869c779 (file-2)… Queue: [1].
I, [2019-01-03T09:41:03.464027 #2885-47276350211760] INFO -- : thread#4: add Ticket::Article.find_by(id: 463551)
I, [2019-01-03T09:41:03.466138 #2885-47276350215520] INFO -- : thread#1: Ticket 98651, Article 465361 - Finished import for fingerprint f347f314d9a38ad7c56831ad7dca41829229ae82ca3b84ed8542b1a6b869c779 (file-2)… Queue: [1].
I, [2019-01-03T09:41:03.472511 #2885-47276350214820] INFO -- : thread#2: Ticket 98796, Article 467619 - Starting import for fingerprint a3c7e83f85dfa0205d3980ca891ef82df9516999832c7fb0755e367168b9e347 (file-2)… Queue: [2].
I, [2019-01-03T09:41:03.473202 #2885-47276354371000] INFO -- : thread#7: Ticket 98692, Article 463828 - Starting import for fingerprint ae6153a62144b9a7e9c8ec5f2b47e0bd7f30618e63665450835f605469cc92b3 (gwcheck.log)… Queue: [7].
I, [2019-01-03T09:41:03.485548 #2885-47276350215520] INFO -- : thread#1: add Ticket::Article.find_by(id: 465384)

The following error is produced on the console when it exits (only thread #5 shown):

> thread#5: POST: http://helpdesk.xxxx.com/otrs/public.pl?Action=ZammadMigrator
> thread#5: PARAMS: {:Subaction=>"Export", :Object=>"Ticket", :Limit=>20, :Offset=>1200, :Diff=>0, :Action=>"ZammadMigrator", :Key=>"XXXXX"}


> thread#5: ERROR: Server Error: #<Net::HTTPInternalServerError 500 Internal Server Error readbody=true>!
> /opt/zammad/lib/import/otrs/requester.rb:132:in `post': Zammad Migrator returned an error (RuntimeError)
>         from /opt/zammad/lib/import/otrs/requester.rb:92:in `request_json'
>         from /opt/zammad/lib/import/otrs/requester.rb:79:in `request_result'
>         from /opt/zammad/lib/import/otrs/requester.rb:34:in `load'
>         from /opt/zammad/lib/import/otrs.rb:143:in `import_action'
>         from /opt/zammad/lib/import/otrs.rb:137:in `imported?'
>         from /opt/zammad/lib/import/otrs.rb:101:in `block (3 levels) in threaded_import'
>         from /opt/zammad/lib/import/otrs.rb:95:in `loop'
>         from /opt/zammad/lib/import/otrs.rb:95:in `block (2 levels) in threaded_import'
>         from /opt/zammad/vendor/bundle/ruby/2.4.0/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'

In the Apache error log on the OTRS machine I can see the following JSON error. It appears exactly six times, and on the sixth occurrence the importer exits. (The timestamps of the logs above and below do not match, but the same errors appear; I only had the logs from the second run handy, the earlier ones had been purged.)

> [Thu Jan 03 11:47:00 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x89\n \nGr], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
> [Thu Jan 03 11:47:30 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x89\n \nGr], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
> [Thu Jan 03 11:48:15 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x89\n \nGr], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
> [Thu Jan 03 11:53:26 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x8a.  Du], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
> [Thu Jan 03 11:53:56 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x8a.  Du], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
> [Thu Jan 03 11:54:41 2019] [error] malformed or illegal unicode character in string [\xed\xa0\xbd\xed\xb8\x8a.  Du], cannot convert to JSON at /opt/otrs/Kernel/cpan-lib/JSON.pm line 154.\n
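
Decoding those bytes: \xed\xa0\xbd\xed\xb8\x89 looks like a UTF-16 surrogate pair stored as raw UTF-8 bytes ("CESU-8"), i.e. an emoji that was never converted properly, which the Perl JSON module rightly refuses to serialize. A quick standalone Ruby check (my own sketch, not part of any Zammad or OTRS code):

# Decode the offending bytes from the Apache log above.
bytes = "\xED\xA0\xBD\xED\xB8\x89".b

# Each 3-byte unit is a UTF-8-encoded UTF-16 surrogate; combine the pair.
hi = ((bytes.getbyte(0) & 0x0F) << 12) | ((bytes.getbyte(1) & 0x3F) << 6) | (bytes.getbyte(2) & 0x3F)
lo = ((bytes.getbyte(3) & 0x0F) << 12) | ((bytes.getbyte(4) & 0x3F) << 6) | (bytes.getbyte(5) & 0x3F)
codepoint = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)

printf("surrogates: U+%04X U+%04X -> U+%04X (%s)\n", hi, lo, codepoint, [codepoint].pack('U'))
# => surrogates: U+D83D U+DE09 -> U+1F609 (winking face)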

Steps to reproduce the behavior:

  • Set up a fresh Zammad
  • Install the OTRS migrator for 3.3
  • Run a manual import from the console:

zammad run rails c

Setting.set('import_otrs_endpoint', 'http://helpdesk.xxxx.com/otrs/public.pl?Action=ZammadMigrator')
Setting.set('import_otrs_endpoint_key', 'XXXXXXX')
Setting.set('import_mode', true)
Import::OTRS.start

Any idea how to identify which tickets contain the supposedly malformed Unicode characters in OTRS, or how to escape them properly during import?
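
The kind of scan I have in mind, as a sketch: it assumes the mysql2 gem and the stock OTRS schema (otrs.article, column a_body); host and credentials are placeholders.

# Scan the OTRS article table for raw UTF-16 surrogate bytes
# (0xED 0xA0-0xBF ...), the pattern the Perl JSON module chokes on.
require 'mysql2'

client = Mysql2::Client.new(host: 'localhost', username: 'otrs',
                            password: 'secret', database: 'otrs',
                            encoding: 'binary')

surrogate = /\xED[\xA0-\xBF][\x80-\xBF]/n  # CESU-8 encoded surrogate half

client.query('SELECT id, ticket_id, a_body FROM article', stream: true).each do |row|
  body = row['a_body'].to_s.b
  puts "ticket #{row['ticket_id']}, article #{row['id']} contains surrogate bytes" if body =~ surrogate
end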

Thanks Alex

OK, I've identified the ticket causing the Zammad migrator on the OTRS side to crash.


I used the manual URL to fetch the migrator data and tracked it down to the ticket that caused the error:

http://helpdesk.XXXX.com/otrs/public.pl?Action=ZammadMigrator&Key=XXXX&Subaction=Export&Object=Ticket&Limit=1&Offset=1236&Diff=0
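
For anyone else hunting such a ticket, the probing is easy to script. A throwaway sketch using plain net/http; the endpoint, key, and offset range are placeholders matching the URLs above.

# Walk the migrator export one ticket at a time (Limit=1) and report
# every offset whose export does not return HTTP 200.
require 'net/http'
require 'uri'

ENDPOINT = 'http://helpdesk.example.com/otrs/public.pl'
KEY      = 'XXXX'

(1200..1260).each do |offset|
  uri = URI(ENDPOINT)
  uri.query = URI.encode_www_form(
    Action: 'ZammadMigrator', Key: KEY, Subaction: 'Export',
    Object: 'Ticket', Limit: 1, Offset: offset, Diff: 0
  )
  code = Net::HTTP.get_response(uri).code
  puts "offset #{offset}: HTTP #{code}" if code != '200'
end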

Any ideas how to continue?

Thanks Alex

As this seems to be a UTF-8 encoding issue, and there have been at least two commits on the OTRS 3.3 branch related to encoding, I will upgrade the production system to 3.3.20 first and then try again.

Thanks Alex

@anon29869905 Shouldn't this be handled by Zammad without any hassle?

Thanks for revisiting this. Actually, the problem was within the Zammad exporter on the OTRS side: if an article with UTF-16 content sat in the (UTF-8 encoded) database, the exporter failed. The views in OTRS rendered it correctly; it was the JSON (cpan-lib) exporter that broke.

To be honest, I corrected the articles in the database and did not update to the latest OTRS 3.3 branch version, so I cannot say whether the latest 3.3 release handles it.

The next issue I had was importing articles with more than 1.5 million characters: the scheduler fails, not gracefully :slight_smile:, and the import stops. We unfortunately have some MIME data in the articles of our OTRS system… I would love to have output telling me which article caused the failure (right now I go through the production log, search for the thread that fails, and then identify the culprit).
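
Until then, a throwaway scanner saves some of the manual log reading. A sketch that assumes the "thread#N: Ticket <id>, Article <id>" log format quoted above; the dead thread's last entry points at the culprit.

# Report the last article each import thread touched before the crash.
last_seen = {}

File.foreach('/opt/zammad/log/production.log') do |line|
  if line =~ /(thread#\d+): Ticket (\d+), Article (\d+)/
    last_seen[$1] = { ticket: $2, article: $3 }
  end
end

last_seen.each do |thread, pos|
  puts "#{thread}: last worked on ticket #{pos[:ticket]}, article #{pos[:article]}"
end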

Is there a way to resume a failed import? I am in the situation that an import fails at article 57,000 and I have to reset the DB and start over.

Thanks again for checking on this.

Alex

OK, to avoid an import error where the scheduler stops due to the length of an article body, we can easily run the following on the OTRS database to find the offending articles:

SELECT * FROM `otrs`.`article` WHERE LENGTH(a_body)>1500000;

I would advise doing that prior to starting the import, to avoid having to go back and redo it. Maybe something to add to the documentation as a helpful pointer?
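
If deleting those articles is not an option, their bodies could also be capped instead. A destructive sketch (back up the OTRS database first; host and credentials are placeholders, same mysql2 assumption as the scan sketch above):

# Truncate oversized article bodies in place before the import.
require 'mysql2'

client = Mysql2::Client.new(host: 'localhost', username: 'otrs',
                            password: 'secret', database: 'otrs')

client.query(<<~SQL)
  UPDATE article
  SET a_body = CONCAT(LEFT(a_body, 1500000), ' [... truncated for import]')
  WHERE LENGTH(a_body) > 1500000
SQL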

Hi @aelsing - wow! Great work digging into these issues yourself. Unfortunately the current implementation of the OTRS import doesn't support resuming an unfinished import at an offset :confused: We're working on an improved version, which isn't ready yet and will take some time to be.

Thanks, but it wouldn’t have been possible without your meaningful logs :slight_smile:

I plan to write a proper summary of the import and the issues I've encountered when I'm done.

Regarding the system at hand: I've finished the import, but according to the web interface I need to wait for the background process to finish.

While importing my big live service desk system (100k tickets) I encountered the following message (after 14,400 s):

[screenshot: "Background process did not start or has not finished! Please contact support."]

After the importer finishes in ~9 h ("no more work", says the log), the BackgroundJobSearchIndex job kicks in and runs.

Probably I just have to be patient and wait… I don't want to manually take the system out of import mode, or is that necessary given the length of my import?
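
To check progress from the console instead of guessing, something like the following should work; my assumption is that Zammad queues the indexing work as standard Delayed::Job records, so the count should shrink while it runs.

zammad run rails c

Setting.get('import_mode')                     # still true while importing
Delayed::Job.count                             # pending background jobs
Delayed::Job.where.not(last_error: nil).count  # jobs that failed at least once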


Thanks Alex

OK, now the process seems finished, no further high CPU usage. It took 5 days for import and indexing on a 4-core VM server with 8 GB RAM.

However, I am still seeing the "Background process did not start or has not finished! Please contact support" message, as in the post above.

When I manually tell Zammad that system init is done with Setting.set('system_init_done', true), I can log on and see that the status is not healthy. The hostname.com/api/v1/monitoring/health_check?… link shows 44 failing background jobs:

{"healthy":false,"message":"44 failing background jobs;Failed to run background job #1 'BackgroundJobSearchIndex' 10 time(s) with 130 attempt(s).","issues":["44 failing background jobs","Failed to run background job #1 'BackgroundJobSearchIndex' 10 time(s) with 130 attempt(s)."],"actions":[],"token":"XXXX"}

In the production log I can see that it seems to be Elasticsearch trying to process large attachments. These attachments are also written to the log (payload…), so the log grows huge (close to 50 GB), as the SearchIndexer presumably retries multiple times for each file.

Any idea how to

a) identify which tickets are causing the issue (the production log file is not really readable anymore)? Is there a place to look besides production.log?
b) resolve the issue (maybe tell Zammad to ignore these 44 tickets for indexing)?

Thanks

I'm going to work a bit around your question because I can't answer all of it, but this might help.
If you have a big database, you might want to change to filesystem-based storage for attachments (this makes great sense for backups and performance). Log in as admin and go to Admin Settings -> System -> Storage.

You might also want to use the optional settings (especially for limiting the maximum attachment size): https://docs.zammad.org/en/latest/install-elasticsearch.html#optional-settings
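
For example, to cap what gets indexed (setting name as on the linked page; pick a limit that suits your system):

zammad run rails r "Setting.set('es_attachment_max_size_in_mb', 10)"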

You can do the following, but only if you are not working with Zammad yet. (This is important, as the following command would also destroy mails waiting for sendout.)

zammad run rails r 'Delayed::Job.destroy_all'

After that, recreate the search index:
https://docs.zammad.org/en/latest/install-elasticsearch.html#create-elasticsearch-index
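
That is, per the linked page:

zammad run rake searchindex:rebuild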

If the command above fails, its output should let you identify the bad ticket and the reason.
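
The failing jobs can also be inspected before you destroy anything; the error text is stored on the jobs themselves (a sketch, assuming Zammad's stock Delayed::Job backend):

zammad run rails c

# Inspect the failing jobs; the first line of last_error usually names the culprit.
Delayed::Job.where.not(last_error: nil).find_each do |job|
  puts "job ##{job.id}: #{job.attempts} attempt(s)"
  puts job.last_error.to_s.lines.first
end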

Thanks Marcel,

how do I change the storage location to the filesystem before logging into the system? I would like to keep it clean before the import.

Can I make those settings in the Rails console?

Yes, yes you can!

zammad run rails c # start the console
Setting.get('storage_provider') # check the current setting
Setting.set('storage_provider', 'File') # change to filesystem ('DB' would be the database)
Setting.get('storage_provider') # double-check for safety
Store::File.move('DB', 'File') # move existing attachments to the filesystem (if needed)
