Failed to rebuild Elasticsearch index

Infos:

  • Used Zammad version: 6.2
  • Used Zammad installation type: package
  • Operating system: Ubuntu 22.04
  • Elasticsearch: 7.17

Expected behavior:

  • Running zammad run rake searchindex:rebuild should rebuild the search index, but an error occurs during the process.

Actual behavior:

Response:
#<UserAgent::Result:0x00007fa0bd2e6278 @success=false, @body="{"error":{"root_cause":[{"type":"parse_exception","reason":"No processor type exists with name [attachment]","processor_type":"foreach","suppressed":[{"type":"parse_exception","reason":"No processor type exists with name [attachment]","processor_type":"foreach"}]}],"type":"parse_exception","reason":"No processor type exists with name [attachment]","processor_type":"foreach","suppressed":[{"type":"parse_exception","reason":"No processor type exists with name [attachment]","processor_type":"foreach"}]},"status":400}", @data=nil, @code="400", @content_type=nil, @error="Client Error: #<Net::HTTPBadRequest 400 Bad Request readbody=true>!", @header={"x-elastic-product"=>"Elasticsearch", "warning"=>"299 Elasticsearch-7.17.14-774e3bfa4d52e2834e4d9d8d669d77e4e5c1017f "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See Set up minimal security for Elasticsearch | Elasticsearch Guide [7.17] | Elastic to enable security."", "content-type"=>"application/json; charset=UTF-8", "content-length"=>"519"}>

Payload:
{"description":"Extract zammad-attachment information from arrays","processors":[{"foreach":{"field":"article","processor":{"foreach":{"field":"_ingest._value.attachment","processor":{"attachment":{"target_field":"_ingest._value","field":"_ingest._value._content","ignore_failure":true,"ignore_missing":true}},"ignore_failure":true,"ignore_missing":true}},"ignore_failure":true,"ignore_missing":true}},{"foreach":{"field":"attachment","processor":{"attachment":{"target_field":"_ingest._value","field":"_ingest._value._content","ignore_failure":true,"ignore_missing":true}},"ignore_failure":true,"ignore_missing":true}}]}

Payload size: 0M
/opt/zammad/lib/search_index_backend.rb:744:in `make_request_and_validate'
/opt/zammad/lib/search_index_backend.rb:88:in `block (2 levels) in processors'
/opt/zammad/lib/search_index_backend.rb:72:in `each'
/opt/zammad/lib/search_index_backend.rb:72:in `block in processors'
/opt/zammad/lib/search_index_backend.rb:69:in `each'
/opt/zammad/lib/search_index_backend.rb:69:in `processors'
/opt/zammad/lib/search_index_backend.rb:934:in `create_pipeline'
/opt/zammad/lib/tasks/zammad/search_index_es.rake:25:in `block (3 levels) in <main>'
/opt/zammad/lib/tasks/zammad/search_index_es.rake:19:in `block (3 levels) in <main>'
/opt/zammad/lib/tasks/zammad/search_index_es.rake:59:in `block (3 levels) in <main>'
/opt/zammad/vendor/bundle/ruby/3.1.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/opt/zammad/bin/bundle:121:in `load'
/opt/zammad/bin/bundle:121:in `<main>'
Tasks: TOP => zammad:searchindex:rebuild
(See full trace by running task with --trace)

Could you please tell me what the cause of this problem might be?

And I can't upload the production.log because I get an error message: "Sorry, the file you are trying to download is not authorized (allowed extensions: jpg, jpeg, gif, png, heic, heif, webp, avif, webm, mp4)."

Hi @ykz. The error "No processor type exists with name [attachment]" means the ingest-attachment plugin is not installed in your Elasticsearch. Please read the documentation on setting up Elasticsearch for Zammad.
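For reference, the parse_exception above ("No processor type exists with name [attachment]") typically means the ingest-attachment plugin is missing from Elasticsearch. A hedged sketch of the usual fix, assuming the default Debian/Ubuntu package layout (the plugin path may differ on your system):

```shell
#!/bin/sh
# Sketch: install the ingest-attachment plugin and restart Elasticsearch.
# The path below is the Debian/Ubuntu package default; adjust if yours differs.
ES_PLUGIN=/usr/share/elasticsearch/bin/elasticsearch-plugin

if [ -x "$ES_PLUGIN" ]; then
  sudo "$ES_PLUGIN" install ingest-attachment
  sudo systemctl restart elasticsearch
else
  echo "elasticsearch-plugin not found at $ES_PLUGIN" >&2
fi
```

After the restart, rerunning zammad run rake searchindex:rebuild should get past the pipeline-creation step.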

I am having a similar issue rebuilding the search index. I am also running 6.2 on Ubuntu 22.04, with Elasticsearch 7.17.18, installed from source. It runs fine with version 6.0.

In 6.2 when I run:
zammad run rake zammad:searchindex:rebuild

The output is:
** Execute zammad:searchindex:create
Creating indexes… rake aborted!
NoMethodError: undefined method `[]' for nil:NilClass

settings = Setting.get('es_model_settings')[model.name] || {}
                                           ^^^^^^^^^^^^

/opt/zammad/lib/search_index_backend.rb:874:in `model_settings'
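For illustration, the nil lookup the trace points at can be reproduced in isolation. This is a minimal sketch, not Zammad's actual code: Setting.get is stood in by a plain hash, and 'Ticket' is a hypothetical model name.

```ruby
# Sketch of the failure mode: if the 'es_model_settings' setting row is
# missing (e.g. a migration never ran), the lookup returns nil, and the
# subsequent [] call on nil raises NoMethodError, aborting the rake task.
settings_store = {}                         # stand-in for Zammad's Setting store
value = settings_store['es_model_settings'] # => nil, like Setting.get here

begin
  value['Ticket'] || {}                     # mirrors ...[model.name] || {}
rescue NoMethodError => e
  puts "raised: #{e.class}"                 # prints "raised: NoMethodError"
end
```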

Hi @opdevops. It seems that there is at least one missing migration in your Zammad instance. Maybe zammad run rake db:migrate helps.


That worked! I have added that step to our process.