Error 500 in Reporting (Elasticsearch running)


After updating to 3.2 we are having problems with the reporting. The logfile shows the following:

Unable to process GET request to elasticsearch URL 'http://localhost:9200/zammad_production/Ticket/_search'. Check the response and payload for detailed information:

#<UserAgent::Result:0x00005650e20a8320 @success=false, @body="{\"error\":{\"root_cause\":[{\"type\":\"parsing_exception\",\"reason\":\"[term] query does not support array of values\",\"line\":1,\"col\":47}],\"type\":\"parsing_exception\",\"reason\":\"[term] query does not support array of values\",\"line\":1,\"col\":47},\"status\":400}", @data=nil, @code="400", @content_type=nil, @error="Client Error: #<Net::HTTPBadRequest 400 Bad Request readbody=true>!">
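For context, the `parsing_exception` in that response points at the root cause: an Elasticsearch `term` query accepts only a single value, so a condition carrying several values has to be expressed as a `terms` query instead. Illustrative fragments only (the field name `state_id` is an assumption, not taken from the log):

```json
{"query": {"term": {"state_id": [1, 4]}}}
```

is rejected with exactly this `[term] query does not support array of values` error, while the plural form is valid:

```json
{"query": {"terms": {"state_id": [1, 4]}}}
```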


  • Used Zammad version: 3.2
  • Used Zammad installation source: (package)
  • Operating system: Ubuntu
  • Browser + version: Chrome

Expected behavior:

  • show statistics

Actual behavior:

  • error 500

Steps to reproduce the behavior:

  • click on Report

Please provide the following:

  • any custom object you might have added
  • please provide your Elasticsearch version
  • are you using the reporting profile -all- or a custom one?
    • if it’s a custom one, please provide its configuration

Edit: Any particular reason you’re using the development version (3.2)?

OK, I’ve investigated.
I am not sure what you mean by custom objects.
But our Elasticsearch version is 5.6.16.

We are using the -all- profile, but I have now created a custom one and found out that the error occurs whenever more than one entry is selected, e.g. open + closed tickets.
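That observation matches the `term`-vs-`terms` distinction: a single selected state works, several selected states break. A minimal sketch of the kind of guard the query builder needs (this is illustrative Ruby, not Zammad’s actual code, and `es_condition`/`state_id` are made-up names):

```ruby
# Sketch only: build an Elasticsearch filter clause from a reporting
# profile condition. A scalar value maps to a `term` clause; an array
# of values (e.g. open + closed) must map to `terms`, because `term`
# rejects arrays with "[term] query does not support array of values".
def es_condition(field, value)
  if value.is_a?(Array)
    { terms: { field => value } }
  else
    { term: { field => value } }
  end
end
```

For example, `es_condition('state_id', [1, 4])` yields a `terms` clause, while `es_condition('state_id', 1)` yields a plain `term` clause.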

The following is from elasticsearch.log, in case it helps:
[2019-11-06T15:50:08,935][DEBUG][o.e.a.s.TransportSearchAction] [d-7HWA8] [zammad_production][4], node[d-7HWA8sT8qYP9qlAqwoLA], [P], s[STARTED], a[id=9ULxw6tRRhajYUwPrW1y9g]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[zammad_production], indicesOptions=IndicesOptions[id=38, ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_alisases_to_multiple_indices=true, forbid_closed_indices=true], types=[User], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=5, batchedReduceSize=512, preFilterShardSize=128, source={
  "from" : 0,
  "size" : 100,
  "query" : {
    "bool" : {
      "must" : [
        {
          "query_string" : {
            "query" : "iva*",
            "fields" : [ ],
            "use_dis_max" : true,
            "tie_breaker" : 0.0,
            "default_operator" : "and",
            "auto_generate_phrase_queries" : false,
            "max_determinized_states" : 10000,
            "enable_position_increments" : true,
            "fuzziness" : "AUTO",
            "fuzzy_prefix_length" : 0,
            "fuzzy_max_expansions" : 50,
            "phrase_slop" : 0,
            "analyze_wildcard" : true,
            "escape" : false,
            "split_on_whitespace" : true,
            "boost" : 1.0
          }
        }
      ],
      "disable_coord" : false,
      "adjust_pure_negative" : true,
      "boost" : 1.0
    }
  },
  "sort" : [
    { "active.keyword" : { "order" : "desc" } },
    { "updated_at" : { "order" : "desc" } },
    { "_score" : { "order" : "desc" } }
  ]
}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [d-7HWA8][][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.index.query.QueryShardException: No mapping found for [active.keyword] in order to sort on
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at ~[elasticsearch-5.6.16.jar:5.6.16]
        at$6.messageReceived( ~[elasticsearch-5.6.16.jar:5.6.16]
        at$6.messageReceived( ~[elasticsearch-5.6.16.jar:5.6.16]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived( ~[elasticsearch-5.6.16.jar:5.6.16]
        at org.elasticsearch.transport.TransportService$7.doRun( [elasticsearch-5.6.16.jar:5.6.16]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun( [elasticsearch-5.6.16.jar:5.6.16]
        at [elasticsearch-5.6.16.jar:5.6.16]
        at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_222]
        at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_222]
        at [?:1.8.0_222]
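A side note on that trace: the failure it reports, `No mapping found for [active.keyword]`, is a separate issue from the `term` error and means the `User` type has no `keyword` sub-field mapped for `active`. On Elasticsearch 5.x a sort can be made tolerant of a missing mapping with `unmapped_type`. Illustrative fragment only, not a suggested Zammad change:

```json
{"sort": [{"active.keyword": {"order": "desc", "unmapped_type": "keyword"}}]}
```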


By now we could verify this as a bug.
This only affects the develop version. I suggest using stable at all times, unless you have a very good reason to live on the dangerous road. :slight_smile:

Issue for subscribing can be found here: Array values (e.g. several states) nuke reporting profiles · Issue #2809 · zammad/zammad · GitHub

This is also very strange, as we just updated with apt-get update…

What does cat /opt/zammad/VERSION return?
While you’re at it, the following output is interesting too:
cat /etc/apt/sources.list.d/zammad.list

cat /opt/zammad/VERSION

deb 16.04 main

Here’s the reason why you get develop.
Who knows what happened here. The problem is that you shouldn’t simply downgrade, to ensure you’re not nuking anything. So if you’re already in production, you’ll need to wait for the Zammad 3.2 release.

When updating from 3.2 develop to 3.2 stable, change the content of zammad.list to:
deb 16.04 main
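If you want to script that switch safely, something like the following works; this is a sketch only, with placeholder repo lines (`develop-repo-url` / `stable-repo-url`) and a temporary file standing in for the real `/etc/apt/sources.list.d/zammad.list`. Use the actual repo URLs from the Zammad install docs.

```shell
# Sketch only: swap the APT repo line, keeping a backup first.
# On a real host, LIST would be /etc/apt/sources.list.d/zammad.list.
LIST="$(mktemp)"
printf 'deb develop-repo-url 16.04 main\n' > "$LIST"   # what the box had
cp "$LIST" "$LIST.bak"                                 # keep a backup copy
printf 'deb stable-repo-url 16.04 main\n' > "$LIST"    # the stable line
# afterwards, on a real host: apt-get update && apt-get install zammad
```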

OK, sorry for the late reply, and thank you.
I’ve changed it now, as 3.2 is released.


This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.