Elasticsearch index not updated after updating a Knowledge Base answer

Infos:

  • Used Zammad version: 3.6.0-1605532620.a41de24e.bionic
  • Used Zammad installation source: .deb
  • Operating system: Ubuntu Server 16.04
  • Browser + version: Firefox 83.0
  • Elasticsearch version: 7.10.0 + ingest module

Expected behavior:

  • After updating a Knowledge Base answer, the new strings or keywords should be searchable via Elasticsearch

Actual behavior:

I updated an old answer with multiple keywords (scanner, scan, handscanner, lagerscanner), but I can't find the updated answer with these keywords when I search via the search field in the Knowledge Base.

After noticing this curious behaviour, I tried to rebuild the search index with:

sudo zammad run rake searchindex:rebuild

but with no success: the rebuild worked, but no answer is listed when I search for the newly written keywords.
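One way to check whether the rebuilt index actually contains the new keywords is to query Elasticsearch directly, bypassing Zammad. A minimal sketch, assuming a default local install on port 9200; the index name is the answer-translation index from the shard listings further down, and `scanner` is just one of the keywords mentioned above:

```shell
# Build the search request for one of the new keywords against the
# Knowledge Base answer index; adjust host/port/auth to your setup.
KEYWORD="scanner"
INDEX="zammad_production_knowledge_base_answer_translation"
URL="localhost:9200/${INDEX}/_search?q=${KEYWORD}&pretty"
echo "$URL"        # the request that would be sent
# curl -s "$URL"   # run this against the live cluster
```

If the answer does not show up in the hits here either, the document never reached the index; if it does show up, the problem is on the Zammad search side.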

Any ideas what is wrong with it?

Steps to reproduce the behavior:

  • Overall, search works, but not for this specific answer

Please let me know if you need any more detailed information to get this working properly.

I did some research (maybe this helps to pinpoint my problem):

I took a look at the health status with

**http://localhost:9200/_cluster/health?pretty**

The response is:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 9,
  "active_shards" : 9,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 9,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

I'm not really an expert (far from it), but this looks like a problem to me, and after some googling I figured I should fix it (though I have no idea whether this behaviour is the root cause of my problem) …
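Rather than googling, you can ask Elasticsearch itself why a shard is unassigned. A sketch, assuming a default local install; the allocation-explain API prints the allocation decision for the first unassigned shard it finds:

```shell
# Ask the cluster why a shard is unassigned (explains the first
# unassigned shard it finds); adjust host/port/auth to your setup.
curl -s -XGET 'localhost:9200/_cluster/allocation/explain?pretty'
```

On a single-node cluster, the explanation will typically say that a replica cannot be allocated to the same node that already holds the primary — which is exactly why the cluster stays yellow.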

So I found some articles on Stack Overflow and tried a few things:

First I tried to delete these 'shards' (remember, I had no idea what these are):

List the shards with their status:

curl -XGET 'localhost:9200/_cat/shards?h=index,shards,state,prirep,unassigned.reason' | grep UNASSIGNED

zammad_production_ticket                              UNASSIGNED r CLUSTER_RECOVERED
zammad_production_knowledge_base_translation          UNASSIGNED r CLUSTER_RECOVERED
zammad_production_chat_session                        UNASSIGNED r CLUSTER_RECOVERED
zammad_production_knowledge_base_answer_translation   UNASSIGNED r CLUSTER_RECOVERED
zammad_production_stats_store                         UNASSIGNED r CLUSTER_RECOVERED
zammad_production_cti_log                             UNASSIGNED r CLUSTER_RECOVERED
zammad_production_user                                UNASSIGNED r CLUSTER_RECOVERED
zammad_production_knowledge_base_category_translation UNASSIGNED r CLUSTER_RECOVERED
zammad_production_organization                        UNASSIGNED r CLUSTER_RECOVERED

and deleted them Leroy-Jenkins style with:

curl -s -XGET 'http://localhost:9200/_cat/shards' | grep UNASSIGNED | awk '{print $1}' | xargs -i curl -XDELETE "http://localhost:9200/{}"
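The `awk` step in that pipeline only pulls the first column (the index name) out of each `_cat/shards` line. A small offline sketch of just that extraction, using a sample line instead of live cluster output:

```shell
# Extract the index name (first column) from a sample _cat/shards line,
# exactly as the xargs pipeline above does with live output.
sample='zammad_production_ticket 0 r UNASSIGNED CLUSTER_RECOVERED'
echo "$sample" | awk '{print $1}'
# prints: zammad_production_ticket
```

Note that `DELETE` on an index name removes the entire index, not just the unassigned replica copy — so this pipeline deletes every index that has any unassigned shard.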

The health status looks good now:

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

But sadly nothing worked any more. I executed:

sudo zammad run rake searchindex:rebuild

and thank God it's working again, but after that I still have the yellow health state :frowning:

So I researched again and found a script from a Stack Overflow user that is supposed to relocate unassigned shards to nodes. It read well enough for me to try it out :slight_smile:

I modified the script to get it to work and executed it…

Script:

```
#!/usr/bin/env bash
#
# The script performs a forced relocation of all unassigned shards
# of all indices to a specified node (NODE variable).

ES_HOST="localhost"
NODE="LWkomN95Q4Gwux2jwkk_lA"

curl -s "${ES_HOST}:9200/_cat/shards" > shards
grep "UNASSIGNED" shards > unassigned_shards

while read -r LINE; do
  IFS=" " read -r -a ARRAY <<< "$LINE"
  INDEX=${ARRAY[0]}
  SHARD=${ARRAY[1]}

  echo "Relocating:"
  echo "Index: ${INDEX}"
  echo "Shard: ${SHARD}"
  echo "To node: ${NODE}"

  curl -H 'Content-Type: application/json' -s -XPOST "${ES_HOST}:9200/_cluster/reroute" -d "{
    \"commands\": [
      {
        \"allocate_empty_primary\": {
          \"index\": \"${INDEX}\",
          \"shard\": ${SHARD},
          \"node\": \"${NODE}\",
          \"accept_data_loss\": true
        }
      }
    ]
  }"; echo
  echo "------------------------------"
done < unassigned_shards

rm shards unassigned_shards

exit 0
```
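The `IFS`/`read -a` combination in the script is what splits each `_cat/shards` line into fields. A standalone sketch of just that step (bash, since it uses arrays and a here-string), on a sample line:

```shell
# Split one _cat/shards line into an array, the way the script above does,
# so that ARRAY[0] holds the index name and ARRAY[1] the shard number.
LINE='zammad_production_ticket 0 p UNASSIGNED CLUSTER_RECOVERED'
IFS=" " read -r -a ARRAY <<< "$LINE"
echo "index=${ARRAY[0]} shard=${ARRAY[1]}"
# prints: index=zammad_production_ticket shard=0
```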
The big brains here will recognize the output:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1800  100  1800    0     0  75000      0 --:--:-- --:--:-- --:--:-- 78260

Relocating:
Index: zammad_production_knowledge_base_category_translation
Shard: 0
To node: LWkomN95Q4Gwux2jwkk_lA
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_knowledge_base_category_translation][0] is already assigned"}],"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_knowledge_base_category_translation][0] is already assigned"},"status":400}

Relocating:
Index: zammad_production_ticket
Shard: 0
To node: LWkomN95Q4Gwux2jwkk_lA
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_ticket][0] is already assigned"}],"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_ticket][0] is already assigned"},"status":400}

Relocating:
Index: zammad_production_knowledge_base_answer_translation
Shard: 0
To node: LWkomN95Q4Gwux2jwkk_lA
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_knowledge_base_answer_translation][0] is already assigned"}],"type":"illegal_argument_exception","reason":"[allocate_empty_primary] primary [zammad_production_knowledge_base_answer_translation][0] is already assigned"},"status":400}

and so on…

Maybe I'm totally wrong… does anyone know the right direction?

Going further, I tried deleting the unallocated shards one by one, and I figured out that if I drop one specific (unallocated) shard, the Elasticsearch query breaks and only works again after I rebuild the index:

After I ran

curl -XDELETE 'localhost:9200/zammad_production_knowledge_base_answer_translation/'

Zammad's Elasticsearch integration was broken…

See this Pastebin of my step-by-step deleting:

Note: we only use the Knowledge Base (translated) and no other Zammad functions like tickets etc.

I found some more to try:

spx@wissensdatenbank:~$ curl -XGET -u elastic:test "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,store,ip,unassigned.reason"
index                                                 shard prirep state      store   ip        unassigned.reason
zammad_production_knowledge_base_translation          0     p      STARTED    6kb     127.0.0.1
zammad_production_knowledge_base_translation          0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_stats_store                         0     p      STARTED    208b    127.0.0.1
zammad_production_stats_store                         0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_knowledge_base_category_translation 0     p      STARTED    11.3kb  127.0.0.1
zammad_production_knowledge_base_category_translation 0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_organization                        0     p      STARTED    14.5kb  127.0.0.1
zammad_production_organization                        0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_cti_log                             0     p      STARTED    208b    127.0.0.1
zammad_production_cti_log                             0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_knowledge_base_answer_translation   0     p      STARTED    578.1kb 127.0.0.1
zammad_production_knowledge_base_answer_translation   0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_ticket                              0     p      STARTED    63.5kb  127.0.0.1
zammad_production_ticket                              0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_user                                0     p      STARTED    55kb    127.0.0.1
zammad_production_user                                0     r      UNASSIGNED                   INDEX_CREATED
zammad_production_chat_session                        0     p      STARTED    208b    127.0.0.1
zammad_production_chat_session                        0     r      UNASSIGNED                   INDEX_CREATED

Status yellow is a normal Elasticsearch state in single-node clusters.
You can't get it to green unless you have at least two nodes in your cluster. If it's yellow, everything's fine.
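If the yellow state bothers you anyway, the usual cosmetic fix on a single-node cluster is to tell Elasticsearch not to expect replica copies at all, via the index settings API. A sketch, assuming a default local install (this only changes the replica count, no data is touched):

```shell
# Set the replica count of all indices to 0 so a single-node
# cluster can report green; adjust host/port/auth to your setup.
curl -s -H 'Content-Type: application/json' -XPUT \
  'localhost:9200/_all/_settings' \
  -d '{"index": {"number_of_replicas": 0}}'
```

This is purely optional; as said above, yellow on a single node is harmless.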

Updating the search index in Zammad is a delayed job.
This means that updating the search index (for whatever part of Zammad) may take a couple of seconds to minutes, depending heavily on your system's load.

An indicator for issues (besides Zammad's monitoring endpoint) may be the delayed job counter:
zammad run rails r "p Delayed::Job.count"

If that count is fairly high, you may have performance issues.
If so, this page might help you:

Thanks for your detailed explanation - this helped me a lot and cleared up my confusion.


This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.