Elasticsearch killed with out-of-memory error

  • Used Zammad version: 5.0.3-1642086384.7ef5032e.focal
  • Used Zammad installation type: package
  • Operating system: Ubuntu 20.04.3 LTS
  • Browser + version:

Actual behavior:

Hello Zammad specialists,

Last week Elasticsearch crashed several times with the message “killed because out of memory”.

Elasticsearch occupies all the memory it can get.

We currently have 4 agents, 390 users, and 447 tickets in the system.

We then upgraded the machine to 8 GB of RAM, but this morning the monitoring again warned that Elasticsearch had filled the memory.

Restarting Elasticsearch doesn’t help, and the service occupies up to 4.4 GB of memory.
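
For reference, the configured heap versus actual usage can be compared with the cat nodes API (this assumes Elasticsearch is listening on the default localhost:9200):

```sh
# Show configured heap vs. current heap and RAM usage per node
curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.current,heap.max,ram.current,ram.max'
```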

I tried using the parameter indices.fielddata.cache.size to finally limit the memory to 40%, but it made absolutely no difference.
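
For completeness, this is roughly how that setting is applied (path per the standard Ubuntu package layout); note that it only caps the fielddata cache *inside* the JVM heap, so it cannot reduce the total heap the process claims:

```sh
# indices.fielddata.cache.size belongs in elasticsearch.yml; it limits the
# fielddata cache within the heap, not the heap size itself.
echo 'indices.fielddata.cache.size: 40%' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch
```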

Elasticsearch was restarted and a full search index rebuild was also run.
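
(For anyone reproducing this on a package install, the rebuild is the usual Zammad rake task; the task name may differ between Zammad versions, so verify against the docs for yours:)

```sh
# Rebuild Zammad's Elasticsearch index (task name as documented for Zammad 5)
zammad run rake searchindex:rebuild
```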

I think Elasticsearch is welcome to use memory, but it should leave 15 to 20 percent of it free.

Do you have a tip?

The keyword you’re looking for here is heap size, which controls the memory consumption of Elasticsearch.

The Elasticsearch documentation has you covered; see the heap size settings page there.
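
As a minimal sketch, assuming the Ubuntu package layout and Elasticsearch 7.7 or newer (which reads drop-in files from jvm.options.d), the heap can be pinned like this:

```sh
# Pin min and max heap to the same value, and keep it at or below ~50% of
# the machine's RAM so the OS page cache has room (4g on an 8 GB host).
sudo tee /etc/elasticsearch/jvm.options.d/heap.options <<'EOF'
-Xms4g
-Xmx4g
EOF
sudo systemctl restart elasticsearch
```

With the heap pinned, the process stops growing toward all available RAM instead of sizing itself automatically.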

Thanks, MrGeneration.
I will try that and see if it helps.
