Global Search after upgrade

  • Used Zammad version: 2.6
  • Used Zammad installation source: package
  • Operating system: CentOS
  • Browser + version: Firefox, Chrome

Expected behavior:

Search works under Manage → Users
Search works in the global search field (top left of the homepage)

Actual behavior:

Search is not working, but if I reboot Zammad it works for a few hours and then stops again.

This issue started after upgrading to the latest version.

Please provide your Elasticsearch version and the production log.

The information you have shared so far is not enough for us to help you. :frowning:
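
For reference, the Elasticsearch version can be checked with a quick request against the node (assuming the default bind address of localhost:9200):

curl -s localhost:9200

The version is the "number" field under "version" in the JSON reply.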

Could you please tell me how I can provide this additional info?

I think the first place to look would be the Elasticsearch logs; I'm not sure where they would be on your system.

Maybe these two commands will work for you:

journalctl -fu elasticsearch
systemctl status -l elasticsearch

Check the output while search is working and compare it to the output after it stops working.

Also search /var/log/messages (and/or the dmesg output) for the strings OOM and OOPS.
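
A minimal sketch of that search (assuming a stock CentOS 7 layout with syslog in /var/log/messages):

sudo grep -iE 'oom|oops' /var/log/messages
dmesg | grep -iE 'oom|oops'

If the kernel's OOM killer terminated Elasticsearch, you would typically see lines like "Out of memory: Kill process ... (java)".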

journalctl -fu elasticsearch
-- Logs begin at Sun 2018-09-02 21:31:30 EDT. --
Sep 02 17:01:50 mydomain.com systemd[1]: Starting Elasticsearch…
Sep 02 17:01:50 mydomain.com systemd[1]: Started Elasticsearch.
Sep 02 17:01:54 mydomain.com elasticsearch[999]: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sep 04 08:47:12 mydomain.com systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Sep 04 08:47:12 mydomain.com systemd[1]: Unit elasticsearch.service entered failed state.
Sep 04 08:47:12 mydomain.com systemd[1]: elasticsearch.service failed.

systemctl status -l elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: failed (Result: signal) since Tue 2018-09-04 08:47:12 EDT; 1 day 5h ago
Docs: http://www.elastic.co
Process: 999 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefault.path.data=${DATA_DIR} -Edefault.path.conf=${CONF_DIR} (code=killed, signal=KILL)
Process: 980 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 999 (code=killed, signal=KILL)

Sep 02 17:01:50 mydomain.com systemd[1]: Starting Elasticsearch…
Sep 02 17:01:50 mydomain.com systemd[1]: Started Elasticsearch.
Sep 02 17:01:54 mydomain.com elasticsearch[999]: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sep 04 08:47:12 mydomain.com systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Sep 04 08:47:12 mydomain.com systemd[1]: Unit elasticsearch.service entered failed state.
Sep 04 08:47:12 mydomain.com systemd[1]: elasticsearch.service failed.

Elasticsearch is being killed; I guess you're running out of memory. Did you check for OOM messages in syslog?
Maybe journalctl -xe will show them; otherwise I think /var/log/messages will.
How much memory do you have? How much have you given Elasticsearch and Postgres?
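
A quick way to gather those numbers (standard tools, nothing Zammad-specific):

free -h
sudo journalctl -k | grep -i 'out of memory'

journalctl -k filters to kernel messages, which is where OOM-killer entries land on systemd machines.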

elasticsearch.log in /var/log/elasticsearch

[2018-09-06T04:06:49,052][INFO ][o.e.n.Node ] [] initializing …
[2018-09-06T04:06:50,664][INFO ][o.e.e.NodeEnvironment ] [jZ5pnOC] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [18.7gb], net total_space [21.9gb], spins? [unknown], types [rootfs]
[2018-09-06T04:06:50,664][INFO ][o.e.e.NodeEnvironment ] [jZ5pnOC] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-09-06T04:06:51,367][INFO ][o.e.n.Node ] node name [jZ5pnOC] derived from node ID [jZ5pnOCOSny2gWq1t7ZiOQ]; set [node.name] to override
[2018-09-06T04:06:51,401][INFO ][o.e.n.Node ] version[5.6.0], pid[1006], build[781a835/2017-09-07T03:09:58.087Z], OS[Linux/3.10.0-514.16.1.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_144/25.144-b01]
[2018-09-06T04:06:51,401][INFO ][o.e.n.Node ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
[2018-09-06T04:07:43,012][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [aggs-matrix-stats]
[2018-09-06T04:07:43,016][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [ingest-common]
[2018-09-06T04:07:43,016][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [lang-expression]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [lang-groovy]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [lang-mustache]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [lang-painless]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [parent-join]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [percolator]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [reindex]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [transport-netty3]
[2018-09-06T04:07:43,017][INFO ][o.e.p.PluginsService ] [jZ5pnOC] loaded module [transport-netty4]
[2018-09-06T04:07:43,018][INFO ][o.e.p.PluginsService ] [jZ5pnOC] no plugins loaded
[2018-09-06T04:07:51,641][INFO ][o.e.d.DiscoveryModule ] [jZ5pnOC] using discovery type [zen]
[2018-09-06T04:07:58,877][INFO ][o.e.n.Node ] initialized
[2018-09-06T04:07:58,889][INFO ][o.e.n.Node ] [jZ5pnOC] starting …
[2018-09-06T04:08:01,204][INFO ][o.e.t.TransportService ] [jZ5pnOC] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2018-09-06T04:08:04,899][INFO ][o.e.c.s.ClusterService ] [jZ5pnOC] new_master {jZ5pnOC}{jZ5pnOCOSny2gWq1t7ZiOQ}{zJslEyAVRaKrNkKLuuJ5Qw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-09-06T04:08:05,099][INFO ][o.e.h.n.Netty4HttpServerTransport] [jZ5pnOC] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2018-09-06T04:08:05,099][INFO ][o.e.n.Node ] [jZ5pnOC] started
[2018-09-06T04:08:06,471][INFO ][o.e.g.GatewayService ] [jZ5pnOC] recovered [1] indices into cluster_state
[2018-09-06T04:08:08,000][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][8][8] duration [1s], collections [1]/[1.9s], total [1s]/[13.2s], memory [81.2mb]->[34.7mb]/[1.9gb], all_pools {[young] [53.6mb]->[2.8mb]/[66.5mb]}{[survivor] [8.3mb]->[8.3mb]/[8.3mb]}{[old] [19.3mb]->[23.7mb]/[1.9gb]}
[2018-09-06T04:08:08,000][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][8] overhead, spent [1s] collecting in the last [1.9s]
[2018-09-06T04:08:10,801][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][10][9] duration [804ms], collections [1]/[1.7s], total [804ms]/[14s], memory [48.4mb]->[33.1mb]/[1.9gb], all_pools {[young] [16.4mb]->[2.2mb]/[66.5mb]}{[survivor] [8.3mb]->[4.6mb]/[8.3mb]}{[old] [23.7mb]->[26.4mb]/[1.9gb]}
[2018-09-06T04:08:10,801][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][10] overhead, spent [804ms] collecting in the last [1.7s]
[2018-09-06T04:08:12,109][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][11][10] duration [899ms], collections [1]/[1.3s], total [899ms]/[14.9s], memory [33.1mb]->[35.5mb]/[1.9gb], all_pools {[young] [2.2mb]->[94.7kb]/[66.5mb]}{[survivor] [4.6mb]->[5.9mb]/[8.3mb]}{[old] [26.4mb]->[29.6mb]/[1.9gb]}
[2018-09-06T04:08:12,110][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][11] overhead, spent [899ms] collecting in the last [1.3s]
[2018-09-06T04:08:13,123][INFO ][o.e.c.r.a.AllocationService] [jZ5pnOC] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[zammad_production][2]] …]).
[2018-09-06T04:09:18,345][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][77][11] duration [716ms], collections [1]/[1.1s], total [716ms]/[15.6s], memory [81.2mb]->[56.9mb]/[1.9gb], all_pools {[young] [45.6mb]->[436.6kb]/[66.5mb]}{[survivor] [5.9mb]->[5.4mb]/[8.3mb]}{[old] [29.6mb]->[51mb]/[1.9gb]}
[2018-09-06T04:09:18,348][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][77] overhead, spent [716ms] collecting in the last [1.1s]
[2018-09-06T04:09:20,371][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][79] overhead, spent [412ms] collecting in the last [1s]
[2018-09-06T04:10:45,187][INFO ][o.e.c.m.MetaDataMappingService] [jZ5pnOC] [zammad_production/x4wSm8gwTA-h0PO4MubXvw] update_mapping [Ticket]
[2018-09-06T04:10:46,672][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][165] overhead, spent [577ms] collecting in the last [1.1s]
[2018-09-06T04:10:50,065][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][168] overhead, spent [659ms] collecting in the last [1.3s]
[2018-09-06T04:10:54,452][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][171] overhead, spent [419ms] collecting in the last [1.1s]
[2018-09-06T04:11:03,425][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][174][24] duration [6.7s], collections [1]/[6.9s], total [6.7s]/[25.2s], memory [174.6mb]->[128.7mb]/[1.9gb], all_pools {[young] [51mb]->[971.3kb]/[66.5mb]}{[survivor] [8.3mb]->[6.6mb]/[8.3mb]}{[old] [115.2mb]->[121.1mb]/[1.9gb]}
[2018-09-06T04:11:03,437][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][174] overhead, spent [6.7s] collecting in the last [6.9s]
[2018-09-06T04:13:34,284][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][318] overhead, spent [492ms] collecting in the last [1s]
[2018-09-06T04:13:43,866][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][327] overhead, spent [641ms] collecting in the last [1.5s]
[2018-09-06T04:13:44,871][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][328] overhead, spent [417ms] collecting in the last [1s]
[2018-09-06T04:13:45,907][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][329] overhead, spent [302ms] collecting in the last [1s]
[2018-09-06T04:13:46,908][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][330] overhead, spent [496ms] collecting in the last [1s]
[2018-09-06T04:13:47,911][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][young][331][31] duration [740ms], collections [1]/[1s], total [740ms]/[28.3s], memory [273.2mb]->[256.6mb]/[1.9gb], all_pools {[young] [64mb]->[30.9mb]/[66.5mb]}{[survivor] [8.3mb]->[8.3mb]/[8.3mb]}{[old] [200.9mb]->[217.4mb]/[1.9gb]}
[2018-09-06T04:13:47,911][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][331] overhead, spent [740ms] collecting in the last [1s]
[2018-09-06T04:14:49,015][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][392] overhead, spent [306ms] collecting in the last [1s]
[2018-09-06T04:14:51,139][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][394] overhead, spent [858ms] collecting in the last [1.1s]
[2018-09-06T04:16:32,496][INFO ][o.e.c.m.MetaDataMappingService] [jZ5pnOC] [zammad_production/x4wSm8gwTA-h0PO4MubXvw] update_mapping [Ticket]
[2018-09-06T04:16:38,662][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][496] overhead, spent [364ms] collecting in the last [1s]
[2018-09-06T04:17:12,068][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][529] overhead, spent [354ms] collecting in the last [1.3s]
[2018-09-06T04:21:14,215][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][771] overhead, spent [463ms] collecting in the last [1s]
[2018-09-06T04:22:59,881][INFO ][o.e.c.m.MetaDataMappingService] [jZ5pnOC] [zammad_production/x4wSm8gwTA-h0PO4MubXvw] update_mapping [Organization]
[2018-09-06T04:23:01,342][INFO ][o.e.c.m.MetaDataMappingService] [jZ5pnOC] [zammad_production/x4wSm8gwTA-h0PO4MubXvw] update_mapping [Ticket]
[2018-09-06T04:23:02,363][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][879] overhead, spent [586ms] collecting in the last [1s]
[2018-09-06T04:23:03,369][WARN ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][880] overhead, spent [590ms] collecting in the last [1s]
[2018-09-06T04:42:50,937][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][2066] overhead, spent [321ms] collecting in the last [1s]
[2018-09-06T04:48:07,252][INFO ][o.e.m.j.JvmGcMonitorService] [jZ5pnOC] [gc][2382] overhead, spent [298ms] collecting in the last [1s]


Assigned VPS RAM is 2 GB.

Looking at your logs, Elasticsearch alone is configured to use 2 GB (-Xms2g, -Xmx2g). You didn't mention whether you're getting OOM messages in /var/log/messages, but either way it's clear you don't have enough memory.
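
You can confirm the configured heap directly; on package installs the settings usually live in /etc/elasticsearch/jvm.options:

grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options

With only 2 GB of total RAM, a 2 GB heap leaves nothing for the OS, Zammad, and Postgres, so the kernel OOM killer ends up killing the largest process, which is the Elasticsearch JVM.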

Yes. I switched the VPS from fixed 2 GB RAM to dynamic 2-4 GB RAM.
Is it possible, and would you advise me, to reconfigure Elasticsearch to use a smaller amount of RAM?

I don't know for sure, but the JVM arguments above are the settings you want to change; you could try changing the 2g values to 1g. You'll either have to look at the docs to find where they are set, or try the sledgehammer approach and run sudo grep -r Xms2g /etc /opt /usr /var to see if anything turns up.

We recommend at least 2 cores with 4 GB of RAM if you're running Elasticsearch on the same machine.

You want to set the following options:

-Xms1g
-Xmx1g

This will reduce Elasticsearch's heap usage to 1 GB; I'm not sure whether this is enough for Zammad to stay performant.
Also make sure to write your attachments to the filesystem; this will improve overall database performance.
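
If I recall correctly, the storage backend can be switched from the Zammad console; a sketch, assuming the setting is still named storage_provider on your version:

zammad run rails r "Setting.set('storage_provider', 'File')"

New attachments are then written to the filesystem instead of the database.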

Here’s a reference to elasticsearch heap size configuration:
https://www.elastic.co/guide/en/elasticsearch/reference/master/heap-size.html
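
Once Elasticsearch is back up, the effective heap can be double-checked against the node itself (again assuming the default localhost:9200):

curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max

The heap_max_in_bytes value should reflect the new 1 GB setting.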

Yes. In /etc/elasticsearch/jvm.options
I changed
-Xms2g
-Xmx2g
----> to
-Xms1g
-Xmx1g
And rebooted. I'll watch Elasticsearch's behavior over the next hours and days; I hope everything works well.
Thank you

Good news! The issue is fixed. Thanks for all the technical assistance. :slight_smile:
