Elasticsearch: StatusCode: 500 {"error":"Unable to process GET request to elasticsearch URL

I got a fresh copy of Zammad 3.2 from the GitHub repository and set it up on an AWS Ubuntu 18.04 production server.
I have installed Ruby on Rails, Elasticsearch, PostgreSQL, and the other dependencies on the server.

I am trying to configure Elasticsearch to work with Zammad, but it isn't working.

It's throwing an error: StatusCode: 500 {"error":"Unable to process GET request to elasticsearch URL 'http://localhost:9200/zammad_production_ticket/_doc/_search'.

I have opened the 9200 port on the AWS server for Elasticsearch.

Here is the Elasticsearch network settings configuration (elasticsearch.yml)

# ======================== Elasticsearch Configuration =========================
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: []
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
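
A side note on that config: on Elasticsearch 7, explicitly setting `discovery.seed_hosts` (even to an empty list) while leaving `cluster.initial_master_nodes` commented out can leave the node unable to elect a master, and a master-less node answers search requests with 503 — which matches the error below. A hedged alternative for a host that really only runs one node is single-node discovery:

```yaml
# Single-node discovery: the node elects itself master on startup,
# no seed hosts or initial master nodes needed.
discovery.type: single-node
network.host: 127.0.0.1
http.port: 9200
```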

Infos:

  • Used Zammad version: Zammad v3.2.0
  • Used Zammad installation source: git repository source
  • Operating system: Ubuntu 18.04
  • Browser + version: Google Chrome v77
  • Elasticsearch: Elasticsearch 7.x

Expected behavior:

  • I expect all the elasticsearch search functionality of Zammad to work

Actual behavior:

  • It’s throwing an error

    StatusCode: 500

    {"error":"Unable to process GET request to elasticsearch URL 'http://localhost:9200/zammad_production_ticket/_doc/_search'. Check the response and payload for detailed information:

    Response:
    #<UserAgent::Result:0x000055f928224c38 @success=false, @body=nil, @data=nil, @code=0, @content_type=nil, @error="#<RuntimeError: Unable to process http call '#<Net::HTTPServiceUnavailable 503 Service Unavailable readbody=true>'>">

    Payload:
    {:query=>{:bool=>{:must=>[{:range=>{"created_at"=>{:from=>"2018-12-31T23:00:00Z", :to=>"2019-12-31T22:59:59Z"}}}], :must_not=>[{"term"=>{"state.keyword"=>"merged"}}]}}, :size=>0, :aggs=>{:time_buckets=>{:date_histogram=>{:field=>"created_at", :interval=>"month", :time_zone=>"Africa/Lagos"}}}, "sort"=>[{:updated_at=>{:order=>"desc"}}, "_score"]}

    Payload size: 0M"}

Steps to reproduce the behavior:

  • Setup a new AWS server
  • Install Ruby on Rails, Elasticsearch, Nginx on the server
  • Clone a copy of Zammad from GitHub to the server.

Please ensure your Elasticsearch service is up and running.
To double-check that it's answering requests, you might want to try: curl http://localhost:9200
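
The 503 buried in the error above means Elasticsearch answered but refused the request, which usually points at a cluster that has not formed. The cluster health endpoint shows this directly; the sketch below extracts the status field from a sample response (the `resp` value here is illustrative, not live output):

```shell
# In practice you would fetch the health document with:
#   resp=$(curl -s http://localhost:9200/_cluster/health)
# The sample below stands in for that response (illustrative values only).
resp='{"cluster_name":"my-application","status":"yellow","number_of_nodes":1}'

# Extract the "status" field: green/yellow means the cluster is usable;
# red (or no response at all) would explain 500/503 errors from Zammad.
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"
```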

Again, a friendly reminder: please do not open Elasticsearch ports to the internet (if you happened to do that :slight_smile:) without ensuring all Elasticsearch accounts are password protected.

Thank you so much.

All I did was remove Elasticsearch completely from the server.

I then re-installed it following the steps below:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update
sudo apt-get install elasticsearch
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

Next, I made changes to my Elasticsearch config file (/etc/elasticsearch/elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Save and exit.
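One caveat about the config above: cluster.initial_master_nodes: ["node-1", "node-2"] lists a node-2 that does not exist on a single-server install, and Elastic's documentation advises listing only nodes that are actually part of the cluster. If this machine is the whole cluster, a safer sketch is:

```yaml
cluster.initial_master_nodes: ["node-1"]
# or, simpler still for a one-machine cluster (must not be combined
# with the setting above):
#discovery.type: single-node
```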

And then restarted Elasticsearch:

sudo systemctl restart elasticsearch

Finally, I ran these commands:

RAILS_ENV=production rails r "Setting.set('es_url', 'http://localhost:9200')"
RAILS_ENV=production rails searchindex:rebuild
RAILS_ENV=production rails server -p 3000

And then Hard refreshed my browser, and it worked.

Please make sure to rebuild the search index using this command:

RAILS_ENV=production rails searchindex:rebuild

Thank you.

Normally it shouldn't be necessary to provide RAILS_ENV every time.
See point 4, which does some exports to solve this:
https://docs.zammad.org/en/latest/install-source.html#initialize-your-database

Oh, thank you so much

I guess you mean that I should run the command below so I don't have to specify my Rails environment every time:

export RAILS_ENV=production
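
For reference, a minimal sketch of what that looks like in practice (persisting it in the ~/.bashrc of the user that runs Zammad is one common approach; the docs linked above cover the exact exports they recommend):

```shell
# Set once per shell session (or persist in ~/.bashrc of the user that
# runs Zammad) so every rails/rake command picks it up automatically.
export RAILS_ENV=production
echo "$RAILS_ENV"
```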

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.