Cannot log in after moving to a new machine

Infos:

  • Used Zammad version: 3.0 -> 3.2
  • Used Zammad installation source: docker-compose
  • Operating system: Debian -> Ubuntu 16.04
  • Browser + version: Chrome

Expected behavior:

  • normal login

Actual behavior:

  • cannot log in; an “invalid login credentials” message is displayed
  • the login page still displays “Login over old machine IP”

Steps to reproduce the behavior:

Moved from Debian to Ubuntu, running on docker-compose.

  • Zammad version 3.0.x -> 3.2.x (is this the problem?)
    PostgreSQL DB

Steps followed:

docker-compose stop zammad-nginx zammad-websocket zammad-railsserver zammad-memcached zammad-elasticsearch zammad-scheduler

  • took backup from debian

  • dropped db
    docker-compose exec zammad-postgresql su -c 'dropdb zammad_production' - postgres
    docker-compose start zammad-scheduler
    docker-compose exec zammad-scheduler bundle exec rake db:create

  • tried using the restore script as well as doing it manually (imported the DB, ran db:migrate and cleared the cache); a consolidated sketch follows after this list
    docker-compose exec zammad-railsserver bundle exec rake db:migrate
    docker-compose exec zammad-railsserver bundle exec rails runner 'Rails.cache.clear'

  • restarted all
    docker-compose stop; docker-compose start && docker-compose logs -f
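
In other words, the manual path boils down to this sequence (the dump filename zammad_production.sql is only an illustration of how I imported it):

    docker-compose stop zammad-nginx zammad-websocket zammad-railsserver zammad-memcached zammad-scheduler
    docker-compose exec zammad-postgresql su -c 'dropdb zammad_production' - postgres
    docker-compose start zammad-scheduler
    docker-compose exec zammad-scheduler bundle exec rake db:create
    # import the plain-SQL dump (filename is illustrative)
    cat zammad_production.sql | docker-compose exec -T zammad-postgresql su -c 'psql zammad_production' - postgres
    docker-compose exec zammad-railsserver bundle exec rake db:migrate
    docker-compose exec zammad-railsserver bundle exec rails runner 'Rails.cache.clear'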

My database size on the new machine is the same as on the old one.
Still no luck. Any clues?

Restoring the backup from a 3.0 installation will overwrite the 3.2 files and leave you with a 3.2 container that contains 3.0 source files. You’ll need to deploy the current version again to force the container to update your files.
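
In a docker-compose setup that usually boils down to pulling the images and recreating the containers (treat this as a sketch; adapt to your stack):

    docker-compose pull
    docker-compose up -d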


If you entered a login message on the login page, it will of course show that same message. Ensure you don’t have strange proxy settings (or changed proxy settings in general) that might route you to the old setup.

Also, you might want to have a look at the production log, which might tell you what the issue is.
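
In the docker-compose setup you can follow it e.g. like this (the file path inside the container is an assumption; the container log is the safe bet):

    docker-compose logs -f zammad-railsserver
    docker-compose exec zammad-railsserver tail -f log/production.log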

Thanks!
As mentioned, I pulled the latest version again, but the application could not start:
the zammad role was missing in the DB, so I created a zammad superuser.
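Roughly like this (illustrative; the exact command may differ):

    docker-compose exec zammad-postgresql su -c 'createuser -s zammad' - postgres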
The Elasticsearch logs show this error:

The index [[zammad_production/XSLBt56EQ9-zEg-Vhzeuzg]] was created with version [5.6.14] but the minimum compatible version is [6.0.0-beta1]. It should be re-indexed in Elasticsearch 6.x before upgrading to 7.6.0.
at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkSupportedVersion(MetaDataIndexUpgradeService.java:113)
at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:87)
at org.elasticsearch.gateway.GatewayMetaState.upgradeMetaData(GatewayMetaState.java:240)
at org.elasticsearch.gateway.GatewayMetaState.upgradeMetaDataForNode(GatewayMetaState.java:223)
at org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:154)
at org.elasticsearch.node.Node.start(Node.java:705)
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:273)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358)
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170)
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
at org.elasticsearch.cli.Command.main(Command.java:90)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
For complete error details, refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log
{"type": "server", "timestamp": "2020-02-25T14:29:48,709Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "50ef0bfd2ce5", "message": "stopping ..." }
{"type": "server", "timestamp": "2020-02-25T14:29:48,725Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "50ef0bfd2ce5", "message": "stopped" }
{"type": "server", "timestamp": "2020-02-25T14:29:48,725Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "50ef0bfd2ce5", "message": "closing ..." }
{"type": "server", "timestamp": "2020-02-25T14:29:48,734Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "50ef0bfd2ce5", "message": "closed" }
{"type": "server", "timestamp": "2020-02-25T14:29:48,735Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "docker-cluster", "node.name": "50ef0bfd2ce5", "message": "Native controller process has stopped - no new native processes can be started" }

How should I proceed?

You’ll need to remove the Elasticsearch volume (it should hold the nodes/indices directory).

That’s what the above error message is about.
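
Roughly like this (the volume name depends on your compose project, so check it first; treat this as a sketch):

    docker-compose stop zammad-elasticsearch
    docker volume ls | grep elasticsearch          # find the exact volume name
    docker volume rm <project>_elasticsearch-data  # name is an assumption
    docker-compose up -d zammad-elasticsearch
    docker-compose exec zammad-railsserver bundle exec rake searchindex:rebuild

The last command rebuilds the search index once Elasticsearch is up again.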

After removing the volume, Elasticsearch reported the following error in the logs:

{"type": "server", "timestamp": "2020-02-26T07:52:22,558Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
{"type": "server", "timestamp": "2020-02-26T07:52:22,571Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "stopping ..." }
{"type": "server", "timestamp": "2020-02-26T07:52:22,597Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "stopped" }
{"type": "server", "timestamp": "2020-02-26T07:52:22,597Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "closing ..." }
{"type": "server", "timestamp": "2020-02-26T07:52:22,609Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "closed" }
{"type": "server", "timestamp": "2020-02-26T07:52:22,611Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "docker-cluster", "node.name": "ae20c891c99a", "message": "Native controller process has stopped - no new native processes can be started" }

Making the following changes in the zammad-elasticsearch block in docker-compose.yml resolved the issue:

zammad-elasticsearch:
    image: ${IMAGE_REPO}:zammad-elasticsearch${VERSION}
    container_name: zammad-elasticsearch
    environment:
      - cluster.initial_master_nodes=zammad-elasticsearch
      - node.name=zammad-elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
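
As far as I understand, a single-node alternative would be discovery.type=single-node instead of the cluster.initial_master_nodes/node.name pair above (I have not tested this here):

    environment:
      - discovery.type=single-node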

Please let me know if it is fine to keep these settings in production.
Thanks for your support!

The Elasticsearch issue probably came up because its version changed from 5.x to 7.6.0
during the migration with zammad:latest.
However, I would prefer staying on zammad-3.2.0-12 for production.
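
If it helps anyone, pinning the version should only require setting the tag suffix in .env and pulling again (the repository and tag format below are assumptions based on the compose file above):

    # .env
    IMAGE_REPO=zammad/zammad-docker-compose
    VERSION=-3.2.0-12

    docker-compose pull && docker-compose up -d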

Thanks for your assistance
Great work Zammad Team!
