Infos:
- Used Zammad version: 6.5.1
- Used Zammad installation type: docker-compose
- Operating system: Ubuntu
- Browser + version: various
Expected behavior:
- Elasticsearch starts up correctly
Actual behavior:
- Elasticsearch keeps restarting
Steps to reproduce the behavior:
- unclear
There was an issue with the NFS share which contained the Docker images. After that issue was repaired, the stack was able to start up again, but Elasticsearch keeps crashing. I suspect there may be a lock left over from the crash, but I have no idea how to get Elasticsearch up and running again, or alternatively how to prune it and rebuild the index. Since the Elasticsearch container keeps restarting, I cannot simply open a console in it and check the permissions etc.
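One workaround I would try for the "can't open a console" problem is a temporary entrypoint override, so the container idles instead of launching Elasticsearch and I can exec into it. A minimal sketch, assuming the service is named `zammad-elasticsearch` as in the upstream zammad-docker-compose repository (adjust the name to your setup):

```yaml
# docker-compose.override.yml -- temporary debugging aid, delete afterwards
services:
  zammad-elasticsearch:
    entrypoint: ["sleep", "infinity"]
```

With that in place, `docker compose up -d zammad-elasticsearch` followed by `docker compose exec zammad-elasticsearch bash` should give a shell inside the (now idle) container, where `ls -ld /usr/share/elasticsearch/data` shows the current ownership and permissions.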
```
fatal exception while booting Elasticsearch | @timestamp=2025-08-25T15:55:01.782Z log.level=ERROR ecs.version=1.2.0 service.name=ES_ECS event.dataset=elasticsearch.server process.thread.name=main log.logger=org.elasticsearch.bootstrap.Elasticsearch elasticsearch.node.name=5d2f4bd58f1a elasticsearch.cluster.name=docker-cluster error.type=java.lang.IllegalStateException error.message=failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path? error.stack_trace=java.lang.IllegalStateException: failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path?
at org.elasticsearch.server@8.19.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:294)
at org.elasticsearch.server@8.19.2/org.elasticsearch.node.NodeConstruction.validateSettings(NodeConstruction.java:513)
at org.elasticsearch.server@8.19.2/org.elasticsearch.node.NodeConstruction.prepareConstruction(NodeConstruction.java:281)
at org.elasticsearch.server@8.19.2/org.elasticsearch.node.Node.<init>(Node.java:201)
at org.elasticsearch.server@8.19.2/org.elasticsearch.bootstrap.Elasticsearch$1.<init>(Elasticsearch.java:402)
at org.elasticsearch.server@8.19.2/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:402)
at org.elasticsearch.server@8.19.2/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:99)
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data
at org.elasticsearch.server@8.19.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:239)
at org.elasticsearch.server@8.19.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:207)
at org.elasticsearch.server@8.19.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:286)
... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/node.lock
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:213)
at java.base/java.nio.channels.FileChannel.open(FileChannel.java:301)
at java.base/java.nio.channels.FileChannel.open(FileChannel.java:353)
at org.apache.lucene.core@9.12.2/org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:112)
at org.apache.lucene.core@9.12.2/org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:43)
at org.apache.lucene.core@9.12.2/org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:44)
at org.elasticsearch.server@8.19.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:232)
... 8 more
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
ERROR: Elasticsearch died while starting up, with exit code 1
```
I think this is the important bit:

```
error.message=failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path
```
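The `AccessDeniedException` on `node.lock` suggests the container user (uid 1000 in the official Elasticsearch image) can no longer write to the data directory, which would fit the NFS repair having changed ownership or export options. As a sketch of the cleanup I would attempt once I had a shell in the container (the function name and the overridable directory argument are mine, not from Elasticsearch; this assumes no Elasticsearch process is currently using the path):

```shell
#!/bin/sh
# Sketch: remove a stale node.lock and restore ownership on the ES data dir.
fix_es_data_dir() {
  data_dir="$1"

  # A node.lock left over from a crash can be deleted safely while
  # Elasticsearch is down; it is recreated on the next clean start.
  if [ -f "$data_dir/node.lock" ]; then
    rm -f "$data_dir/node.lock"
    echo "removed stale node.lock"
  fi

  # The official image runs Elasticsearch as uid 1000, group 0.
  # chown needs root (and cooperative NFS export options) to succeed.
  chown -R 1000:0 "$data_dir" 2>/dev/null || echo "chown failed - run as root"
}

# Inside the container this would be:
# fix_es_data_dir /usr/share/elasticsearch/data
```

If the indices themselves turn out to be damaged after Elasticsearch starts again, Zammad documents a rebuild task, as far as I know invoked as `zammad run rake zammad:searchindex:rebuild` inside the zammad container.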