- Used Zammad version: 5.0
- Used Zammad installation type: Package
- Operating system: Ubuntu
- Unprocessable mails: 40
- 160 failing background jobs
- Failed to run background job #1 ‘SearchIndexAssociationsJob’ 7 time(s) with 133 attempt(s).
- Failed to run background job #2 ‘SearchIndexJob’ 3 time(s) with 57 attempt(s).
Steps to reproduce the behavior:
- Searching for a ticket or a captured mail does not work.
I’ve checked the logs, and in Elasticsearch I found the following:
flood stage disk watermark [95%] exceeded on [U54V-xC8T3062QOLo3Qjjg] [zammad] [/var/lib/elasticsearch/nodes/o] free 2.5gb [2%], all indices on this node will be marked read-only
So I have to expand the disk and remove the read-only block on that node?
My Zammad is virtualized on vSphere, so I can increase the disk space of the volume, but then how can I resize the volume without formatting it? I can give Zammad a lot of space because we recently upgraded the servers’ memory.
Can someone help me with the best practice here?
Resizing a filesystem is not a Zammad-related problem. It depends on the filesystem and other criteria, like LVM and the position of the partition in question.
If you do something wrong, everything is lost!
DO A BACKUP!
Here are some hints for your Google search (depending on your filesystem)…
You can resize a partition easily if it is the last partition on your disk.
If it’s not, it is a bit more tricky and depends on your current setup.
Here is the way to resize your filesystem, if it is the last partition on the affected disk and you are using an ext filesystem.
Do a backup!
Read the instructions carefully, find out about the commands used, and check whether they are available in your installation.
Expand the disk in VMware, then let the kernel rescan the block device information:
echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/rescan
The /0:0:1:0/ part depends on your disks.
If a capacity change was detected, dmesg should show the increase, like:
[540256.743543] sd 0:0:1:0: [sdb] 146800640 512-byte logical blocks: (75.1 GB/70.0 GiB)
[540256.743584] sdb: detected capacity change from 64424509440 to 75161927680
Then you can delete the old partition definition with fdisk and create a similar partition with the same starting block but with a bigger size.
Then you can inform the kernel about the new partition information with partprobe.
Now you can increase the filesystem to the size of the partition (on ext filesystems with):
sudo resize2fs -p /dev/sdb1
df -h should now show the new (bigger) filesystem.
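The steps above can be sketched as one shell session. This is a sketch, not a tested recipe: it assumes the disk is /dev/sdb and the last partition is /dev/sdb1 (as in the example output above), and it uses growpart from cloud-guest-utils as a non-interactive alternative to the fdisk delete-and-recreate step. Adjust the SCSI address and device names for your system, and do a backup first.

```shell
# 1. Rescan the SCSI device so the kernel sees the new disk size.
#    The 0:0:1:0 address is from the example; check ls /sys/class/scsi_device/
echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/rescan

# 2. Verify the kernel detected the capacity change
dmesg | grep -i 'capacity change'

# 3. Recreate the last partition with the same start block but a bigger end.
#    fdisk is interactive; growpart (from cloud-guest-utils) does it in one command:
growpart /dev/sdb 1

# 4. Tell the kernel about the new partition table
#    (growpart usually triggers this itself; harmless to run again)
partprobe /dev/sdb

# 5. Grow the ext filesystem to fill the partition
resize2fs -p /dev/sdb1

# 6. Confirm the new size
df -h /dev/sdb1
```

If you stick with fdisk instead of growpart, the critical detail is that the new partition must start at exactly the same sector as the old one; only the end moves.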
But after that, will Elasticsearch go back to writing on the node?
Right now it is read-only; do I have to do something to make it writable again after resizing?
[/var/lib/elasticsearch/nodes/o] free 2.5gb [2%], all indices on this node will be marked read-only
Google pointed me to
a page that describes two settings,
cluster.blocks.read_only and cluster.blocks.read_only_allow_delete,
and how to solve the problem.
Unfortunately I have no experience with it.
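For what it’s worth, a minimal sketch of clearing the block once disk space is available again, assuming Elasticsearch listens on localhost:9200 (on recent Elasticsearch versions the flood-stage block is also removed automatically once disk usage drops below the high watermark):

```shell
# Remove the read_only_allow_delete block from all indices
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'

# Or clear a cluster-wide read-only block, if one was set
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.blocks.read_only": null}}'
```

Setting a value to null resets it to the default, i.e. it removes the block.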
While you’re at it: Zammad 5.0 is critically outdated and has security advisories.
You should upgrade it ASAP after fixing your storage issues.
If keeping Zammad up to date and administering it is too complex (which is alright!), you could also check whether Zammad SaaS is an option for you and your company. No pressure, of course!