PostgreSQL backup (very) slow

Infos:

  • Used Zammad version: 5.3.1
  • Used Zammad installation type: source
  • Operating system: Ubuntu 22.04 LTS
  • Browser + version: NA

Actual behavior:

The switch to PostgreSQL itself went smoothly.

Our Zammad DB is approx. 25 GB. With MySQL, mysqldump took about 5 minutes. With PostgreSQL, pg_dump runs for more than 90 minutes and consumes a lot of CPU.

I tried several pg_dump parameters, without any luck, such as:

pg_dump --dbname dbname --username postgres --no-privileges --no-owner --compress 6 --file /opt/backup/db.psql.gz
pg_dump --username postgres -Fc dbname > /opt/backup/db.pgdump

These commands seem to use only one CPU.
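
(For reference: pg_dump itself can only parallelize when writing a directory-format archive, and even then it parallelizes per table, so one very large table can still keep a single core busy. A sketch of such an invocation, with example paths and job count:)

# directory-format dump using 4 parallel worker jobs
pg_dump --username postgres --format directory --jobs 4 --file /opt/backup/db.dir dbname
# the matching restore can also run in parallel
pg_restore --username postgres --jobs 4 --dbname dbname /opt/backup/db.dir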

Is this normal? Am I missing something?

That does seem a bit odd performance-wise. Could you give some more metrics on your hardware specs? Do you have any graphs on utilization and load? I/O metrics perhaps?

Based on the information you have provided, I can’t really point you in a specific direction yet. It could be caused by your machine just being slow (which I kind of doubt, since you didn’t seem to have problems while using MySQL), or maybe PostgreSQL is currently not performing at its best, possibly due to lack of tuning. The latter is far more likely to be the case.

Have you made changes to your PostgreSQL database configuration? If so: what did you change (and why did you change it)?

For comparison, one of our Zammad instances has a PostgreSQL database with 50 GB of data in it. The daily backup (using the backup script provided by Zammad) starts at 03:35 and finishes around 03:49, producing a 39 GB tarball. We run a beefy VM with SSD-backed storage, and the database has been tuned so that most of the tables/indexes fit in RAM.
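
(That tuning is entirely setup-specific, so treat the following only as an illustration of the kind of knobs involved; the values are placeholders, not recommendations:)

# illustrative only -- actual values depend on RAM, workload and storage
psql --username postgres --command "ALTER SYSTEM SET shared_buffers = '4GB';"
psql --username postgres --command "ALTER SYSTEM SET effective_cache_size = '12GB';"
psql --username postgres --command "ALTER SYSTEM SET work_mem = '32MB';"
# shared_buffers only takes effect after a restart
systemctl restart postgresql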

Yes, we are running on a pretty decent VM: 4 vCPUs, 16 GB RAM, NVMe storage. The VM seems to be “overkill” during work hours; Zammad runs very nicely. During the backup, one vCPU is at 100% while the others sit idle.

Our PostgreSQL instance is running with default parameters, I guess. Would you be so kind as to post your pg.conf and backup script?

It doesn’t really work like that, as PostgreSQL needs to be tuned according to your specific setup. I’ve written about this in past topics.

So after looking into this problem again, I suddenly realized something I had completely forgotten: we ran into this exact problem ourselves quite some time ago. It's not PostgreSQL being slow, it's gzip compressing the backup files afterwards. We replaced it with pigz, a multi-threaded replacement for gzip. It's a small change to the backup script, but a huge improvement.
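
(As an illustration, not the literal change from our script, swapping gzip for pigz in a dump pipeline looks roughly like this:)

# single-threaded: one core does all the compression work
pg_dump --username postgres dbname | gzip > /opt/backup/db.psql.gz
# multi-threaded: pigz spreads the compression across all available cores
pg_dump --username postgres dbname | pigz > /opt/backup/db.psql.gz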

I thought we had internally discussed filing a PR to get our changes incorporated into the zammad_backup.sh script, but I can't find anything at the moment. I guess more pressing matters came up at the time and we eventually forgot about it. I'll ask the coworker who came up with the idea to file a PR.

I've discussed this with my colleagues; we have since gone a step further and incorporated zstd as the compression algorithm, and we have presented our changes to Zammad as well. The PR is here, should you want to check it out.
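
(A zstd-based variant of the same pipeline, purely as an illustration and not necessarily what the PR does, could look like this:)

# zstd with -T0 uses all available cores for compression
pg_dump --username postgres dbname | zstd -T0 > /opt/backup/db.psql.zst
# decompress again when restoring
zstd -d --stdout /opt/backup/db.psql.zst | psql --username postgres dbname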

The main issue you're trying to compensate for here is that you store all attachments in the database.
This may not be a suitable approach for big installations; you may want to consider using filesystem storage instead (unless you're using Docker).
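
(A sketch of what that switch can look like on a source install; the setting name and console invocation are from memory, so double-check them against the Zammad documentation:)

# run as the zammad user from the installation directory
cd /opt/zammad
# store new attachments on the filesystem instead of in the database
bundle exec rails r "Setting.set('storage_provider', 'File')"
# move the existing attachments out of the database
bundle exec rails r "Store::File.move('DB', 'File')"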

Ok, I’ll try this.
Thank you

OK, I switched to local filesystem storage and moved all the data via the Rails console: Store::File.move('DB', 'File')

Everything is now in /opt/zammad/storage.

The database files are still 25 GB. I've tried VACUUM / vacuumlo / REINDEX. Do I need to pg_dump and then restore the DB to start from a fresh instance?
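
(For context: a plain VACUUM only marks dead space inside the existing files as reusable, it does not shrink the files on disk. To actually return the space to the OS you need either a dump/restore or a VACUUM FULL, roughly like this; note that it takes an exclusive lock on each table, so stop Zammad first:)

systemctl stop zammad
psql --username postgres --dbname dbname --command "VACUUM FULL;"
systemctl start zammad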

Just to keep you posted: I've dumped/restored the DB. Everything's back to normal.
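
(For anyone following along, the dump/restore cycle is roughly the following; commands are illustrative, adjust database names, owner and paths to your setup:)

# stop Zammad so nothing writes to the database during the cycle
systemctl stop zammad
pg_dump --username postgres -Fc dbname > /opt/backup/db.pgdump
dropdb --username postgres dbname
createdb --username postgres --owner zammad dbname
pg_restore --username postgres --dbname dbname /opt/backup/db.pgdump
systemctl start zammad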

The backup is now super fast, under a minute, and database usage is OK.

Thanks a lot for your insights.

2 Likes