Zammad 6.4.1 Attachment Storage Issue – Need Help!

Infos:

  • Zammad Version: 6.4.1
  • Installation Type: Package
  • Operating System: Ubuntu 22.04.4 LTS
  • Browser + Version: Any browser (currently using Chrome, Version 132.0.6834.160)

Problem:

System Configuration:

  • 985 GB disk space
  • 24 CPU cores
  • 32 GB of RAM
  • 50 to 70 active users working simultaneously

We’ve been running Zammad for two years, and over time it has scaled to a machine with 24 cores supporting 50 to 70 simultaneous users. The platform has performed exceptionally well during this period. However, we are now facing a file management issue.

On a daily basis we send around 300 emails, and including client responses we handle up to 1,000 emails per day, most of which contain attachments. Currently we have a total disk space of 985 GB, with 849 GB in use (approximately 90%). Of this, the /opt/zammad/storage/fs/ directory alone is consuming over 815 GB.

We are unsure how to best manage this situation. So far we’ve simply been increasing disk space, but we’re uncertain whether this is the most effective solution. We currently use ‘Filesystem’ storage and are considering switching to Simple Storage (S3) to potentially improve the situation, but we’re unsure if this will help. We would greatly appreciate any advice or recommendations on how to manage this challenge effectively.

I’m interested in this too!
We have 150 GB in the storage folder, and I can’t access it over WinSCP because it is too large.

1 Like

What kind of attachments do they contain? Zammad isn’t exactly suited as a filesharing platform, maybe you should focus on receiving or providing attachments via a dedicated external filesharing platform and just share links in the Zammad tickets, instead of putting those attachments into the helpdesk as well.

Oof, that is a lot of data.

Without knowing your requirements regarding data retention: you could create a Scheduler job to delete tickets older than X months/years. If a ticket contains an attachment that isn’t referenced (deduplicated) in another ticket, the attachment will be removed too, freeing up precious disk space.

There are also (dirty) tricks to cleanup attachment files without touching the tickets, but that is generally discouraged, and without proper knowledge of Zammad internals could lead to crippling or corrupting your instance.

Moving to Cloud storage might mean you can more easily extend storage space, but you might want to look into the cause first, and see if you can make some improvements there.

The following database query will show the top 10 largest files, which might help in investigating this issue:

SELECT size, filename FROM stores ORDER BY CAST(size AS integer) DESC LIMIT 10;
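If querying the database isn’t convenient, roughly the same information can be read straight off the disk. A hedged sketch (the /opt/zammad/storage/fs path is the package-install default; the demo directory and files below are made up so the snippet is self-contained):

```shell
# List the largest files under a storage directory, biggest first.
# On a package install you would set STORE=/opt/zammad/storage/fs;
# here a small demo directory is created so the snippet runs anywhere.
STORE="${STORE:-demo_fs}"
mkdir -p "$STORE"
printf '%8192s' '' > "$STORE/large.bin"   # 8 KiB demo file
printf '%16s' ''   > "$STORE/small.bin"   # 16 B demo file

# GNU find prints "<size in bytes><TAB><path>"; sort numerically, descending.
find "$STORE" -type f -printf '%s\t%p\n' | sort -rn | head -n 10
```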
2 Likes

Thank you for your response—I really appreciate it!

Our ERP processes emails before they reach Zammad, storing the same files in both systems. We explored external storage with links, but the ERP won’t support that solution.

I also can’t delete tickets after a certain period since we need to maintain a historical record of our cases, but I can remove files.

I think the problem isn’t file size—the 10 largest files are relatively small, with the largest at 159.8 MB—but the sheer volume, especially duplicates. We’re constantly hitting storage limits: 95% last week, cleared 10%, and back to 95% in six days.

My only options seem to be manual deletion or switching to S3 for storage. If you @dvanzuijlekom or anyone has a safe approach, I’d greatly appreciate it. Otherwise, I may have to take an unconventional, undocumented route.

This is getting out of control, and I’m not sure how to handle it. :sweat:

Zammad deduplicates files in storage. So if you have the same attachment used in several tickets, it is still only stored once on your file system.

That also means, you cannot simply delete “old files”, because they might be used/referenced again in newer tickets.

1 Like

Thank you for your response!

It’s good to know that files aren’t duplicated in the database—I wasn’t aware of that.

I understand that manually deleting files isn’t an option, but I still face the issue of my disk space (985 GB total, 849 GB used), with /opt/zammad/storage/fs/ alone taking up over 815 GB.

Is switching to ‘Simple Storage S3’ the only solution, or is there any other workaround?

I look forward to your response, thanks again!

What is hindering you from increasing the disk size?

If I understand it correctly, you want to keep old tickets, but not old files, correct?

1 Like

Thank you for your reply! :pray:

I don’t have enough knowledge to determine whether what I’m doing is the right thing—just expanding the space—or if this practice might cause problems in the long run.

The issue is that I’m not sure if having a 985 GB disk with 90% of the space taken up by files is normal or if I’m managing Zammad correctly.

Yes, exactly! I want to keep the tickets but not the old files.

Thanks again for your time!

Disk usage greatly varies depending on the number of incoming tickets, number of attachments (naturally) and general usage. There’s no “out of the norm” in terms of storage usage.

It solely depends on you, your use cases, and the way you handle data. In terms of software and storage usage, there’s nothing to fear. I manage far bigger instances than yours.

1 Like

I truly appreciate the clarification and information—thank you very much! We will continue expanding the space.

I’m currently using ‘Filesystem’ storage, and according to Zammad’s documentation, this method is recommended for most instances, especially those with higher loads. Would you recommend switching to Simple Storage S3 in my case, or is it less efficient than ‘Filesystem’ storage?

It’s a bit unclear what the actual issue is. Our cloud provider has volumes up to 10 TB. You can also attach up to 16 volumes, giving you 160 TB in total. With your current usage (1 TB per two years) that would last for 80 years, and by then probably more capable clouds will be available :slight_smile:

The issue is that I’m not sure if having a 985 GB disk with 90% of the space taken up by files is normal or if I’m managing Zammad correctly.

If you don’t know whether it’s normal: investigate.

Does the file storage contain your users’ ticket attachments, or garbage data you can’t make sense of?

Have you instructed your users to keep attachment sizes sensible?

Are customers sending abnormally large files that don’t make sense? If so, reject big attachments.
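To put numbers on that investigation, a quick sketch along these lines can help (the directory name, demo files, and 1 MiB threshold are assumptions; point STORE at /opt/zammad/storage/fs on a real install):

```shell
# Count attachment files, measure total size, and bucket them by size.
STORE="${STORE:-demo_store}"
mkdir -p "$STORE"
printf '%1000s' ''    > "$STORE/a.bin"    # ~1 KB demo file
printf '%2000000s' '' > "$STORE/b.bin"    # ~2 MB demo file

echo "file count:    $(find "$STORE" -type f | wc -l)"
echo "total size:    $(du -sh "$STORE" | cut -f1)"
# Use byte counts ('c' suffix) to avoid GNU find's unit rounding.
echo "under 1 MiB:   $(find "$STORE" -type f -size -1048576c | wc -l)"
echo "1 MiB or more: $(find "$STORE" -type f -size +1048575c | wc -l)"
```

If the count is dominated by many small files rather than a few huge ones, bigger disks or tiered storage are more relevant than attachment size limits.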

I think the problem isn’t file size—the 10 largest files are relatively small, with the largest at 159.8 MB—but the sheer volume, especially duplicates. We’re constantly hitting storage limits: 95% last week, cleared 10%, and back to 95% in six days.

Here are some general “platform level” things you could do:

If there are duplicates, you could write a shell script that deduplicates them and frees up space. My approach would be to generate checksums and then hard-link files that are identical.
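The checksum-and-hard-link idea could be sketched like this. This is not an official Zammad tool; try it on a copy of the data first, and note that Zammad’s own deduplication should already prevent most byte-identical copies. The demo directory and files are made up so the snippet is self-contained:

```shell
# Replace byte-identical files with hard links to a single copy.
# STORE would be /opt/zammad/storage/fs in production; demo data here.
STORE="${STORE:-demo_dedup}"
mkdir -p "$STORE/a" "$STORE/b"
echo "same payload" > "$STORE/a/one"
echo "same payload" > "$STORE/b/two"

# Sorting by checksum groups duplicates; every file after the first
# in a group is replaced by a hard link to that first file.
find "$STORE" -type f -exec sha256sum {} + | sort |
while read -r sum path; do
  if [ "$sum" = "$prev_sum" ]; then
    ln -f "$prev_path" "$path"
  else
    prev_sum=$sum
    prev_path=$path
  fi
done
```

Hard links only save space within one filesystem, and all linked names share the same data blocks; that is fine for write-once attachment blobs, but worth double-checking for your setup.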

If cost is a concern, you can migrate files by mtime and/or atime to slower, cheaper storage and use unionfs to present a single filesystem to Zammad. You could also use S3 under the hood for “cold” ticket attachments via a FUSE filesystem.
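A minimal sketch of the mtime-based tiering idea (directory names and the 365-day cutoff are assumptions; a real migration would also need to preserve Zammad’s nested directory layout, e.g. with rsync, and present the merged view back via unionfs):

```shell
# Move files untouched for over a year from "hot" to "cold" storage.
HOT="${HOT:-demo_hot}"
COLD="${COLD:-demo_cold}"
mkdir -p "$HOT" "$COLD"
touch -d '2 years ago' "$HOT/old.bin"   # demo: an old attachment
touch "$HOT/new.bin"                    # demo: a recent attachment

# -mtime +365 matches files last modified more than 365 days ago.
# Note: mv -t flattens the directory layout; this is only a sketch.
find "$HOT" -type f -mtime +365 -exec mv -t "$COLD" {} +
```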

Another option is to use slower, cheaper storage at the block level and put a fast SSD cache on top via LVM.