Migration from MySQL to PostgreSQL

Infos:

  • Used Zammad version: 6.4
  • Used Zammad installation type: (source, package, docker-compose, …) package
  • Operating system: Ubuntu 20.04
  • Browser + version:

Expected behavior:

  • I want to migrate from MySQL to PostgreSQL, but when using pgloader I receive two errors:
  1. 2025-07-15T13:43:01.014000Z LOG pgloader version "3.6.10~devel"
     KABOOM!
     ESRAP-PARSE-ERROR: At
  2. EASE

Actual behavior:

  • The migration should complete successfully.

Steps to reproduce the behavior:

  • Tried with a higher pgloader version via Docker.

Please let us know what exactly you did, and in which order. I do expect that you have followed Zammad's documentation…? Migrate to PostgreSQL Server — Zammad System Documentation

Yes, I followed the official documentation, but the process always gets stuck at the users table and is later killed automatically.
```log
2025-07-23T12:53:01.036000Z NOTICE CREATE INDEX idx_1261207_index_ticket_article_flags_on_articles_id_and_created_by_id ON public.ticket_article_flags (ticket_article_id, created_by_id);
2025-07-23T12:53:01.037000Z WARNING PostgreSQL warning: identifier "idx_1261207_index_ticket_article_flags_on_articles_id_and_created_by_id" will be truncated to "idx_1261207_index_ticket_article_flags_on_articles_id_and_creat"
2025-07-23T12:53:01.038000Z NOTICE CREATE INDEX idx_1261207_index_ticket_article_flags_on_ticket_article_id_and_key ON public.ticket_article_flags (ticket_article_id, key);
2025-07-23T12:53:01.039000Z WARNING PostgreSQL warning: identifier "idx_1261207_index_ticket_article_flags_on_ticket_article_id_and_key" will be truncated to "idx_1261207_index_ticket_article_flags_on_ticket_article_id_and"
2025-07-23T12:53:01.039000Z NOTICE CREATE INDEX idx_1261207_index_ticket_article_flags_on_ticket_article_id ON public.ticket_article_flags (ticket_article_id);
2025-07-23T12:53:01.039000Z NOTICE CREATE INDEX idx_1261207_index_ticket_article_flags_on_created_by_id ON public.ticket_article_flags (created_by_id);
2025-07-23T12:53:01.083000Z NOTICE CREATE UNIQUE INDEX idx_1261269_primary ON public.ticket_time_accountings (id);
2025-07-23T12:53:01.091000Z NOTICE CREATE INDEX idx_1261269_index_ticket_time_accountings_on_created_by_id ON public.ticket_time_accountings (created_by_id);
2025-07-23T12:53:01.092000Z NOTICE CREATE INDEX idx_1261269_index_ticket_time_accountings_on_time_unit ON public.ticket_time_accountings (time_unit);
2025-07-23T12:53:01.092000Z NOTICE CREATE INDEX idx_1261269_fk_rails_e9be901ba1 ON public.ticket_time_accountings (type_id);
2025-07-23T12:53:01.093000Z NOTICE CREATE INDEX idx_1261269_index_ticket_time_accountings_on_ticket_article_id ON public.ticket_time_accountings (ticket_article_id);
2025-07-23T12:53:01.097000Z NOTICE CREATE INDEX idx_1261269_index_ticket_time_accountings_on_ticket_id ON public.ticket_time_accountings (ticket_id);
```
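The truncation warnings above are harmless and expected: PostgreSQL limits identifiers to NAMEDATALEN − 1 = 63 bytes and silently truncates anything longer. A minimal sketch of the rule, using the index name from the warning:

```python
# PostgreSQL silently truncates identifiers to NAMEDATALEN - 1 = 63 bytes.
# The index name below is copied from the WARNING in the log above.
name = "idx_1261207_index_ticket_article_flags_on_articles_id_and_created_by_id"

# For a pure-ASCII name the 63-byte limit equals a 63-character limit.
truncated = name[:63]
print(truncated)  # matches the identifier PostgreSQL says it will store
```

Both truncated names here remain unique, so the indexes are still created; collisions would only occur if two long names shared the same first 63 bytes.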

You're not really making any sense and, besides that, your console commands and error output are incomplete. Please post the full steps you've followed, the commands you used and the output which was returned. Also please use the text formatting tools to make your command/log output more easily readable.


Commands Used

1. Generated the pgloader command file:

```bash
RAILS_ENV=production rake zammad:db:pgloader > /tmp/pgloader-command
```

2. Edited /tmp/pgloader-command to point to the correct database:

```lisp
LOAD DATABASE
  FROM mysql://zammad:<password>@10.11.21.102/zammad
  INTO pgsql://zammad:<password>@10.11.21.105/zammad

WITH include no drop, create no tables, preserve index names

-- AFTER LOAD DO
--  $$ alter table smime_certificates alter column email_addresses type varchar[] using translate(email_addresses::varchar, '[]', '{}')::varchar[] $$,
--  $$ alter table pgp_keys alter column email_addresses type varchar[] using translate(email_addresses::varchar, '[]', '{}')::varchar[] $$,
--  $$ alter table public_links alter column screen type varchar[] using translate(screen::varchar, '[]', '{}')::varchar[] $$

SET timezone = 'UTC'
SET client_timezone TO '00:00'
;
```
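A note on the commented-out AFTER LOAD DO block: in the Zammad-generated command file those statements convert JSON-style text columns into real varchar[] arrays. If they are skipped during the load, the same statements (uncommented from the block above) can be run manually in psql afterwards, provided those tables exist in your schema:

```sql
ALTER TABLE smime_certificates ALTER COLUMN email_addresses TYPE varchar[] USING translate(email_addresses::varchar, '[]', '{}')::varchar[];
ALTER TABLE pgp_keys ALTER COLUMN email_addresses TYPE varchar[] USING translate(email_addresses::varchar, '[]', '{}')::varchar[];
ALTER TABLE public_links ALTER COLUMN screen TYPE varchar[] USING translate(screen::varchar, '[]', '{}')::varchar[];
```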

3. Ran pgloader:

```bash
pgloader --verbose /tmp/pgloader-command
```

Observed Behavior / Issue

  • The migration process successfully creates tables and starts copying.
  • It completes most tables, but when reaching these tables, it repeatedly hangs:
    • ticket_article_flags
    • ticket_time_accountings
    • ticket_shared_draft_starts
  • These tables have 0 rows in some cases but are part of large FK chains.
  • After index creation, pgloader hangs with no new output, even after 20+ minutes.

What I’ve Already Tried

  • Added WITH include no drop, create no tables to avoid reprocessing
  • Removed AFTER LOAD DO blocks that caused hangs on non-existent tables
  • Cleaned /tmp/pgloader/
  • Enabled swap and freed >20GB disk space
  • Verified counts in PostgreSQL via:

```sql
SELECT COUNT(*) FROM users;
SELECT COUNT(*) FROM tickets;
SELECT COUNT(*) FROM ticket_articles;
```

Additional Notes

  • Total MySQL DB size: ~33.4 GB

  • Table users has ~18,000 rows with valid created_by_id and no NULL FK issues

  • All logs captured from /tmp/pgloader/pgloader.log

  • The migration runs and completes most tables.

  • However, when it reaches the following tables, it hangs indefinitely:

```text
public.ticket_article_flags
public.ticket_time_accountings
public.ticket_shared_draft_starts
```
  • These are the last lines from /tmp/pgloader/pgloader.log:

```log
NOTICE DONE copying public.ticket_time_accountings in 0.000s
NOTICE CREATE INDEX idx_xxx ON public.ticket_time_accountings (ticket_id);
NOTICE CREATE INDEX idx_xxx ON public.ticket_time_accountings (created_by_id);
NOTICE CREATE INDEX idx_xxx ON public.ticket_time_accountings (ticket_article_id);
NOTICE CREATE INDEX idx_xxx ON public.ticket_time_accountings (time_unit);
```
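If it seems stuck at this point, one way to tell "still working" from "actually hanging" is to look at what the PostgreSQL backends are doing. A sketch using the standard pg_stat_activity view (run in psql on the target database):

```sql
-- Show non-idle backends, their wait state and how long the current query has run
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;
```

A backend with a growing runtime and no wait event is still working; one waiting on a lock or an idle session while pgloader produces no output points at a genuine hang.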

Are you completely sure that the import is actually hanging? It might be busy processing large tables. Could you just let it run patiently overnight, without touching it?

Also, did you check the system health during the import? No disks or partitions filling up to 100% causing the import to freeze? No Out-Of-Memory situations, crashes, or processes getting killed? Does the PostgreSQL logging show any errors, or perhaps any other system logs?


Most certainly full storage, or it is simply still importing the data.
Move the attachments out of the database before you migrate the database type. It will be -much- faster and less of a pain in your butt.

```log
zammad@manitu-ticket01:/opt/zammad$ pgloader --verbose --dry-run /tmp/zammad_pgloader_manual.load
2025-07-25T14:17:57.002000Z NOTICE Starting pgloader, log system is ready.
2025-07-25T14:17:57.006000Z LOG pgloader version "3.6.10~devel"
2025-07-25T14:17:57.026000Z LOG DRY RUN, only checking connections.
2025-07-25T14:17:57.026000Z LOG Attempting to connect to #<MYSQL-CONNECTION mysql://zammadticket@10.20.20.123:3306/zammadticket {10063397C3}>
2025-07-25T14:17:57.039000Z FATAL Failed to connect to #<MYSQL-CONNECTION mysql://zammadticket@10.20.20.123:3306/zammadticket {10063397C3}>: Failed to connect to mysql at "10.20.20.123" (port 3306) as user "zammadticket": Condition QMYND:MYSQL-UNSUPPORTED-AUTHENTICATION was signalled.
2025-07-25T14:17:57.039000Z LOG Attempting to connect to #<PGSQL-CONNECTION pgsql://zammad@10.20.20.21:5432/zammad {100633AD43}>
2025-07-25T14:17:57.047000Z FATAL Failed to connect to #<PGSQL-CONNECTION pgsql://zammad@10.20.20.21:5432/zammad {100633AD43}>: Failed to connect to pgsql at "10.20.20.21" (port 5432) as user "zammad": 10 fell through ECASE expression. Wanted one of (0 2 3 4 5 6 7 8).
2025-07-25T14:17:57.047000Z LOG report summary reset
table name errors read imported bytes total time read write
zammad@manitu-ticket01:/opt/zammad$
```
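For what it's worth, both FATAL lines look like authentication-plugin mismatches rather than wrong passwords: QMYND:MYSQL-UNSUPPORTED-AUTHENTICATION is what the MySQL driver raises when a MySQL 8 user is on the default caching_sha2_password plugin, and "10 fell through ECASE expression" is the classic symptom of an older pgloader build that does not understand PostgreSQL's scram-sha-256 authentication. A sketch of the usual workarounds; the user names are taken from the log above and must be checked against your own setup:

```sql
-- MySQL 8: switch the migration user to the legacy auth plugin pgloader supports
ALTER USER 'zammadticket'@'%' IDENTIFIED WITH mysql_native_password BY '<password>';
```

```sql
-- PostgreSQL: re-store the password as md5 so older pgloader builds can authenticate
-- (also requires an md5 rule for this user in pg_hba.conf)
SET password_encryption = 'md5';
ALTER USER zammad WITH PASSWORD '<password>';
```

Alternatively, a recent pgloader build with SCRAM support avoids the PostgreSQL-side change entirely.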

This is another error.

The zammadticket user's password in MySQL is already stored in mysql_native_password format.

And when the server is accessed directly, without pgloader, it is reachable:

```log
zammad@manitu-ticket01:/opt/zammad$ mysql -h 10.20.20.123 -u zammadticket -p'io4A#hxeem4#ookaij#' zammadticket
mysql: [Warning] Using a password on the command line interface can be insecure.
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 39330951
Server version: 8.0.42-33 Percona Server (GPL), Release '33', Revision '9dc49998'

Copyright (c) 2000, 2025, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit;
Bye
zammad@manitu-ticket01:/opt/zammad$
```

But when connecting through pgloader, it gives the error above.

pgloader is reporting connection and credential issues for both your PostgreSQL and MySQL servers.

I did the postgres part and actually just put it into my search engine. The first result returns this:

The password is the problem in that case.

Hi, thanks for your help. The migration completed successfully.

But I'm now getting a "CSRF TOKEN FAILED" error.

I tried following the official Zammad tip:
"Within your virtual host configuration, locate both proxy_set_header X-Forwarded-Proto directives and replace $scheme with https."

But I still get the same error. My Zammad services, including zammad-websocket, are running fine.

But my logs say:

```log
, upstream: "http://127.0.0.1:3000/api/v1/message_send", host: "10.20.20.154:8080", referrer: "http://10.20.20.154:8080/"
2025/08/05 14:26:22 [error] 43914#43914: *55 connect() failed (111: Connection refused) while connecting to upstream, client: 10.200.1.51, server: 10.20.20.154, request: "GET /ws HTTP/1.1", upstream: "http://127.0.0.1:6042/ws", host: "10.20.20.154:8080"
```

Kindly help. Thanks

CSRF token errors don't just magically appear due to the migration from MySQL to PostgreSQL.
Given that your virtual host file is already correct, my guess is that your fully qualified hostname and possibly the HTTP type are set incorrectly.

This can be set via the UI or, if you can't use the UI approach any more, via the rails console as well.
See the documentation for more.
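For completeness, a sketch of the rails console approach; the hostname and HTTP type below are assumptions taken from the URLs in the posts above and must be adapted to the actual setup:

```ruby
# Run inside the Zammad rails console (zammad run rails c):
Setting.set('fqdn', '10.20.20.154:8080')  # the exact host (and port) used in the browser
Setting.set('http_type', 'http')          # use 'https' only if TLS actually terminates in front of Zammad
```

If these don't match the address the browser uses, Zammad rejects requests with exactly this kind of CSRF token failure.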