Used Zammad version: First Zammad: 3.6.X. The second Zammad: 3.3.0
Used Zammad installation source: First Zammad with Package. The second Zammad with Source
Operating system: First Zammad on Ubuntu 20.04.1 LTS. The second Zammad on Ubuntu 18.04.4 LTS
Expected behavior:
First of all: I have two Zammad installations set up and in use, so both databases are populated. One runs on MySQL and the other on PostgreSQL. I want to migrate the MySQL database into the PostgreSQL one.
Now the question: does this work at all? Since both databases are in active use, they have diverged (different auto-increment values, etc.)…
My goal is to turn two Zammad installations into one. BUT, importantly, the tickets must be migrated!
Actual behavior:
I have not tried anything yet, but I found a tutorial on migrating a MySQL database to PostgreSQL with pgloader. However, that tutorial apparently only covers migrating individual tables and NOT the relations between tables.
Steps to reproduce the behavior:
As I said, I have two populated databases (MySQL and PostgreSQL) and I want to migrate the MySQL database, with its complete contents, into the PostgreSQL database.
Okay, I’ll have to jump in here to make sure no harm is done.
Merging two Zammad instances is neither possible nor supported.
While it would technically be possible, it would require a lot of manual work because you’d need to remap all relations.
The merging process described above would probably lead to “merged” tickets or, even worse, inconsistent data. I strongly advise against merging two Zammad databases into the same one.
If you’re talking about two Zammad databases on the same database server then yes, that’s possible, as long as you don’t want to have them in the exact same database. The database.yml within config/ should help you here.
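For reference, a minimal sketch of what `config/database.yml` could look like for one of two installations sharing the same PostgreSQL server; the database name and credentials are placeholders, and the second installation would simply point at its own database:

```yaml
# config/database.yml for the first installation (placeholder values)
production:
  adapter: postgresql
  database: zammad_one       # the second installation would use e.g. zammad_two
  username: zammad
  password: secret
  pool: 50
```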
Please note that you’re on your own with the migration from MySQL to PostgreSQL, as the translation is not trivial and requires further know-how.
Second option mentioned by @olafbuitelaar is actually the best approach.
Option one ignores the migration into the same installation, which leaves you with option two.
When using the console or the API, keep in mind that you cannot reuse existing IDs but may have to look up the new IDs manually. You’ve chosen a hard project here.
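The remapping problem can be sketched as follows. This is not Zammad code; all IDs, record shapes, and the `fake_create` stand-in for a `POST /api/v1/groups` call are invented for illustration. The point is that the target instance assigns fresh primary keys, so every cross-reference must go through a lookup table:

```python
# Sketch of ID remapping when re-creating records via an API.
# Records exported from the source keep their old primary keys, but the
# target assigns new ones, so references must be translated.

def import_groups(source_groups, create_in_target):
    """Re-create groups in the target and return {old_id: new_id}."""
    id_map = {}
    for group in source_groups:
        new_group = create_in_target(group["name"])  # target picks a fresh id
        id_map[group["id"]] = new_group["id"]
    return id_map

def remap_ticket(ticket, group_id_map):
    """Return a copy of the ticket pointing at the new group id."""
    remapped = dict(ticket)
    remapped["group_id"] = group_id_map[ticket["group_id"]]
    remapped.pop("id", None)  # never reuse the source primary key
    return remapped

# Toy stand-in for the target's create call:
_counter = iter(range(100, 200))
def fake_create(name):
    return {"id": next(_counter), "name": name}

groups = [{"id": 1, "name": "Sales"}, {"id": 2, "name": "Support"}]
mapping = import_groups(groups, fake_create)
ticket = {"id": 42, "title": "Printer on fire", "group_id": 2}
print(remap_ticket(ticket, mapping))
# → {'title': 'Printer on fire', 'group_id': 101}
```

The same pattern applies to every relation (users, organizations, states), which is exactly why this is a lot of manual work.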
@MrGeneration @olafbuitelaar
Here is a small update:
We are currently writing a PHP script that fetches the data from the old installation via the API, reassembles each ticket, and writes it into the new installation.
The tests we have done so far have worked quite well.
The only thing we have to adjust in the tickets are the group IDs, which of course is relatively easy to do in PHP.
When we are done and have migrated all tickets, I will report back whether everything worked fine ^^
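The per-ticket step they describe could look roughly like this, shown in Python rather than PHP. `GROUP_MAP` and all sample values are invented; Zammad's ticket-create endpoint accepts an embedded first article, but verify the exact field names against your API version before relying on this:

```python
import json

# Invented mapping: source group id -> target group id
GROUP_MAP = {2: 5}

def build_create_payload(src_ticket, first_article):
    """Build a body for POST /api/v1/tickets on the target system,
    with the group id swapped and the first article embedded."""
    return {
        "title": src_ticket["title"],
        "group_id": GROUP_MAP[src_ticket["group_id"]],
        "customer": src_ticket["customer_email"],
        "article": {
            "subject": first_article["subject"],
            "body": first_article["body"],
            "type": "note",
            "internal": False,
        },
    }

src_ticket = {"id": 42, "title": "Printer on fire",
              "group_id": 2, "customer_email": "user@example.com"}
article = {"subject": "Printer on fire", "body": "It is burning."}
print(json.dumps(build_create_payload(src_ticket, article), indent=2))
```

Further articles would then be POSTed to the articles endpoint with the newly assigned ticket id, per the remapping caveat above.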
We can get to the $TICKET_ID and $ARTICLE_ID, however we have not found a way to read the $ATTACHMENT_ID from the source system’s API.
There seems to be no way to get a JSON response with the $ATTACHMENT_IDs belonging to an article, and a search query (e.g. https://HOSTNAME/api/v1/ticket_attachment/search?ticket_article_id=$TICKET_ARTICLE_ID) does not appear to be possible either.
For the target system it doesn’t matter, here we can simply put the downloaded file into the article and POST it to the target system.
How do we get all $ATTACHMENT_IDs per $ARTICLE_ID from the source system? How does the front-end match $ATTACHMENT_IDs to $ARTICLE_IDs? When hovering over an attachment, the front-end automatically generates the correct link.
You’ll retrieve an array with all articles belonging to the ticket in question.
This payload doesn’t just contain the article ids, but also an array with attachment-meta information including the attachment id:
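Assuming the articles were fetched via the per-ticket endpoint (to my knowledge `GET /api/v1/ticket_articles/by_ticket/{ticket_id}`, and downloads via `/api/v1/ticket_attachment/{ticket_id}/{article_id}/{attachment_id}` — check your version's API docs), the attachment IDs can be read straight out of that metadata. The response below is trimmed to the relevant keys and uses made-up values:

```python
# Extract (article id, attachment id, download URL) triples from the
# article list of a ticket. Endpoint paths are assumptions based on the
# Zammad API; verify them against your installation.
def attachment_urls(host, ticket_id, articles):
    """Yield one (article_id, attachment_id, url) triple per attachment."""
    for article in articles:
        for att in article.get("attachments", []):
            url = (f"{host}/api/v1/ticket_attachment/"
                   f"{ticket_id}/{article['id']}/{att['id']}")
            yield article["id"], att["id"], url

# Trimmed shape of a by_ticket response (invented values):
articles = [
    {"id": 7, "ticket_id": 3,
     "attachments": [{"id": 11, "filename": "invoice.pdf"}]},
]
for art_id, att_id, url in attachment_urls("https://HOSTNAME", 3, articles):
    print(art_id, att_id, url)
# → 7 11 https://HOSTNAME/api/v1/ticket_attachment/3/7/11
```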