Pretty much as the title says: an integration to summarize the contents of an article or the whole ticket.
Implementation-wise it shouldn’t be too much effort:
Zammad only needs to store the user’s ChatGPT API key and at least one configuration field for the prompt (e.g. “You’re inside a helpdesk system and summarize articles or tickets written in it. Only speak the truth and keep things plain and simple.”).
Once that’s set up, there should be a “Summarize” button inside the ticket. Zammad then sends an API request to ChatGPT (“Please summarize the ticket, which contains [Article1], [Article2]…”) and adds the result as an article to the ticket (maybe just as a draft).
We already implemented that in our own backend using the Zammad API, and it works like a charm. It’d be really convenient to have that directly integrated (maybe with the option to create an answer draft).
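For anyone curious, a minimal sketch of that flow (host and token values are hypothetical; articles are fetched via the Zammad REST API, the summary via the standard OpenAI chat completions endpoint):

```python
import requests

ZAMMAD_URL = "https://zammad.example.com"  # hypothetical instance
ZAMMAD_TOKEN = "..."                       # agent API token
OPENAI_API_KEY = "..."

def summarize_ticket(ticket_id: int) -> str:
    # Fetch every article of the ticket via the Zammad REST API.
    articles = requests.get(
        f"{ZAMMAD_URL}/api/v1/ticket_articles/by_ticket/{ticket_id}",
        headers={"Authorization": f"Token token={ZAMMAD_TOKEN}"},
        timeout=30,
    ).json()
    thread = "\n\n".join(f"{a['from']}: {a['body']}" for a in articles)

    # Ask the model for a summary, using the system prompt described above.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system",
                 "content": "You're inside a helpdesk system and summarize tickets. "
                            "Only speak the truth and keep things plain and simple."},
                {"role": "user", "content": f"Please summarize this ticket:\n\n{thread}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The result could then be posted back to the ticket as a draft or note article.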
I have something like this set up in osTicket. We will soon be transitioning to Zammad, and I am also moving the summarization feature over, so I am interested in your approach. So far I take the full thread, convert it to text, and then prompt a small deepseek-r1 model I host locally. That works nicely. Ideally I want the summary to appear at the end of the ticket in an internal note, but since those seem hard to update, I am currently leaning more towards a custom object to hold the summary.
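For the internal-note variant: creating a fresh note per summary is easy via the articles endpoint (a sketch with placeholder values). As far as I know there is no documented endpoint to edit an existing article afterwards, which would explain the difficulty with updating:

```python
import requests

ZAMMAD_URL = "https://zammad.example.com"  # hypothetical instance
ZAMMAD_TOKEN = "..."

def post_summary_note(ticket_id: int, summary: str) -> None:
    # Append the summary as an internal note (agents only, not visible to the customer).
    requests.post(
        f"{ZAMMAD_URL}/api/v1/ticket_articles",
        headers={"Authorization": f"Token token={ZAMMAD_TOKEN}"},
        json={
            "ticket_id": ticket_id,
            "type": "note",
            "internal": True,
            "subject": "LLM summary",
            "body": summary,
        },
        timeout=30,
    ).raise_for_status()
```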
Well, it’s just a summary at a given point in time. If new articles are created, they are shown below it, so it’s pretty clear that the summary is only valid up to the time it was posted and does not include anything newer.
Yeah. But for my use case I want to always have a current summary at the end, so I use a custom object for it. I don’t really like how that shows up in the UI, though.
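For reference, the custom-object route looks roughly like this, assuming a custom ticket attribute named `llm_summary` was created under Objects; unlike articles, a plain ticket attribute can simply be overwritten, so the summary stays current:

```python
import requests

ZAMMAD_URL = "https://zammad.example.com"  # hypothetical instance
ZAMMAD_TOKEN = "..."

def update_summary_field(ticket_id: int, summary: str) -> None:
    # Custom ticket attributes show up as normal fields on the ticket,
    # so a PUT on the ticket replaces the previous summary.
    requests.put(
        f"{ZAMMAD_URL}/api/v1/tickets/{ticket_id}",
        headers={"Authorization": f"Token token={ZAMMAD_TOKEN}"},
        json={"llm_summary": summary},  # 'llm_summary' is a hypothetical custom attribute
        timeout=30,
    ).raise_for_status()
```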
If anyone works on this or other LLM features, please do not restrict it to a “ChatGPT integration” only, or statically predefine support for a single provider endpoint.
There are many alternative OpenAI-API-compatible HTTP servers, LLM engines, proxies, service providers, and local application network endpoints that function as an “OpenAI Compatible Server”. Please let users or administrators specify a compatible endpoint base URL, API key, and model name, etc.
For example, Anthropic: “We’ve launched an OpenAI-compatible API endpoint, allowing you to test Claude models by changing just your API key, base URL, and model name in existing OpenAI integrations.”
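With the official `openai` Python client, supporting all of these is mostly a matter of making base URL, API key, and model name admin-configurable. A sketch (the Ollama URL below is just one example of a compatible local endpoint):

```python
from openai import OpenAI

# All three values would come from admin configuration,
# not from a hard-coded provider choice.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local Ollama server
    api_key="not-needed-for-local",        # real key when using a hosted provider
)

resp = client.chat.completions.create(
    model="deepseek-r1",  # whatever model the configured endpoint serves
    messages=[{"role": "user", "content": "Please summarize this ticket: ..."}],
)
print(resp.choices[0].message.content)
```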
The above may resolve some or most of the concerns that are usually raised, e.g.:
“will need to anonymize data heavily, else it might be critical in a GDPR context?!”
“for us Germans it would be the easiest to find some kind of on-premise solution so data is not flying around that much.”
etc.; see the longer discussion at Integrate with GPT.
Hello,
Thanks for this inquiry. The Zammad team has been working on exactly this feature on and off for quite a while now, and it will be available on the develop branch soon: summarizing tickets and proposing a course of action (for now). Further AI-assisted features will be worked on in the following months.
If you’re interested in participating in the UX research and influencing further development, please contact me through a private message here or by email.
Hi Ivan,
is there any news on this topic? I’m also really looking forward to an LLM integration with OpenAI-compatible APIs. It would also be helpful to use LLM features in triggers, for example to assign tags or set assignees (one possible external workaround is sketched below).
Best regards
philip
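One way to approximate the trigger idea above today, as a rough sketch: a Zammad trigger fires a webhook to a small external service, which classifies the ticket with whatever LLM is configured and writes a tag back through the tags endpoint (all names and the tag list here are hypothetical):

```python
import requests
from openai import OpenAI

ZAMMAD_URL = "https://zammad.example.com"  # hypothetical instance
ZAMMAD_TOKEN = "..."
client = OpenAI()  # or any OpenAI-compatible endpoint, as above

def auto_tag(ticket_id: int, text: str) -> None:
    # Constrain the model to a fixed allow-list; free-form tags would be risky.
    tag = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Answer with exactly one of: billing, bug, feature-request.\n\n" + text,
        }],
    ).choices[0].message.content.strip()

    # Attach the chosen tag to the ticket.
    requests.post(
        f"{ZAMMAD_URL}/api/v1/tags/add",
        headers={"Authorization": f"Token token={ZAMMAD_TOKEN}"},
        json={"object": "Ticket", "o_id": ticket_id, "item": tag},
        timeout=30,
    ).raise_for_status()
```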