Hi, is it possible to connect Zammad with a custom GPT? It would be great if GPT had access to the content in Zammad to include in its answers.
https://platform.openai.com/docs/actions
Thanks,
Marc
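From the Actions docs, a custom GPT calls out to an HTTP endpoint described by an OpenAPI schema, so I imagine something like a small wrapper around Zammad's ticket search API would be needed. Just a sketch of what I have in mind, with made-up names and env vars, not a working integration:

```python
# Sketch of a bridge a custom GPT Action could call; FastAPI serves the
# OpenAPI schema (at /openapi.json) that the Action configuration needs.
import os

import requests
from fastapi import FastAPI

app = FastAPI(title="Zammad search bridge")

ZAMMAD_URL = os.environ["ZAMMAD_URL"]      # e.g. https://zammad.example.com
ZAMMAD_TOKEN = os.environ["ZAMMAD_TOKEN"]  # token of a read-only agent account

@app.get("/search")
def search_tickets(query: str, limit: int = 5):
    """Return matching tickets so the GPT can use their content in answers."""
    resp = requests.get(
        f"{ZAMMAD_URL}/api/v1/tickets/search",
        params={"query": query, "limit": limit},
        headers={"Authorization": f"Token token={ZAMMAD_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```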
We played a bit with it a while ago, but only for a day. Also, I'm not sure how to integrate it properly.
I remember we had some nice ideas for some RTE tools, e.g. wording improvements or grammar checks.
But if, for example, you want to give answers based on the data in the system, you will need to anonymize the data heavily, otherwise it could become critical in a GDPR context.
Also, you would need to index the entire ticket data and keep it up to date. For bigger systems this will get expensive, since OpenAI's pricing is token-based. It looked pretty expensive to me.
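You can at least estimate this up front: token counts can be computed locally with tiktoken before sending anything to the API. A small sketch; the price per 1K tokens here is only illustrative, check OpenAI's current pricing:

```python
# Estimate what indexing/answering over ticket text would cost, token-wise.
import tiktoken

PRICE_PER_1K_TOKENS = 0.002  # illustrative rate, not a quote of current pricing

def estimate_cost(texts: list[str], model: str = "gpt-3.5-turbo") -> float:
    """Sum the token counts of all texts and convert to an approximate USD cost."""
    enc = tiktoken.encoding_for_model(model)
    total_tokens = sum(len(enc.encode(t)) for t in texts)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# e.g. estimate_cost(all_ticket_bodies) before deciding to index everything
```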
Do you already have experience with OpenAI in a commercial context? If so, feel free to share ideas or use cases. In general, we are also very interested in this topic.
I’m using a bot which answers the tickets. The bot grabs the ticket, anonymizes the content, queries LangChain, and then prompts GPT. The response gets added as a note in Zammad, so that the agent can choose to use that output partially or completely.
The output heavily depends on what is in your LangChain DB, and the token use depends on your setup. I consume about $0.05 to $0.10 per ticket/answer.
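Roughly, the flow looks like this. This is a trimmed-down, untested sketch with placeholder credentials and prompts, not my production code; it assumes zammad_py, presidio, langchain and chromadb, plus an OPENAI_API_KEY in the environment:

```python
# Sketch: fetch ticket -> anonymize with Presidio -> retrieve context from a
# vector DB -> ask GPT -> post the draft back into Zammad as an internal note.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine
from zammad_py import ZammadAPI

client = ZammadAPI(url="https://zammad.example.com/api/v1/",  # placeholders
                   username="bot@example.com", password="secret")
analyzer, anonymizer = AnalyzerEngine(), AnonymizerEngine()

# Vector store previously filled with KB articles / old tickets
db = Chroma(persist_directory="./chroma", embedding_function=OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
                                 retriever=db.as_retriever())

def draft_answer(ticket_id: int) -> None:
    ticket = client.ticket.find(ticket_id)
    text = ticket["title"]  # real code would also pull the article bodies
    # Strip PII before anything leaves the system
    findings = analyzer.analyze(text=text, language="en")
    clean = anonymizer.anonymize(text=text, analyzer_results=findings).text
    answer = qa.run(f"Draft a support reply for this request: {clean}")
    # Internal note: only agents see it and can adopt it partially or fully
    client.ticket_article.create({"ticket_id": ticket_id, "body": answer,
                                  "internal": True})
```

The `internal` flag keeps the draft invisible to the customer until an agent decides to use it.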
This sounds interesting. How did you implement that? Which bot do you use?
Sounds interesting, but imagine 1 million tickets… RIP. I hope that support for AMD graphics cards gets better, so I can play a bit with my gaming PC as well. When we last checked, most of the libraries were focused on NVIDIA graphics cards. I might play a bit with it in my winter holiday.
I think for us Germans it would be easiest to find some kind of on-premise solution, so data is not flying around that much.
```
beautifulsoup4==4.12.2
openai==0.27.6
pandas==1.5.3
python-dotenv==1.0.0
zammad_py==2.0.1
presidio_analyzer==2.2.32
presidio_anonymizer==2.2.32
langchain==0.0.167
tiktoken==0.4.0
chromadb==0.3.22
unstructured==0.6.6
tabulate==0.9.0
pdf2image==1.16.3
pytesseract==0.3.10
requests==2.30.0
```
It’s a Python script embedded in a Python Docker container.
Runs on-premise with 16 GB RAM.
I have only tested a small number of tickets so far; 100-ish tickets took about 15 minutes.
But if somebody comes up with inline suggestions, that would be great.
Hi @HeinzSchrot
I was very happy when I found this post.
I am working on exactly the same thing. I also wanted to add the answers as a note in Zammad so that they can be adopted in whole or in part. It motivates me to see that someone has managed to do it. But you’re already a few steps ahead of me, and I’d be very interested to hear how you went about it in detail, if you are open to that. I have already done first tests with a few tickets and they were very positive, but I don’t have a complete setup and workflow to process all tickets yet.
If I may, a few small questions:
- I only get requests in Zammad via the e-mail channel. When I reply to a ticket in Zammad and the other person replies in turn, their reply (since they reply via e-mail) also contains my reply quoted underneath. This makes it somewhat difficult to create question/answer pairs for longer tickets; I should be able to remove the quoted replies from the text somehow. What I have already tried: if I send the whole ticket history to OpenAI, I can have the last part returned to me. This works quite well but requires many requests. I also found the “Email Reply Parser” on GitHub but haven’t tried it yet (see the sketch after this list). Have you found a solution for that?
- Cool idea with Presidio, I didn’t know it and it’s very helpful. Do you already do the embeddings with cleansed data, or do you only clean the content when you send it to OpenAI for the final response?
- Do you simply save the question-answer pairs, or do you also use a session memory per customer/e-mail address to be able to personalize even more based on each customer’s history?
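For question 1, this is roughly how I would try the Email Reply Parser; there is a Python port on PyPI (email_reply_parser). Untested on my side, and results will vary with the quoting style of the customer’s mail client:

```python
# Strip the quoted history from an incoming e-mail body, so question/answer
# pairs don't contain the agent's own reply twice.
from email_reply_parser import EmailReplyParser

raw_body = """Thanks, that fixed it!

On Tue, May 2, 2023 at 9:14 AM Support <support@example.com> wrote:
> Please try restarting the service first.
"""

latest_reply = EmailReplyParser.parse_reply(raw_body)
print(latest_reply)  # -> "Thanks, that fixed it!"
```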
I’d be glad to hear from you.
@rolfschmidt
For companies where support is part of the product/service and which do not earn money per hour on support, every optimization is also worth money. And the time saved in situations where similar requests often come in is enormous.
In support you write a lot, and often similar things, especially over time once many tickets have already been answered. It’s probably not without reason that many helpdesk providers showcase AI integrations at the top of their websites. For me, in the area of helpdesk/support, it has less to do with hype than with a real increase in efficiency. Of course there are text blocks, but this way the answer is simply even more personalized and still created faster.
You are right, of course, data protection is an important topic, but you could host many things yourself, from LLMs to vector DBs and embedding servers, if this is important to a company, or remove PII from the texts as far as possible.
Hello @HeinzSchrot,
same requirements here. Zammad usually holds a large amount of answered tickets that could be used to optimize the answering process. Local LLM runners like Ollama (with an OpenAI-compatible API) provide a GDPR-compliant way to use this data as a suggestion for support agents with different skill levels in training.
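For example, an existing OpenAI-based script could presumably be pointed at a local Ollama server just by changing the base URL. An untested sketch in the 0.27.x style of the openai package pinned earlier in this thread; the model name and host depend on your local setup:

```python
# Point the openai client at a local Ollama server instead of api.openai.com,
# so ticket data never leaves the premises.
import openai

openai.api_base = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
openai.api_key = "ollama"  # the client requires a key; Ollama ignores it

response = openai.ChatCompletion.create(
    model="mistral",  # any locally pulled model, e.g. `ollama pull mistral`
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)
print(response["choices"][0]["message"]["content"])
```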
Is the code you are using open source, so we can test this?
Generally it would be helpful to know if this is on the roadmap of Zammad, or how it could maybe be added to the roadmap with the help of external resources. Are there already discussions about this? Nextcloud and Open-Xchange are bringing (local) LLM features as part of their suites, so companies that rely on open source products may already be steering in the direction of using their own data (with RAG/embeddings).
Regards,
Philipp
We generally do not communicate ETAs and Roadmaps.
In fact, this topic is not requested that often. The only feature request I could find that fits it has one “like” on its first article.
It could potentially be integrated with privateGPT, which uses local LLMs and follows and extends the OpenAI API standard.
I played around with n8n in this regard. It has a no-code solution to inject different AI models into workflows for many different applications. I also went with the “add possible answer as note” approach. So far, the results were okay-ish with GPT, but poor with Ollama (llama2/mistral). But I haven’t had time yet to improve the additional instructions added to the prompts.
Funny, I did the same with n8n, but only as a quick test. When providing my own content (KB, old tickets) as context through a vector DB, the results for me were quite good (with GPT-4). But the problem I have is generating question/answer pairs for longer tickets: my reply does not contain the customer’s content, but the customer’s response does include my text (since they are replying by e-mail). So somehow I should be able to remove my answer from the customer’s response, to not have it twice in the ticket.
We are currently trying out Zammad and planning to move to it (from shared-mailbox support), and we have a lot of data of questions and responses. A local LLM could be trained on that, and perhaps that would be a useful start for a new response to a question…
I wouldn’t like it if an open source tool like Zammad relied on a commercial LLM and we had to share our internal data with an external AI to get useful responses.
I am not sure if you are allowed to use e-mails from your customers to train even a local LLM.
Dear Heinz, I found your post by accident while searching for such a Zammad solution. Could we get into contact? You might be German as well; at least your name sounds German.
I am from Munich, and I would highly appreciate it if we could exchange some ideas about LLM possibilities for Zammad, as I would like to implement this in my small business.
Thank you
Martin
PS: I searched for a PM function here, but was unsuccessful.
New users are not allowed to send private messages. In the past we unfortunately had people trying to get help directly from other users, which disturbs the purpose of this board. Sorry.
You should be able to now; make sure to get consent.
Thank you @MrGeneration
Just an idea: create an integration of GPT with the webchat, so that you can build up your knowledge base and the user receives a quick automatic message from GPT via webchat. This would significantly improve response times for users.
@Smart089 were you able to get in touch with Heinz? I’m also interested in this flow…