Integrate with GPT

Hi, is it possible to connect Zammad with a custom GPT? It would be great if GPT had access to the content in Zammad to include it in its answers.
https://platform.openai.com/docs/actions
Thanks,
Marc

We played around with it a while ago, but only for a day. We're also not sure how to integrate it properly.

I remember we had some nice ideas for some RTE (rich text editor) tools, e.g. wording improvements or grammar checks.

But if you want to give answers based on the data in the system, for example, you will need
to anonymize the data heavily, otherwise it could be critical in a GDPR context.

Also, you would need to index the entire ticket data and keep it up to date. For bigger systems this will get pretty expensive: OpenAI's pricing is token based, and it looked pretty expensive to me.
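To get a feel for those numbers, here is a rough back-of-the-envelope sketch. Everything in it is an assumption: the ~4 characters per token rule of thumb is a common heuristic, and the per-1K-token price varies by model and changes over time, so check OpenAI's current pricing page before relying on it.

```python
def estimate_embedding_cost(num_tickets, avg_chars_per_ticket,
                            usd_per_1k_tokens=0.0001):
    """Rough one-time cost of embedding all ticket text (all numbers are assumptions)."""
    tokens_per_ticket = avg_chars_per_ticket / 4  # ~4 chars/token heuristic
    total_tokens = num_tickets * tokens_per_ticket
    return total_tokens / 1000 * usd_per_1k_tokens

# Example: 1 million tickets, ~2000 characters each
cost = estimate_embedding_cost(1_000_000, 2000)
print(f"~${cost:,.2f} to embed once")
```

Embedding once is comparatively cheap; the completion calls per answer (like the $0.05-$0.10 per ticket mentioned below) are where costs add up on busy systems.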

Do you already have experience with OpenAI in a commercial context? If so, feel free to share ideas or use cases. In general, we are also very interested in this topic.

I’m using a bot which answers tickets. The bot grabs the ticket, anonymizes the content, queries LangChain, and then prompts GPT. The response gets added as a note in Zammad, so that the agent can choose to use that output partially or completely.
The output heavily depends on what is in your LangChain DB. The token use depends on your setup; I spend about $0.05-$0.10 per ticket/answer.
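The flow described above could be sketched roughly like this. The Zammad endpoints (`/api/v1/ticket_articles/by_ticket/...` and `POST /api/v1/ticket_articles`) and the `Token` auth header come from Zammad's REST API; the URL, token, the naive `mask_pii` helper (a stand-in for Presidio), and the `llm` callable (standing in for the LangChain chain) are placeholders, not the author's actual code.

```python
import re

ZAMMAD_URL = "https://zammad.example.com"                  # assumption: your instance
HEADERS = {"Authorization": "Token token=YOUR_API_TOKEN"}  # Zammad API token

def mask_pii(text):
    """Very naive stand-in for Presidio: mask e-mails and phone-like numbers."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", text)
    text = re.sub(r"\+?\d[\d /-]{7,}\d", "<PHONE>", text)
    return text

def answer_ticket(ticket_id, llm):
    """llm is any callable str -> str, e.g. a LangChain chain wrapped in a function."""
    import requests  # pinned in the requirements list below

    # Fetch all articles of the ticket and take the latest message.
    articles = requests.get(
        f"{ZAMMAD_URL}/api/v1/ticket_articles/by_ticket/{ticket_id}",
        headers=HEADERS,
    ).json()
    question = mask_pii(articles[-1]["body"])

    # Ask the model, then attach the draft as an internal note so only
    # agents see it and can reuse it partially or completely.
    draft = llm(question)
    requests.post(
        f"{ZAMMAD_URL}/api/v1/ticket_articles",
        headers=HEADERS,
        json={"ticket_id": ticket_id, "body": draft,
              "type": "note", "internal": True},
    )
```

The important design choice is `"internal": True`: the draft never goes to the customer directly, so the agent always stays in the loop.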

This sounds interesting. How did you implement that? Which bot do you use?


Sounds interesting, but imagine 1 million tickets, RIP :laughing: I hope that support for AMD graphics cards gets better, so I can play a bit with my gaming PC as well… When we last checked, most of the libraries were focused on Nvidia graphics cards. I might play a bit with it in my winter holiday.

I think for us Germans it would be easiest to find some kind of on-premise solution so data is not flying around that much.

beautifulsoup4==4.12.2
openai==0.27.6
pandas==1.5.3
python-dotenv==1.0.0
zammad_py==2.0.1
presidio_analyzer==2.2.32
presidio_anonymizer==2.2.32
langchain==0.0.167
tiktoken==0.4.0
chromadb==0.3.22
unstructured==0.6.6
tabulate==0.9.0
pdf2image==1.16.3
pytesseract==0.3.10
requests==2.30.0

It’s a Python script running in a Python Docker container.
Runs on premise with 16 GB RAM.

I only tested a small number of tickets so far; around 100 tickets took about 15 minutes.

But if somebody comes up with inline suggestions, that would be great.

Hi @HeinzSchrot
I was very happy when I found this post. :smiley:

I am working on exactly the same thing. I also wanted to add the answers as a note in Zammad so that they can be adopted in whole or in part. It motivates me to see that someone has managed to do it. But you’re already a few steps ahead of me; I’d be very interested to hear how you went about it in detail, if you are open to that. I have already done first tests with a few tickets and they were very positive, but I don’t have a complete setup and workflow to process all tickets yet.

If I may, a few small questions:

  • I only receive requests in Zammad via the e-mail channel. When I reply to a ticket and the other person replies in turn, their reply (since it comes via e-mail) also contains my reply quoted underneath. This makes it difficult to create question/answer pairs for longer tickets; I would need to remove the quoted replies from the text somehow.
    What I have already tried: if I send the whole ticket history to OpenAI, I can have just the last part returned to me. This works quite well but requires many requests. I also found the “Email Reply Parser” on GitHub but haven’t tried it yet.
    Have you found a solution for that?

  • Cool idea with Presidio, I didn’t know it and it’s very helpful.
    Do you already create the embeddings from cleansed data? Or do you only clean the content when you send it to OpenAI for the final response?

  • Do you simply save the question-answer pairs, or do you also use a session memory per customer/e-mail address to personalize even more based on each customer's history?
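On the quoted-reply problem from the first question: a cheap alternative to sending the whole history to OpenAI is to cut the body at common reply markers. This is only a naive sketch; the marker patterns are my assumptions covering typical English/German mail clients, and a dedicated library like the mentioned "Email Reply Parser" handles far more cases.

```python
import re

# Assumed marker patterns for the start of a quoted reply.
REPLY_MARKERS = [
    r"^On .+ wrote:$",                         # Gmail / Apple Mail
    r"^Am .+ schrieb .+:$",                    # German clients
    r"^-{2,}\s*Original Message\s*-{2,}$",     # classic Outlook separator
    r"^From: .+$",                             # Outlook top-posted header
    r"^>",                                     # plain quoted line
]
MARKER_RE = re.compile("|".join(REPLY_MARKERS), re.IGNORECASE)

def strip_quoted_reply(body):
    """Keep only the lines before the first reply marker."""
    kept = []
    for line in body.splitlines():
        if MARKER_RE.match(line.strip()):
            break
        kept.append(line)
    return "\n".join(kept).strip()

mail = "Thanks, that fixed it!\n\nOn Mon, 5 Jun 2023, Support wrote:\n> Please restart."
print(strip_quoted_reply(mail))  # -> Thanks, that fixed it!
```

This keeps everything local and free, at the cost of missing exotic client formats, which is often an acceptable trade-off for building question/answer pairs in bulk.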

I would be pleased to read from you.

@rolfschmidt
For companies where support is part of the product/service and which do not earn money per hour on support, every optimization is also worth money. And the time saved in situations where similar requests come in often is enormous.

In support you write a lot, and often similar things, particularly over time once many tickets have already been answered. It’s probably not without reason that many helpdesk providers showcase AI integrations at the top of their websites. For me, in the helpdesk/support area it has less to do with hype than with a real increase in efficiency. Of course there are text blocks, but this way the answer is simply even more personalized and still created faster.

You are right, of course: data protection is an important topic. But you could host many things yourself, from LLMs to vector DBs and embedding servers, if this is important to a company, or remove PII from the texts as far as possible.