AI: Enable Zammad as a RAG (Retrieval Augmented Generation) data source instead of Chat‑Completions‑only usage

1) What is your original issue / pain point you want to solve?

The current Zammad AI implementation uses the Chat Completions API only and has no access to Zammad data as context (tickets, articles, knowledge base, attachments).
As confirmed by Zammad support, Zammad cannot currently be used as a RAG source, and this functionality is not implemented in the current AI release.

As a result, AI answers are generic, context‑free, and disconnected from the actual data that agents work with inside Zammad.


2) Which are one or two concrete situations where this problem hurts the most?

Situation 1: Ticket handling

An agent asks the AI for help while working on a ticket, but the AI:

  • does not see the current ticket content
  • does not see previous related tickets
  • does not see internal notes or linked articles

This makes the AI unusable for real ticket resolution.

Situation 2: Knowledge reuse

Existing Zammad knowledge base articles already contain validated answers, but the AI cannot reference or reuse them, leading to:

  • duplicated work
  • inconsistent answers
  • loss of trust in AI responses

3) Why is it not solvable with the Zammad standard?

According to Zammad support, the current AI implementation:

  • uses the Chat Completions API
  • does not implement RAG
  • does not allow Zammad to act as a data source for AI context

There is currently no configuration, API, or workaround in the Zammad standard that allows controlled, permission‑based context injection from Zammad into AI requests.


4) What is your expectation / what do you want to achieve?

We want Zammad to support a RAG‑based AI architecture, where Zammad itself can act as a structured knowledge source.

Expected capabilities:

  • Use tickets, articles, and optionally attachments as retrievable context
  • Scope context by:
    • ticket
    • queue
    • role / permission
  • Pass retrieved context to the LLM in a controlled and auditable way
  • Enable AI answers that are:
    • relevant
    • traceable
    • grounded in actual Zammad data
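To make the request concrete, here is a minimal sketch of what permission-scoped retrieval with traceable context injection could look like. All names (`ContextChunk`, `retrieve_context`, `build_prompt`, the permission strings) are hypothetical illustrations, not part of any existing Zammad API; the keyword-overlap scoring is a stand-in for a real retrieval method such as vector search.

```python
from dataclasses import dataclass

@dataclass
class ContextChunk:
    source_id: str            # e.g. a ticket number or KB article ID
    text: str
    required_permission: str  # permission an agent must hold to see this chunk

def retrieve_context(query_terms: set[str],
                     chunks: list[ContextChunk],
                     agent_permissions: set[str],
                     limit: int = 3) -> list[ContextChunk]:
    """Return the best-matching chunks the requesting agent may see.

    Permission filtering happens BEFORE scoring, so content an agent
    is not allowed to read can never leak into the prompt.
    """
    allowed = [c for c in chunks if c.required_permission in agent_permissions]
    scored = sorted(allowed,
                    key=lambda c: len(query_terms & set(c.text.lower().split())),
                    reverse=True)
    return scored[:limit]

def build_prompt(question: str, context: list[ContextChunk]) -> str:
    """Inject retrieved context, tagged with source IDs, so every
    AI answer can be traced back to concrete Zammad records."""
    ctx = "\n".join(f"[{c.source_id}] {c.text}" for c in context)
    return f"Context:\n{ctx}\n\nQuestion: {question}"
```

The key design point is that scoping (ticket, queue, role/permission) is enforced in the retrieval layer itself, and the source IDs embedded in the prompt provide the audit trail.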

This is a prerequisite for productive AI usage in professional and compliance‑sensitive environments.


Additional useful information

Technical context

  • Current approach: Chat‑Completions‑only, no retrieval layer
  • Desired approach: Retrieval layer + context injection (RAG), independent of the specific LLM provider
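Provider independence could be achieved by keeping the retrieval layer behind a narrow interface, so Zammad never depends on one specific LLM vendor. The sketch below assumes hypothetical names (`LLMProvider`, `answer_with_rag`, `EchoProvider`); it only illustrates the separation of concerns, not an actual Zammad design.

```python
from typing import Callable, Protocol

class LLMProvider(Protocol):
    """Minimal interface any LLM backend would have to satisfy."""
    def complete(self, prompt: str) -> str: ...

def answer_with_rag(question: str,
                    retrieve: Callable[[str], list[str]],
                    provider: LLMProvider) -> str:
    """Retrieve Zammad-side context first, then pass it to whichever
    provider is configured; the retrieval logic stays provider-agnostic."""
    context = retrieve(question)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    return provider.complete(prompt)

class EchoProvider:
    """Stand-in provider for local testing; a real implementation
    would call an external LLM API here."""
    def complete(self, prompt: str) -> str:
        return prompt.splitlines()[-1]
```

Because the provider only ever sees the finished prompt, swapping vendors (or self-hosted models) requires no change to the retrieval layer.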

Why this matters

Zammad is the single source of truth for support knowledge.
An AI without access to this data cannot deliver meaningful value and will remain a novelty feature rather than a productivity tool.


Your Zammad environment

  • Average concurrent agent count: ~20
  • Average tickets a day: ~150
  • What roles/people are involved:
    • Support agents
    • Team leads
    • IT / System administrators

Anything else which you think is useful to understand your use case

This feature is especially important for:

  • Enterprise customers
  • Legal / compliance‑driven organizations
  • Teams that require strict permission boundaries and explainable AI answers

Without RAG support, AI answers cannot be trusted in regulated environments.