
Document updated on May 21, 2025

Unified LLM Interface

KrakenD's Unified LLM Interface lets you interact with one or more LLMs while hiding their complexity from the end user. It sets the ground rules for communicating with each LLM: how to authenticate, and how to transform requests and responses in both directions.

The unified LLM interface helps you to:

  • Route to the right LLM
  • Abstract the request interface
  • Abstract the response interface

Routing to the right LLM

KrakenD’s LLM Routing and AI Proxy feature enables the distribution of AI requests across one or multiple Large Language Models. It supports various routing strategies, including direct proxying, conditional routing based on request headers or policies, JWT claim-based routing, and path-based routing.
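As an illustration, a path-based routing setup could declare one endpoint per provider, each proxying to a different LLM backend. The endpoint/backend structure below follows standard KrakenD configuration, but the concrete paths and hosts are illustrative assumptions, not a copy of the documented schema:

```json
{
  "version": 3,
  "endpoints": [
    {
      "endpoint": "/v1/chat/openai",
      "method": "POST",
      "backend": [
        {
          "host": ["https://api.openai.com"],
          "url_pattern": "/v1/chat/completions"
        }
      ]
    },
    {
      "endpoint": "/v1/chat/anthropic",
      "method": "POST",
      "backend": [
        {
          "host": ["https://api.anthropic.com"],
          "url_pattern": "/v1/messages"
        }
      ]
    }
  ]
}
```

With a layout like this, the client picks the provider simply by calling a different gateway path, while headers, policies, or JWT claims can drive the other routing strategies listed above.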

Read more about LLM Routing

Abstracting the request and response interfaces

The ai/llm namespace simplifies communication with multiple AI providers, since each vendor requires a different payload and communication style. The ai/llm namespace can pre-generate Prompt Templates for a series of vendors.

When talking to the LLM through KrakenD, the user can concentrate on the query rather than on the whole payload needed to interact with the vendor API. The gateway takes care of formatting the API request and response, and lets you treat different LLMs using the same format, so switching providers is straightforward.
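For example, a client could send the gateway a minimal body containing only the question, and let KrakenD expand it into the vendor-specific payload. For an OpenAI-style chat API, the generated request would resemble the following (the model name and system message are illustrative values, not fields mandated by KrakenD):

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful support assistant."
    },
    {
      "role": "user",
      "content": "Summarize this ticket"
    }
  ]
}
```

The same minimal client request could be expanded into an Anthropic-style or any other vendor-style body instead, which is what makes provider switching transparent to the consumer.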

Prompt Templates

Prompt Templates define how the system talks to each individual LLM, respecting its format and payload characteristics, so the user only has to define the prompt.

When you talk to an LLM you need to send a body in a format that differs for each vendor. A Prompt Template is the skeleton used to build the payload you send to the LLM: a predefined template containing the variables the user needs to pass. Instead of letting the end user construct the full body that the LLM will receive (and opening the door to injecting malicious system directives), KrakenD asks only for the prompt. This is not just a text that you send to the LLM: you also have the opportunity to insert placeholders, values that come from the request body, headers, or parameters, and even apply scripting over them.

Prompt Templates offer more control over the types of requests sent to the AI and let you specify the system and user input to constrain the final output.
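Conceptually, a Prompt Template is the vendor payload with the user-controlled parts replaced by placeholders. The sketch below uses a Go-template-style placeholder purely for illustration; the actual variable names and placeholder syntax are defined per vendor, so check the vendor pages for the exact form:

```json
{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "{{ .prompt }}"
    }
  ]
}
```

Because the template fixes everything except the placeholder, the end user can influence only the prompt, never the system directives or the rest of the payload.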

See each vendor’s documentation.

Unresolved issues?

The documentation is only a piece of the help you can get! Whether you are looking for Open Source or Enterprise support, there are more support channels that can help you.

See all support channels