Document updated on May 2, 2021
Caching backend responses
Sometimes you might want to reuse a previous backend response instead of requesting the same information over the network again. In such cases, you can enable in-memory caching for the desired backend responses.
This caching technique applies only to the traffic between KrakenD and your microservices endpoints; it is not a caching system for the end-user endpoints. To enable the cache, you only need to add the qos/http-cache middleware to the configuration file.
When enabled, all responses for the configured backend are cached in memory. The cache key is the final URL sent to the backend (the url_pattern plus any additional parameters). Each response is stored for the duration declared in its Cache-Control header, and there is no way to purge it externally.
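The semantics described above can be illustrated with a minimal sketch (this is not KrakenD's implementation, just an analogy in Python): entries are keyed by the final backend URL and expire after the max-age declared by the response's Cache-Control header.

```python
import re
import time


class ResponseCache:
    """Toy in-memory cache keyed by the final backend URL, honoring a
    Cache-Control max-age directive. Illustrative only."""

    def __init__(self):
        # final URL -> (expiry timestamp, response body)
        self._store = {}

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None
        expiry, body = entry
        if time.monotonic() >= expiry:  # entry expired: drop it and miss
            del self._store[url]
            return None
        return body

    def put(self, url, body, cache_control):
        match = re.search(r"max-age=(\d+)", cache_control)
        if match:  # only responses declaring a lifetime are stored
            self._store[url] = (time.monotonic() + int(match.group(1)), body)


cache = ResponseCache()
cache.put("http://my-service.tld/?page=1", b"payload", "public, max-age=60")
cache.get("http://my-service.tld/?page=1")  # hit while max-age has not elapsed
cache.get("http://my-service.tld/?page=2")  # miss: different final URL
```

Note that a response without a max-age directive is never stored, which mirrors the point above: the backend controls the caching time through its Cache-Control header.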
If you enable this module, be very aware of the response sizes, caching times, and the hit rate of the calls, as all cached content lives in memory.
Enable the caching of the backend services in the backend section of your krakend.json with the middleware:
{
    "extra_config": {
        "qos/http-cache": {}
    }
}
The middleware does not require any configuration other than its inclusion, although the shared attribute can be passed.
See an example with a shared cache:
{
    "endpoint": "/cached",
    "backend": [{
        "url_pattern": "/",
        "host": ["http://my-service.tld"],
        "extra_config": {
            "qos/http-cache": {
                "shared": true
            }
        }
    }]
}
The shared cache allows different backend definitions with this flag enabled to reuse the same cache. When the shared flag is missing or set to false, the backend uses its own private cache.
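For instance, two endpoints whose backends both declare the shared flag can serve each other's cached responses when they resolve to the same final URL. A sketch of such a configuration (the host and endpoint names are hypothetical):

```json
{
    "endpoints": [
        {
            "endpoint": "/list",
            "backend": [{
                "url_pattern": "/items",
                "host": ["http://my-service.tld"],
                "extra_config": {
                    "qos/http-cache": { "shared": true }
                }
            }]
        },
        {
            "endpoint": "/list-again",
            "backend": [{
                "url_pattern": "/items",
                "host": ["http://my-service.tld"],
                "extra_config": {
                    "qos/http-cache": { "shared": true }
                }
            }]
        }
    ]
}
```

If either backend had omitted the shared flag, it would keep its own private cache and the two endpoints would each trigger their own backend requests.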