@CacheResult: JCache + CDI to the rescue of microservices?

The JCache API comes with several built-in CDI interceptors, making its usage decoupled from the cache API itself and more user friendly.

Let’s have a look at this API.

CacheResult: the method execution killer

Probably one of the most common cache use cases is avoiding to pay the cost of a method each time you call it.

Reasons can be as different as:

  • Computation done by the method is expensive
  • The method contacts a remote service and you want to cut off the implied latency
  • The method accesses a rate limited resource
  • ….

In this case @CacheResult brings a nice and easy-to-set-up solution. Simply by decorating the method with @CacheResult you avoid the actual method invocation after the first call, for as long as the result stays cached.

Basic usage

Here is a sample using a service simulating a slow method:

package com.github.rmannibucau.demo.jcache;

import javax.cache.annotation.CacheResult;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class InsanelySlowService {
    @CacheResult
    public long compute(final long computationSize) {
        try {
            Thread.sleep(computationSize); // simulate an expensive computation
        } catch (final InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return computationSize;
    }
}

Nothing special except that we added @CacheResult on the method.

Running it using OpenEJB and Apache JCS, the output will look like:

Without cache: 2142
   With cache: 1

It looks good, but what happens behind the scenes? We talked about a cache, so somewhere there should be a cache name and a cache key.

Customizing cache interaction

By default the cache name is the fully qualified name of the method with its signature – i.e. its parameter types – in our case “com.github.rmannibucau.demo.jcache.InsanelySlowService.compute(long)”. This is not a human-friendly cache name but it avoids most conflicts, which is already not that bad.

Since the cache name segregates methods, the only constraint for the cache key is to segregate the parameters, and that is the default behavior of the specification: it just takes the parameters and ensures that the same parameters map to the same key. Of course, ensure you implemented equals() accordingly if you pass custom parameter types.
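For instance, a hypothetical custom parameter type – not part of the original sample – would need equals() and hashCode() so that two invocations with equal arguments hit the same cache entry:

```java
import java.util.Objects;

// hypothetical parameter type; with the default key generator,
// equals()/hashCode() decide whether two calls share a cache entry
class CustomerId {
    private final String tenant;
    private final long id;

    CustomerId(final String tenant, final long id) {
        this.tenant = tenant;
        this.id = id;
    }

    @Override
    public boolean equals(final Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof CustomerId)) {
            return false;
        }
        final CustomerId other = (CustomerId) o;
        return id == other.id && Objects.equals(tenant, other.tenant);
    }

    @Override
    public int hashCode() {
        return Objects.hash(tenant, id);
    }
}
```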

If you want to change these defaults after having thought of a better design with at least the same guarantees, you can override them using @CacheResult parameters.

For the cache name it is not that hard, it is a direct parameter:

@CacheResult(cacheName = "services.sleep")

For the key you need to implement a key generator and a key. For our method we know we have a long value to take into account, so we can keep it simple and just consider this value:

import java.io.Serializable;
import java.lang.annotation.Annotation;

import javax.cache.annotation.CacheKeyGenerator;
import javax.cache.annotation.CacheKeyInvocationContext;
import javax.cache.annotation.GeneratedCacheKey;

public class LongKeyGenerator implements CacheKeyGenerator {
    @Override
    public GeneratedCacheKey generateCacheKey(final CacheKeyInvocationContext<? extends Annotation> cacheKeyInvocationContext) {
        // we know our method has a single long parameter
        return new LongGeneratedCacheKey((Long) cacheKeyInvocationContext.getAllParameters()[0].getValue());
    }
}

class LongGeneratedCacheKey implements GeneratedCacheKey, Serializable {
    private int cachedHash;
    private long val;

    public LongGeneratedCacheKey() {
        // no-op
    }

    public LongGeneratedCacheKey(final long val) {
        this.val = val;
        this.cachedHash = Long.hashCode(val); // this is a cache key so hashCode() will get called often
    }

    @Override
    public int hashCode() {
        return cachedHash;
    }

    @Override
    public boolean equals(final Object obj) {
        return LongGeneratedCacheKey.class.isInstance(obj) && val == LongGeneratedCacheKey.class.cast(obj).val;
    }
}
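The generator is then referenced from the annotation through its cacheKeyGenerator member – combined with the cache name chosen earlier, this could look like (sketch):

```java
@CacheResult(cacheName = "services.sleep", cacheKeyGenerator = LongKeyGenerator.class)
public long compute(final long computationSize) {
    // same body as before
    return computationSize;
}
```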

Note: “javax.cache.annotation.CacheDefaults” also allows you to define the cache name for a whole class if needed, but this is rarely used with @CacheResult. Its main purpose is CRUD services where one method puts into the cache, another one evicts, etc., so all methods share the same cache.
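To illustrate that CRUD pattern, here is a hedged sketch – the entity and method names are invented for the example, only the annotations come from the JCache API:

```java
import javax.cache.annotation.CacheDefaults;
import javax.cache.annotation.CacheKey;
import javax.cache.annotation.CachePut;
import javax.cache.annotation.CacheRemove;
import javax.cache.annotation.CacheResult;
import javax.cache.annotation.CacheValue;

// all methods share the "users" cache thanks to @CacheDefaults
@CacheDefaults(cacheName = "users")
public class UserService {
    @CacheResult
    public String find(@CacheKey final long id) {
        return null; // load from the datastore here
    }

    @CachePut
    public void update(@CacheKey final long id, @CacheValue final String user) {
        // persist; the interceptor puts "user" in the cache
    }

    @CacheRemove
    public void delete(@CacheKey final long id) {
        // delete; the interceptor evicts the entry
    }
}
```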

Advanced usage

By default exceptions are just equivalent to “let it go” invocations, but you can cache them as well if you know some of them can be “normal” or expected. One use case is to cache a ConnectionException if you don’t want to pay a timeout cost for 15 minutes because a service is unavailable and you know it will not be back immediately.

To do so you can pass exception types via the “cachedExceptions” and “nonCachedExceptions” parameters of @CacheResult. Selection is done by checking that the thrown exception type is in cachedExceptions but not in nonCachedExceptions – yes, ensure you avoid exception-hierarchy hell by using it on simple, well-separated exception types ;).
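As a sketch of the annotation usage – the method, cache name and exception choices below are illustrative, not from the original sample:

```java
@CacheResult(
    cacheName = "services.remote",
    cachedExceptions = java.net.ConnectException.class,   // expected outage: cache it
    nonCachedExceptions = IllegalArgumentException.class) // caller bug: never cache it
public String callRemoteService(final String endpoint) {
    // contact the remote service here
    return endpoint;
}
```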

As with “cacheName”, you can define an “exceptionCacheName” for these exceptions, since exceptions and “data” are not merged in the same cache :).

Another advanced usage is caching a result without caring about the cache at the method level. A sample: a scheduled task computes something costly every hour and its result is used, for instance, to build HTTP responses. You may then want to just access the cache in the JAX-RS service building the response, with no real need for the scheduled task to check whether the data is already in the cache. For these cases you can use the “skipGet” parameter of the @CacheResult annotation: instead of behaving as a getOrCompute, @CacheResult will behave as a computeAndPut:

@CacheResult(skipGet = true)
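On the reading side – the JAX-RS service in this scenario – a direct lookup could be sketched as follows. The cache name, key and lookup path are illustrative, and note that the real key shape depends on the key generator used by the writing method:

```java
import javax.cache.Cache;
import javax.cache.Caching;

public class ReportReader {
    public Object latestReport() {
        // resolve the same cache the scheduled task populates via @CacheResult(skipGet = true)
        final Cache<Object, Object> cache = Caching.getCachingProvider().getCacheManager().getCache("hourly.report");
        return cache == null ? null : cache.get("latest");
    }
}
```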

Finally, the last parameter you can use to tweak your cache is “cacheResolverFactory”. This one allows you to tune the way caches are retrieved – both the “data” cache and the exception cache. This is actually quite convenient because it then allows you to fully configure your cache using a custom CachingProvider and CacheManager. One usage is to be able to switch JMX and statistics on/off and to use your application configuration to initialize the cache. Another nice usage is to support disabling caching entirely using an “empty cache” implementation:

import javax.annotation.PreDestroy;
import javax.cache.Cache;
import javax.cache.CacheException;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.annotation.CacheInvocationContext;
import javax.cache.annotation.CacheMethodDetails;
import javax.cache.annotation.CacheResolver;
import javax.cache.annotation.CacheResolverFactory;
import javax.cache.annotation.CacheResult;
import javax.cache.configuration.Configuration;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import java.lang.annotation.Annotation;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

import static java.util.Collections.emptyIterator;
import static java.util.Collections.emptyMap;
import static java.util.Optional.ofNullable;

@ApplicationScoped
public class JCacheResolverFactory implements CacheResolverFactory {
    private final CachingProvider provider;
    private final CacheManager manager;
    private final Function<String, CacheResolver> cacheResolverComputer;

    protected JCacheResolverFactory() { // for proxies
        provider = null;
        manager = null;
        cacheResolverComputer = null;
    }

    @Inject
    public JCacheResolverFactory(final ApplicationConfiguration configuration) {
        final Configuration config = new MutableConfiguration()
            .setStoreByValue(false)
            .setManagementEnabled(configuration.getJCacheConfig().isJmx())
            .setStatisticsEnabled(configuration.getJCacheConfig().isStatistics());

        final Class<?>[] cacheApi = new Class<?>[]{ Cache.class };
        final Function<String, CacheResolver> noCacheResolver = name -> new ConstantCacheResolver(Cache.class.cast(Proxy.newProxyInstance(
            Thread.currentThread().getContextClassLoader(), cacheApi, new EmptyCacheHandler(name, config))));

        if (!configuration.getJCacheConfig().isActivated()) {
            provider = null;
            manager = null;
            cacheResolverComputer = noCacheResolver;
        } else {
            final ClassLoader classLoader = JCacheResolverFactory.class.getClassLoader();
            provider = Caching.getCachingProvider(classLoader);
            manager = provider.getCacheManager(provider.getDefaultURI(), classLoader, configuration.getProperties());

            final Function<String, CacheResolver> activeCacheResolver = name -> new ConstantCacheResolver(ofNullable(manager.getCache(name))
                .orElseGet(() -> {
                    try {
                        return manager.createCache(name, config);
                    } catch (final CacheException ce) { // created concurrently, just look it up
                        return manager.getCache(name);
                    }
                }));

            cacheResolverComputer = name -> (configuration.getJCacheConfig().acceptsCache(name) ? activeCacheResolver : noCacheResolver).apply(name);
        }
    }

    @PreDestroy
    private void shutdownJCache() {
        // release JCache resources on shutdown
        if (manager != null) {
            manager.close();
        }
        if (provider != null) {
            provider.close();
        }
    }

    @Override
    public CacheResolver getCacheResolver(final CacheMethodDetails<? extends Annotation> cacheMethodDetails) {
        return cacheResolverComputer.apply(cacheMethodDetails.getCacheName());
    }

    @Override
    public CacheResolver getExceptionCacheResolver(final CacheMethodDetails<CacheResult> cacheMethodDetails) {
        return cacheResolverComputer.apply(ofNullable(cacheMethodDetails.getCacheAnnotation().exceptionCacheName())
            .filter(name -> !name.isEmpty())
            .orElseThrow(() -> new IllegalArgumentException("CacheResult#exceptionCacheName() not specified")));
    }

    private static class ConstantCacheResolver implements CacheResolver {
        private final Cache<?, ?> delegate;

        public ConstantCacheResolver(final Cache<?, ?> cache) {
            delegate = cache;
        }

        @Override
        @SuppressWarnings("unchecked")
        public <K, V> Cache<K, V> resolveCache(final CacheInvocationContext<? extends Annotation> cacheInvocationContext) {
            return (Cache<K, V>) delegate;
        }
    }

    private static class EmptyCacheHandler implements InvocationHandler {
        private final Map<Method, Object> returns = new HashMap<>();

        public EmptyCacheHandler(final String name, final Configuration<?, ?> configuration) {
            for (final Method m : Cache.class.getMethods()) {
                if (m.getReturnType() == boolean.class) {
                    returns.put(m, false);
                } else if ("getAll".equals(m.getName())) {
                    returns.put(m, emptyMap());
                } else if ("iterator".equals(m.getName())) {
                    returns.put(m, emptyIterator());
                } else if ("getConfiguration".equals(m.getName())) {
                    returns.put(m, configuration);
                } else if ("getName".equals(m.getName())) {
                    returns.put(m, name);
                } // getCacheManager? will return null for now
                // else null is fine for void methods and methods returning a value etc...
            }
        }

        @Override
        public Object invoke(final Object proxy, final Method method, final Object[] args) throws Throwable {
            return returns.get(method);
        }
    }
}
Tip: JCache “components” – key generator, resolver factory… – can be CDI beans if a matching bean is available ;).

Then just specify this one in your @CacheResult:

@CacheResult(cacheResolverFactory = JCacheResolverFactory.class)


As usual, caching needs some serious thought on what to cache and how to set up the cache according to business needs, but the JCache-CDI integration brings an easy-to-use solution once requirements are defined.

Although caching brings challenges, modern applications – and in particular microservice architectures – need it to avoid very poor performance. So let’s embrace it and make your users happy with an insanely fast final product 🙂

Want more content? Keep up to date on my new blog

Or stay in touch on twitter @rmannibucau


9 thoughts on “@CacheResult: JCache + CDI to the rescue of microservices?”

  1. Pingback: Java Weekly 35/15: Patterns, Caching, Hibernate and JSON-P

  2. bmanes

@CacheResult doesn’t protect against stampeding, because it’s required to be non-atomic by the specification. It feels awkward to use at a reasonable scale because the dog-piling effect can negatively impact production or even cause outages. It seems a little risky to advise without a caveat.

    1. rmannibucau Post author

This is a good point but shouldn’t have any impact. First, being non-atomic means you can put multiple times during the “init” phase if the method is used concurrently – right, but this is not an issue, otherwise you wouldn’t use a cache but a consistent system, which wouldn’t scale at all. Transactionality of a cache generally makes it as slow as a database, so you lose all the goodness of caching and only keep its drawbacks. Then the next issue is when you scale instances. In this case you can either get, for a very short period of time (< some 100 ms in practice), a non-consistent result between nodes – which can appear as a live update for clients in the case of a distributed cache – or it can stay non-consistent between nodes if the cache is not distributed. This last case can be an issue or not depending on your application, and one solution can be to distribute the cache if your data are not reference data. So globally this non-atomicity is not an issue in practice, and if it is, you can’t use a cache IMHO.

  3. Ben

    I think you generalized atomicity to the point where my comment sounds absurd. =)

    JCache already requires atomic operations, primarily through entry processors and invoke methods. This allows blocking per-entry writes and non-blocking reads, which is ideal to avoid the dog piling effect. The consistency of the cache is not defined by the specification, so atomic operations can be used on a cache that is eventually consistent with respect to reads and writes. That might seem weird, but is still useful because per-node atomic operations do help avoid stampeding effects.

Guarding against stampedes is a common concern with memcached and redis, which are both much less powerful than JCache API-wise. The techniques required can be a bit brittle, but necessary to safely scale the caching infrastructure. It’s been proven that dog piling is a real problem, so by ignoring it as a minor issue the specification assumes usage in a small-scale environment. That may be true in general, but no one wants to stop dreaming.

    1. rmannibucau Post author

Think we don’t agree on the usage. For me the cache underlying @CacheResult shouldn’t be accessed directly, and then you don’t care much about these issues. Your explanation sounds like a @CachePut/@CacheRemove case.

      1. Ben

@CacheResult computes the value if it is absent from the cache, so it is a non-atomic get-compute-put operation. A cache miss storm, e.g. due to expiration, can result in a large number of redundant computations for a single key. If the computation is expensive, such as a slow database query, then this can increase database latencies or cause an outage of that resource. If @CacheResult was implemented as a get-then-invoke using an entry processor for an atomic computation, then this would avoid the thundering herd problem. It’s the actual implementation of how @CacheResult operates, rather than client usage, that concerns me.

      2. rmannibucau Post author

I see, but if your service is that critical you would already limit its access through a @Singleton or a plain lock/semaphore; @CacheResult doesn’t serve the exact same purpose IMO, no?

      3. Ben

That would definitely help on a per-node level. It requires some care to avoid latency spikes caused by an overly coarse policy, and it means developers must write custom code for each potential hotspot. It’s not uncommon to see caching libraries provide abstractions to handle miss storms, e.g. memcached clients with probabilistic early recompute. Since JCache already has the necessary facilities, my view is that @CacheResult should avoid surprises that require patching the code after a production outage. Developers should expect that the experts worked around the problem or warned of it (with suggested workarounds) in the JavaDoc. I agree that @CacheResult shouldn’t be a rate limiter, but I don’t think it’s unreasonable to expect it to have a little intelligence to guard against a known problem that many caching libraries handle.

      4. rmannibucau Post author

This is an interesting remark which should clearly hit the expert group. Not sure it fits JCache – I would rather see it in concurrency utilities or something like that – but clearly worth discussing for the next version.
