From 05a040c2478341bab8a58a02b3dc1fe14d626d72 Mon Sep 17 00:00:00 2001 From: Ben Manes Date: Sat, 27 Nov 2021 23:08:44 -0800 Subject: [PATCH] Clarify the behavior of getAll if additional keys are loaded This is already documented on the cache loaders for their bulk loading methods: ``` If the returned map contains extra keys not present in {@code keys} then all returned entries will be cached, but only the entries for {@code keys} will be returned from {@code getAll}. ``` --- .../com/github/benmanes/caffeine/cache/AsyncCache.java | 10 ++++++---- .../benmanes/caffeine/cache/BoundedLocalCache.java | 2 +- .../java/com/github/benmanes/caffeine/cache/Cache.java | 4 +++- 3 files changed, 10 insertions(+), 6 deletions(-) diff --git a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/AsyncCache.java b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/AsyncCache.java index 308ddeddc0..3f0c309a05 100644 --- a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/AsyncCache.java +++ b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/AsyncCache.java @@ -108,8 +108,9 @@ CompletableFuture get(@NonNull K key, *
<p>
* A single request to the {@code mappingFunction} is performed for all keys which are not already * present in the cache. If another call to {@link #get} tries to load the value for a key in - * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Note - * that multiple threads can concurrently load values for distinct keys. + * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Any + * loaded values for keys that were not specifically requested will not be returned, but will be + * stored in the cache. Note that multiple threads can concurrently load values for distinct keys. *
<p>
* Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be * ignored. @@ -138,8 +139,9 @@ default CompletableFuture> getAll(@NonNull Iterable * A single request to the {@code mappingFunction} is performed for all keys which are not already * present in the cache. If another call to {@link #get} tries to load the value for a key in - * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Note - * that multiple threads can concurrently load values for distinct keys. + * {@code keys}, that thread retrieves a future that is completed by this bulk computation. Any + * loaded values for keys that were not specifically requested will not be returned, but will be + * stored in the cache. Note that multiple threads can concurrently load values for distinct keys. *
<p>
* Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be * ignored. diff --git a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java index df8250cced..0b35b293c4 100644 --- a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java +++ b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java @@ -1373,7 +1373,7 @@ void afterWrite(Runnable task) { scheduleDrainBuffers(); } - // The maintenance task may be scheduled but not running due. This might occur due to all of the + // The maintenance task may be scheduled but not running. This might occur due to all of the // executor's threads being busy (perhaps writing into this cache), the write rate greatly // exceeds the consuming rate, priority inversion, or if the executor silently discarded the // maintenance task. In these scenarios then the writing threads cannot make progress and diff --git a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java index 618b912ba2..c1d7f95657 100644 --- a/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java +++ b/caffeine/src/main/java/com/github/benmanes/caffeine/cache/Cache.java @@ -105,7 +105,9 @@ public interface Cache { * the value for a key in {@code keys}, implementations may either have that thread load the entry * or simply wait for this thread to finish and return the loaded value. In the case of * overlapping non-blocking loads, the last load to complete will replace the existing entry. Note - * that multiple threads can concurrently load values for distinct keys. + * that multiple threads can concurrently load values for distinct keys. Any loaded values for + * keys that were not specifically requested will not be returned, but will be stored in the + * cache. *
<p>
* Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will be * ignored.
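The behavior this patch documents can be seen in a small example. This is a minimal sketch, not part of the patch itself; it assumes Caffeine on the classpath and uses the `Cache.getAll(keys, mappingFunction)` overload whose javadoc is amended above. The mapping function returns an extra entry (`"c"`) that was not requested: per the documented contract, the extra entry is stored in the cache but omitted from the returned map.

```java
import java.util.Map;
import java.util.Set;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class GetAllExtraKeysExample {
  public static void main(String[] args) {
    Cache<String, Integer> cache = Caffeine.newBuilder().build();

    // The bulk mapping function loads an extra key ("c") beyond what was requested.
    Map<String, Integer> result = cache.getAll(Set.of("a", "b"),
        keys -> Map.of("a", 1, "b", 2, "c", 3));

    // Only the requested keys are returned...
    System.out.println(result.keySet().equals(Set.of("a", "b")));
    // ...but the extra entry was still stored in the cache.
    System.out.println(cache.getIfPresent("c"));
  }
}
```

A subsequent `cache.getAll(Set.of("c"), ...)` would therefore be served from the cache without invoking the mapping function again.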