Conversation
Introduce a functional caching abstraction. The purpose is to provide an extensible API with composable abstractions. Includes implementations for the basic cache strategy (current behavior) and an LRU cache (which wraps BasicCache). A custom caching strategy can be injected via the Reactor constructor. All existing caching behavior remains the same (for now). Tangential to #208.
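A sketch of the injection point being described; the `cache` option name and an exported `LRUCache` are my assumptions about this PR's API, not something confirmed in the thread:

```js
const Nuclear = require('nuclear-js')

// Hypothetical usage: pass a cache strategy alongside existing Reactor options
const reactor = new Nuclear.Reactor({
  debug: false,
  cache: new Nuclear.LRUCache(1000), // assumed: evict once 1000 entries is exceeded
})
```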
We weren't using the notion of a monotonically increasing tick in the LRU cache because an OrderedMap does not re-order existing items on set(), so we depend on remove() + set(). This means the OrderedMap's values are never used, which simplifies it to an OrderedSet.
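A small standalone illustration of the Immutable.js behavior being described (not code from this PR):

```js
const { OrderedSet } = require('immutable')

const order = OrderedSet(['a', 'b', 'c'])

// add() of an existing item does NOT move it to the end...
console.log(order.add('a').toArray())             // [ 'a', 'b', 'c' ]

// ...so recency has to be updated with remove() + add()
console.log(order.remove('a').add('a').toArray()) // [ 'b', 'c', 'a' ]
```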
Why are `lookup`, `has`, and `hit` all necessary? Is a `get` method insufficient?
`lookup` and `has` are simply analogous to `get` and `has` in an `Immutable.Map`.
There seem to be extra operations because NuclearJS is designed to use a persistent data structure for the cache. For a caching abstraction to encapsulate something like LRU, the cache has to record when an item is read from it. Since `lookup` returns the actual value in the cache and we can't perform side effects, a second operation is required to update the cache to track recency.
I meant to add a comment in the code, but this interface is modeled directly after the caching protocol in `clojure.core.cache`, which is also designed for immutable data caches.
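For reference, a rough sketch of the protocol shape being described; the method names come from this thread, but the exact signatures are my assumption:

```js
// Every state-changing operation returns a NEW cache value rather than
// mutating in place, mirroring clojure.core.cache's CacheProtocol.
class Cache {
  lookup(key, notFoundValue) {} // read a cached value; no side effects
  has(key) {}                   // is `key` present in the cache?
  hit(key) {}                   // new cache with `key`'s recency updated
  miss(key, value) {}           // new cache with `key` -> `value` written
  evict(key) {}                 // new cache without `key`
}

// Because lookup() cannot mutate, an LRU read is two explicit steps:
//   const value = cache.lookup(key)
//   cache = cache.hit(key) // rebind to the cache with updated recency
```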
Got it. I figured the reason was to keep the cache construct immutable. IMO it would be simpler for developers to allow mutability and go with `get` + `put`.
My initial approach did this, but changing from an immutable to a mutable cache had negative implications (e.g. when the reactor evaluates using previous state, causing cache churn).
@loganlinn I don't like the interface but do like the approach.
If you call `hit()` on an item that's not actually in the LRUCache (edge case), it's a no-op, so we can just return `this`.
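A sketch of how that guard might look; the field names (`limit`, `cache`, `order`) are assumptions:

```js
class LRUCache {
  constructor(limit, cache, order) {
    this.limit = limit // max number of entries
    this.cache = cache // wrapped BasicCache holding the actual values
    this.order = order // Immutable.OrderedSet tracking recency of use
  }

  hit(key) {
    if (!this.cache.has(key)) {
      return this // edge case: nothing to promote, so reuse this instance
    }
    // promote `key` to most-recently-used via remove() + add()
    return new LRUCache(this.limit, this.cache, this.order.remove(key).add(key))
  }
}
```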
@scjackson Updated to support evicting multiple values from the LRU when the limit is reached.
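Continuing the hypothetical `LRUCache` sketch above, the write path might look like this; the loop covers caches already over the limit (e.g. if the limit was lowered):

```js
class LRUCache {
  // ...constructor and hit() as sketched above...

  miss(key, value) {
    if (this.cache.has(key)) {
      // key already cached: overwrite in place, nothing needs evicting
      return new LRUCache(this.limit, this.cache.miss(key, value),
                          this.order.remove(key).add(key))
    }
    let cache = this.cache
    let order = this.order
    // evict least-recently-used entries until there is room for the new one
    while (order.size >= this.limit) {
      const lru = order.first() // OrderedSet front = least recently used
      cache = cache.evict(lru)
      order = order.remove(lru)
    }
    return new LRUCache(this.limit, cache.miss(key, value), order.add(key))
  }
}
```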
why bother with this case?
It's reasonable that you successfully `lookup` something from the cache but have your own notion of whether it's stale and want to write a new value back. We currently do this with `isDirtyCacheEntry`.
I was more wondering why worry about whether or not the cache has the value; we could just evict LRU values and add the item. But I guess that might lead to having `CACHE_LIMIT - 1` values in cache (evicting on an overwrite frees a slot the write never fills), so nvm.
do you think `cacheLookup` here should also handle the resolution of storeStates versus externalizing it?
If the cache is a hit on a getter then subsequent `evaluate()` calls will always think the value is a hit.

Maybe I am missing something here, but it seems like this is wrong in a lot of cases:

```js
// store state 1
var result1 = reactor.evaluate(getter)

reactor.dispatch('action')

// store state 2
var result2 = reactor.evaluate(getter)

// it would seem that since the reactor cache has an entry for `getter`,
// it will always be a hit
```

Is there a test case to verify this?
> If the cache is a hit on a getter then subsequent `evaluate()` calls will always think the value is a hit.

Ah, good catch. This should be:

```js
const isCacheMiss = !cacheEntry || isDirtyCacheEntry(reactorState, cacheEntry)
if (isCacheMiss) {
  // ...
}
// ...
```

I'm surprised this isn't covered by a test case already. Will add one.
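A rough shape for such a test using the public NuclearJS API (the store and action wiring below is illustrative, not from this PR):

```js
const { Reactor, Store, toImmutable } = require('nuclear-js')

it('re-evaluates a cached getter after a dispatch changes store state', () => {
  const reactor = new Reactor()
  reactor.registerStores({
    counter: Store({
      getInitialState: () => toImmutable({ count: 0 }),
      initialize() {
        this.on('increment', state => state.update('count', c => c + 1))
      },
    }),
  })
  const getter = [['counter', 'count'], count => count * 2]

  expect(reactor.evaluate(getter)).toBe(0) // primes the cache
  reactor.dispatch('increment')
  expect(reactor.evaluate(getter)).toBe(2) // must not return the stale cached 0
})
```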
> do you think `cacheLookup` here should also handle the resolution of storeStates versus externalizing it?

The caches are generalized right now and don't have Nuclear-specific logic. I felt like this was still the right place for that logic.
Design looks good. I like how we can build the LRU cache using the BasicCache. As far as implementation, see my comment regarding false-positive cache hits.
@loganlinn this LGTM. What do you want to do with this in terms of merging and my PR? Thoughts, @jordangarcia?