CacheSerializationException when accessing distributed cache with different versions of FusionCache #329

Open
kylelambert101 opened this issue Nov 12, 2024 · 1 comment

Comments

@kylelambert101

Describe the bug

I've got multiple services accessing a shared Redis distributed cache (with a backplane) to keep user data up-to-date across services.

Today I upgraded FusionCache in one service from v1.2.0 to v1.4.1 and I started seeing this error coming from all services:

ZiggyCreatures.Caching.Fusion.FusionCacheSerializationException: An error occurred while deserializing a cache value
---> MemoryPack.MemoryPackSerializationException: ZiggyCreatures.Caching.Fusion.Serialization.CysharpMemoryPack.Internals.SerializableFusionCacheEntryMetadata property count is 5 but binary's header maked as 6, can't deserialize about versioning.

My understanding of what happened is that Service A, using FusionCache v1.4.1, cached some data using the 6-member version of SerializableFusionCacheEntryMetadata, then Service B, still on FusionCache v1.2.0, tried to deserialize that data and failed due to the different shape of the metadata model.

Reading the docs about wire format versioning, I would have assumed that FusionCache automatically handles cases where two services access the same cache with different versions of FusionCache, but it seems like something's not working here.

I'm wondering if this is a bug with how the wire format mechanism is working or if we have something misconfigured on our end that prevents the cache from handling this use case.

To Reproduce

  • Service A exists with FusionCache 1.2.0
  • Service B exists with FusionCache 1.4.1
  • Both services connect to the same backplane and try to get/set the same cache entry
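For reference, here is a minimal sketch of the kind of registration each service uses (assuming the builder-style setup with the Redis distributed cache, Redis backplane and MemoryPack serializer packages; the connection string and class names reflect my understanding of the packages, not our exact code):

```csharp
using Microsoft.Extensions.Caching.StackExchangeRedis;
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;
using ZiggyCreatures.Caching.Fusion.Backplane.StackExchangeRedis;
using ZiggyCreatures.Caching.Fusion.Serialization.CysharpMemoryPack;

var services = new ServiceCollection();

// Same registration in both services; only the FusionCache package
// version differs (1.2.0 in Service A, 1.4.1 in Service B).
services.AddFusionCache()
    // MemoryPack serializes the distributed (L2) payloads, which is
    // where the metadata shape mismatch shows up
    .WithSerializer(new FusionCacheCysharpMemoryPackSerializer())
    // shared Redis instance used as the distributed (L2) cache
    .WithDistributedCache(new RedisCache(new RedisCacheOptions
    {
        Configuration = "localhost:6379" // placeholder
    }))
    // shared Redis backplane that notifies the other service of changes
    .WithBackplane(new RedisBackplane(new RedisBackplaneOptions
    {
        Configuration = "localhost:6379" // placeholder
    }));
```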

Expected behavior

FusionCache is resilient to multiple versions accessing the same distributed cache (or perhaps this is working as expected and we have it configured wrong)

Versions

I've encountered this issue with:

  • Upgrading FusionCache v1.2.0 -> v1.4.1
  • .NET version: 6
@jodydonetti
Collaborator

Hi @kylelambert101, you are right about the wire format versioning, but I usually change that only for very major changes that are totally incompatible, not even forward compatible (old to new).

Your case instead is one where, by updating FusionCache, the new version can read the existing cached items created by the old version, but the old version cannot read the new ones created by the new version (at least with some serializers: JSON, for example, is way more forgiving).

More generally: running multiple different versions of FusionCache at the same time, sharing the same L2 cache instance (eg: Redis) and the same cache key prefix, is not suggested, since as you observed their cache keys will be exactly the same (eg: they collide) while they try to read/write the same data, but with a slightly different serialized shape.

Anyway, one easy way to solve this while you update all the other apps consuming the same L2 is to add a cache key prefix to the mix: this will avoid any cache key collisions, giving you time to let the multiple versions live together. Then you can remove the prefix once everything has been updated (if you want to keep things clean).
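Something along these lines in the already-updated app (just a sketch assuming the builder-style registration; the "v2:" value and the connection string are placeholders, use whatever fits your setup):

```csharp
services.AddFusionCache()
    .WithOptions(options =>
    {
        // temporary prefix so this app's cache keys don't collide with
        // the entries written by the apps still on the old version
        options.CacheKeyPrefix = "v2:";
    })
    .WithSerializer(new FusionCacheCysharpMemoryPackSerializer())
    .WithDistributedCache(new RedisCache(new RedisCacheOptions { Configuration = "localhost:6379" }))
    .WithBackplane(new RedisBackplane(new RedisBackplaneOptions { Configuration = "localhost:6379" }));
```

Keep in mind that adding (and later removing) the prefix means starting cold for those keys, since the prefixed entries are effectively new entries in L2.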

Something I'm thinking about is that in the future I may change the wire format version identifier for any change at all, to avoid these problems: on the other hand, this would mean less data being shared during updates. But again, the kinds of problems you pointed out would go away, so in the grand scheme of things it may be the better choice.

Thoughts?
