[Discussion] Allow for individual cache items to be saved instead of current single-key implementation #475

Open
olyop opened this issue Feb 5, 2023 · 1 comment



olyop commented Feb 5, 2023

I thought I would open a discussion about this, as I've seen it raised here in the past.

Currently this implementation saves the entire cache as a single value. From this I assume that whenever the cache is checked, a full read, parse, and search of the entire cache happens. Since it is stored as JSON, it must be using JSON.parse and JSON.stringify.
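
For context, my mental model of the current approach is roughly the following (just a sketch; the key name and the use of window.localStorage are placeholders, not necessarily what the library does internally):

```ts
import { InMemoryCache } from '@apollo/client';

// Rough mental model only, not the library's actual code. The storage key
// name and window.localStorage are placeholders.
const cache = new InMemoryCache();
const KEY = 'apollo-cache-persist';

function persistWholeCache(): void {
  // The whole normalized cache is extracted and written as one JSON string.
  window.localStorage.setItem(KEY, JSON.stringify(cache.extract()));
}

function restoreWholeCache(): void {
  const raw = window.localStorage.getItem(KEY);
  if (raw) {
    // Restoring reads and parses the entire blob in one go.
    cache.restore(JSON.parse(raw));
  }
}
```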

If the cache were stored as normalized entries, wouldn't it be much more efficient to read/write only the cache item that's actually being used? For instance, it could make use of IndexedDB.
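
To illustrate the idea (purely hypothetical, not how the library works today): each normalized entity could live under its own IndexedDB key, here via localforage, so updating one entity only rewrites that single record.

```ts
import localforage from 'localforage';
import type { StoreObject } from '@apollo/client';

// Hypothetical per-entity storage: one IndexedDB record per normalized entity.
const entityStore = localforage.createInstance({ name: 'apollo-cache-entities' });

async function writeEntity(id: string, entity: StoreObject): Promise<void> {
  // localforage (backed by IndexedDB) stores structured values directly,
  // so no JSON.stringify of the whole cache is involved.
  await entityStore.setItem(id, entity);
}

async function readEntity(id: string): Promise<StoreObject | null> {
  return entityStore.getItem<StoreObject>(id);
}
```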

I have no idea whether this would actually improve performance significantly, or whether there wouldn't be a noticeable difference since JSON parsing is one of the fastest native operations in Chromium, but I'm sure there would be wins in other areas.

wodCZ (Collaborator) commented Sep 10, 2023

> Currently this implementation saves the entire cache as a single value

Actually, that depends on the `serialize` option. With `serialize: false` and a compatible driver (only LocalForage as of now, I believe), JSON.stringify/JSON.parse are not called at all.
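
Roughly like this, for reference (a sketch using apollo3-cache-persist's persistCache and LocalForageWrapper, which I believe is the compatible driver meant):

```ts
import { InMemoryCache } from '@apollo/client';
import { persistCache, LocalForageWrapper } from 'apollo3-cache-persist';
import localforage from 'localforage';

// With serialize: false, the cache snapshot is handed to localforage as an
// object, so no JSON.stringify/parse is involved.
async function setupPersistedCache(): Promise<InMemoryCache> {
  const cache = new InMemoryCache();
  await persistCache({
    cache,
    storage: new LocalForageWrapper(localforage),
    serialize: false, // skip JSON; localforage/IndexedDB stores the object directly
  });
  return cache;
}
```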

> whenever checking the cache

That is definitely one of the most expensive things this library does, but:

  • parsing is only done when calling `restore`, usually only once, during application bootstrapping
  • serialization is only done when persisting, depending on your `trigger` setting; it doesn't perform a read or a parse

So, there is space for optimization; in particular, the `trigger: 'write'`, `serialize: true` combination with a slow storage driver could be wasteful.
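
For concreteness, a sketch of that combination (as far as I know these are the defaults; LocalStorageWrapper just stands in for a slow driver here):

```ts
import { InMemoryCache } from '@apollo/client';
import { persistCache, LocalStorageWrapper } from 'apollo3-cache-persist';

// Every persist serializes the whole cache to JSON before handing it to the
// (potentially slow) storage driver.
async function setupWriteTriggeredPersistence(): Promise<InMemoryCache> {
  const cache = new InMemoryCache();
  await persistCache({
    cache,
    storage: new LocalStorageWrapper(window.localStorage),
    trigger: 'write',  // persist on cache writes (the library may debounce these)
    serialize: true,   // each persist runs JSON.stringify over the whole cache
  });
  return cache;
}
```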

But I think the library provides enough means for building more performant storage wrappers. Personally, I haven't found a need for such a wrapper, but I'm happy to review any attempt in this direction.
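
To sketch what such a wrapper could look like (hypothetical; it assumes the storage contract is the usual getItem/setItem/removeItem trio and `serialize: false`, so the wrapper receives the normalized cache object rather than a JSON string):

```ts
import localforage from 'localforage';
import type { NormalizedCacheObject, StoreObject } from '@apollo/client';

// Hypothetical per-entity wrapper, intended to be passed as the `storage`
// option together with `serialize: false`.
class PerEntityStorage {
  private store = localforage.createInstance({ name: 'apollo-cache-entities' });
  private lastSnapshot: NormalizedCacheObject = {};

  async getItem(_key: string): Promise<NormalizedCacheObject | null> {
    // Reassemble one snapshot from the individual entity records.
    const snapshot: NormalizedCacheObject = {};
    await this.store.iterate<StoreObject, void>((value, id) => {
      snapshot[id] = value;
    });
    this.lastSnapshot = snapshot;
    return Object.keys(snapshot).length > 0 ? snapshot : null;
  }

  async setItem(_key: string, value: NormalizedCacheObject): Promise<void> {
    // Only rewrite entities whose reference changed since the last persist.
    // (For simplicity this sketch ignores entities deleted from the cache.)
    const changed = Object.entries(value).filter(
      ([id, entity]) => entity !== this.lastSnapshot[id],
    );
    await Promise.all(changed.map(([id, entity]) => this.store.setItem(id, entity)));
    this.lastSnapshot = value;
  }

  async removeItem(_key: string): Promise<void> {
    this.lastSnapshot = {};
    await this.store.clear();
  }
}
```

Such a wrapper would then be passed as the `storage` option, the same way as the LocalForage setup above.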

Given that the maintainer team (of this community library) is currently very limited, I'll say in advance that if we were to merge a more complex storage wrapper, or any other such mechanism, I'd first want to see some kind of benchmark implemented, so we can verify that the added value is worth the extra support we would have to dedicate to the new feature.
