[Discussion] Benchmarks! #5
@jimmywarting re your thread from #4 (comment): yeah, I'm getting similar-ish results.

tinylet/test/async-to-sync-comparison.bench.js (lines 33 to 60 in 72fe63f)
I think the reason that synckit is so much faster is because it's not transferring the executor data: URL on every call. Its message is just:

```ts
const msg: MainToWorkerMessage<Parameters<T>> = { sharedBuffer, id, args }
//                                               👆 obj is 1
//                                                 👆 sab ptr is 2
//                                                               👆 id is 3
//                                                                   👆 args are N
```

That's only N+3 "things" that need to get serialized/transferred each call. Compare that to:

```js
port.postMessage([lockBuffer, executorURL, this, [...arguments]]);
//               👆 array is 1
//                            👆 str is M length, needs to be copied
//                                         👆 this is usually 1 (undefined)
//                                               👆 arguments are N
```

This is N+M+4. I think that might be why it's slower than synckit?
O_o In my own test I was mostly just benchmarking the function's execution time, not the time it takes to load up a new worker.

```js
const url = "data:text/javascript," + encodeURIComponent(code)
const { default: fn } = await import(url)
```

Therefore the data: URL is only transferred once.
My assumption as to why synckit is faster is that it cheats and uses postMessage instead, which is a no-go for solutions targeting other environments.
When I removed the 200 bytes of data: URL that was getting transferred each time, it reduced the time enough that tinylet/redlet() is now the fastest! I'm currently using a very crude caching system; I need to make it a bit more robust to failure so that having something throw doesn't mean game over 😅

child worker doing the receiving: lines 35 to 41 in e3d21f4
parent caller outside worker: lines 103 to 108 in e3d21f4
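For readers following along, here is a rough sketch of what such a worker-side cache could look like (hypothetical names, not the actual tinylet code at the lines referenced above): executors are keyed by a small id, so the ~200-byte data: URL only has to ride along on the very first call.

```js
// Worker side: cache the imported executor per id so the data: URL is
// imported (and transferred) only once.
const executors = new Map(); // id -> executor function

async function resolveExecutor(id, executorURL) {
  let fn = executors.get(id);
  if (fn === undefined) {
    // First call for this id: import the data: URL once and remember it.
    ({ default: fn } = await import(executorURL));
    executors.set(id, fn);
  }
  return fn;
}
```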
You may be right. I think the ideal end-game is a specialized export for Node.js that uses receiveMessageOnPort() to get 🏎🏎 speed, plus a normal browser-compatible, Deno-compatible version. Both would be exposed as the same entry point so you don't need to care about the implementation; export conditions just auto-route you to the best option for your platform.
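For illustration, a hypothetical sketch of that Node-only fast path (names are mine, not tinylet's or synckit's): block with Atomics.wait() until the worker signals, then pull the reply out synchronously with receiveMessageOnPort() from node:worker_threads instead of awaiting a "message" event.

```js
import { MessageChannel, receiveMessageOnPort } from "node:worker_threads";

function callSync(worker, args) {
  const { port1: localPort, port2: remotePort } = new MessageChannel();
  const sharedBuffer = new SharedArrayBuffer(4);
  const flag = new Int32Array(sharedBuffer);

  // The worker is expected to run the task, remotePort.postMessage(result),
  // then Atomics.store(flag, 0, 1) and Atomics.notify(flag, 0).
  worker.postMessage({ sharedBuffer, remotePort, args }, [remotePort]);

  Atomics.wait(flag, 0, 0);                        // block until the worker notifies
  return receiveMessageOnPort(localPort)?.message; // reply is already queued on the port
}
```

And the per-platform routing could be declared with export conditions in package.json, something like this (file names are illustrative):

```json
{
  "exports": {
    ".": {
      "node": "./node.js",
      "default": "./portable.js"
    }
  }
}
```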
And while you are at it, you could also use the transfer list option: instead of cloning a typed array, you would then transfer it.
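In other words (a minimal sketch, assuming `port` is a MessagePort and `data` is a typed array): pass the underlying ArrayBuffer in the transfer list so ownership moves to the receiver instead of the bytes being copied.

```js
// Cloned: the bytes behind `data` are copied by structured clone.
port.postMessage(data);

// Transferred: ownership of data.buffer moves to the receiving side;
// no copy is made, but `data` is detached (unusable) here afterwards.
port.postMessage(data, [data.buffer]);
```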
Tried converting the Deno benchmarks to the native Deno.bench() (https://deno.land/[email protected]/tools/benchmarker) and still got terrible results... 😭😭😭
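For reference, a Deno.bench() case looks roughly like this (the body is a placeholder for the real redlet/synckit calls):

```js
// run with: deno bench async-to-sync-comparison.bench.js
Deno.bench("redlet round-trip", () => {
  // the real suite would invoke the redlet()-wrapped function here
});
```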
@jimmywarting This is very interesting. Deno has a not-so-great postMessage() serialization and transfer procedure. This means that your trick of doing everything in a SharedArrayBuffer polling loop is orders of magnitude faster! Awesome trick! 👍
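A schematic sketch of the trick being praised (not tinylet's actual code; note that Deno, unlike browsers, allows Atomics.wait() on the main thread, which is what makes this blocking approach possible there): the worker writes its result into a SharedArrayBuffer and wakes the caller with Atomics.notify(), so the reply never goes through postMessage serialization at all.

```js
const workerCode = `
  self.onmessage = ({ data: { shared, a, b } }) => {
    const flag = new Int32Array(shared, 0, 1);
    const result = new Int32Array(shared, 4, 1);
    result[0] = a + b;            // stand-in for real work
    Atomics.store(flag, 0, 1);
    Atomics.notify(flag, 0);      // wake the blocked caller
  };
`;
const worker = new Worker(
  "data:text/javascript," + encodeURIComponent(workerCode),
  { type: "module" },
);

const shared = new SharedArrayBuffer(8);
const flag = new Int32Array(shared, 0, 1);
const result = new Int32Array(shared, 4, 1);

worker.postMessage({ shared, a: 2, b: 3 });
Atomics.wait(flag, 0, 0); // blocks until the worker notifies
console.log(result[0]);   // 5, read straight out of shared memory
```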
This is a discussion thread to discuss WHY the benchmarks are the way they are and how to improve on them
#4 (comment)