feat: inline Response.arrayBuffer inside load functions during ssr #10535
Conversation
🦋 Changeset detected — latest commit: 5d1e3cf. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Are there no native web APIs that we can use for this? I'd really like to not have to pull in a library for this (which, by the look of it, I'm guessing is also going to be slower than doing it natively). It's not a whole lot, but it looks like this would also ship the
	return u8.buffer;
}
Could `Uint8Array.from` be optimized under the hood? I think this is equivalent: `Uint8Array.from(atob(text), (c) => c.charCodeAt(0))`
I did some testing on Node v20.5.1 (Ryzen 9 5950X, 32 GB):

- `Uint8Array.from(atob(text), (c) => c.charCodeAt(0))` takes ~74ms to decode 1MB (output) of data
- `const d = atob(text); return Uint8Array.from({length: d.length}, (_, i) => d.charCodeAt(i));` takes ~50ms
- using the plain for loop takes ~17ms

The `Uint8Array.from(atob(text), (c) => c.charCodeAt(0))` variant is the only one that triggers any major GC.
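The three decode variants above can be packaged into a self-contained comparison. This is an illustrative sketch of what was benchmarked, not the PR's actual code, and the function names are made up for clarity:

```javascript
// Illustrative versions of the three base64-decode variants benchmarked above.
// atob() is a global in modern Node (18+) and in browsers.

// ~74ms per 1MB in the numbers above: per-character map callback
function decodeFromChars(text) {
	return Uint8Array.from(atob(text), (c) => c.charCodeAt(0));
}

// ~50ms: array-like with an index-based map callback
function decodeFromLength(text) {
	const d = atob(text);
	return Uint8Array.from({ length: d.length }, (_, i) => d.charCodeAt(i));
}

// ~17ms: plain for loop writing directly into a preallocated buffer
function decodeForLoop(text) {
	const d = atob(text);
	const u8 = new Uint8Array(d.length);
	for (let i = 0; i < d.length; i++) u8[i] = d.charCodeAt(i);
	return u8;
}
```

All three produce identical output; the for loop avoids the per-element callback and intermediate allocations, which would also explain the reduced GC pressure.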
	// The Uint16Array(Uint8Array(..)) wrapping is deliberate:
	// this way the code points are padded with 0s
	return btoa(new TextDecoder('utf-16').decode(new Uint16Array(new Uint8Array(buff))));
}
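For context, here is a hedged, self-contained reconstruction of how this encode step pairs with the decode side discussed earlier in the thread (function names are illustrative, not the PR's exports):

```javascript
// Sketch of the encode/decode pair. Widening each byte to a uint16 code unit
// keeps every decoded character below 0x100, so btoa() accepts the string
// and no surrogate or BOM code units can ever appear.
function bufferToBase64(buff) {
	const u16 = new Uint16Array(new Uint8Array(buff)); // pad each byte with a 0 high byte
	return btoa(new TextDecoder('utf-16').decode(u16));
}

function base64ToBuffer(text) {
	const d = atob(text);
	const u8 = new Uint8Array(d.length);
	for (let i = 0; i < d.length; i++) u8[i] = d.charCodeAt(i);
	return u8.buffer;
}
```

Note that a utf-16 decoder strips a leading BOM (U+FEFF), but because every widened code unit is below 0x100 that value can never occur here, so arbitrary byte sequences round-trip safely.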
In my own tests, using kit's existing base64 implementation worked well without needing all this. But maybe I was missing a use case.
https://github.com/sveltejs/kit/blob/master/packages/kit/src/runtime/server/page/crypto.js#L210-L239
Node 20.6.1: in my testing, kit's version is faster (~5x) for smaller sizes; at 1 KiB they are the same; at 32 KiB the TextDecoder approach is 1.5x faster; at 64 KiB it is 2.5x faster; and at 512 KiB it is 10x faster.
For smaller sizes I don't think it matters whether encoding a request takes 0.0006ms instead of 0.0001ms, but encoding 128 KiB takes 0.5ms instead of 1.7ms.
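A minimal harness along these lines could reproduce the size-crossover measurements. This is a hypothetical sketch, not the benchmark actually used; `encode` is a placeholder for whichever implementation is under test:

```javascript
// Hypothetical micro-benchmark: average ms per call for a given payload size.
// performance.now() is a global in Node 16+ and in browsers.
function bench(encode, size, iterations = 100) {
	const buf = new Uint8Array(size);
	for (let i = 0; i < size; i++) buf[i] = i & 0xff; // deterministic filler bytes
	const start = performance.now();
	for (let i = 0; i < iterations; i++) encode(buf.buffer);
	return (performance.now() - start) / iterations;
}
```

Run it across a range of sizes (1 KiB, 32 KiB, 64 KiB, 512 KiB) for each candidate encoder and compare the per-call averages.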
this looks great, thank you!
closes #9672
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:

Tests
- Run the tests with `pnpm test` and lint the project with `pnpm lint` and `pnpm check`.

Changesets
- Run `pnpm changeset` and follow the prompts. Changesets that add features should be `minor` and those that fix bugs should be `patch`. Please prefix changeset messages with `feat:`, `fix:`, or `chore:`.