This repository has been archived by the owner on Nov 20, 2018. It is now read-only.

AWS Signature Version 4 performance - Can hashing in web workers help? #2033

Open · 2 of 5 tasks
ranuser99 opened this issue Jul 27, 2018 · 2 comments


ranuser99 commented Jul 27, 2018

Type of issue

  • [ ] Bug report
  • [x] Feature request

Uploader type

  • [ ] Traditional
  • [x] S3
  • [ ] Azure

Fine Uploader version

5.16.2

Our uploads take advantage of fast network connections. With our setup, when NOT chunking, Fine Uploader can upload at over 500 Mbps. With V4 chunking, throughput drops to around 50 Mbps. Concurrent chunks/uploads are also a great feature which are rendered useless as well (diminishing returns due to the hashing penalty).

Any plans to implement hashing in web workers? Could moving hashing to a separate thread reduce this performance bottleneck?

It's unfortunate that AWS requires V4 for all new regions and that this method requires hashing the actual content body. Since we are deployed in one of these new regions, we must use V4, and we also require chunked/multipart uploads.

@rnicholus
Member

> Concurrent chunks/uploads are also a great feature which are rendered useless as well (diminishing returns due to the hashing penalty)

I'm not following. Why does this render concurrent chunking useless?

@ranuser99
Author

In our testing, concurrently uploading chunks does not produce greater aggregate throughput, due to the V4 hashing bottleneck. We found that a concurrency of 2 produced no greater total sustained throughput, and higher levels of concurrency were actually detrimental.
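A back-of-envelope model for why concurrency doesn't help here (illustrative rates, not measurements from this thread): SigV4 requires hashing every chunk body before it can be signed, so if hashing runs on a single thread it caps the whole pipeline regardless of upload parallelism.

```javascript
const LINK_MBPS = 62.5;  // 500 Mbps link, expressed in MB/s
const HASH_MBPS = 6.25;  // assumed single-thread JS hashing rate (~50 Mbps)

function aggregateMBps(concurrency) {
  // Parallel uploads could share the full link, but chunks only become
  // sendable as fast as the one hashing thread emits digests, so
  // `concurrency` never multiplies the hash term.
  const uploadCapacity = LINK_MBPS * concurrency;
  return Math.min(uploadCapacity, HASH_MBPS);
}
```

Under these assumed numbers the result is ~6.25 MB/s (about 50 Mbps) at any concurrency level, which lines up with the drop reported above.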

Another issue is slight UI (main-thread) freezing even with single chunks, due to our fast transfer speed (we're always hashing). Increasing concurrency to 2+ really hurts UI smoothness. We've also experimented with chunk sizes, to no avail (we need chunks of at least 5-10 MB to support large file transfers).

It's a shame that all of this is due to V4 signing and its apparent need to hash the entire request body. Do you see any way to further mitigate the V4 performance penalty? Do you think offloading hashing to web workers would produce a material improvement in both total throughput and UI smoothness?
