
POST requests caching #506

Open
krizhanovsky opened this issue May 24, 2016 · 1 comment
Labels
cache enhancement good to start Start form this tasks if you're new in Tempesta FW low priority
Milestone

Comments

@krizhanovsky
Contributor

krizhanovsky commented May 24, 2016

RFC 7231 4.3.3:

   Responses to POST requests are only cacheable when they include
   explicit freshness information (see Section 4.2.1 of [RFC7234]).
   However, POST caching is not widely implemented.  For cases where an
   origin server wishes the client to be able to cache the result of a
   POST in a way that can be reused by a later GET, the origin server
   MAY send a 200 (OK) response containing the result and a
   Content-Location header field that has the same value as the POST's
   effective request URI (Section 3.1.4.2).
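As an illustration of the mechanism the RFC describes, a hypothetical exchange (URI and payload are made up for this example) where the origin makes a POST result reusable by a later GET would look like:

```
POST /reports HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded

period=2016-05

HTTP/1.1 200 OK
Cache-Control: max-age=300
Content-Location: /reports
Content-Type: text/html

...result of the POST...
```

Because `Content-Location` matches the POST's effective request URI and the response carries explicit freshness information (`Cache-Control: max-age`), a cache may reuse this response for a subsequent `GET /reports`.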

There are three cases for caching POST (see Caching POST and Caching HTTP POST Requests and Responses for details):

  • idempotent POST - nothing is changed on the server, e.g. a web-search form which always returns the same results for the same POST arguments, just like GET. We can safely serve such requests from the cache, but GET requests to the same URI are not served from this cache entry, since we must use <URI + POST body> as the cache key;
  • non-idempotent POST, e.g. posting a comment to a blog. The request must be forwarded to the origin server, but the response can be cached under the URI cache key to serve subsequent GET requests to the same resource (the next visitors will just see the blog post with the new comment). The POST body shouldn't be used in the cache key;
  • invalidating POST - just invalidate the URI as RFC 7234 4.4 requires. Invalidation must be implemented by setting a flag in TfwCacheEntry, so the cache entry can still be returned as a stale response with an appropriate warning. Location and Content-Location headers must be processed.

Cache behavior for the POST types should be explicitly specified in the configuration file.
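For example, such a configuration could look like the following (hypothetical syntax; the directive names are not defined by this issue):

```
# Hypothetical sketch: cache_post and its argument names are illustrative.
cache_post invalidate;              # default: invalidate URI per RFC 7234 4.4
location prefix "/search" {
	cache_post idempotent;      # cache by <URI + POST body>
}
location prefix "/blog/" {
	cache_post update;          # forward POST, cache response by URI for GETs
}
```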

@krizhanovsky krizhanovsky added this to the 0.6 OS milestone May 24, 2016
@krizhanovsky krizhanovsky modified the milestones: backlog, 0.6 KTLS, 0.8 TDB v0.2 Jan 9, 2018
@krizhanovsky krizhanovsky modified the milestones: 1.2 TDB v0.2, 1.1 QUIC Aug 8, 2018
@krizhanovsky krizhanovsky modified the milestones: 1.1 QUIC, 1.0 Beta Feb 2, 2019
@krizhanovsky krizhanovsky modified the milestones: 1.0 Beta, 1.1 Network performance & scalability, 1.1 TBD (Network performance & scalability), 1.1 TDB (ML, QUIC, DoH etc.) Feb 11, 2019
@krizhanovsky
Contributor Author

krizhanovsky commented Mar 5, 2020

As RFC 7231 4.3.3 and the original discussion say, caching POSTs is quite complex: it requires intricate logic and passing the original requests to the upstream anyway. The original motivation for the issue was to mitigate HTTP POST DDoS, but that is done more efficiently with #488, which tracks the upstream stress condition.

Since the task is low priority, move it to the backlog. The eBay case can still be encountered.

@krizhanovsky krizhanovsky changed the title [Cache] POST requests caching POST requests caching Mar 16, 2020
@krizhanovsky krizhanovsky added the good to start Start form this tasks if you're new in Tempesta FW label Feb 4, 2021