
Adding Lambda Edge origin request to AWS #192

Closed
eelayoubi wants to merge 2 commits

Conversation

eelayoubi

In this PR, I did the following:

  • Moved what was in aws/ to api-gw (to preserve the same API Gateway integration)
  • Copied the Lambda change introduced by @EdisonHarada, with a small change, into its own folder: /aws/lambda-edge-origin-request

The change:

serverless(app, { provider: 'aws', type: 'lambda-edge-origin-request' });

will result in Lambda@Edge being used to process the calls.
If no type is provided, the default is the api-gw integration:

module.exports = options => {
  if (options.type === 'lambda-edge-origin-request') {
    return lambdaEdgeOriginRequest(options);
  } else {
    return apiGw(options);
  }
};
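
For completeness, a call without a type (reusing the options shape from the snippet above; purely illustrative) keeps the existing behaviour:

serverless(app, { provider: 'aws' }); // no type option, falls back to the api-gw integration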

@eelayoubi mentioned this pull request Dec 11, 2020
event.config = event.config || {};

event.request = event.request || {};
event.request.body = event.request.body || '';


The request body in a Lambda@Edge event is an object, not a string.

See here:

export interface CloudFrontRequest {
    body?: {
        action: 'read-only' | 'replace';
        data: string;
        encoding: 'base64' | 'text';
        readonly inputTruncated: boolean;
    };
    readonly clientIp: string;
    readonly method: string;
    uri: string;
    querystring: string;
    headers: CloudFrontHeaders;
    origin?: CloudFrontOrigin;
}
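
For illustration, a trimmed origin-request event would carry the body as an object like this (all values made up; data is the base64 of {"foo":"bar"}):

{
  "clientIp": "203.0.113.10",
  "method": "POST",
  "uri": "/api/submit",
  "querystring": "",
  "headers": { "host": [{ "key": "Host", "value": "example.org" }] },
  "body": {
    "action": "read-only",
    "data": "eyJmb28iOiJiYXIifQ==",
    "encoding": "base64",
    "inputTruncated": false
  }
}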

} else if (type === 'string') {
return Buffer.from(event.request.body.data, event.request.body.encoding === 'base64' ? 'base64' : 'utf8');
} else if (type === 'object') {
return Buffer.from(JSON.stringify(event.request.body.data));

This is adding extra "" around the returned request body.

Instead, this should be:

Buffer.from(event.request.body.data, "base64");


Actually, it should be as follows (I've just seen what the original APIG code looks like, so I can see how this got copy-and-pasted like this):

function requestBody(event) {
  const body = event && event.request && event.request.body && event.request.body.data;
  const type = typeof body;

  if (!body) return '';

  if (Buffer.isBuffer(body)) {
    return body;
  } else if (type === 'string') {
    return Buffer.from(body, event.request.body.encoding === 'base64' ? 'base64' : 'utf8');
  } else if (type === 'object') {
    return Buffer.from(JSON.stringify(body));
  }

  throw new Error(`Unexpected event.body type: ${typeof event.request.body.data}`);
}
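
For example, with an event shaped like the CloudFrontRequest interface above (illustrative values), a base64 body decodes as expected:

const event = {
  request: {
    body: { action: 'read-only', data: 'aGVsbG8=', encoding: 'base64', inputTruncated: false }
  }
};

requestBody(event).toString('utf8'); // => 'hello' ('aGVsbG8=' is base64 for 'hello')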


I can confirm this works 👍

Author


Hey, thanks for the feedback. How is this different from the current implementation? In any case, I will update the PR to use the body variable (more readable, I guess).


The implementation you had results in the request body being wrapped in "", so if you have a request body of "hello", with the previous code, you'd get a string of "\"hello\"".

AFAIK it's because you copied it from the APIG handler, which expects the body to be a string OR an already-decoded JSON object (which, in the latter case, is why it uses JSON.stringify -- to turn it back into a string).

In our case, for Edge, the body is always a string (either base64 encoded or not).
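
To make the difference concrete with a plain string body:

JSON.stringify('hello');                            // => '"hello"' (extra quotes end up in the proxied body)
Buffer.from('aGVsbG8=', 'base64').toString('utf8'); // => 'hello'   (what the Edge handler should produce)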

@alexcroox

Disclaimer: excuse my ignorance, I've only used this library for a few days and don't fully understand it.

I've used your master branch version as I'm experiencing issues with lambda@edge + Nuxt and the response headers not being in the new array format. Your version seems to correctly fix that, except for one header:

(screenshot: response headers, with set-cookie still not in the array format)

I imagine set-cookie is being set by the Nuxt framework, and maybe this middleware is being triggered before its own. Is there any way to make sure your transformations on the response headers run last? Or should I write my own header transforms in module.exports.handler, just before it returns, to guarantee they are all transformed to the array format?
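
For reference, the kind of manual transform I mean would look roughly like this (untested sketch; toCloudFrontHeaders is just a placeholder name):

// Coerce every header value into CloudFront's array-of-{ key, value } format
// right before the handler returns the response.
function toCloudFrontHeaders(headers) {
  return Object.keys(headers).reduce((memo, key) => {
    const value = headers[key];
    memo[key.toLowerCase()] = (Array.isArray(value) ? value : [value])
      .map(v => ({ key, value: String(v) }));
    return memo;
  }, {});
}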

@eelayoubi

eelayoubi commented Jan 4, 2021

@alexcroox I am testing with Angular and Express for SSR, and in Express I just set a cookie to check whether it is returned correctly to the client.

Express js:

// All regular routes use the Universal engine
app.get('*', (req, res) => {
    res.cookie('test', 'lala');
    fs.readFile(join(__dirname, '../browser/index.html'), function (err, html) {
        if (err) {
            throw err;
        }

        renderModule(AppServerModule, {
            document: html.toString(),
            url: req.url
        }).then((html) => {
            res.send(html);
        });
    });

});

This is the returned response from lambda:

{
  status: '200',
  statusDescription: 'OK',
  headers: {
    'x-powered-by': [ [Object] ],
    'set-cookie': [ [Object] ],
    'content-type': [ [Object] ],
    etag: [ [Object] ]
  },
  body: '.........'
}

@alexcroox

alexcroox commented Jan 4, 2021

This is how I'm using it:

const serverless = require('serverless-http')
const express = require('express')
const { Nuxt } = require('nuxt-start')

// Import and overwrite nuxt options
const config = require('../nuxt.config.js')

config.render = { etag: true, compressor: { threshold: Infinity } }
config.dev = false

// Create nuxt instance
const nuxt = new Nuxt(config)

// Create Express instance
const app = express()

// Add nuxt.render as express middleware
app.use(nuxt.render)

const handler = serverless(app, {
  type: 'lambda-edge-origin-request',
  platform: 'aws'
})

module.exports.handler = async (event, context) => {
  // Make sure nuxt is ready to handle requests
  await nuxt.ready()

  // Execute handler
  return await handler(event, context)
}

But I'm sure it's down to the lifecycle order of serverless/Nuxt that is causing the issue rather than anything in your PR, so I'll stop derailing this PR and leave it there, thanks though!


if (Array.isArray(value) && normalizedKey === 'set-cookie') {
value.forEach((cookie, i) => {
memo[setCookieVariations[i]] = cookie;

This seems like a bug: if you have an array of 2 cookies, this results in the following headers being set:

"connection": [
    {
        "key": "connection",
        "value": "close"
    }
],
"set-cookie": "cookie1=hello",
"Set-cookie": "cookie2=world",
"foo": [
    {
        "key": "foo",
        "value": "bar"
    }
],

... because you're taking the index of the cookie, and assigning it to the Nth cookie-variant name... so 0 is set-cookie, 1 is Set-cookie, 2 is sEt-cookie, and so forth...


I would recommend:

  1. Delete set-cookie.json.
  2. Change this whole file (sanitize-headers.js) to the code below.
    • Note: as a separate improvement, the code below also omits blacklisted headers, not just read-only ones.
    • Most importantly, though, the new code handles cookie headers correctly, and handles array values for other headers correctly too.
'use strict';

// See: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-read-only-headers
const readOnlyHeaders = [
  "accept-encoding",
  "content-length",
  "if-modified-since",
  "if-none-Match",
  "if-range",
  "if-unmodified-since",
  "range",
  "transfer-encoding",
  "via"
];

// See: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-blacklisted-headers
const blacklistedHeaders = [
  "connection",
  "expect",
  "keep-alive",
  "proxy-authenticate",
  "proxy-authorization",
  "proxy-connection",
  "trailer",
  "upgrade",
  "x-accel-buffering",
  "x-accel-charset",
  "x-accel-limit-rate",
  "x-accel-redirect",
  "x-cache",
  "x-forwarded-proto",
  "x-real-ip",
]

const omittedHeaders = [...readOnlyHeaders, ...blacklistedHeaders]

module.exports = function sanitizeHeaders(headers) {
  return Object.keys(headers).reduce((memo, key) => {
    const value = headers[key];
    const normalizedKey = key.toLowerCase();

    if (omittedHeaders.includes(normalizedKey)) {
        return memo;
    }

    if (memo[normalizedKey] === undefined) {
      memo[normalizedKey] = []
    }

    const valueArray = Array.isArray(value) ? value : [value]

    valueArray.forEach(valueElement => {
      memo[normalizedKey].push({
        key: key,
        value: valueElement
      });
    });

    return memo;
  }, {});
};
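
To illustrate what the suggested sanitizeHeaders produces (inputs are made up), multiple cookies end up as a plain array under the single lowercase key, and blacklisted headers are dropped:

sanitizeHeaders({
  'Set-Cookie': ['cookie1=hello', 'cookie2=world'],
  Connection: 'close',
  Foo: 'bar'
});
// => {
//      'set-cookie': [
//        { key: 'Set-Cookie', value: 'cookie1=hello' },
//        { key: 'Set-Cookie', value: 'cookie2=world' }
//      ],
//      foo: [ { key: 'Foo', value: 'bar' } ]
//    }
// ('connection' is dropped because it is blacklisted)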


const setCookieVariations = require('./set-cookie.json').variations;
const readOnlyHeaders = [
"memoept-encoding",


Typo: fixed in the code block I've suggested here

@dougmoscrop
Owner

dougmoscrop commented Apr 3, 2022

Closing in favour of #196

@dougmoscrop dougmoscrop closed this Apr 3, 2022