Improve test suite to be able to test with multiple configuration sets #83

Closed
YoshiWalsh opened this issue Jun 26, 2019 · 12 comments · Fixed by #151

@YoshiWalsh
Collaborator

Currently our automated tests only deploy the site once, which means it's not possible to automatically test different configurations.

We should improve our testing setup to allow for multiple deploy/test iterations with different configurations.

@jariz Do you want to/are you able to do this? Otherwise I'm happy to look into it.

@jariz
Collaborator

jariz commented Jun 26, 2019

@JoshuaWalsh Feel free!

@YoshiWalsh
Collaborator Author

YoshiWalsh commented Jun 26, 2019

Mostly so I don't forget, here are some bullet points of what I'm planning so far:

  • We'll still use Jest for integration/E2E tests.
  • The with-redirects example site will be moved to a tests directory. (Because I think it's surprising to have tests rely on files in the examples directory.)
  • Each describe block will perform its own deploy, using a scoped beforeAll function. (See the sketch after this list.)
  • Each describe block may deploy a site from a different directory.
  • Multiple describe blocks may deploy sites from the same directory, but with tweaked configs. This will be achieved via environment variables.
  • We'll add CircleCI support. This will allow us to do automated checks on each PR. It's the same thing Gatsby core uses, and their new pricing has quite a generous free tier for open source projects.
  • CircleCI will run typechecking and the build, then the Jest tests.
  • CircleCI will also run linting with TSLint (and ESLint after Add ESLint #80)
  • I'll write tests to cover Entire URL gets suffixed to redirect key #81, hopefully preventing anything similar from slipping through.
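
To make the deploy-per-describe idea above a bit more concrete, here's a minimal sketch of the shape I have in mind. The deploySite helper, the fixture paths, the npm scripts, and the LEGACY_REDIRECTS variable are all placeholder names of mine, not anything that exists in the repo yet:

```ts
import { execSync } from 'child_process';
import * as path from 'path';

// Placeholder helper: build and deploy a fixture site, passing config tweaks
// through environment variables so the same fixture can be deployed with
// different gatsby-plugin-s3 options.
const deploySite = (fixtureDir: string, env: Record<string, string> = {}) => {
    execSync('npm run build && npm run deploy', {
        cwd: path.join(__dirname, fixtureDir),
        env: { ...process.env, ...env },
        stdio: 'inherit',
    });
};

describe('with-redirects (default config)', () => {
    // Each describe block performs its own deploy in a scoped beforeAll.
    beforeAll(() => deploySite('fixtures/with-redirects'), 300000);

    it('serves the redirect for a prefixed path', async () => {
        // ...HTTP assertions against the deployed bucket's website endpoint...
    });
});

describe('with-redirects (tweaked config)', () => {
    // Same fixture directory, different config selected via an env var.
    beforeAll(() => deploySite('fixtures/with-redirects', { LEGACY_REDIRECTS: 'true' }), 300000);

    it('still serves redirects correctly', async () => {
        // ...
    });
});
```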

What I'm not planning to include in this issue:

  • Adding unit tests (I think we should do this at some point, but I don't want to bite off more than I can chew)
  • Adding automated dependency security audits (I've created a separate issue for that, Add Snyk? #84)

Any other ideas/feedback would be appreciated.

If you'd rather, we can continue to use Azure Pipelines. I'm attracted to CircleCI due to the free plan they provide for open source projects.

YoshiWalsh added this to the 1.0 milestone Jul 2, 2019
@YoshiWalsh
Collaborator Author

YoshiWalsh commented Jul 25, 2019

It's been a while, so I'll provide an update. I think I've made some good progress towards this: I've made a PoC that allows multiple deployments within the test suite. I've also come up with (what I think is) a clever way to automatically verify the permissions required for each operation. Check out my branch for more details.

Unfortunately I've been encountering some very cryptic errors in the AWS SDK, and I'm finding it challenging to debug using Jest. (Most of my breakpoints are ignored, and I'm getting an error from the AWS SDK at a point where I'm not making any AWS calls. It seems like a concurrency issue, but it's difficult to track down.)

Additionally I've unexpectedly caught weeb and I'm finding it tricky to find the time and motivation to work on this frustrating task now that I am medically required to watch 5+ hours of anime per day.

So in short, I'm still working on this, but it's taking longer than expected. If anyone's willing, I'd love to get another pair of eyes on this; maybe we can do a VS Live Share session.

@jariz
Collaborator

jariz commented Jul 25, 2019

I've been looking through the branch, but it's a bit overwhelming at first; maybe you can guide me through it a bit.

Additionally I've unexpectedly caught weeb and I'm finding it tricky to find the time and motivation to work on this frustrating task now that I am medically required to watch 5+ hours of anime per day.

This made me laugh for at least a solid 30 secs, so thanks 😛

maybe we can do a VS Live Share session.

I'm in!

@YoshiWalsh
Collaborator Author

YoshiWalsh commented Mar 3, 2020

@jariz I finally found the willpower to revisit this and I FIGURED OUT THE CRYPTIC ERRORS!

There were two issues that combined to make things really confusing:

  1. I was getting an error from S3, but I couldn't figure out which call was to blame. Even if I commented out all the code for running a deployment, the error still appeared! It turns out that if a beforeAll function throws an error, Jest swallows it and continues running the tests. Then, instead of returning the output of those tests, it acts as if each test threw the error from beforeAll. This behaviour was pretty unclear and wasted a lot of my time as I looked for the problem in completely the wrong places. (See the first sketch below.)

  2. The S3 API was returning the unhelpful error message "The XML you provided was not well-formed or did not validate against our published schema". This turned out to be caused by me calling the DeleteObjects API with an empty array. Oops. (See the second sketch below.)
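
For anyone who hits the first problem themselves, here's a minimal, AWS-free sketch of the gotcha: when beforeAll throws, Jest keeps running and attributes that error to every test in the block, so the failure appears to come from code that never ran.

```ts
describe('beforeAll gotcha', () => {
    beforeAll(() => {
        // Stand-in for the deploy step failing against S3.
        throw new Error('deploy failed');
    });

    it('makes no AWS calls at all', () => {
        // This body never executes, yet Jest reports the test as failed with
        // the error thrown in beforeAll - which is why the S3 error seemed to
        // come from a test that wasn't calling S3.
        expect(1 + 1).toBe(2);
    });
});
```

And for the second problem, the fix boils down to never calling DeleteObjects with an empty list. Here's a sketch of the guard, using the aws-sdk v2 S3 client (the function name is mine, not the actual code in my branch):

```ts
import { S3 } from 'aws-sdk';

const s3 = new S3();

// Skip the call entirely when there is nothing to delete; an empty Objects
// array is what triggered the "XML you provided was not well-formed" error.
async function deleteObjectsIfAny(bucket: string, keys: string[]): Promise<void> {
    if (keys.length === 0) {
        return;
    }
    await s3
        .deleteObjects({
            Bucket: bucket,
            Delete: { Objects: keys.map(Key => ({ Key })) },
        })
        .promise();
}
```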

Anyway, this is now a working proof-of-concept! I'll write some proper documentation later, but for now here's what you need to do to get started:

  1. Add the IAM policy from test-policy.json to the User or Role that you run the tests with.
  2. Run npm run-script test:e2e.

CircleCI isn't working yet; I'm still working on it. Also, the tests I run on the redirects are minimal so far, just a proof-of-concept. I intend to bring across all the tests currently in bin.test.ts.

@jariz
Collaborator

jariz commented Mar 3, 2020

Looks really promising! Just ran it and all tests appeared to pass.

On further inspection though, it appears the first deploy fails? (The other deploys do finish.)
(Screenshot: test output, 2020-03-03 at 14:16)

This doesn't show up in the final summary though.
(Screenshot: test output, 2020-03-03 at 14:18)

@YoshiWalsh
Collaborator Author

Yup, the first deploy fails intentionally. It checks that the IAM policy is configured correctly for permission testing: it attempts to deploy to a non-existent bucket without requesting the CreateBucket permission. (There's a rough sketch of the idea below.)
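
For context, here's a rough sketch of what that intentional-failure check looks like conceptually. deployWithPermissions and the bucket name are purely illustrative; the real helper in my branch works differently in the details:

```ts
// Illustrative only: run a deploy against a bucket that doesn't exist while
// deliberately omitting s3:CreateBucket from the granted permissions, and
// assert that the deploy is rejected rather than silently creating the bucket.
declare function deployWithPermissions(options: {
    bucketName: string;
    permissions: string[];
}): Promise<void>;

describe('permission verification', () => {
    it('fails without s3:CreateBucket when the bucket does not exist', async () => {
        await expect(
            deployWithPermissions({
                bucketName: 'gatsby-plugin-s3-test-missing-bucket',
                permissions: ['s3:PutObject', 's3:ListBucket'], // no s3:CreateBucket
            }),
        ).rejects.toThrow();
    }, 300000);
});
```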

@jariz
Collaborator

jariz commented Mar 4, 2020

Alright, in that case I said nothing. 😉
I've just swapped out TSLint for ESLint, so don't forget to update that in your CircleCI config.
Also, does CircleCI support Windows? It'd be great if we could ditch both Travis and Azure for CircleCI.

@YoshiWalsh
Collaborator Author

It does support Windows, and Node.js is installed by default: https://circleci.com/docs/2.0/hello-world-windows/

I'm still working on the CircleCI stuff; getting it set up right is a little tricky. I'll make sure to use ESLint.

@YoshiWalsh
Collaborator Author

Alright, I've got another update. CircleCI is now WORKING!

Here's an example of a PR that causes tests to fail.

Pretty exciting stuff IMO. Now all that's left for me to do is rebase my changes.

@jariz
Collaborator

jariz commented Mar 17, 2020

Awesome! This was so needed.

@YoshiWalsh
Collaborator Author

YoshiWalsh commented Apr 27, 2020

I'm sorry this issue has dragged on for so long; I know people are waiting for it. Since COVID-19 hit, I lost internet connectivity for about 3 weeks (one of the benefits of living in Australia). That did give me a lot of time to think, though.

After my last comment I realised I wasn't happy with my approach to cleaning up test buckets, as it gave too much room for malicious contributors to abuse the pipeline owner's AWS resources. I hope this never becomes an issue, but I have made further changes to make such attacks unappealing. Namely, I have added an (optional) Lambda script that removes all leftover test buckets at a configurable interval (sketched below).
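
To give an idea of what that cleanup script does, here's a minimal sketch using the aws-sdk v2 client, assuming the test buckets share a recognisable name prefix. The prefix and the exact logic here are placeholders; the real script in my branch may differ:

```ts
import { S3 } from 'aws-sdk';

const s3 = new S3();
// Placeholder prefix - the real suite would tag or prefix its test buckets.
const TEST_BUCKET_PREFIX = 'gatsby-plugin-s3-test-';

// Scheduled Lambda handler: empty and delete any leftover test buckets.
export const handler = async (): Promise<void> => {
    const { Buckets = [] } = await s3.listBuckets().promise();
    const stale = Buckets.filter(b => b.Name && b.Name.startsWith(TEST_BUCKET_PREFIX));

    for (const bucket of stale) {
        const name = bucket.Name as string;
        // A bucket must be emptied before it can be deleted.
        let token: string | undefined;
        do {
            const page = await s3
                .listObjectsV2({ Bucket: name, ContinuationToken: token })
                .promise();
            const keys = (page.Contents || []).map(o => ({ Key: o.Key as string }));
            if (keys.length > 0) {
                await s3.deleteObjects({ Bucket: name, Delete: { Objects: keys } }).promise();
            }
            token = page.IsTruncated ? page.NextContinuationToken : undefined;
        } while (token);
        await s3.deleteBucket({ Bucket: name }).promise();
    }
};
```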

I have also created a Terraform template that sets up the required AWS infrastructure to run the tests.

Now I think this feature is as good as I can make it, so here's my to-do list before I can submit a PR:

  • Rebase the changes I made directly on JoshuaWalsh/gatsby-plugin-s3#master onto JoshuaWalsh/gatsby-plugin-s3#feature/issue-83

  • Write instructions on how to get set up to run the automated testing suite, as well as documentation on some of the non-obvious parts about how the testing suite works and why it works that way

  • Resolve merge conflicts between feature branch and jariz/gatsby-plugin-s3#master.

I know my estimates here have been pretty unreliable (my last estimate was "within the week", and that was 5 weeks ago), but I'm hoping to complete items 1 and 2 tomorrow evening.
