
Protractor process forked/non-forked output differences causing all tests to be retried #80

Open
bchew opened this issue Oct 4, 2017 · 2 comments

Comments

@bchew

bchew commented Oct 4, 2017

We recently noticed an issue that only occurs when exactly one test fails on the first attempt and the same test fails again on the second try. protractor-flake is then unable to determine the failing spec from the output and falls back to re-running our whole suite of tests on the third attempt. (We are using the multi parser and have Protractor sharding our tests.)

Investigating further, we found that Protractor only runs in forked mode when there is more than one spec to run; enabling test-file sharding in the configuration makes no difference. There is also a difference in output between forked and single (non-forked) runs. The condition block for this can be found here. We tested modifying this logic so that Protractor always runs in forked mode, and it resolves the issue we are experiencing.
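As a toy illustration (not Protractor's actual source), the parsing problem comes down to markers that are only present in forked output: each line carries a `[capability #id]` prefix and a `Specs: <file>` line is printed, which is what the multi parser keys on. In single mode neither marker appears, so the parser finds no failed spec file.

```javascript
// Toy model of the output difference described above. In forked mode each
// runner line is prefixed with the task id and a "Specs: <file>" line is
// emitted; in single (non-forked) mode neither marker appears, so a parser
// that keys on those markers cannot recover the failed spec path.
function formatRunnerOutput (lines, { forked, taskId, specFile }) {
  if (!forked) {
    // Single mode: raw runner output, no prefixes, no Specs line.
    return lines.join('\n')
  }
  // Forked mode: prefix every line and announce which spec files this
  // child process is running.
  const prefixed = lines.map(line => `[${taskId}] ${line}`)
  prefixed.unshift(`[${taskId}] Specs: ${specFile}`)
  return prefixed.join('\n')
}
```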

Our current workaround uses a custom parser based on multi that injects a dummy test spec when only one test has failed, ensuring Protractor always runs in forked mode. The issue could potentially be resolved by making forked mode configurable in Protractor, but I thought I would raise it here first to see if there is a better way. Please let me know if you need further info/clarification. Thanks for writing protractor-flake, it has been a very useful tool in handling flakiness! :)
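The workaround can be sketched as a wrapper around any parse function. This assumes protractor-flake's parser shape of an object with a `parse(output)` method returning the failed spec paths (check the version you are on); the parser name and the `dummySpec` file are hypothetical:

```javascript
// Hypothetical wrapper parser: delegates to an inner parse function and,
// when exactly one failed spec is found, appends a dummy spec file so the
// re-run has more than one spec and Protractor stays in forked mode.
function withForkedModeWorkaround (innerParse, dummySpec) {
  return {
    name: 'multi-forked-workaround',
    parse (output) {
      const specs = innerParse(output)
      if (specs.length === 1) {
        specs.push(dummySpec)
      }
      return specs
    }
  }
}
```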

  • Operating system and version: Ubuntu 16.04
  • Node.js version: 6.11.4
  • Protractor version: 5.1.2
  • Protractor flake version: 3.0.1
  • Protractor configuration file
 "protractor": {
    "browserCapabilities": {
      "browserName": "chrome",
      "maxInstances": 5,
      "shardTestFiles": true
    },
    "timeout": 90000
  }

protractor-flake

maxAttempts: 3
parser: 'multi'
  • Output from your test suite (I've truncated the output to the relevant bits)
..
..
[04:39:05] I/testLogger - [chrome #01-11] PID: 29509
[chrome #01-11] Specs: /home/test/tests/failed-spec.js
[chrome #01-11] 
[chrome #01-11] [04:38:20] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[chrome #01-11] Started
[chrome #01-11] Spec started
[chrome #01-11] .
[chrome #01-11]   Advertiser add context
[chrome #01-11]     ✓ when a is created (1 sec)
[chrome #01-11] .    ✓ when b is created (0.107 sec)
[chrome #01-11] .    ✓ when the user logs in (11 secs)
[chrome #01-11] { NoSuchElementError: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator by.cssContainingText(".something")
--stacktrace removed--

[04:39:05] I/testLogger - 

[04:39:05] E/launcher - Runner process exited unexpectedly with error code: 4

--other tests removed--

[04:48:23] I/launcher - 0 instance(s) of WebDriver still running
[04:48:23] I/launcher - chrome #01-3 passed
[04:48:23] I/launcher - chrome #01-4 passed
[04:48:23] I/launcher - chrome #01-5 passed
[04:48:23] I/launcher - chrome #01-6 passed
[04:48:23] I/launcher - chrome #01-1 passed
[04:48:23] I/launcher - chrome #01-7 passed
[04:48:23] I/launcher - chrome #01-9 passed
[04:48:23] I/launcher - chrome #01-0 passed
[04:48:23] I/launcher - chrome #01-10 passed
[04:48:23] I/launcher - chrome #01-8 passed
[04:48:23] I/launcher - chrome #01-11 failed with exit code: 4
[04:48:23] I/launcher - chrome #01-12 passed
[04:48:23] I/launcher - chrome #01-14 passed
[04:48:23] I/launcher - chrome #01-15 passed
[04:48:23] I/launcher - chrome #01-16 passed
[04:48:23] I/launcher - chrome #01-2 passed
[04:48:23] I/launcher - chrome #01-17 passed
[04:48:23] I/launcher - chrome #01-13 passed
[04:48:23] I/launcher - chrome #01-20 passed
[04:48:23] I/launcher - chrome #01-21 passed
[04:48:23] I/launcher - chrome #01-22 passed
[04:48:23] I/launcher - chrome #01-18 passed
[04:48:23] I/launcher - chrome #01-24 passed
[04:48:23] I/launcher - chrome #01-19 passed
[04:48:23] I/launcher - chrome #01-23 passed
[04:48:23] I/launcher - chrome #01-25 passed
[04:48:23] I/launcher - chrome #01-26 passed
[04:48:23] I/launcher - chrome #01-28 passed
[04:48:23] I/launcher - chrome #01-27 passed
[04:48:23] I/launcher - chrome #01-30 passed
[04:48:23] I/launcher - chrome #01-29 passed
[04:48:23] I/launcher - chrome #01-31 passed
[04:48:23] I/launcher - chrome #01-32 passed
[04:48:23] I/launcher - chrome #01-33 passed
[04:48:23] I/launcher - chrome #01-34 passed
[04:48:23] I/launcher - chrome #01-35 passed
[04:48:23] I/launcher - chrome #01-36 passed
[04:48:23] I/launcher - chrome #01-37 passed
[04:48:23] I/launcher - chrome #01-39 passed
[04:48:23] I/launcher - chrome #01-41 passed
[04:48:23] I/launcher - chrome #01-42 passed
[04:48:23] I/launcher - chrome #01-40 passed
[04:48:23] I/launcher - chrome #01-38 passed
[04:48:23] I/launcher - chrome #01-45 passed
[04:48:23] I/launcher - chrome #01-48 passed
[04:48:23] I/launcher - chrome #01-43 passed
[04:48:23] I/launcher - chrome #01-49 passed
[04:48:23] I/launcher - chrome #01-50 passed
[04:48:23] I/launcher - chrome #01-46 passed
[04:48:23] I/launcher - chrome #01-47 passed
[04:48:23] I/launcher - chrome #01-44 passed
[04:48:23] I/launcher - overall: 1 process(es) failed to complete
[04:48:23] E/launcher - Process exited with error code 100

Using multi to parse output
Re-running tests: test attempt 2
Re-running the following test files:
/home/test/tests/failed-spec.js
[04:48:25] I/launcher - Running 1 instances of WebDriver
[04:48:25] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
Started
Spec started
.
  Advertiser add context
    ✓ when a is created (1 sec)
.    ✓ when b is created (0.114 sec)
.    ✓ when the user logs in (8 secs)
{ NoSuchElementError: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator by.cssContainingText(".something")
--stacktrace removed--
[04:49:00] E/launcher - Process exited with error code 1

Using multi to parse output
Re-running tests: test attempt 3

Tests failed but no specs were found. All specs will be run again.
--truncated--
@wswebcreation
Collaborator

Good catch!

I'm not that into Jasmine/Mocha, but could we inject a console log after each failed test that logs the filename of the spec file that failed? If so, we could have a generic implementation that works in both of the ways you described, like we now have with the Cucumber parser, see here. (See also this feature request.)
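A minimal sketch of that idea for Jasmine, using its reporter interface (`specDone` receives a result object with a `status` field). The `FAILED_SPEC:` marker is hypothetical; resolving the current spec file is the hard part, so here it is assumed to be passed in (e.g. from Protractor's `onPrepare`):

```javascript
// Sketch of the suggested approach as a Jasmine reporter. A protractor-flake
// parser could grep the output for the (hypothetical) FAILED_SPEC: marker
// regardless of whether Protractor ran forked or not.
function failedSpecReporter (specFile, log = console.log) {
  return {
    // Jasmine calls specDone after each spec with a result object.
    specDone (result) {
      if (result.status === 'failed') {
        log(`FAILED_SPEC: ${specFile}`)
      }
    }
  }
}

// Hypothetical usage in a Protractor config's onPrepare:
//   jasmine.getEnv().addReporter(failedSpecReporter(currentSpecFile))
```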

@NickTomlin
Owner

Agreed; I think the best universal solution here is to write a custom reporter/logger for each framework and consume our own output instead of trying to parse the test runner's output.

I don't use Mocha for my own specs; would you be up for collaborating on getting a custom reporter/logger working for Mocha? We can try to do the same for Jasmine as well.
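For Mocha, such a reporter could be sketched by hooking the runner's `fail` event, which passes a test object that carries a `file` property (the marker string is again hypothetical):

```javascript
// Sketch of a minimal Mocha reporter, assuming Mocha's runner API: the
// runner emits a 'fail' event whose test object has a `file` property.
// Emitting a grep-able marker lets protractor-flake consume its own
// output instead of parsing the framework's default reporter output.
function FailedFileReporter (runner, log = console.log) {
  runner.on('fail', test => {
    log(`FAILED_SPEC: ${test.file}`)
  })
}

// Hypothetical usage: mocha --reporter ./failed-file-reporter.js
```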
