feat: parse data from output workflow #5226
Conversation
Question below but nothing blocking necessarily.
```go
func Test_runWorkflowAndProcessData(t *testing.T) {
	defer cleanup()
	globalConfiguration = configuration.New()
	globalConfiguration.Set(configuration.DEBUG, true)
	globalEngine = workflow.NewWorkFlowEngine(globalConfiguration)

	testCmnd := "subcmd1"
	addEmptyWorkflows(t, globalEngine, []string{"output"})
```
question: Should we keep this test as it was and add a new one specifically about the test summary result?
Or is it that we've changed the engine in this GAF update such that we must have a test summary result here and this is the new baseline?
Good question: as a result of this change, we now return Test Summary objects if they are passed through to the output workflow. This represents a new baseline behaviour.
```go
_, err := globalEngine.Register(workflowId1, workflowConfig, outputFn)
if err != nil {
	t.Fatal(err)
```
observation: `assert.NoError(t, err)` would be a nicer way to fail the test, but that does go against the prevailing grain here, so maybe that's a follow-up tidy-up.
Good spot, raised #5230 to address this refactor.
Pull Request Submission
Please check the boxes once done.
The pull request must:
- Have a title prefixed with `feat:` or `fix:`; others might be used in rare occasions as well, if there is no need to document the changes in the release notes. The changes or fixes should be described in detail in the commit message for the changelog & release notes.

Pull Request Review
All pull requests must undergo a thorough review process before being merged.
The review process of the code PR should include code review, testing, and any necessary feedback or revisions.
For functionality developed in other teams, pull request reviews cover only the provided documentation and test reports.
Manual testing will not be performed by the reviewing team; it is the responsibility of the author of the PR.
For Node projects: it's important to make sure changes in `package.json` are also reflected correctly in `package-lock.json`.
If a dependency is not necessary, don't add it.
When adding a new package as a dependency, make sure that the change is absolutely necessary. We would like to refrain from adding new dependencies when possible.
Documentation PRs in gitbook are reviewed by Snyk's content team. They will also advise on the best phrasing and structuring if needed.
Pull Request Approval
Once a pull request has been reviewed and all necessary revisions have been made, it is approved for merging into
the main codebase. The merging of the code PR is performed by the code owners, the merging of the documentation PR
by our content writers.
What does this PR do?
Adds support for `--severity-threshold` when determining the exit code. This means that you can use `snyk code test --severity-threshold=high` with the Go-based workflow and get the success exit code if only low severity issues are found.

How should this be manually tested?
Run the freshly built binary against a project that only contains low or medium severity issues. In this case I am running against https://github.com/snyk-fixtures/shallow-goof, where `.gitignore` has been removed after installing so the scanner runs over node_modules.

What are the relevant tickets?
CLI-304