Create a failure test rule #301

Open · wants to merge 2 commits into main

6 changes: 6 additions & 0 deletions docs/BUILD
@@ -8,6 +8,12 @@ stardoc_with_diff_test(
    out_label = "//docs:analysis_test_doc.md",
)

stardoc_with_diff_test(
    name = "analysis_failure_test",
    bzl_library_target = "//rules:analysis_failure_test",
    out_label = "//docs:analysis_failure_test_doc.md",
)

stardoc_with_diff_test(
    name = "build_test",
    bzl_library_target = "//rules:build_test",
54 changes: 54 additions & 0 deletions docs/analysis_failure_test_doc.md
@@ -0,0 +1,54 @@
<!-- Generated with Stardoc: http://skydoc.bazel.build -->

A test verifying that another target fails to analyze as part of a `bazel test`.

This analysis test is aimed primarily at rule authors who want to assert specific error conditions.
If the target under test does not fail during the analysis phase, the test evaluates to FAILED.
If the target fails analysis but the given error_message is not contained in the reported error, the test evaluates to FAILED.
If the target fails analysis and the given error_message is contained in the reported error, the test evaluates to PASSED.

NOTE:
Adding the `manual` tag to the target under test is recommended.
It keeps the intentionally failing target from breaking the build when `bazel test //...` is run.

Typical usage:
```
load("@bazel_skylib//rules:analysis_failure_test.bzl", "analysis_failure_test")

rule_with_analysis_failure(
    name = "unit",
    tags = ["manual"],
)

analysis_failure_test(
    name = "analysis_fails_with_error",
    target_under_test = ":unit",
    error_message = _EXPECTED_ERROR_MESSAGE,
)
```

Args:
target_under_test: The target that is expected to cause an analysis failure.
error_message: The string asserted to be contained in the error message of the target under test.

<a id="analysis_failure_test"></a>

## analysis_failure_test

<pre>
analysis_failure_test(<a href="#analysis_failure_test-name">name</a>, <a href="#analysis_failure_test-error_message">error_message</a>, <a href="#analysis_failure_test-target_under_test">target_under_test</a>)
</pre>



**ATTRIBUTES**


| Name | Description | Type | Mandatory | Default |
| :------------- | :------------- | :------------- | :------------- | :------------- |
| <a id="analysis_failure_test-name"></a>name | A unique name for this target. | <a href="https://bazel.build/concepts/labels#target-names">Name</a> | required | |
| <a id="analysis_failure_test-error_message"></a>error_message | The test asserts that the given string is contained in the error message of the target under test. | String | required | |
| <a id="analysis_failure_test-target_under_test"></a>target_under_test | - | <a href="https://bazel.build/concepts/labels">Label</a> | required | |


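The `rule_with_analysis_failure` and `_EXPECTED_ERROR_MESSAGE` names in the usage example above are placeholders for a rule author's own failing rule. A minimal sketch of such a rule, modeled on the `fake_rule` fixture in the end-to-end test further down (hypothetical, not part of this change), could be:

```
# Hypothetical illustration only: a rule that always fails during analysis.
_EXPECTED_ERROR_MESSAGE = "This rule fails at analysis phase"

def _rule_with_analysis_failure_impl(ctx):
    # fail() inside an implementation function aborts the analysis phase with
    # this message, which analysis_failure_test then asserts against.
    fail(_EXPECTED_ERROR_MESSAGE)

rule_with_analysis_failure = rule(
    implementation = _rule_with_analysis_failure_impl,
)
```

With a definition like that in place, `analysis_fails_with_error` passes only when analysis of `:unit` fails and the reported error contains `_EXPECTED_ERROR_MESSAGE`.
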
6 changes: 6 additions & 0 deletions rules/BUILD
@@ -9,6 +9,12 @@ bzl_library(
    srcs = ["analysis_test.bzl"],
)

bzl_library(
    name = "analysis_failure_test",
    srcs = ["analysis_failure_test.bzl"],
    deps = ["//lib:unittest"],
)

bzl_library(
    name = "build_test",
    srcs = ["build_test.bzl"],
64 changes: 64 additions & 0 deletions rules/analysis_failure_test.bzl
@@ -0,0 +1,64 @@
# Copyright 2021 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A test verifying that another target fails to analyse as part of a `bazel test`

This analysistest is mostly aimed at rule authors that want to assert certain error conditions.
If the target under test does not fail the analysis phase, the test will evaluate to FAILED.
If the given error_message is not contained in the otherwise printed ERROR message, the test evaluates to FAILED.
If the given error_message is contained in the otherwise printed ERROR message, the test evaluates to PASSED.

NOTE:
Adding the `manual` tag to the target-under-test is recommended.
It prevents analysis failure of that target if `bazel test //...` is used.

Typical usage:
```
load("@bazel_skylib//rules:analysis_failure_test.bzl", "analysis_failure_test")

rule_with_analysis_failure(
name = "unit",
tags = ["manual"],
)


analysis_failure_test(
name = "analysis_fails_with_error",
target_under_test = ":unit",
error_message = _EXPECTED_ERROR_MESSAGE,
)
```

Args:
target_under_test: The target that is expected to cause an anlysis failure
error_message: The asserted error message in the (normally printed) ERROR."""

load("//lib:unittest.bzl", "analysistest", "asserts")

def _analysis_failure_test_impl(ctx):
"""Implementation function for analysis_failure_test. """
env = analysistest.begin(ctx)
asserts.expect_failure(env, expected_failure_msg = ctx.attr.error_message)
return analysistest.end(env)

analysis_failure_test = analysistest.make(
_analysis_failure_test_impl,
expect_failure = True,
attrs = {
"error_message": attr.string(
mandatory = True,
doc = "The test asserts that the given string is contained in the error message of the target under test.",
),
},
)
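
For rule authors, the generated `analysis_failure_test` rule can assert failures that originate either in the target under test itself or in one of its dependencies, as the end-to-end test below exercises. A BUILD-file sketch along those lines, reusing the `fake_rule` and `fake_depending_rule` fixtures defined later in this change (the `manual` tags follow the docstring's recommendation and are not part of the fixtures themselves):

```
load("//rules:analysis_failure_test.bzl", "analysis_failure_test")
load("//fakerules:rules.bzl", "fake_rule", "fake_depending_rule")

fake_rule(
    name = "target_fails",
    tags = ["manual"],  # keep `bazel test //...` from analyzing this target directly
)

fake_depending_rule(
    name = "dep_fails",
    deps = [":target_fails"],
    tags = ["manual"],
)

# Passes: the target under test itself fails analysis with the expected message.
analysis_failure_test(
    name = "direct_target_fails",
    target_under_test = ":target_fails",
    error_message = "This rule fails at analysis phase",
)

# Also passes: the failure originates in a dependency of the target under test.
analysis_failure_test(
    name = "transitive_target_fails",
    target_under_test = ":dep_fails",
    error_message = "This rule fails at analysis phase",
)
```
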
16 changes: 16 additions & 0 deletions tests/BUILD
@@ -1,4 +1,5 @@
load("//:bzl_library.bzl", "bzl_library")
load(":analysis_failure_test_tests.bzl", "analysis_failure_test_test_suite")
load(":build_test_tests.bzl", "build_test_test_suite")
load(":collections_tests.bzl", "collections_test_suite")
load(":dicts_tests.bzl", "dicts_test_suite")
@@ -46,6 +47,8 @@ unittest_passing_tests_suite()

versions_test_suite()

analysis_failure_test_test_suite()

bzl_library(
    name = "unittest_tests_bzl",
    srcs = ["unittest_tests.bzl"],
@@ -81,6 +84,19 @@ sh_test(
    tags = ["local"],
)

sh_test(
    name = "analysis_failure_test_e2e_test",
    srcs = ["analysis_failure_test_test.sh"],
    data = [
        ":unittest.bash",
        "//lib:unittest",
        "//rules:analysis_failure_test.bzl",
        "//toolchains/unittest:test_deps",
        "@bazel_tools//tools/bash/runfiles",
    ],
    tags = ["local"],
)

sh_test(
    name = "common_settings_e2e_test",
    srcs = ["common_settings_test.sh"],
188 changes: 188 additions & 0 deletions tests/analysis_failure_test_test.sh
@@ -0,0 +1,188 @@
#!/bin/bash

# Copyright 2021 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# End to end tests for analysis_failure_test.bzl.
#
# These end-to-end tests verify that analysis_failure_test targets pass when
# their targets under test fail analysis with the expected error message, and
# fail when the error message does not match or the target does not fail.

# --- begin runfiles.bash initialization ---
set -euo pipefail
if [[ ! -d "${RUNFILES_DIR:-/dev/null}" && ! -f "${RUNFILES_MANIFEST_FILE:-/dev/null}" ]]; then
if [[ -f "$0.runfiles_manifest" ]]; then
export RUNFILES_MANIFEST_FILE="$0.runfiles_manifest"
elif [[ -f "$0.runfiles/MANIFEST" ]]; then
export RUNFILES_MANIFEST_FILE="$0.runfiles/MANIFEST"
elif [[ -f "$0.runfiles/bazel_tools/tools/bash/runfiles/runfiles.bash" ]]; then
export RUNFILES_DIR="$0.runfiles"
fi
fi
if [[ -f "${RUNFILES_DIR:-/dev/null}/bazel_tools/tools/bash/runfiles/runfiles.bash" ]]; then
source "${RUNFILES_DIR}/bazel_tools/tools/bash/runfiles/runfiles.bash"
elif [[ -f "${RUNFILES_MANIFEST_FILE:-/dev/null}" ]]; then
source "$(grep -m1 "^bazel_tools/tools/bash/runfiles/runfiles.bash " \
"$RUNFILES_MANIFEST_FILE" | cut -d ' ' -f 2-)"
else
echo >&2 "ERROR: cannot find @bazel_tools//tools/bash/runfiles:runfiles.bash"
exit 1
fi
# --- end runfiles.bash initialization ---

source "$(rlocation $TEST_WORKSPACE/tests/unittest.bash)" \
|| { echo "Could not source bazel_skylib/tests/unittest.bash" >&2; exit 1; }

function create_pkg() {
local -r pkg="$1"
mkdir -p "$pkg"
cd "$pkg"

cat > WORKSPACE <<EOF
workspace(name = 'bazel_skylib')

load("//lib:unittest.bzl", "register_unittest_toolchains")

register_unittest_toolchains()
EOF

mkdir -p rules
cat > rules/BUILD <<EOF
exports_files(["*.bzl"])
EOF
ln -sf "$(rlocation $TEST_WORKSPACE/rules/analysis_failure_test.bzl)" rules/analysis_failure_test.bzl

mkdir -p lib
cat > lib/BUILD <<EOF
exports_files(["*.bzl"])
EOF
ln -sf "$(rlocation $TEST_WORKSPACE/lib/unittest.bzl)" lib/unittest.bzl
ln -sf "$(rlocation $TEST_WORKSPACE/lib/types.bzl)" lib/types.bzl
ln -sf "$(rlocation $TEST_WORKSPACE/lib/partial.bzl)" lib/partial.bzl
ln -sf "$(rlocation $TEST_WORKSPACE/lib/new_sets.bzl)" lib/new_sets.bzl
ln -sf "$(rlocation $TEST_WORKSPACE/lib/dicts.bzl)" lib/dicts.bzl

mkdir -p toolchains/unittest
ln -sf "$(rlocation $TEST_WORKSPACE/toolchains/unittest/BUILD)" toolchains/unittest/BUILD

mkdir -p fakerules
cat > fakerules/rules.bzl <<EOF
load("//rules:analysis_failure_test.bzl", "analysis_failure_test")

def _fake_rule_impl(ctx):
    fail("This rule fails at analysis phase")

fake_rule = rule(
    implementation = _fake_rule_impl,
)

def _fake_depending_rule_impl(ctx):
    return []

fake_depending_rule = rule(
    implementation = _fake_depending_rule_impl,
    attrs = {"deps": attr.label_list()},
)
EOF

cat > fakerules/BUILD <<EOF
exports_files(["*.bzl"])
EOF

mkdir -p testdir
cat > testdir/BUILD <<EOF
load("//rules:analysis_failure_test.bzl", "analysis_failure_test")
load("//fakerules:rules.bzl", "fake_rule", "fake_depending_rule")

fake_rule(name = "target_fails")

fake_depending_rule(
    name = "dep_fails",
    deps = [":target_fails"],
)

fake_depending_rule(
    name = "rule_that_does_not_fail",
    deps = [],
)

analysis_failure_test(
    name = "direct_target_fails",
    target_under_test = ":target_fails",
    error_message = "This rule fails at analysis phase",
)

analysis_failure_test(
    name = "transitive_target_fails",
    target_under_test = ":dep_fails",
    error_message = "This rule fails at analysis phase",
)

analysis_failure_test(
    name = "fails_with_wrong_error_message",
    target_under_test = ":dep_fails",
    error_message = "This is the wrong error message",
)

analysis_failure_test(
    name = "fails_with_target_that_does_not_fail",
    target_under_test = ":rule_that_does_not_fail",
    error_message = "This rule fails at analysis phase",
)
EOF
}


function test_direct_target_succeeds() {
local -r pkg="${FUNCNAME[0]}"
create_pkg "$pkg"

bazel test testdir:direct_target_fails >"$TEST_log" 2>&1 || fail "Expected test to pass"

expect_log "PASSED"
}

function test_transitive_target_succeeds() {
local -r pkg="${FUNCNAME[0]}"
create_pkg "$pkg"

bazel test testdir:transitive_target_fails >"$TEST_log" 2>&1 || fail "Expected test to pass"

expect_log "PASSED"
}

function test_with_wrong_error_message_fails() {
local -r pkg="${FUNCNAME[0]}"
create_pkg "$pkg"

bazel test testdir:fails_with_wrong_error_message --test_output=all --verbose_failures \
>"$TEST_log" 2>&1 && fail "Expected test to fail" || true

expect_log "Expected errors to contain 'This is the wrong error message' but did not. Actual errors:"
}

function test_with_rule_that_does_not_fail_fails() {
local -r pkg="${FUNCNAME[0]}"
create_pkg "$pkg"

bazel test testdir:fails_with_target_that_does_not_fail --test_output=all --verbose_failures \
>"$TEST_log" 2>&1 && fail "Expected test to fail" || true

expect_log "Expected failure of target_under_test, but found success"
}


cd "$TEST_TMPDIR"
run_suite "analysis_failure_test test suite"